F/OSS is like GNU/Linux; the second part is the part which matters, but the first part keeps getting pushed by noisy people so we put up with it to keep them happy.
The exploit is injecting environment variables, but yes, close enough. You need someone to call execve as root in order to become root, but you don't need a setuid binary.
"When the timing aligns, the trigger's buggy memmove causes K+1 to self-overwrite, replacing sshd-session's real environment with the preseed payload. sshd-session's exec_copyout_strings copies LD_PRELOAD=/tmp/evil.so to the new process's stack, the runtime linker loads evil.so, and its constructor copies /bin/sh to /tmp/rootsh and sets it suid root. My human's unprivileged user runs /tmp/rootsh -p and gets a root shell."
... so at the very end of the exploit chain, is /tmp/rootsh required to be suid root before it is finally run to get the root shell?
... or is the exploit already achieved and /tmp/rootsh just an arbitrary indicator?

Reducing clock speeds, even if they could do that -- and I'm not sure they can, given how Nitro is designed -- would be problematic since a lot of customer workloads assume homogeneous nodes.
But they did load-shed. Perhaps not soon enough, but the reason this is publicly known is because they reduced the amount of heat being produced.
Right, exactly, I highly doubt the facility went into any kind of actual uncontrolled thermal rise. This is news because they had to take such drastic actions. I'm sure it's common that they force spot prices up (probably way up) to compensate for reduced capacity during events, and I'm sure they sometimes even fake having no capacity for similar reasons. "No capacity" means "I don't want to turn on your node", not merely "I don't have any more physical servers I could turn up for you".
This is news because they powered off some non-preemptible customer loads, which actually makes me wonder if you saw that chain of events occur here.
spot prices rise -> new instance availability goes to 0 -> preemptible instances go dark -> normal instances go dark.
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
I'm somewhat skeptical here, because I notified the FreeBSD security team of a vulnerability a few years ago, and I never got a response, even after a follow-up email a few weeks later. To be fair, my report was about a non-core component, and the vulnerability wouldn't be very easy to exploit, but Debian, OpenBSD, SUSE, and Gentoo all patched it within a week [0].
That being said, I'm not suggesting that anyone should judge an entire OS based on how it handles a single minor report, since everything else I've seen suggests that FreeBSD takes security reports quite seriously. But then you could apply the same argument to the Linux kernel bug, since it's pretty rare for a patch to be mismanaged like this there too :)
While not receiving a response isn't ideal, I'll note that we actually have two secteams: secteam@ and ports-secteam@; something like luatex should go to the latter, though their level of activity has been kind of hit or miss in my experience. Curating security issues in ports is hard given its sheer size, and more often than not we end up patching things a little after disclosure because of it.
The Linux kernel doesn't differentiate between security bugs and other bugs, which I think is the main complaint here. They go through the same process.
So the issue is bigger than the mishandling of a single issue, it’s a fundamental process issue around security for one of the most impactful projects in the entire space.
FreeBSD didn't have userland ASLR until 2019 and, among other missing mitigations, still doesn't have kASLR. It's not a serious operating system for people who care about security. If you want FreeBSD plus security, take Shawn Webb's HardenedBSD.
>Last I read, ASLR is a good thing to have, but overall is usually not difficult to defeat.
For local attackers there may be easier avenues to leak the ASLR slide, but for remote attackers it's almost universally agreed it significantly raises the bar.
>I don't think it's reasonable to say that an OS that lacks it isn't "serious" about security.
When they implemented it in 2019, it was an 18-year-old mitigation. If you are serious about security, you implement everything that raises the bar. The term "defense-in-depth" exists for a reason, and ASLR is probably one of the easiest and most effective defense-in-depth measures you can implement, one that doesn't necessarily require changes to existing code beyond compiling with -pie.
They exploited a linear stack buffer overflow. Not a write-what-where or arb write. A linear stack buffer overflow in 2026! There are at least two distinct failures there:
If you are switching to a BSD for security reasons, why FreeBSD? Isn't OpenBSD the super secure one? Sorry, it's been a while since I've looked at those projects
The person suggesting FreeBSD is a FreeBSD developer (Colin Percival; actually, according to Wikipedia, the FreeBSD engineering lead), so it would be weird for him to suggest OpenBSD.
Also hilarious to see Drew Houston responding a bit later on the same thread:
> we're in a similar space -- http://www.getdropbox.com (and part of the yc summer 07 program) basically, sync and backup done right (but for windows and os x). i had the same frustrations as you with existing solutions.
> let me know if it's something you're interested in, or if you want to chat about it sometime.
It was, yes. I was trying to figure out a way to bring it up, but I didn't want to imply that the comment here was ignorant for not knowing the account. It's the opposite: HN accounts have so little fanfare, and we all talk in the same threads. It's fun!
Okay. But the question isn't about him, the question is about the actual merits. And he's a good person to ask for a compelling argument about the merits.
If you ask Bill Gates why you should use MS-DOS in particular, not DR DOS, the answer is not "it would be weird for Bill Gates to suggest DR DOS".
There's always a guy. It's great that your favorite distro is definitely safer. An order of magnitude fewer exploits will mean only a few thousand or so, I suppose. Ozymandias used Gentoo.
Less laconically, "distro" generally refers to the userland parts of an operating system rather than the kernel itself. FreeBSD does not use the Linux kernel, so calling it a distro (a term which typically refers specifically to Linux distros) wouldn't be accurate.
Where are you messing with userland-only options? In my experience a Linux distro not only comes with a kernel, it's almost always a kernel specific to the distro. So I don't understand that reason.
As far as Linux versus not Linux, "distro" feels fine to me for Unix systems.
FreeBSD is not a distro. It's not even Linux; it's a completely different kernel and operating system that traces back to even before Linux. It's honestly closer to Darwin than it is to Linux; macOS is technically a BSD. (Not FreeBSD though.)
That's more of a historical artifact. The BSDs started as just "BSD": a set of patches for AT&T Unix that were _distributed_ by Berkeley. Eventually the patches became complete enough to be an entire operating system. _Then_ the various BSDs that we know today (FreeBSD, OpenBSD, NetBSD, DragonflyBSD) all forked and became completely independent operating systems. For decades, FreeBSD's kernel and userland have been developed independently from OpenBSD's kernel and userland, which are developed independently from NetBSD's kernel and userland, etc. You could not take an OpenBSD program and run it on FreeBSD. Even recompiling from source isn't necessarily enough, since the BSDs support different syscalls.
They are completely independent operating systems with a distant shared history.
Whereas on Linux, the distros are taking a common Linux kernel source, and combining it with their choice of common userlands like GNU. Debian has the same kernel and GNU userland that Arch and Fedora use. You could take a program compiled for Debian and run it on Arch, which is common these days due to Docker where you're pulling another distro's userland and running it on your distro's kernel. That is how Linux distros are "distros" whereas the BSDs are independent operating systems.
Do you honestly think stackghost doesn't know what the "D" stood for? They were making a point, not seeking information. My answer directly responded to the point they were making.
While you're correct that FreeBSD is not a Linux distribution, the word "distro" is literally just short for "distribution". It doesn't take on a different meaning the way "smart" and "smartass" do; it's more like "repo" and "repository".
Debian is probably the best of all the Linuxes, but still suffers from split-brain: If patches are sent upstream first, Debian can't start digesting them until they're already public.
With FreeBSD there's never any question of "who should this get reported to".
> Debian can't start digesting them until they're already public
Not sure what you mean by this. Debian is able to handle coordinated disclosures (when they're actually coordinated), and get embargoed security updates out rapidly without breaking the embargo.
Is there some other aspect of this that you're referencing?
The key words there are "when they're actually coordinated". Debian doesn't own the Linux kernel, and the kernel developers don't bother with coordinated disclosure, so the happy path of coordinated disclosure only happens when reporters make the non-obvious choice of reporting vulnerabilities to people other than the maintainers.
Been constructing a lot of infrastructure servers recently, almost all of them FreeBSD VMs running under bhyve on FreeBSD physical hosts. It's a very simple, clean, pleasant environment to work in. And they all run tarsnap. ;-)
I've kept hearing about BSD recently, how hard is it to actually switch to? I'm guessing Linux executables don't work on it since it's not Linux, do all your packages have to be made specifically for BSD?
My experiences from dabbling with it a few months ago:
In general everything needs to be compiled for FreeBSD, but the ports collection is quite extensive. For example you will find Firefox, wayland, GNOME, KDE, xfce, … even dotnet was on there.
Problems arise with proprietary stuff like Spotify, Widevine DRM, etc. However, FreeBSD has a Linux emulation layer (providing Linux syscalls), dubbed the 'Linuxulator'. I managed to run the Spotify Linux desktop client, but the Spotify website wouldn't let me log in; I didn't research further. AFAIK the emulator is limited, though, not implementing all syscalls.
There is also podman for FreeBSD and in addition to running FreeBSD containers (using Jails under the hood I guess?) it can run Linux containers as well (using the Linuxulator in addition then?).
It also comes with a hypervisor called bhyve if you want to run VMs.
There is a handbook on their website describing how to set up a system (including desktop environment) if you want to give it a go.
curl | sh is more prevalent on Linux, where you can expect a stable ABI from the kernel and sometimes from GNU libc. No such thing in BSD land: packages are always built against a release; they don't maintain binary compatibility across releases.
I don't get it. Is this a parody of poor design decisions?
Sure, it's possible to write bugs in C. And if you really want to, you can disable the compiler warnings which flag tautologous comparisons and mixed-sign comparisons (a common reason for doing this is to avoid spurious warnings in generic-type code).
But, uhh, "people can deliberately write bugs" has got to be the weakest justification I've ever seen for changing a language feature -- especially one as fundamental as "sizes of objects can't be negative".
The C language does not have any data type that has the property "can't be negative".
Signed integers can be negative. The so-called "unsigned" integers of C are integer residues modulo 2^N, which are neither positive nor negative, i.e. these concepts are not applicable to "unsigned" integers.
An alternative view is that any C "unsigned" is both positive and negative. For example the unsigned short "1" is the same number as "65537" and as "-65535".
So any sizeof value in C is negative (while also being positive).
Contrary to what you say, the change described in TFA (making sizes 64-bit signed integers) is the only way to guarantee that sizes are non-negative in a language that lacks dedicated non-negative integer types.
Other programming languages have non-negative integers, but C and C++ and many languages derived from them do not have such integers.
The arithmetic operations with non-negative integers differ from the arithmetic operations of C. On overflows and underflows, they either generate exceptions or have saturating behavior.
Leaving aside the fact that, yes, unsigned integer types are definitely not negative -- my point wasn't about types at all. Objects cannot take up a negative number of bytes of memory!
> An alternative view is that any C "unsigned" is both positive and negative. For example the unsigned short "1" is the same number as "65537" and as "-65535".
This can be disproven by the fact that dividing by `unsigned e = 1U` is well defined and always yields the starting number. If the unsigned numbers were really modular numbers as you suggest, division could not be defined.
This does not demonstrate anything. It is just additional evidence that the C standard contains contradictory rules about "unsigned" integers.
The oldest parts of the C language are all consistent with "unsigned" numbers being non-negative integers. The implicit conversions between different sizes of "unsigned", the sizeof operator, the relational operators and division are consistent with non-negative integers.
However the first C standard, instead of defining the correct behavior, left undefined many corner cases of the arithmetic operations, allowing the implementation of "unsigned" as either non-negative integers or integer residues.
Eventually, the undefined behaviors for addition, subtraction and multiplication have been defined to be those of integer residues, not those of non-negative integers.
These contradictory properties are the cause of many confusions and bugs.
In extensible languages, like C++, it is possible to define proper non-negative integers and integer residues and bit strings and to always use those types instead of the built-in "unsigned".
In C, it is better to always use signed numbers and avoid unsigned, casting unsigned values to a wider signed type before using them.
On the topic of bitcoin millionaires... I'm getting some sponsorship for my FreeBSD release engineering work from https://opensats.org/ . Have you asked them?
I haven't; honestly, the application processes, paperwork, project reporting expectations, and eligibility criteria for these kinds of funds just give me too much anxiety. I never know how to navigate it as a Canadian either; most of them are US- or Europe-based.
I guess I'm naive to hold out for the anonymous bitcoin millionaires to donate "no strings" until I find something a bit more frictionless.
Thanks anyway for the suggestion, glad to hear you're getting sponsored for your FreeBSD work.
Necessary stuff (houses, healthcare, education) has outpaced CPI and is generally becoming more expensive.
Unnecessary stuff (electronics, appliances, other tech) has not, and is generally becoming cheaper. (Planned obsolescence is another topic, though...)