Glibc has been a source of breakage for proprietary software ever since I started using Linux. How many codebases had to add this line around 2014 (the year I bought my first laptop)?
> dlopen and dlmopen no longer make the stack executable if a shared library requires it
I'm not counting intentional breakage to improve system security. I'm not even sure I'd call it an ABI break; by a wide definition I guess it is a "change in glibc that makes things not work anymore". You also can't execute a.out binaries anymore, eh. And I don't think I'd call something that primarily affects closed-source binaries (and Mono on i386) a "huge" issue either.
The fix is trivial ("execstack -s <binary>") and doesn't involve any change to installed versions of anything.
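For anyone hitting this, here's a sketch of how you'd check and fix a binary (using /bin/ls as a stand-in for the affected binary; `./myprog` below is a hypothetical path):

```shell
# Inspect the PT_GNU_STACK program header; "RWE" in the flags column
# means the binary requests an executable stack, "RW " means it doesn't.
readelf -lW /bin/ls | grep -A1 GNU_STACK

# If your binary needs the old dlopen behavior, flip the flag on the
# main executable with execstack (hypothetical path, needs execstack
# installed, e.g. from the prelink/execstack package):
#   execstack -s ./myprog   # mark the stack executable
#   execstack -c ./myprog   # clear the flag again
```

The point being: this is a one-time fix on the binary itself, no rebuild and no change to any installed library.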
> __asm__ (".symver memcpy, memcpy@GLIBC_2.2.5");
A 12-year-old forward-compatibility issue that is fixed by upgrading glibc, okay. (Note this is the same timeframe as the s390 thing I linked.) I guess people shipping binaries need to be aware of it if they want to support glibc pre-2.14. That said, the general rule of shipping binaries is that you need to assume whatever you build against becomes the minimum required version; anything else is a gift.
I think my point about never pinning glibc stands, and how many other things do you know where you need to go back 12 years to find an ABI break?
A few recent examples: many distros changed the default flag for emuTLS on Windows, which GCC implemented as a hard ABI break for no essential reason. If you compile for enough platforms, GCC will print notes that various struct alignments have changed, which has caused grief numerous times in the past few years for people I work with (many of the changes don't even come with notes; they just silently change alignment, because new hardware comes out with features that need more, or because there were bugs previously). There's also win32 vs. posix threading, and sjlj vs. structured exception handling, all of which are mutually incompatible. Oh, and don't get me started on ARMv6 atomics, where the choice is decided from compiler flags recording heuristics detected at configure time in some auxiliary header, instead of using the current target's ABI.
Sure, the ABI does not change at all, if you know in advance where the landmines are. Though it does have quite a few, especially for a large project with a huge surface area.
I was talking about glibc; none of the things you mention relate to that. (Note glibc is not used on Windows.) I'm painfully aware of some of the breakage GCC occasionally causes (the atomics fall into that — but to be fair ARMv6 is also >10 years ago at this point, unless you're doing embedded, in which case you generally build the whole system anyway and don't care about these breaks.)
Also, just in case it's not clear, glibc has nothing to do with GCC.
You can have one of "they're so serious about forward and backward compat you should remove version pins because things won't ever break" and "well, I'm not counting intentional breaks" but not both.
I don't know much about glibc's specifics, but in general: Security comes before a lot of things but not before accidental breakage. If you say "our compat guarantees are so strong you shouldn't keep track of versions at all", I expect this to mean that APIs and ABIs will never change, or at least not without a massive awareness campaign and transition plan being announced before - because such a breakage would take out my project or might even introduce new vulnerabilities.
A security update that also does breaking changes is sort of the worst case, because dependents are essentially damned if they apply it and damned if they don't. They can't always be avoided if an API is so insecure it's beyond repair - but then dependents will have to update in their own time because they will also have to fix their own implementations. So this would be an argument for version pins in that case.
> Security comes before a lot of things but not before accidental breakage.
It didn't really sound accidental, though the bugzilla report sounds like they underestimated the impact. But I'm quite sure they were aware this would break at least some old software, it's not like it's hard to understand what exactly is happening here. Old software (or rather, builds) used executable stacks by default, and some edge cases used them for some time after that. I'll say they probably should've done a warning period.
> If you say "our compat guarantees are so strong you shouldn't keep track of versions at all",
What I said was that you should never pin a glibc version, not that you shouldn't keep track of versions at all :). I will admit this is hazy if you build binaries to distribute; in that case you probably want to intentionally build against an old glibc. But that's not exactly a "pin", that's a "what's the oldest you want to support". The software being built basically makes no difference; glibc will do what it does, and the only limit will be features added in newer glibc versions. The binaries produced that way, running on a new system with any newer glibc, have AFAIK last broken during that s390 incident I linked. (e.g. you wouldn't have gotten executables requiring executable stacks out of a normal GNU toolchain for quite some time now. It's still possible to intentionally mangle your code and build options to get that result, but you need to either be trying, or be doing relatively cursed things, like Mono, which probably does some binary-loading shenanigans.)
Either way — never pin glibc. And if your cmake or whatever complains about a glibc version conflict, remove the pins and use the newest version involved.
(And about the execstack thing: I'm happy to hear your complaints if and only if you show up with an actual report of breakage that you encountered yourself; no secondhand reports. Because what's at question in this case is the actual impact size, which to me seems quite limited.)
> A security update that also does breaking changes is sort of the worst case, because dependents are essentially damned if they apply it and damned if they don't. They can't always be avoided if an API is so insecure it's beyond repair - but then dependents will have to update in their own time because they will also have to fix their own implementations. So this would be an argument for version pins in that case.
That's not how DSO versioning works. Unless it was a horrible disaster, the old functions would remain available, but you'd be required to jump through hoops to build new software against it.
Because glibc ships basically their entire version history in the library binary. That's the thing with DSO versioning.
I do wonder if they could've done some versioning trick with the execstack thing, the problem there is that it's more about global system behavior than actual exposed ABIs. (By the way, libc isn't even what you would be "pinning" there, it's libdl. Splitting hairs though, as that's part of glibc.)
This is the attitude that will always prevent GNU/Linux from becoming prevalent for personal computing. There is a such a lack of empathy for users' hardships and it shows.
What? There was a huge breakage literally last year: https://sourceware.org/bugzilla/show_bug.cgi?id=32653