If you want to go even further than that, you can use APE (αcτµαlly pδrταblε εxεcµταblε), which lets you compile a single binary that runs on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD.
> Don’t try to statically link libc or libstdc++, this may cause trouble with other system libs you’ll use (because the system libs dynamically link libc and maybe libstdc++ - probably in versions different from what you linked statically. That causes conflicts.)
> Linux is not oriented toward supporting binary compatibility beyond the most rudimentary level
Why do you say this? I am truly curious.
My understanding is that the Linux ABI is very stable ("we never break user space" -- Linus Torvalds). If you statically link libc, your binary should work on any Linux.
I mean, if we were to judge Linux, it should be against other operating systems of equivalent complexity and use. My understanding is that Windows or macOS do not offer this kind of stability over time.
The kernel ABI is very stable, but applications typically need to interact with parts of the operating system besides the kernel, and those parts are not as fanatical when it comes to providing stable interfaces.
For example, applications that need to do hostname and username resolution have to interface with NSS, and that breaks if you statically link glibc.
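To make that concrete, here is a minimal sketch (a hypothetical program, not from the thread): both lookups below are routed through NSS as configured in /etc/nsswitch.conf, and glibc satisfies them by dlopen()ing modules such as libnss_files.so or libnss_ldap.so at runtime - which is exactly the part that fails or mismatches when glibc itself is statically linked.

    /* Sketch: both calls go through NSS, which glibc implements by
       loading plugin libraries at runtime, even in a "statically
       linked" binary. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        struct passwd *pw = getpwnam("root");          /* username resolution ("passwd" database) */
        struct addrinfo *res = NULL;
        int rc = getaddrinfo("example.com", NULL, NULL, &res); /* hostname resolution ("hosts" database) */
        printf("getpwnam: %s, getaddrinfo: %s\n",
               pw ? "ok" : "failed",
               rc == 0 ? "ok" : gai_strerror(rc));
        if (res)
            freeaddrinfo(res);
        return 0;
    }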
Another example is playing audio - this requires interfacing with some userspace audio system like JACK, PulseAudio, or now Pipewire. If you statically link libSDL, it won't work in 5-10 years with whatever the latest fad in audio playback is. But if you dynamically link libSDL, then you can't statically link glibc because libSDL may be linked with a newer version of glibc, causing a conflict.
It is possible to statically link everything except libnss and link that dynamically. I used this trick to create recovery shells on Solaris back in the day that didn't need /usr mounted. The same thing worked on Linux, from what I recall.
> My understanding is that Windows or macos do not offer this kind of stability over time.
This definitely torpedoed your argument (ask any enterprise Windows user). First, "Linux" in this sense means what a normal user sees as Linux, which is a GNU/Linux desktop with a DE such as GNOME or KDE. You're correct that the Linux kernel ABI is stable (at least toward userland), but it would simply be wrong to say that "Linux" (as normal users encounter it) is ABI-stable.
Well, it is now 2023. Perhaps some things finally improved since the 90s. Or so I would hope (apparently not).
If you run HPC jobs, you are stuck with the Linux version your cluster supports (probably not too recent), and you probably won’t be able to build directly there. This is when you need all these hacks.
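For the record, one of the commonly used hacks looks like this (a sketch of the .symver trick, not something taken from the article; x86-64 specific and untested on any particular toolchain): pin individual glibc symbols to an older symbol version so a binary built on a modern machine still resolves on the cluster's ancient glibc.

    /* Hypothetical example: bind memcpy to the old GLIBC_2.2.5 symbol
       version instead of the newer memcpy@GLIBC_2.14, so the binary
       still loads on systems with an older glibc (x86-64 baseline).
       The directive has to appear in every translation unit that
       calls memcpy. */
    #include <stdio.h>
    #include <string.h>

    __asm__(".symver memcpy,memcpy@GLIBC_2.2.5");

    int main(int argc, char **argv) {
        char dst[256] = {0};
        if (argc > 1)
            memcpy(dst, argv[1], strlen(argv[1]) % 255); /* runtime-sized copy, emits a real memcpy call */
        puts(dst);
        return 0;
    }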
I'm not a real "glibc hater", apart from my (limited) experience of it being the no. 1 roadblock to getting Linux binaries to just work.
Also, a lot of forum posts like to imply there is some kind of technical issue with statically linking it, as opposed to musl, which even provides a handy wrapper that just works.
glibc is far from the no. 1 roadblock; if a program doesn't use undocumented/private features it will work for decades. This example [0] shows a program I wrote and compiled on Red Hat Linux in 1997 (ignore the broken colors - the program assumed a 24bit/8bcc X server), dynamically linked against libX11 and glibc, running on 2017 Debian (I took the shot in 2017) - two decades of backwards compatibility with both the GNU C library and X11.
If a program that links dynamically to the GNU C library doesn't work, the reason is either some other library or the program doing something screwy with glibc that it shouldn't do - but in that case the program wouldn't compile with musl either, since it'd be touching glibc internals.
However, yes, this is an issue, though AFAIK it is more of an ld issue than a glibc issue - you'd have the same problems with other libraries too.
From the post I replied to, I understood that problem as a user's problem of running existing binaries (most binaries target some older version to avoid that issue, and in general people who use Linux tend to have the latest version of stuff due to how non-intrusive updates are), which shouldn't be a problem. It can be a bit of a PITA from the developer side, though.
For other libraries I simply YOLO-ed it and statically linked everything. Can't do the same for glibc.
Yes, I ship security vulnerabilities to my users (and am responsible for fixing them after every OpenSSL/curl/libpng/... advisory). Still MUCH better than shipping non-working broken shit to users.
I maintained a sysroot with glibc 2.17 (it was 2.12 back when there was still a CentOS 6 presence), built with the latest GCC, for $DAYJOB. You can then use whatever modern toolchain you like (be it Clang+LLD or the latest GCC+binutils) and just point it at the sysroot. Runs everywhere, and you don't need a container with an old distro.
Wasn't too bad, but definitely takes ~0.2 SDE to maintain.
Yes, but that's OK - the article even mentions it, with Debian 7 as an example. 1997 is indeed a bit extreme, but going through the hassle once for your build setup should be acceptable pain.
musl doesn't support NSS properly as far as I can see, and glibc won't support it when statically linked.
So it's fine if you're running an app on a system or a container where everything is in /etc/passwd, /etc/services, /etc/hosts, etc. Meanwhile, back in the real world, people who need their applications to work with NSS sources other than those will probably stick to dynamically linking glibc.
Because if you wrote your program in C++17 on your Fedora machine, you won't be able to run it on an HPC cluster that runs CentOS 7 with an ancient glibc. Or basically anywhere else.
This hits close to home. We have a handful of versions of GCC and library paths for this reason.
ROCKS never upgraded to CentOS 8 and new hardware support is becoming an issue, so moving to a new cluster manager and a newer Rocky/Alma/? has been a long time coming.
I can't speak for other use cases, but trying a musl-based distro on restricted hardware convinced me how good it is. Memory/storage consumption is reduced by a lot compared to glibc, and it's damn fast. Now I try to run Alpine wherever I need compactness and simplicity.
If you're interested in this type of thing, take a look at Holy Build Box. It has a much more thorough write-up on this stuff and an actual solution to boot.
For everyone else: just maintain a few Docker images targeting the top Linux distributions and move on with life.
IMHO it is kind of silly to use very old versions of gcc, libc, etc. Instead, use the kernel's namespacing, chroot, etc. features to ship your dependencies - i.e., build a Flatpak and forget all these archaic tricks and machinations.
Flatpak is awful. I tried installing software from it a few times, and not only does it always pull in a ton of unnecessary stuff (practically an entire distro!), the software also doesn't integrate seamlessly with the rest of the system. GUI software looks wrong (themes do not apply, font rendering is wrong) and command-line software doesn't show up in PATH. Example in [0] for the GUI bits (Bless for the theme, Notepadqq for the font rendering), as well as the disk usage for two simple programs like a hex editor and a text editor.
Even worse, while there is an option to install things in the user's directory only (so I can make a separate user to try some things that I can easily delete later), not everything installs with that option - some things want root access to pollute the rest of my system.
Nowadays I simply avoid anything related to Flatpak. If something doesn't provide normal binaries and I really want it, I'd rather compile it from source (and if the source language is something exotic or the program needs a ton of dependencies, I'd just skip it).
I understand your issues, but I don't think they're as bad as you're making them out to be. Moreover, I don't see how you could resolve them without either (A) all the distros agreeing on overarching UI and desktop frameworks and sticking to them or (B) every software developer customizing their app for each individual distribution.
On the "unnecessary stuff" side, do note that Flatpak de-duplicates runtime files, so if you install several runtimes, they won't take up duplicate space. This is a reason I prefer it to AppImage.
Open-source developers are not supposed to customize their applications for distributions. Leave that to the distro's maintainers; they know more about that.
Closed source? Your users are paying you for the headache of making your software run wherever the users need it to run. The differences aren't really that big unless you're shipping something deep in the stack or have to support ancient systems. And you still have to test software everywhere.
> Open-source developers are not supposed to customize their applications for distributions. Leave that to the distro's maintainers; they know more about that.
That's great when the distro maintainers are packaging the software I want—and a problem when they aren't!
> Closed source? Your users are paying you for the headache of making your software run wherever the users need it to run.
"Okay, then I'll ignore Linux because the users who need it there are too much effort to support."
They are as bad as I am making them out to be, at least for me :-P.
The core of the issue - and Flatpak is just an attempt to work around it, not solve it - is that, unlike the kernel, the majority of foundational libraries do not provide stable ABIs.
As I wrote in another comment in this post, you technically can run software from the late 90s as long as that software is either statically linked or dynamically links only against the GNU C library and X11 (for GUI stuff), since the kernel and those two have had stable ABIs since the 90s. However, you do not get much functionality from those libraries - not even enough for a notepad-like application - and anything beyond them tends not to provide stable ABIs.
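As a rough illustration of how small that stable surface is, the kind of program that keeps working is essentially this (a minimal sketch, assuming only libc and libX11, built with -lX11):

    /* Sketch: a late-90s-style GUI program that touches only the two
       stable ABIs mentioned above - the GNU C library and Xlib. */
    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, 320, 200, 1,
                                         BlackPixel(dpy, scr), WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);
        for (XEvent ev;;) {                 /* empty window; exit on any key press */
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress) break;
        }
        XCloseDisplay(dpy);
        return 0;
    }

Anything beyond that - menus, decent font rendering, audio, file dialogs - drags in libraries without that kind of ABI promise.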
And that lack of stable ABIs is a cultural problem, not a technical one, as there are several examples (Windows being one of the most obvious) showing that you can have backwards compatibility and stable ABIs while still extending the functionality of the underlying libraries (e.g. an app I wrote and compiled on Windows Vista automatically got support for an emoji input box on Windows 10, even though I never did anything for that beyond using the text-input functionality Windows has provided since the late 80s).
Because of this, the "(A)" solution you mentioned wouldn't even work. Say everyone agreed to use, for example, GNOME. That would only work until the Gtk developers decided to break Gtk's ABI yet again for their newest version, forcing everyone who decided to depend on them to waste time "upgrading" (the issues of which can be seen with XFCE, or even better, GIMP, which will still depend on Gtk2 until its Gtk3 port is done at some point in the future - a move that a Gtk developer once described as upgrading from an abandoned version to a deprecated version).
I'm not singling out GNOME/Gtk, though; the same issues apply to KDE/Qt, since Qt also breaks its ABI every major version. If anything, KDE/Qt are in a worse position by relying on C++, since no matter how many promises GCC developers might make, with the C++ standard not providing a stable ABI guarantee there is always the sword of ABI breakage looming over the head of any program that relies on dynamic libraries written in C++ (or any other language that doesn't have a stable ABI). Technically this could be solved by providing proxy dynamic libraries that use the old ABIs to forward calls to the new libraries, but I am not aware of this being done in practice (I think Haiku does this to support BeOS binaries compiled with gcc2, but I'm not sure). I do not expect any C++ library to do that on Linux, though.
So that is the root cause. It requires a cultural shift from often not even acknowledging that the problem exists to actually considering ABI breakage something to be avoided by all foundational libraries.
Flatpak is just a workaround that relies on the kernel at least not breaking its ABI, but even as a workaround it isn't that good, for the reasons I mentioned. As I wrote in another reply, it could be improved - as a workaround, not a real solution - by only installing the libraries and software that don't already exist on the system and by integrating better with it (a large part of the latter being solved by doing the former). Fortunately, so far most libraries do not break their ABIs every year, so at least chances are that when I decide to download Notepadqq, it would use the Qt5 installation I already have in my OS instead of downloading its own copy.
Practically, most container images literally are their own distro install. In the case of Flatpak, this becomes obvious when looking at Flathub and the provided "runtimes". At that point they've just reinvented packages.
Flatpak is much worse than packages, because if I install a package from my distro it doesn't make duplicates of libraries and other software I already have installed.
If Flatpak checked what I already have available and downloaded only the dependencies I don't already have installed, while also providing seamless integration (a large part of which wouldn't be an issue if it used the system libraries/software), it'd actually be very useful and a solution to every distro having to package every program under the sun and keep it up to date (which IMO is also a bad thing, because obviously not all distros can do that - it is just that Flatpak is a worse solution).
To my understanding Nix solves this pretty well: just install a flat list of every dependency + version combination any of your software needs. No duplication, all dependencies.
If Flatpak wasn't designed or intended for services, I fail to see how it can be recommended as a relevant tool. I get what you're saying about containerization, but Flatpak is as much of a 'solution' to package management as a cork in a bullet wound.
CoreOS (not specifically Fedora CoreOS) used to do that with Docker. I believe the OS ran your standard systemd services and everything else through interconnected containers. As far as I can tell, the project has been abandoned, though.
I don't think Flatpak is the right tool for the job; it's clearly optimized for UI tools and the occasional command-line invocation. Snap may be better for composing a working system, though that comes with the obvious Snap downsides. systemd has some great tools for building image/container-based operating systems, though I'm not sure how commonly they're used.
Because that exe is a gigabyte in size, needs a special globally installed runtime for some reason and conflicts with my firewall rules. Even AppImages somehow manage to extract themselves to locations that conflict with other tools or earlier versions of the same tool.
I have one server with a complex firewall configuration that simply won't work well when I install Docker on it. I end up finding a lot of alternatives for what would otherwise be obvious solutions because projects stopped shipping binaries and replaced them with Docker containers.
I'll gladly make use of the compilation approach thanks to software available in the AUR. The whole "let's ship a copy of Debian to make my 400kB binary run" approach is as tiresome as "let's ship a copy of Chromium so I don't need to learn desktop UI".
I know how terrible the dependency management situation on Linux is (I myself have had to resort to building software in chroots because of glibc being outdated) but the "modern" approach seems to add more trouble than it solves when I try to run programs like this.
On many distros these problems have luckily already been solved. On Ubuntu that means I'll often be using an outdated version of the program that works just as well, and on Arch(-based) distros it means waiting for a compile every other update, but I've given up on tools that come with an entire ecosystem.
That isn't the experience with Flatpak, though. Flatpak is the same as using apt-get or any other package manager, except that instead of getting stuff from your distro's repositories you get it from some Flatpak repository you have to configure (some distros that provide Flatpak might have Flathub preconfigured, though not all of them do).
AppImage does provide that "download an exe and double click" experience but it assumes you have made said portable Linux binaries (and they still have the drawback of dragging in a bunch of libraries you most likely already have in your distro).
> If they don't want the container, they can typically compile the source too.
Why not do it in reverse? Offer binaries for the people running regular systems, and then let people with irregular configurations compile or containerize the program themselves.
Seems like you'd save on a lot of redundant packaging and wasted data that way.
flatpak is basically useless for any context where the application is expected to load and execute arbitrary 3rd party shared libraries (aka "plugins"). This rules out most sophisticated audio applications.
Are audio apps the only type of app to use the plugin pattern? I've never seen any other app category mentioned when plugins (or an equivalent descriptor) come up.
Does dlopen() crash if it can't find an open ALSA connection or something?
    void *handle = dlopen("plugin.so", RTLD_NOW); /* Load that super cool reverb plugin */
    if (!handle)
        fprintf(stderr, "%s\n", dlerror()); /* oops, the .so is not part of the flatpak ... fail */
    ....
Most other systems that do "plugins" tend to use a domain specific language and load the "plugin" as a normal file, rather than via the dynamic linker.
No - like the post mentions, you need to target a libc your users have on their system, which they show is best handled by linking against one that's a decade-plus old (as a least common denominator most likely to be available).
If that sounds silly... then ship your libc and everything you depend on in a flatpak or similar contained system image.
It will never truly be the year of the Linux desktop until this is fixed. Even assuming the developers of the major pieces of proprietary software were willing to put in the work to port, say, Photoshop or Word, packaging the result is made intentionally asinine.
It's literally easier to run old Windows binaries on Linux than to run old Linux binaries on Linux.
Windows applications usually ship their own copy of all the libraries they use, instead of relying on the system copy of those libraries; and Microsoft is better at keeping system APIs backward-compatible (for some definition of system). Both approaches have advantages and disadvantages.
MS is almost perfect in this matter. You can still run Win 2k binaries.
Most proper Windows applications do ship many libraries but they are often really special purpose or actually part of the application via plugin interfaces.
On Windows you don't ship the equivalent of libc, libcurl, libssl, libX.. or SDL. You don't have to deal with any of the pains in the article. Those are all in the OS and they have a stable ABI. Moreover, this approach makes your binaries more secure, because the most critical points of failure (i.e. web interfaces, crypto) get updated with the OS rather than you as the developer being forced to do so.
Windows' trick for fixing DLL conflicts is to maintain a huge tree of DLL hard links that basically allows any permutation of common Windows DLL versions to be used together.
With many Linux distros, the expectation is that you run software from your distro's maintainers, who do all the hard work of mixing and matching dependencies for you.
The trouble begins when you go outside of your distribution's supported software. Some solve this by building everything from source on unsupported platforms; others just ship their preferred distro with their binary and run the entire thing as a container. Similar to what Windows does, except less optimised for storage.
The "old" way (putting the correct .so libraries next to your executable and using those) still works well but it's being replaced by static compilation in many newer projects.
It works if whoever created the binary statically linked it or ships the deps with it. This isn't the typical experience for rpm and deb packages but it's still doable.
"Linux" is only the kernel - i'm not trying to be pedantic, this is the actual issue actually. The kernel has a stable ABI, you can run software from the late 90s on modern kernels and it'll work, but it'll work only as long as the program either makes these calls directly or all the libraries it links to dynamically also provide a stable ABI.
A "Linux distribution" or "Linux OS" or whatever people install in their computers and call "Linux" is not just the kernel though, it is the kernel and a bunch of additional software and -most importantly- libraries on top of it that provide functionality the kernel itself doesn't provide. This includes stuff like getting a desktop environment, which is often made up of -among others- several dynamic libraries that provide functionality such as opening windows, providing elements like buttons, checkboxes, inputboxes, etc and of course handling graphics and user input.
Very few of these libraries, however, provide a stable ABI. On average, you can expect a program from the late 90s that uses the GNU C Library and X11 to work, because those libraries provide stable ABIs (as long as you are not trying to abuse them and do not use undocumented features), but anything beyond that becomes harder. cURL has provided a stable ABI since 2005 and its developers promise not to break it, but there are very few libraries that make such promises (for many that haven't broken their ABI it seems to be more a consequence of their development than an explicit goal). It is really up to the developers of each library to keep their ABIs stable - assuming they care about that in the first place (IME from talking with some developers, even here on Hacker News, many do not care or even seem to actively refuse to understand what the problem is).
All the above means that Linux distributions come with a lot of different libraries and library versions (so even if a binary depends on libfoo and libfoo exists on the system, it will still not work if it depends on libfoo version 1 but the system has libfoo version 2 which broke compatibility with version 1) just to provide a regular desktop environment.
On the other hand, on Windows you get a ton of functionality out of the box - not just for making GUI applications, but also for networking, graphics, audio, video encoding and decoding, etc. Even if that stuff isn't best in class, it's there and applications can simply depend on it.
More importantly, Microsoft promises to not break the Windows API ABI and that promise has held for literal decades.
In the case where an application needs more or different stuff, it doesn't need to provide everything listed above, only the pieces it actually needs - e.g. a video encoding application can use the GUI functionality that Windows provides but still use ffmpeg for the encoding itself. While it'd be duplicating functionality that technically Windows already provides, it'd only be the functionality related to encoding, not everything.
So basically, Windows applications tend to pretty much always work because they can rely on Windows both to provide a lot more functionality out of the box and to remain compatible, in a way that Linux applications can't.
Note that this is not a technical problem but a cultural one - most developers of the Linux libraries that would be the equivalent of the parts of the Windows API that have remained stable over the decades do not believe that such backwards compatibility is important or even possible (assuming you can get them to accept that there is such a problem in the first place).
> Don’t try to statically link libc or libstdc++, this may cause trouble with other system libs you’ll use (because the system libs dynamically link libc and maybe libstdc++ - probably in versions different from what you linked statically. That causes conflicts.)
I guess linking libstdc++ statically is fine if it doesn't show up in ldd?
An AppImage is basically a zip file that extracts itself and then runs a binary in the extracted dir. It's still up to you to make sure the binary inside the AppImage is actually portable.
(you probably know this and were simplifying, but for the benefit of others who may not ...)
It's a filesystem, not an archive, so its contents aren't extracted anywhere on your drives; they stay within the AppImage.
The squashfs filesystem used is optimised for use as a read-only, fast-access filesystem. Entire distros have been built and used (for over a decade) where everything is contained in one or more overlaid squashfs containers. It's close to the most optimal method of distributing AND ACCESSING read-only stuff.
All the AppImage does is mount its internal squashfs and then run its internal AppDir program (AppImages are essentially "squashed" AppDirs).
AppDirs themselves are a sadly underrated, distro-agnostic packaging format with great flexibility and utility (IMHO).
I don't even need that much. For me it would be great if companies just specified which distro they use to compile and test their binaries. Why is it so hard to give that info?
More often than not I get some really long list instead: we support distro A in versions X, Y, and Z, ..., distro B in versions X, X2, X3, ..., ... I try 2 or 3 of those combinations and they don't really work. Eventually, after 2 or 3 more tries, something works - but if I had the above information, I'd just start a VM with the known-working distro after the first failure.
You can be sure that any software that starts listing off specific supported distros is going to be a nightmare to run. Usually because they've dynamically linked some outdated and obscure lib from the distro repos.
> glibc ... To work with all (or at least most) Linux distributions released in the last years, you need to link against reasonably old versions of those libs.
Well, except for Alpine, which is a frequent base for tiny Docker containers.
The syscall interface is stable enough that you can create binaries that will run against Linux kernels from decades ago, if needed.
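As an illustration (a sketch, not from the thread): a program can bypass libc's higher-level machinery and talk to that stable syscall interface through glibc's syscall() wrapper - the syscall numbers themselves don't change once assigned for an architecture.

    /* Sketch: invoke the kernel's write syscall directly; the number
       behind SYS_write is part of the stable per-architecture kernel ABI. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from the stable kernel ABI\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }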