Hacker News

Dynamic linking doesn’t work unless you can live inside a distro maintainer’s special bubble for all your software. If you can exist in that bubble, great—I really like Debian for certain use-cases—but if you can't, the benefits of dynamic linking everything are clearly outweighed by the drawbacks.

Good luck patching that security vulnerability in all those static binaries without proper dependency tracking ;). Not that I am on a particular side of the fence; both have their downsides.

To me the problem is package managers from the '90s that use a single global namespace, only allow UID 0 to install packages, and do not really provide reusable components.

Modern packaging systems like Nix and Guix allow users to install packages. Packages are non-conflicting, since they do not use a global namespace (so, you can have multiple versions or different build options). They provide a language and library that allows third-parties to define their own packages.
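The non-conflicting store can be sketched in a few lines of Python: systems like Nix address each package by a hash over everything that went into the build, so two versions (or two sets of build options) can never collide and any user can install into it. The hashing scheme below is illustrative, not Nix's actual one.

```python
import hashlib

def store_path(name, version, build_options=()):
    """Derive a store path from everything that identifies the build.
    (Illustrative scheme; Nix hashes the full build recipe and inputs.)"""
    key = repr((name, version, tuple(sorted(build_options)))).encode()
    digest = hashlib.sha256(key).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}-{version}"

# Two versions of the same package get distinct paths and coexist,
# so one user's install can never clobber another's:
print(store_path("openssl", "3.0.1"))
print(store_path("openssl", "3.0.8"))
```

Because the path is a pure function of the inputs, installing the same package twice is a no-op, while any change to version or build options yields a fresh, non-conflicting path.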

Not to say that they are the final say in packaging, but there is clearly a lot of room for innovation.

Snap and Flatpak are copying the packaging model of macOS, iOS, and Android. This is a perfectly legitimate approach (and IMO the execution of Flatpak is far better). But it is not for everyone -- e.g. if you prefer a more traditional Unix environment.



Rolling out shared library updates to resolve security vulnerabilities is not without its own issues.

The big one, which places surprisingly often still fumble due to poor process controls or simple mistakes, is that you have to restart every running process that uses the library after you update it.
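On Linux, one way to catch that fumble is to look for processes that still map a shared object that has since been replaced on disk; the kernel marks such mappings as deleted in /proc/&lt;pid&gt;/maps. A rough sketch (Linux-specific, and it needs enough privileges to read other processes' maps):

```python
import os

def processes_using_deleted_libs():
    """Map pid -> shared objects that were replaced on disk but are
    still mapped by the running process (i.e. it needs a restart)."""
    stale = {}
    if not os.path.isdir("/proc"):  # Linux-only mechanism
        return stale
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as f:
                libs = {line.split()[5] for line in f
                        if line.rstrip().endswith("(deleted)") and ".so" in line}
        except OSError:
            continue  # process exited, or we lack permission
        if libs:
            stale[int(pid)] = libs
    return stale
```

An empty result means every process is running the on-disk (patched) version of its libraries.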

I actually prefer to deploy static builds of critical services for this reason, because you already have to know that you're running version 1 build 5 everywhere -- and if everything is build 5, then they all have the fix. You don't also have to check if the process was started after May 5th.
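The audit this enables can be sketched concretely: with a version string baked into each static binary, checking a fleet reduces to one string comparison per host. The host names and build strings below are made up for illustration.

```python
FIXED_BUILD = "1.0+build5"  # the build known to contain the fix

# Hypothetical fleet report: host -> version string baked into the binary.
deployed = {
    "web-1": "1.0+build5",
    "web-2": "1.0+build4",   # still on the vulnerable build
    "worker-1": "1.0+build5",
}

def hosts_needing_redeploy(deployed, fixed_build):
    """With static binaries, the audit is just a string comparison per host."""
    return sorted(host for host, build in deployed.items()
                  if build != fixed_build)

print(hosts_needing_redeploy(deployed, FIXED_BUILD))  # ['web-2']
```

No process start times or on-disk library versions need to be cross-checked; the binary's own build identifier is the whole story.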


> without proper dependency tracking

Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked. The infrastructure isn't currently there [x], but it is definitely something that languages and language package managers could coordinate on and provide.

[x] But it can be built, now that more and more languages have language package managers with proper dependency tracking. One way would be to create a standard for how to query a binary for what it depends on. A system could then maintain a central database of the dependencies of every installed static binary.
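One possible shape for such a standard, sketched in Python: each static binary embeds a JSON manifest of the libraries it was built from, and a system tool collects those manifests into a central database that can be queried when a vulnerability lands. The manifest schema, paths, and query function below are all hypothetical.

```python
import json

def vulnerable_binaries(db, library, fixed_version):
    """List binaries whose manifest pins `library` below `fixed_version`.
    (Naive string comparison; a real tool would parse versions properly.)"""
    hits = [binary for binary, deps in db.items()
            if any(dep["name"] == library and dep["version"] < fixed_version
                   for dep in deps)]
    return sorted(hits)

# A database built from two made-up embedded manifests:
db = {
    "/usr/local/bin/foo": json.loads('[{"name": "openssl", "version": "3.0.1"}]'),
    "/usr/local/bin/bar": json.loads('[{"name": "openssl", "version": "3.0.8"}]'),
}
print(vulnerable_binaries(db, "openssl", "3.0.8"))  # ['/usr/local/bin/foo']
```

With such a database, "which static binaries need a rebuild?" becomes a single query instead of guesswork.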


> Well, there is nothing that says you can't have proper dependency tracking just because something is statically linked.

Didn't say so. It is just easier with dynamic linking, because you can see what libraries (and versions) a binary is linked against.
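For comparison, here is roughly what that visibility looks like in practice: a small helper that shells out to ldd to list the libraries a dynamically linked binary declares. ldd and its output format are glibc/Linux specifics, so treat this as a sketch rather than a portable tool.

```python
import shutil
import subprocess

def linked_libraries(path):
    """Collect the shared-object names a dynamically linked binary declares.

    Shells out to ldd, so this only works on glibc-style Linux systems;
    for a static binary, ldd reports "not a dynamic executable" instead.
    """
    if shutil.which("ldd") is None:
        return []
    out = subprocess.run(["ldd", path], capture_output=True, text=True)
    return [line.strip().split()[0] for line in out.stdout.splitlines()
            if line.strip().startswith("lib")]

# e.g. linked_libraries("/bin/sh") typically includes "libc.so.6" on Linux
```

The point is simply that for dynamic binaries this information is sitting right there in the binary's headers; for static binaries it is gone unless someone records it at build time.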

> But can be built, now that more and more languages have access to language package managers with proper dependency tracking.

Actually, approaches such as Nix's buildRustCrate (where every transitive crate dependency is represented as a Nix derivation) combined with declarative package management offer this today.

But with curl | bash or traditional package managers, which is how most software is distributed today, this kind of dependency tracking is hard and ad hoc.

> But can be built, now that more and more languages have access to language package managers with proper dependency tracking.

But then a static C library is used and nobody knows where it came from. Even if you look at the Rust ecosystem, which generally does things well when it comes to dependency handling, crates are all over the place when it comes to native libraries. I have seen everything from crates that use a system library (or something discoverable via pkg-config), via crates that have the library sources as a git submodule and build them in their build script, to crates that download a precompiled library from some shared Dropbox link.

Another fun example, from another language ecosystem: numpy uses OpenBLAS, and its binary wheels are compiled on CI. However, OpenBLAS itself is retrieved as a precompiled binary from another project [1]. The rabbit hole goes deeper: when OpenBLAS is built for macOS, a precompiled disk image is retrieved from yet another repository [2]. That disk image was added to the repository, but comes from yet another place [3].

This is all sort of the opposite of the lessons to take from Reflections on Trusting Trust and of the bootstrapping work that the Guix folks are doing.

Anyway, with the mindset that most developers have, we will never have proper dependency tracking.

[1] https://github.com/MacPython/openblas-libs

[2] https://github.com/MacPython/gfortran-install/tree/d430fe6e3...

[3] http://coudert.name/software/gfortran-4.9.0-Mavericks.dmg





