My first experience with the boehm-gc was a long time ago, when I was using a very performance-intensive AI library. As an experiment, I modified it to use the boehm-gc and, surprisingly, it actually became faster.
I've since learned that such a speed improvement when manual memory management is replaced with the boehm-gc is not uncommon.
'Performance' isn't just raw averaged throughput; it often includes worst-case latency too. There are a lot of performance-sensitive applications where a GC isn't a great fit.
This is true, but a lot of those latency-sensitive applications have hard latency requirements: you have to finish something within X time, but finishing it faster than X isn't actually useful.
There are realtime GCs that can meet these hard requirements.
Every hard real-time GC I've seen comes with overall throughput compromises. And in many applications, real-time constraints and high overall throughput aren't mutually exclusive requirements; you need both, so trading one away for the other doesn't work.
And even on the soft real-time side, like rendering a modern GUI, GC pauses that cause frame skips (you have ~16ms to render each frame at 60fps) make your app look janky.
Totally, that could absolutely be the case. But it probably won't be. The reality is that most C programs do use manual memory management, by a very wide margin.