
Thank you.

> I don't know Nim, but I believe it has a garbage collector so it could be tricky to use for kernel programming.

You're right. Still good for libraries though (or apps, but that may be outside of "system").



You're right, it is confusing, but it is optional: some toy kernels already work in Nim, and with the latest work on memory management you should be able to use most of the language for kernel development! It's not the perfect language for that yet, but I hope we'll see more Nim OS examples.


Are you saying that the GC is optional? If you don't use it, how do you allocate/free memory?


You call malloc/free, or, if working with a GPU, cudaMalloc/cudaFree. You can write your own memory pools or object pools, you can use destructors, and you can even implement your own reference-counting scheme.
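
For illustration, a minimal sketch (not from the thread) of calling the C allocator directly from Nim, usable with --gc:none or --gc:arc; the Buffer type and helper names are made up for the example:

    # Hypothetical sketch: manual allocation in Nim without the GC.
    proc c_malloc(size: csize_t): pointer {.importc: "malloc", header: "<stdlib.h>".}
    proc c_free(p: pointer) {.importc: "free", header: "<stdlib.h>".}

    type Buffer = object
      len: int
      data: ptr UncheckedArray[float32]

    proc newBuffer(len: int): Buffer =
      # Raw, untraced memory; the GC never sees it.
      result.len = len
      result.data = cast[ptr UncheckedArray[float32]](c_malloc(csize_t(len * sizeof(float32))))

    proc free(b: var Buffer) =
      # Return the memory to the C allocator.
      if b.data != nil:
        c_free(b.data)
        b.data = nil

    var buf = newBuffer(1024)
    buf.data[0] = 1.0
    buf.free()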

This is what I use for my own multithreading runtime in Nim. The memory subsystem makes it faster and more robust than any runtime I've been benchmarking against (including OpenMP and Intel TBB); see the memory subsystem details here: https://github.com/mratsim/weave/tree/master/weave/memory

Example of atomic refcounting in this PR here: https://github.com/mratsim/weave/blob/025387510/weave/dataty...

Also, one important thing: Nim's current GC is based on TLSF (http://www.gii.upv.es/tlsf/), a memory allocation scheme for real-time systems that provides provably bounded O(1) allocations. You can tune Nim's GC with a max pause for latency-critical applications.
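
As a small illustration of that knob (a sketch; GC_setMaxPause and GC_step live in Nim's system module and apply to the default refc GC, not to ARC):

    # Bound each collection step to roughly 100 microseconds (refc GC only).
    GC_setMaxPause(100)
    # At a convenient point in a latency-critical loop you can also
    # explicitly run a bounded amount of collection work:
    GC_step(us = 100)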


Does the standard library use malloc/free, or does it depend on the GC? This is the part that's puzzling to me: if the stdlib depends on the GC, then it's harder to say that the GC is optional. Technically optional, but not super practical.


The majority of stdlib modules do not depend on the GC.

Also, the new ARC memory manager replaces the GC and can run in a kernel.


No, that's not true.

As soon as you use sequences, strings, or async, you depend on the GC.

You can, however, compile with --gc:destructors or --gc:arc so that those are managed by RAII.
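
For example (a sketch, assuming a build like nim c --gc:arc prog.nim; flag spellings vary across Nim versions), seqs and strings are then released deterministically by compiler-injected destructors instead of a tracing GC:

    proc work() =
      var words: seq[string] = @["kernel", "driver"]
      words.add("scheduler")
      echo words.len
      # Under --gc:arc, `words` and its strings are destroyed here, at scope exit.

    work()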


I used various modules with --gc:none.

I meant that the new ARC GC, which will replace the current one, can be used for a kernel.

It's still a GC, technically, but, quoting Araq on ARC:

Nim is getting the "one GC to rule them all". However calling it a GC doesn't do it justice, it's plain old reference counting with optimizations thanks to move semantics.
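
To illustrate what "reference counting with optimizations thanks to move semantics" means in practice (a sketch, not from the comment): when the compiler can prove ownership is transferred, it skips the copy and the refcount traffic entirely.

    proc consume(s: sink string) =
      # s takes ownership; the string is destroyed when the proc returns.
      echo s.len

    proc main() =
      var msg = "hello"
      consume(move msg)  # explicit move: no copy, no incref/decref
      # msg is left in a moved-from (empty) state here.

    main()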


Overall, I think for small OSes you can easily write a micro libc core in C or even in Nim, where you define malloc/free etc. and just use them directly, as in C and Zig (see the sketch after this comment).

Otherwise, you should be able to use something like destructors eventually.
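
A very rough sketch of that idea (hypothetical, assuming a freestanding build along the lines of --os:standalone --gc:none): a bump allocator exported under the C names malloc/free, so both Nim and any linked C code can allocate.

    var
      heap: array[1 shl 20, byte]   # 1 MiB static heap
      offset: int

    proc malloc(size: csize_t): pointer {.exportc.} =
      # Bump allocation: round up to 16-byte alignment, hand out the next slice.
      let aligned = (offset + 15) and not 15
      if aligned + int(size) > heap.len:
        return nil
      result = addr heap[aligned]
      offset = aligned + int(size)

    proc free(p: pointer) {.exportc.} =
      discard  # a bump allocator never reclaims; fine for a toy kernel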


I didn't test this, but here are some clues: https://nim-lang.org/docs/gc.html

Then I guess you would use new/dealloc
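
In that spirit, an untested sketch of untraced allocation via the routines described on that page (alloc0/dealloc bypass the GC entirely):

    type Node = object
      value: int

    # Zeroed, untraced memory; must be freed manually.
    let n = cast[ptr Node](alloc0(sizeof(Node)))
    n.value = 42
    dealloc(n)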



Nim's GC is optional


How can it be optional if there's a lot of code that assumes you are using the GC? For example, as far as I can tell the stdlib doesn't do its own allocations. Does this mean you can't use the stdlib with the GC disabled? Or am I missing something here?


A friend of mine wrote a DSL for audio using Nim with the GC off.

https://github.com/vitreo12/omni

I don't know enough to comment, but it may be useful to look at things in the wild. This project also relies heavily on calling into C to interface with host environments and the SuperCollider scsynth.


Overall, if you write an OS you need to write your own malloc/free, and then you should be able (I think) to use many of the GCs anyway, and more of the stdlib.

But you can also think of Nim as a macro-able, higher-level C and write it like that (though you probably still need this minimal allocation support).



