Doesn’t 42 bits of address space seem terribly shortsighted? I know most current x86-64 hardware only implements 48-bit virtual addressing anyway, but at $dayjob we already own servers with 2 TB of RAM.
Baking a 4TB limit into new GC code seems... unwise.
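For concreteness, the arithmetic behind the numbers above (a quick sketch, nothing ZGC-specific):

    // Sanity check of the address-space arithmetic in plain Java.
    public class AddressBits {
        public static void main(String[] args) {
            long tib = 1L << 40;                  // 1 TiB in bytes
            System.out.println((1L << 42) / tib); // 42 address bits ->   4 TiB
            System.out.println((1L << 48) / tib); // 48 address bits -> 256 TiB
        }
    }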
So, interestingly, they are storing extra metadata in the object pointers, which means no more compressed oops (i.e. 32-bit pointers). Curious what the effect of that is on heap sizes, considering most JVMs run with < 32 GB heaps.
I think others do this as well. As I recall, Windows encodes read/write/execute memory permissions into a mask in the top bits of the pointer so that it can avoid a table lookup when it takes a fault.
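To illustrate the general idea of stashing metadata in otherwise-unused high pointer bits, here's a small sketch of the technique. The bit positions and flag names are invented for this example and do not correspond to ZGC's actual colored-pointer layout or to anything Windows does:

    // Illustrative only: pack metadata flags into the high bits of a 64-bit
    // "pointer" value, leaving 42 bits for the address.
    public class TaggedPointer {
        static final int  ADDRESS_BITS = 42;
        static final long ADDRESS_MASK = (1L << ADDRESS_BITS) - 1;
        static final long MARKED_BIT   = 1L << 42;   // hypothetical GC mark flag
        static final long REMAPPED_BIT = 1L << 43;   // hypothetical relocation flag

        static long tag(long address, boolean marked, boolean remapped) {
            long p = address & ADDRESS_MASK;
            if (marked)   p |= MARKED_BIT;
            if (remapped) p |= REMAPPED_BIT;
            return p;
        }

        static long address(long tagged)     { return tagged & ADDRESS_MASK; }
        static boolean isMarked(long tagged) { return (tagged & MARKED_BIT) != 0; }

        public static void main(String[] args) {
            long p = tag(0x12345678L, true, false);
            System.out.printf("addr=%#x marked=%b%n", address(p), isMarked(p));
        }
    }

The cost the original comment points at is exactly the address mask: once the high bits carry metadata, the pointer has to be a full 64-bit word, so compressed 32-bit oops are off the table.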
This seems to be the same as what Red Hat is working on with Shenandoah. I don't understand how the goals differ, and from a 1,000-foot, incomplete view the basic designs seem similar too.
- Shenandoah could in theory run on Windows; AFAIK this is a non-goal for ZGC.
- Shenandoah tries to return unused heap to the OS, whereas currently the recommendation for ZGC seems to be -Xms == -Xmx (see the sketch after this list). In addition, ZGC triple-maps the heap, which can lead to interesting challenges in resource usage accounting.
- Neither of them is generational, although Shenandoah allows for custom policies.
- Both of them seem to disable biased locking by default. I would guess the latency of deoptimization is simply too large.
- Shenandoah supports pinning objects in JNI criticals without disabling the GC.
- Somewhat unsurprisingly, ZGC introduces several HotSpot latency optimizations that will also benefit Shenandoah (thread-local handshakes, concurrent reference processing, ...).
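Regarding the -Xms == -Xmx point above: here's a small sketch, using only the standard java.lang.management API, for watching committed vs. maximum heap from inside a process. With -Xms == -Xmx the committed size matches the max from startup; with a collector that uncommits memory it can shrink while the process idles. The GC flags you'd combine it with (e.g. -XX:+UseShenandoahGC, or -XX:+UseZGC behind -XX:+UnlockExperimentalVMOptions on the JDKs where ZGC is still experimental) are outside the snippet and version-dependent:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Print used/committed/max heap once a second so the commit behaviour
    // of different GC/flag combinations can be compared.
    public class HeapWatch {
        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < 10; i++) {
                MemoryUsage heap =
                        ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("used=%dM committed=%dM max=%dM%n",
                        heap.getUsed() >> 20,
                        heap.getCommitted() >> 20,
                        heap.getMax() >> 20);
                Thread.sleep(1000);
            }
        }
    }

Note that this only shows what the JVM thinks it has committed; the triple-mapping issue mentioned above is about how external tools account for the same physical memory appearing under multiple virtual mappings.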