It would be fair to mention that this is happening for you with a decade old GPU using an EOL driver, which sucks, but is unlikely to be a common experience.
> It would be fair to mention that this is happening for you with a decade old GPU using an EOL driver, which sucks, but is unlikely to be a common experience.
The drivers are still being maintained by NVIDIA until August 2026. They were also only classified as "legacy" on paper one day before I installed them.
The compositor memory leak is affecting a lot of people. Since COSMIC and niri are both built on the same library (smithay), there are threads on GitHub from people on modern GPUs, both NVIDIA and AMD, who experience it. There are a lot of replies across all of the different open issues.
The GPU allocation issue on Wayland (separate from the memory leak) also has hundreds of replies on the NVIDIA developer forums from people using new NVIDIA cards with the latest drivers.
The thing is, most people don't talk about either of them, because if you have 8+ GB of GPU memory and turn your computer off every night you won't notice the problem: all GPU memory allocations get reset on shutdown. It happens to be a more direct problem for me because I have 2 GB of GPU memory, but that doesn't mean the problem isn't common; the root cause is still there. Even if I switched to an AMD GPU, the niri / smithay memory leak would still be present. With 8 GB of GPU memory, instead of rebooting twice a day I'd have to reboot every two days (basically 4x the headroom).
Since I opened that issue on GitHub, NVIDIA did acknowledge it and suggested I try their experimental egl-wayland2 library. I did, and it hasn't fixed things fully, but it has made GPU memory allocations more stable and even fixed one type of leak in niri. As far as I know this library is decoupled from the drivers themselves; the same library could still be used with the 590 series. It's not 580-specific, which means it's not dependent on your GPU model.
What this does on typical extent-based file systems is split the file's extent at the given location (which is why these operations can only be done with cluster granularity) and then insert a third extent, i.e. calling INSERT_RANGE once gives you a file with at least three extents (fragments). This, plus the alignment requirements that depend on the mkfs options, makes it really quite uninteresting for broad use, in much the same way that O_DIRECT is uninteresting.
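For concreteness, here is a minimal Linux sketch of the call being described (the file name and the 4096-byte values are placeholders; the real alignment requirement is the filesystem's block size, which varies with mkfs options):

    /* Sketch: insert 4 KiB of space at offset 4 KiB in an existing file.
     * Both offset and len must be aligned to the filesystem block size,
     * otherwise the call fails with EINVAL - that is the cluster-granularity
     * restriction mentioned above. Requires _GNU_SOURCE for fallocate() and
     * the FALLOC_FL_* flags from glibc's <fcntl.h>. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDWR);          /* placeholder path */
        if (fd < 0) { perror("open"); return 1; }

        /* Splits the extent covering offset 4096 and shifts everything after
         * it outwards by 4096 bytes, leaving at least three extents behind. */
        if (fallocate(fd, FALLOC_FL_INSERT_RANGE, 4096, 4096) < 0)
            perror("fallocate(FALLOC_FL_INSERT_RANGE)");

        close(fd);
        return 0;
    }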
Well, better an uninteresting solution than a solution which is actively terrible: appending changes to a PDF, which will inflate its size and cause data leakage.
In reality many 90s cars are phenomenal rust buckets due to issues in the adoption of water-based paints; cars which actually still have tangible amounts of steel in their panels are basically golden samples.
MCP can wrap things which have stateful processes, debuggers for example. Agents will use batch mode, but it is quite limited, and because tool calls are always implemented as synchronous invocations, non-batch mode doesn't work for tool calls. MCP solves this by giving the agent a handle it can refer to across multiple invocations.
Burns a lot of tokens though, and if you need more than batch-mode gdb to debug something, the chances of an agent solving it today are very slim.
> and due to tool calls always being implemented as synchronous invocations
Claude Code will happily start long-running processes and put them in the background, and it is able to refer back to them. You don't need MCP for that - you can hand the model handles to refer to background jobs just fine with plain tool-calling.
I'm guessing v4 C didn't have structs yet (v6 C does, but struct members are actually in the global namespace and are basically just sugar for an offset and a type cast; member access even worked on literals). That's why structs from early Unix APIs have prefixed member names, like st_mode.
There may have been an early C without structs (B had none), but according to Ken Thompson, the addition of structs to C was an important change, and a reason why his third attempt at rewriting UNIX from assembly in a portable language finally succeeded. Certainly by the time the recently recovered v4 tape was made, C had structs:
~/unix_v4$ cat usr/sys/proc.h
struct proc {
        char p_stat;
        char p_flag;
        char p_pri;
        char p_sig;
        char p_null;
        char p_time;
        int p_ttyp;
        int p_pid;
        int p_ppid;
        int p_addr;
        int p_size;
        int p_wchan;
        int *p_textp;
} proc[NPROC];
/* stat codes */
#define SSLEEP 1
#define SWAIT 2
#define SRUN 3
#define SIDL 4
#define SZOMB 5
/* flag codes */
#define SLOAD 01
#define SSYS 02
#define SLOCK 04
#define SSWAP 010
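To illustrate the earlier point about member names living in one global namespace: here's a sketch in modern C (names and values hypothetical, not how those compilers were actually written) of roughly what member access desugared to, an offset plus a type cast:

    /* Sketch of what early compilers effectively did for "p.p_stat": the
     * member name was a global (offset, type) pair, so access reduced to
     * pointer arithmetic plus a cast on whatever lvalue you gave it. */
    #include <stdio.h>
    #include <stddef.h>

    struct proc { char p_stat; char p_flag; };

    int main(void)
    {
        struct proc p = { 3, 1 };
        /* "p.p_stat" spelled out as offset + cast: */
        char stat = *(char *)((char *)&p + offsetof(struct proc, p_stat));
        printf("%d\n", stat);   /* prints 3 */
        return 0;
    }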
This is a very Reddit comment. You can move to Oklahoma and get a brand-new construction house for under $300k. But you won't, because you want to live within an hour or so of the same dozen major US cities everyone else wants to live close to.
The houses as structures aren't going up in value (any more than the price of construction materials and labor has). It's the land that's appreciating faster than inflation in most of the cases you're complaining about.