It's more like replacing an old calculator with a shiny new graphing one that can solve ODEs, but basic addition has been declared "out of scope" so it doesn't have it. Well, now you need another calculator, built into your toaster, just to add numbers, and every vendor implements it differently.
But why is my window manager now also my display server? The (relatively simple) program that decides how to lay out my windows is clearly a separate concern from the (complex) program that actually interfaces with the GPU to render the screen.
Having to worry about whether your WM supports your GPU is like having to worry about whether your web browser supports your NIC.
Gnome is not your window manager - it’s your display server. For example, you can trivially install or write an extension that manages your windows differently while reusing everything Gnome provides.
A WM has always been trivial compared to a display server; people just mistake one for the other.
Yes, that's the point I'm making. It is now a display server. On X it was just a desktop environment. Traditionally that comprises:
* A set of utility X applications, like a panel and launcher
* A recommended window manager
* A recommended display manager
And then this all runs on top of a separate X display server.
Generally the last two are easy to replace; for example, I have used both Compiz and Xmonad with Gnome 2 and XFCE. Gnome was probably a bad example for me to pick, since modern Gnome does merge the first two bullets into one, but it's still separate from the display server and can be launched from other display managers, or none (just startx).
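For the startx case, what that looks like in practice is a one-line ~/.xinitrc (a rough sketch; the exact session command depends on your distro and desktop):

    # ~/.xinitrc -- run by startx once the bare X server is up
    exec gnome-session      # or: exec startxfce4, exec xmonad, ...

The point being that the session/DE starts as an ordinary client of whatever X server is already running.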
Again, it would be like Firefox needing to talk directly to my network card, rather than the actual packet sending being managed by the layer below (the OS, in that case).
The Firefox one might be a good analogy to continue: a window manager is akin to a web extension. They are browser-dependent: a Firefox one won’t work on Chrome, and vice versa [1]
Gnome/KDE/wlroots are akin to separate browsers implementing the same protocol (HTTP, in the analogy). It’s a lot of work, and plenty of projects choose to fork an existing code base instead (the Chromium-based ones), but over time people will consolidate on a few. But you surely wouldn’t want an all-Safari or all-Chrome browser “ecosystem”, right? (Though unfortunately we are not far from the latter.) That’s practically what the Xserver gave us.
[1] There is some compatibility but let’s forget about that for now.
> Yes that's the point I'm making. It is now a display server. On X it was just a desktop environment.
I think this is the point, but maybe in the opposite way from what you mean: X was the wrong architecture for what we want to achieve with a display system these days.
Some of that is about security, and some of it is about which piece of the system knows about what and is responsible for what in order to make the whole thing work. Wayland changes the responsibilities significantly. This is a GOOD thing - it's why things like HiDPI can be made to work in Wayland and fundamentally don't in X (unless you control a lot of variables very carefully).
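To make the HiDPI example concrete, here is a rough sketch of the difference (output names and values are made up): in a Wayland compositor the scale factor is a per-output property owned by the compositor, whereas on X the usual workaround is a single global DPI hint that every toolkit is expected to honour.

    # sway (Wayland compositor) config: scale is set per output
    output eDP-1 scale 2     # HiDPI laptop panel drawn at 2x
    output DP-1  scale 1     # external 1080p monitor at 1x

    # X11 workaround: one global hint, e.g. in ~/.Xresources
    Xft.dpi: 192

With the global hint, a mixed-DPI setup is stuck: whatever value you pick is wrong on one of the monitors.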
I still think having multiple display server implementations is a mistake.
Qt/KDE is lagging behind, and there is no easy way for them to catch up.
Had the display server been a single implementation, Qt/KDE could focus on the WM and widgets.
It’s not ideal, but what is the alternative? We couldn’t even agree on an implementation language, and there are many more decisions that are display-server specific.
Wlroots is there for those who want it and can accept its design tradeoffs, and it is used by plenty of niche WMs, so I think we have it about as good as is feasible in a bazaar-style, “everyone works on what they like” fashion. Without a higher power to spearhead a single project, it can’t really turn out differently.
With a compositor other than the Gnome one?