Do your Makefiles work across Linux, macOS and Windows (without WSL or MinGW), with GCC, Clang and MSVC, and do they allow loading the project into an IDE like Xcode or Visual Studio, though? That's why meta-build systems like cmake were created, not to be a better GNU Make.
Ok, then just cl.exe instead of gcc or clang. It has a completely different set of command-line options from gcc and clang, but that's fine. C/C++ build tooling needs to be able to deal with different toolchains. The diversity of C/C++ toolchains is a strength, not a weakness :)
One nice feature of MSVC is that you can describe the linker dependencies in the source files (via #pragma comment(lib, ...)), which makes it possible to build fairly complex single-file tools trivially, without a build system, like this:
cl mytool.c
...without having to specify system dependencies like kernel32 etc... on the cmdline.
Cmake is doing a lot of underappreciated work under the hood that would be very hard to replicate in another tool: tons of accumulated workarounds for all the different host operating systems, compiler toolchains and IDEs. It's also one of the few build tools that properly supports Windows and Visual Studio.
Reverse engineering the Xcode and Visual Studio project file formats for each IDE version alone isn't fun, but this "boring" grunt work is what makes cmake so valuable.
The core ideas of cmake are sound, it's only the scripting language that sucks.
...and for custom requirements a manually created CMakeLists.extras.txt as escape hatch.
Unclear to me how more interesting scenarios like compiler- and platform-specific build options (enable/disable warnings, defines, etc...), cross-compilation via cmake toolchain files (e.g. via Emscripten SDK, WASI SDK or Android SDK/NDK) would be handled. E.g. just trivial things like "when compiling for Emscripten, include these source files, but not those others".
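For reference, in plain cmake that last scenario is typically handled with toolchain-specific conditionals; a hedged sketch (the target and file names are invented for illustration):

```cmake
# Hypothetical sketch: per-toolchain source selection and build options.
# `mytool`, backend_wasm.c and backend_native.c are made-up names.
if(EMSCRIPTEN)
    # EMSCRIPTEN is set by the Emscripten SDK's cmake toolchain file
    target_sources(mytool PRIVATE backend_wasm.c)
    target_link_options(mytool PRIVATE "-sUSE_WEBGL2=1")
else()
    target_sources(mytool PRIVATE backend_native.c)
endif()

if(MSVC)
    target_compile_options(mytool PRIVATE /W4)
else()
    target_compile_options(mytool PRIVATE -Wall -Wextra)
endif()
```

How a simpler meta-tool would express the same per-platform branching without growing its own scripting language is exactly the open question.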
One interesting chicken-and-egg problem I couldn't solve is how to figure out the C/C++ toolchain that's going to be used without running cmake on a 'dummy project file' first. For some toolchain/IDE combos (most notably Xcode and VStudio), cmake's toolchain detection unfortunately takes a lot of time.
I'm intrigued by the idea of writing one's own custom build system in the same language as the target app/game; it's probably not super portable or general but cool and easy to maintain for smaller projects: https://mastodon.gamedev.place/@pjako/115782569754684469
Mario Zechner aka badlogic - (co?)creator of libGDX (for us old farts who were around in the early Android days): https://libgdx.com/
Later also heavily involved with Spine, which IME is still the de facto industry standard for 2D skinned animation in mobile/web games: https://esotericsoftware.com/
With the release of D3D9 in 2002, GPUs of different vendors didn't really stand out anymore since they all implemented the same feature set anyway (and that's a good thing).
IMO there’s room for something more recent, maybe a Titan or something, to stand in as an avatar for making GPUs as compute accelerators a thing. I know that’s been going on forever, but at some point it went from some niche hacky thing to a primary use-case for the cards.
But yeah, this list has a ton of incremental bumps on it. Maybe there was some mixing of cards that mattered historically and cards that mattered to the author.
Nvidia Turing (RTX 20) definitely marked a major shift IMO.
- It was the first card to enable real-time ray-traced effects.
- Mesh shaders are a significant overhaul of the geometry pipeline that's only recently getting real traction.
- Its tensor cores enabled a new generation of AI-driven upscaling/antialiasing. DLSS 2, FSR 4 and XeSS are all some variation of "TAA + neural networks", and these all rely on specialized matrix hardware to get optimal performance.
Obviously, all of these features are now supported across all vendors. Intel Arc Alchemist has them as well, and AMD got RT and mesh shader support with RDNA2, then slowly built up to tensor cores with RDNA3/4. But Turing clearly debuted these features, which have majorly changed the landscape of realtime 3D graphics.
It was only 3dfx and NVIDIA (since the TNT) that mattered in the 1990s though. All the other 3D accelerators were only barely better than software rasterization, if at all.
Seeing Quake II run butter smooth on a Riva TNT at 1024x768 for the first time was like witnessing the second coming of Christ ;)
Before that, you could even run Quake with anti-aliasing on one of those "barely better than software rasterization" cards, something that couldn't be done on the first Voodoo cards.
Matrox was really halfhearted with game support. They seemed far more interested in corporate customers, heavily advertising stuff like "VR" conference calls that nobody wanted. They were early with multi-monitor support, back when monitors were big, heavy, and expensive. I had a G200, the last video card I've ever seen where you could expand the VRAM by slotting in a SODIMM. It also had composite out, so you could hook it up to a TV. I played a lot of games on it, up until Return to Castle Wolfenstein, which was almost playable, but the low-res textures looked real bad and the framerate would drop precipitously at critical times, like when a bunch of Nazis rushed into the room and started shooting.
Last time I saw a Matrox chip it was on a server, and somehow they had cut it down even more than the one I had used over a decade earlier. As I recall it couldn't handle a framebuffer larger than 800x600, which was sometimes a problem when people wanted to install and configure Windows Server.