Hacker News | flohofwoe's comments

Sounds like a Europa Universalis player tbh ;)

Do your Makefiles work across Linux, macOS and Windows (without WSL or MingW), GCC, Clang and MSVC, or allow loading the project into an IDE like Xcode or Visual Studio though? That's why meta-build-systems like cmake were created, not to be a better GNU Make.

There is something fundamentally wrong with Windows or Visual Studio if they require such ugly solutions.

Windows and Visual Studio solutions are perfectly fine. MSBuild is a declarative build syntax in XML, it's not very different from a makefile.
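For comparison, a minimal hand-written MSBuild project (an illustrative sketch, not a generated .vcxproj; the file and target names are made up) is structurally not far from a makefile rule:

```xml
<!-- build.proj: run with `msbuild build.proj` from a VS developer prompt. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Sources Include="mytool.c" />
  </ItemGroup>
  <Target Name="Build">
    <!-- roughly the equivalent of a makefile rule body -->
    <Exec Command="cl @(Sources, ' ')" />
  </Target>
</Project>
```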

XML is already terrible. But the main problem seems to be that they created something similar to, but incompatible with, make.

Ok, then just cl.exe instead of gcc or clang. Completely different set of command line options from gcc and clang, but that's fine. C/C++ build tooling needs to be able to deal with different toolchains. The diversity of C/C++ toolchains is a strength, not a weakness :)

One nice feature of MSVC is that you can declare linker dependencies directly in the source files (via #pragma comment(lib, ...)); this makes it trivial to build fairly complex single-file tools without a build system, like this:

   cl mytool.c
...without having to specify system dependencies like kernel32 etc. on the cmdline.

Cmake is doing a lot of underappreciated work under the hood that would be very hard to replicate in another tool: tons of accumulated workarounds for all the different host operating systems, compiler toolchains and IDEs. It's also one of the few build tools that properly supports Windows and Visual Studio.

Reverse engineering the Xcode and Visual Studio project file formats for each IDE version alone isn't fun, but this "boring" grunt work is what makes cmake so valuable.

The core ideas of cmake are sound, it's only the scripting language that sucks.


I suspect it depends on a specific directory structure, e.g. look at this generated cmake file:

https://github.com/randerson112/craft/blob/main/CMakeLists.t...

...and for custom requirements a manually created CMakeLists.extras.txt as escape hatch.

Unclear to me how more interesting scenarios like compiler- and platform-specific build options (enable/disable warnings, defines, etc...), cross-compilation via cmake toolchain files (e.g. via Emscripten SDK, WASI SDK or Android SDK/NDK) would be handled. E.g. just trivial things like "when compiling for Emscripten, include these source files, but not those others".
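In hand-written cmake, the Emscripten case would typically be handled via CMAKE_SYSTEM_NAME, which the Emscripten SDK's toolchain file sets to "Emscripten". A sketch with hypothetical target and file names:

```cmake
# Hypothetical CMakeLists.txt fragment: per-platform source selection.
target_sources(mygame PRIVATE app_common.c)
if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
    # only compiled when building through the Emscripten toolchain file
    target_sources(mygame PRIVATE app_html5.c)
    target_link_options(mygame PRIVATE -sUSE_WEBGL2=1)
else()
    target_sources(mygame PRIVATE app_native.c)
endif()
```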


Heh, looks like cmake-code-generators are all the rage these days ;)

Here's my feeble attempt using Deno as base (it's extremely opinionated though and mostly for personal use in my hobby projects):

https://github.com/floooh/fibs

One interesting chicken-egg-problem I couldn't solve is how to figure out the C/C++ toolchain that's going to be used without running cmake on a 'dummy project file' first. For some toolchain/IDE combos (most notably Xcode and VStudio) cmake's toolchain detection takes a lot of time unfortunately.


I'm intrigued by the idea of writing one's own custom build system in the same language as the target app/game; it's probably not super portable or general but cool and easy to maintain for smaller projects: https://mastodon.gamedev.place/@pjako/115782569754684469

Same here, 27%, I am disappoint (although maybe the test doesn't account for East-German-ness)

I would argue East Germans are more German than average Germans

Mario Zechner aka badlogic - (co?)creator of libGDX (for us old farts who were around in the early Android days): https://libgdx.com/

Later also heavily involved with Spine, which IME is still the de facto industry standard for 2D skinned animation in mobile/web games: https://esotericsoftware.com/


Ah, that guy! I think I've seen him give a talk about Spine at Game Dev Days Graz a couple of years ago.

Pascal is already too late to matter (2016) IMHO.

With the release of D3D9 in 2002, GPUs of different vendors didn't really stand out anymore since they all implemented the same feature set anyway (and that's a good thing).


IMO there’s room for something more recent, maybe a Titan or something, to stand in as an avatar for making GPUs as compute accelerators a thing. I know that’s been going on forever, but at some point it went from some niche hacky thing to a primary use-case for the cards.

But yeah, this list has a ton of incremental bumps on it. Maybe there was some mixing of cards that mattered historically and cards that mattered to the author.


Nvidia Turing (RTX 20) definitely marked a major shift IMO.

- It was the first card to enable real-time ray-traced effects.

- Mesh shaders are a significant overhaul of the geometry pipeline that's only recently getting real traction.

- Its tensor cores enabled a new generation of AI-driven upscaling/antialiasing. DLSS 2, FSR 4 and XeSS are all some variation of "TAA + neural networks", and these all rely on specialized matrix hardware to get optimal performance.

Obviously all of these features are supported across all vendors. Intel Arc Alchemist has all of these features as well, and AMD got RT and mesh shader support with RDNA2 along with slowly building up to tensor cores with RDNA3/4. But Turing clearly debuted these features, which have majorly changed the landscape of realtime 3D graphics.


Kind of, because they still had different kinds of limitations, and that played a role in what is available to shaders ever since.

Just like nowadays not all mesh shader, compute, ray tracing or DirectStorage implementations are born alike across all vendors.

Usually one can expect that an Intel GPU will never deliver as much as an AMD one, let alone an NVidia one.

Naturally this focuses on the PC space; if we take mobile, game consoles, or the Apple ecosystem into account, there are many other factors.


It was only 3dfx and NVIDIA (since the TNT) that mattered in the 1990s though. All the other 3D accelerators were only barely better than software rasterization, if at all.

Seeing Quake II run butter smooth on a Riva TNT at 1024x768 for the first time was like witnessing the second coming of Christ ;)


Rendition's VQuake was actually pretty good, more than barely better than software rasterization.

Edge anti-aliased polygons!

Before that, you could even run Quake with anti-aliasing on one of those "barely better than software rasterization" cards, something that couldn't be done on the first Voodoo cards.

> S3 ViRGE and the Matrox G200

Both were only really famous for how terrible they were though. I think the S3 ViRGE might even qualify as a 3D decelerator ;)


The only thing the ViRGE was good for was passing through to a Voodoo2

But it WAS ultra popular with OEMs. If you had embedded video there was a huge chance that was it.

Matrox was really halfhearted with game support. They seemed far more interested in corporate customers, heavily advertising stuff like "VR" conference calls that nobody wanted. They were early with multi-monitor support back when monitors were big, heavy, and expensive.

I had a G200, which was the last video card I've ever seen where you could expand the VRAM by slotting in a SODIMM. It also had composite out so you could hook it up to a TV. I played a lot of games on it, up until Return to Castle Wolfenstein, which was almost playable, but the low-res textures looked really bad and the framerate would drop precipitously at critical times, like when a bunch of Nazis rushed into the room and started shooting.

Last time I saw a Matrox chip it was on a server, and somehow they had cut it down even more than the one I had used over a decade earlier. As I recall it couldn't handle a framebuffer larger than 800x600, which was sometimes a problem when people wanted to install and configure Windows Server.

