Hacker News

>but I really hope we'll soon move to a compute-only way of handling graphics.

Would you have the time to expand on this thought a bit? I am curious. Thanks!!



Software rendering. In parallel on the GPU.


I mean, about 5 years ago there were many folks trying to do raytracing using GPU compute rather than the current methods, essentially treating the GPU as a giant parallel software renderer. The results were pretty good even then.


Raytracing using GPU compute is pretty much the norm for offline rendering. For example, Blender's Cycles renderer has support for all major GPUs: https://docs.blender.org/manual/en/latest/render/cycles/gpu_...

It is telling of the GPU compute landscape that there is a separate implementation for each vendor.


I don't get what that means. You mean configuring a shader to render pixels?

How do you think OpenGL/Vulkan work under the hood? It's been a very long time since fixed-function pipelines.


The standard rendering pipeline is still fairly fixed and mandates shaders for vertices and fragments. With software rasterization, there are no more vertex or fragment shaders, only compute. And you don't use the hardware rasterization units - you rasterize the triangles yourself and write the results to the framebuffer with atomic-min/max operations. You decide for yourself how and when you compute the shading in your compute shader. This can be multiple times faster for small triangles, and 10-100 times faster for points. And once you do things that way, there isn't much point to graphics APIs anymore - everything is just buffers and functions that process them.
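The atomic-min trick above works by packing depth into the high bits of a 64-bit word and color into the low bits, so a single atomicMin per fragment keeps the nearest one. A minimal sketch of the idea (in Python standing in for the compute shader; the GPU version would use a real atomicMin, and all names here are illustrative):

```python
WIDTH, HEIGHT = 4, 4
# 64-bit framebuffer, cleared to "infinitely far" (all ones).
framebuffer = [0xFFFFFFFFFFFFFFFF] * (WIDTH * HEIGHT)

def write_fragment(x, y, depth, color):
    # Depth goes in the high 32 bits, color in the low 32 bits,
    # so comparing packed values compares depth first: the nearest
    # fragment (smallest depth) wins the min.
    packed = (depth << 32) | color
    idx = y * WIDTH + x
    # On the GPU this min() would be a single atomicMin, making the
    # depth test and color write one indivisible operation.
    framebuffer[idx] = min(framebuffer[idx], packed)

# Two fragments land on the same pixel; the nearer one wins.
write_fragment(1, 1, depth=500, color=0x00FF00)  # green, nearer
write_fragment(1, 1, depth=900, color=0xFF0000)  # red, farther
visible_color = framebuffer[1 * WIDTH + 1] & 0xFFFFFFFF
```

Depth test and color write collapse into one atomic, which is why no fixed-function depth/blend hardware is needed for opaque geometry.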


Why is there a pipeline at all? Give me a framebuffer, a place to upload some CUDA or equivalent, and some dumb data pipe between the host program and GPU program.



