
I'd begin by adding a new architecture, and merging an existing library which can convert expressions to GPU kernels -- same as he's doing now with his LLVM-based one. Why would it be any different? Is LLVM so GPU-friendly that it's worth it to write 250K lines of code just to be able to call it a bit more easily?


Yes, it is. The LLVM toolchain is directly supported by Khronos for GPGPU standards (LLVM <-> SPIR-V integration), NVIDIA also builds on it (the CUDA compiler is based on LLVM via NVVM), and it has backends used for shader emulation on CPUs.
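For a concrete picture of that Khronos integration: the SPIRV-LLVM-Translator project provides an `llvm-spirv` tool that converts between LLVM bitcode and SPIR-V. A minimal sketch, assuming clang and llvm-spirv are installed and a `kernel.cl` OpenCL C source exists:

```shell
# Compile an OpenCL C kernel to LLVM bitcode targeting the 64-bit SPIR ABI
clang -c -target spir64 -O2 -emit-llvm -o kernel.bc kernel.cl

# Translate the LLVM bitcode into a SPIR-V binary for a GPU driver to consume
llvm-spirv kernel.bc -o kernel.spv

# The translator also works in reverse: SPIR-V back to LLVM bitcode
llvm-spirv -r kernel.spv -o roundtrip.bc
```

So once a compiler emits LLVM IR, SPIR-V for Vulkan/OpenCL drivers comes nearly for free, which is a big part of the argument for taking on the LLVM dependency.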


