
That looks to me like they are using deep learning with a CNN for denoising. NVIDIA's OptiX denoiser can produce similar artifacts.

However, it appears they forgot to add a loss term that penalizes the denoised result for diverging too far from the source image. NVIDIA's denoiser has user-configurable parameters for this trade-off.
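A minimal sketch of what such a fidelity term could look like in PyTorch. This is an assumption about the general technique, not NVIDIA's actual implementation; the weight `lambda_fidelity` and the choice of L1 distance are illustrative:

    import torch
    import torch.nn.functional as F

    def denoiser_loss(denoised, target, noisy_input, lambda_fidelity=0.1):
        """Combined loss: match the clean reference, while also penalizing
        results that drift too far from the original noisy input."""
        # Primary term: distance to the clean (reference) render.
        reconstruction = F.l1_loss(denoised, target)
        # Hypothetical fidelity term: keeps the denoised image close to
        # the source, trading hallucinated detail against residual noise.
        fidelity = F.l1_loss(denoised, noisy_input)
        return reconstruction + lambda_fidelity * fidelity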



I think it would be impossible to train the model in the first place without that loss term.
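For illustration, a toy training step using the combined loss sketched above; the model, optimizer, and data here are placeholders, not anything from OptiX:

    # Stand-in denoiser: a single conv layer, purely for demonstration.
    model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    noisy = torch.rand(1, 3, 64, 64)   # noisy render (fake data)
    clean = torch.rand(1, 3, 64, 64)   # clean reference (fake data)

    optimizer.zero_grad()
    loss = denoiser_loss(model(noisy), clean, noisy)
    loss.backward()
    optimizer.step()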



