Hacker News

Also the model is trained on faces, not backgrounds. Pretty soon we’re going to see entire 3D scenes generated and rendered photorealistically through a camera model.


I don't think that's true. If they had masked out the backgrounds during training, how would the model be able to synthesize a background at all?
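To make the point concrete: if background pixels were excluded from the training loss, the model would receive no gradient signal for those regions and could never learn to reproduce them. A minimal NumPy sketch of such a masked loss (hypothetical, purely illustrative of the mechanism, not any actual model's training code):

```python
import numpy as np

def masked_l2_loss(pred, target, face_mask):
    """Mean squared error computed only over face pixels.
    Background pixels (mask == 0) contribute nothing, so the model
    is never penalized there -- and never learns to match them."""
    diff = (pred - target) ** 2
    return (diff * face_mask).sum() / face_mask.sum()

# Toy 4x4 "images": pred differs from target only in the background.
target = np.zeros((4, 4))
pred = np.zeros((4, 4))
pred[0, 0] = 5.0            # a background pixel that is wildly wrong
face_mask = np.zeros((4, 4))
face_mask[1:3, 1:3] = 1.0   # the "face" region

print(masked_l2_loss(pred, target, face_mask))  # 0.0: background error is invisible
```

Since the masked loss here is exactly zero despite a large background error, a model trained this way would treat background pixels as free variables rather than learning to generate plausible backgrounds.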

The problem is that there's too much variety in the backgrounds of the training set. They don't follow a consistent pattern the way a human face does.



