Hacker News

This conclusion feels like saying more CPU and memory are better. It seems obvious that more moves allow the matching to have more nuance, but I guess it's cool that someone proved it.


From what I understand, it says that more parameters are good. This wasn't obvious before this paper: you could fit a polynomial instead of a neural net, but adding parameters wouldn't help with robustness in that case; the polynomial would just become more and more jagged.
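A toy version of the jagged-polynomial point is the classic Runge example: interpolating 1/(1 + 25x^2) with a high-degree polynomial on equispaced points makes the fit oscillate wildly between the samples, while a low-degree fit stays tame. (Sketch only; the function and degrees are my own choices for illustration, not from the paper.)

```python
import numpy as np

def runge(x):
    # Runge's function: smooth, bounded by 1 on [-1, 1].
    return 1.0 / (1.0 + 25.0 * x**2)

x = np.linspace(-1, 1, 16)   # 16 equispaced sample points
y = runge(x)

# Degree 15 passes through all 16 points exactly; degree 3 is a smooth fit.
p_hi = np.polynomial.Polynomial.fit(x, y, deg=15)
p_lo = np.polynomial.Polynomial.fit(x, y, deg=3)

# Measure the worst-case error on a dense grid *between* the samples.
xs = np.linspace(-1, 1, 2001)
err_hi = np.max(np.abs(p_hi(xs) - runge(xs)))
err_lo = np.max(np.abs(p_lo(xs) - runge(xs)))

print(f"max error, degree 15: {err_hi:.2f}")  # large: wild oscillation
print(f"max error, degree  3: {err_lo:.2f}")  # modest
```

More parameters fit the samples perfectly yet track the true function worse off the samples, which is exactly the "jagged" failure mode the comment describes.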


> Seems obvious that more moves allow the matching to have more nuance

This really has to be balanced against overfitting. The key problem in ML is generalization, and lots of things improve training performance while making generalization worse.
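To make that concrete, here's a toy least-squares setup where raising the polynomial degree drives training error to near zero while test error gets worse. (Hypothetical numbers of my own; this just illustrates train-vs-generalization, not the paper's setting.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of sin(2*pi*x) on [0, 1]: 16 training points.
x_train = np.linspace(0.0, 1.0, 16)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(16)

# Clean held-out grid to measure generalization.
x_test = np.linspace(0.0, 1.0, 101)
y_test = np.sin(2 * np.pi * x_test)

def rmse(p, x, y):
    return float(np.sqrt(np.mean((p(x) - y) ** 2)))

# Degree 3: smooth fit. Degree 15: interpolates every noisy point.
p_lo = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)
p_hi = np.polynomial.Polynomial.fit(x_train, y_train, deg=15)

print("train RMSE:", rmse(p_lo, x_train, y_train), rmse(p_hi, x_train, y_train))
print("test  RMSE:", rmse(p_lo, x_test, y_test), rmse(p_hi, x_test, y_test))
# Degree 15 wins on training error but loses on the held-out grid.
```

The high-degree model memorizes the noise (better training performance) and pays for it on held-out points, which is the generalization trade-off the comment is pointing at.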



