> The author also posited that the lack of AI progress back then was due to the fact that there are no constantly competing sub-brains.

That idea became popular in neural networks after the introduction of dropout regularization, which prevents neurons from co-adapting and forces each one to learn to do its neighbors' jobs. Large, over-parameterized models also provide a natural setting for this kind of redundancy.
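
For the mechanics, here is a minimal sketch of inverted dropout, the variant most modern implementations use (function and variable names are mine, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout(x, p=0.5, training=True):
        """Inverted dropout: zero each unit with probability p, then scale
        the survivors by 1/(1-p) so the expected activation is unchanged."""
        if not training or p == 0.0:
            return x                       # test time: the full network runs
        mask = rng.random(x.shape) >= p    # each unit kept with prob. 1 - p
        return x * mask / (1.0 - p)        # survivors cover for the dropped

    h = np.ones(6)
    print(dropout(h))   # e.g. [2. 0. 2. 2. 0. 2.]
    print(dropout(h))   # a fresh mask: a different subset is zeroed

Because the mask is redrawn on every forward pass, each step effectively trains a different thinned sub-network, which is where the "competing sub-brains" analogy comes in.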

Isn't dropout just there to avoid overfitting? This sounds more like a mixture-of-experts-style architecture.

Avoiding overfitting is one lens to view it through. Reducing co-adaptation is another, and an intuitive one: generalization improves when a neuron has to be useful in many contexts instead of relying on other neurons to lift the weight, if you'll pardon the pun.

"Improving neural networks by preventing co-adaptation of feature detectors" (Hinton et al., 2012): https://arxiv.org/abs/1207.0580
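
As a quick illustration of that lens, a toy check with PyTorch's nn.Dropout (PyTorch is my example here, not something from the paper): in training mode a different random subset of units is silenced on every forward pass, so no unit can count on a specific neighbor being present.

    import torch
    import torch.nn as nn

    drop = nn.Dropout(p=0.5)
    x = torch.ones(8)

    drop.train()      # training mode: a fresh random mask per forward pass
    print(drop(x))    # survivors scaled to 2.0, dropped units zeroed
    print(drop(x))    # a different subset is silenced this time

    drop.eval()       # evaluation mode: dropout is the identity
    print(drop(x))    # all ones: the full network, every sub-network active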
