> The author also posited that the lack of AI progress back then was due to the fact that there are no constantly competing sub-brains.
That idea took hold in neural networks with the introduction of dropout regularization, which implicitly trains an ensemble of overlapping sub-networks: because any neuron can be dropped on any step, neurons cannot co-adapt and must instead learn to do each other's jobs. Large, over-parameterized models also provide a natural setting for co-adaptation to arise.
That is one lens through which to view it. Co-adaptation reduction is another, and it is an intuitive one: generalization improves if a neuron has to support multiple contexts instead of relying on other neurons to lift the weight, if you'll pardon the pun.
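The dropout mechanics described above fit in a few lines. Here is a minimal sketch of an inverted-dropout layer in NumPy; the function name and array shapes are illustrative, not taken from any particular library:

```python
import numpy as np

def dropout(activations, p=0.5, rng=None, train=True):
    """Inverted dropout: zero each unit with probability p at train time,
    scaling the survivors by 1/(1-p) so the expected activation is unchanged.
    At test time the layer is the identity."""
    if not train or p == 0.0:
        return activations
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(activations.shape) >= p  # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)

# With p=0.5, each surviving unit is scaled by 2, the rest are zeroed,
# so no single neuron can rely on a fixed partner being present.
x = np.ones((4, 8))
y = dropout(x, p=0.5, rng=np.random.default_rng(0))
```

Because the random mask changes every step, a neuron's downstream partners are unreliable, which is exactly the pressure against co-adaptation: each unit must carry information that is useful on its own.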