
Awesome post. I'm not sure I understand the estimates of the computational power of various brain parts (does that even make sense?), but overall I think the author has the right idea about how far away we are from simulating the brain.

Not only does planet Earth train trillions of neural nets in parallel, but it's running a very elegant evolutionary algorithm for hyperparameter selection among all possible units.
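To make that concrete, here's a toy sketch of evolution as a hyperparameter search; the population size, parameter ranges, and fitness function below are all made up purely for illustration:

    import random

    def fitness(params):
        # Stand-in for "how well an organism with these settings does";
        # the real objective is survival/reproduction in an environment.
        n_neurons, learning_rate = params
        return -abs(n_neurons - 8.6e10) / 1e10 - abs(learning_rate - 0.01)

    def mutate(params, scale=0.1):
        return tuple(p * (1 + random.uniform(-scale, scale)) for p in params)

    # Random initial population of (neuron count, plasticity rate) pairs.
    population = [(random.uniform(1e9, 1e11), random.uniform(1e-4, 1e-1))
                  for _ in range(100)]

    for generation in range(1000):
        # Keep the fittest half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:50]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]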

Some interesting (and somewhat depressing) tidbits of neuroscience that further corroborate the futility of full brain simulation:

- Collision avoidance in locusts is entirely implemented in a single neuron [1]. All the computation is performed by the complex nonlinear integration of action potentials across the dendritic tree, so the geometry of the dendritic tree really matters (a toy contrast with a point neuron is sketched below).

- A neuron with mechanosensory receptors can actually fire in response to the dilation of a blood vessel pressing up against it [2]. The vascular system permeates the brain like a secondary connectome and is implicated in information processing as well. Good luck simulating blood flow in 400 miles of elastic tubing.
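On the dendritic point, here's a cartoon of the difference between a point neuron (one global weighted sum) and a neuron whose branches each apply their own nonlinearity before the soma combines them; the branch structure, inputs, and sigmoid are arbitrary stand-ins:

    import numpy as np

    # Point neuron vs. crude "dendritic" neuron: same 12 synaptic inputs,
    # but the second model pushes each branch through its own nonlinearity
    # before summing at the soma. All numbers here are arbitrary.
    rng = np.random.default_rng(0)
    inputs = rng.normal(size=12)
    branches = inputs.reshape(4, 3)      # 4 dendritic branches, 3 synapses each

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    point_neuron = sigmoid(inputs.sum())
    dendritic_neuron = sigmoid(sigmoid(branches.sum(axis=1)).sum())
    print(point_neuron, dendritic_neuron)   # the two models generally disagree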

- Every voltage-gated channel or patch of cellular membrane basically acts as a leaky integrator, and digital systems are pretty bad at this kind of operation. Analog circuits/neuromorphics are useful for this (as well as the asynchronous dynamics) but good luck fabricating a chip that operates in 3D and integrates an arbitrary nonlinear equation.
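For what it's worth, the leaky integrator itself is easy to write down but expensive to do digitally at scale: a minimal Euler discretization of dV/dt = (-V + I)/tau, with a made-up time constant and timestep, has to grind through thousands of small steps per membrane patch:

    import numpy as np

    # Minimal Euler discretization of a leaky integrator, dV/dt = (-V + I) / tau.
    # Time constant, timestep, and input are made up; accuracy depends on dt
    # being small relative to tau, which is why the step count blows up.
    tau = 0.02            # membrane time constant, seconds
    dt = 1e-4             # simulation timestep, seconds
    steps = 5000          # 0.5 s of simulated time
    I = np.ones(steps)    # constant input (arbitrary units)

    V = 0.0
    trace = np.empty(steps)
    for t in range(steps):
        V += dt * (-V + I[t]) / tau   # decay toward the input
        trace[t] = V
    # trace approaches 1.0 with time constant tau; an analog membrane does
    # this "for free" in continuous time.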

- Simulations often involve injecting random stimulus into the network, or showing it images through some approximation of the retinal ganglion cells + V1 cortex. However, the brain has evolved to operate in a closed sensorimotor loop, so the brain's activity ought to influence subsequent perception (and brain state). The inputs one feeds in through the eyeballs and thalamus play a large role in the dynamical state. This is one of the main arguments for embodied cognition approaches.
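A skeleton of what that difference looks like in code; the "brain" and "environment" here are trivial stand-ins, and the only structural point is that in the closed loop the network's own output determines its next input:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(16, 16))

    def brain(state, stimulus):
        return np.tanh(W @ state + stimulus)

    def environment(action):
        # Next stimulus depends on what the brain just did (e.g. an eye movement).
        return 0.5 * np.roll(action, 1)

    state = np.zeros(16)

    # Open loop: stimuli are drawn independently of the brain's activity.
    for _ in range(100):
        state = brain(state, rng.normal(size=16))

    # Closed loop: each stimulus is a function of the previous output.
    stimulus = rng.normal(size=16)
    for _ in range(100):
        state = brain(state, stimulus)
        stimulus = environment(state)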

Not all is hopeless, though:

- I think neuroscience and deep learning models complement each other well. The success of techniques like dropout-based regularization and ReLU in practical AI tasks has prompted neuroscientists to actively look for how biology solves rectification and un-learning.
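For readers who haven't met them, those two techniques fit in a few lines of numpy; the layer sizes and drop probability below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(256, 784))
    x = rng.normal(size=784)

    h = np.maximum(0.0, W @ x)            # ReLU: rectify at zero

    p_drop = 0.5                          # (inverted) dropout at training time
    mask = rng.random(h.shape) >= p_drop  # randomly silence half the units
    h = h * mask / (1.0 - p_drop)         # rescale so expected activity is unchanged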

- If they can get their act together and fire their middle management, I could see Cisco making a huge contribution to deep learning by building faster switches. Nvidia has done a good job of pushing GPU FLOPS, and the bottleneck right now is on the network side.

- DNNs and other data-driven generative models are really cool because they "replay" the human condition back to us. Images generated by DeepDream have surprising amounts of "ordered randomness" in comparison to fractal-based images. Perhaps instead of trying to build this generative process from inside-out (i.e. from neurons to minds), it might be interesting to see what happens if we train a DNN to mimic human behavior, and see what internal states self-organize as a result. The film "Ex Machina" mentions Jackson Pollock and the use of Search Engine Data to capture how people think, which I thought was brilliant.

Citations: [1] http://www.frontiersin.org/10.3389/conf.fphys.2013.25.00090/...

[2] http://www.ncbi.nlm.nih.gov/pubmed/17913979

[3] http://www.cl.cam.ac.uk/~jgd1000/metaphors.pdf

[4] http://hub.jhu.edu/2015/04/02/surprise-babies-learning



Jesus. This post exhibits classic hype.


I spent a nontrivial amount of time researching my comment. What do you find hyped?


We have no idea how far we are from simulating a brain. Because we have no idea how exactly computation is performed in a brain. Neither in a locust's brain, nor in a human brain.

On the other hand, we have already built systems (ANN based) which can do non-trivial things: play Atari games, tell a cat from a dog, convert speech to text, translate from one language to another, etc.

This points to the strong possibility that all that biological complexity in neurons is completely irrelevant to the principles of intelligence, just as the fact that a modern transistor needs 500 parameters and a ton of complicated equations to describe its physical operation is irrelevant to its main function: acting as a simple ON/OFF switch. If we want to simulate a computer, using more complicated transistor models gains us nothing.
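To make the analogy concrete: digital logic simulates fine when each transistor is treated as an ideal switch, with none of the device physics in sight. A toy NAND gate, for example:

    def nand(a: bool, b: bool) -> bool:
        # Two NMOS "switches" in series pull the output to ground only when
        # both inputs are high; otherwise the PMOS pull-up wins. No device
        # parameters anywhere.
        return not (a and b)

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), int(nand(a, b)))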




