
Thank you for posting this; I was beginning to go insane with all the deep learning hysteria.

I would encourage everybody here to purchase a neuroscience textbook. Then, if you see an AI researcher, bash them over the head with it.



>I would encourage everybody here to purchase a neuroscience textbook. Then, if you see an AI researcher, bash them over the head with it.

While I fully support doing so, it must be said: the active cabal of researchers behind "deep learning" doesn't go around saying its theories are neurologically accurate, only that its ANN models perform well on supervised classification and unsupervised feature-learning tasks.

I would indeed say that if you want to create "AI" or a "singularity", you need to know a lot more about the inner mechanism of what you're actually doing, and about which inference problem you're solving and how, than current deep-learning theories allow for.


There is some reasonable scientific middle ground between being an unreflective Deep Learning fanatic at one end of the spectrum and being an AGI denier at the other end. In typical internet fashion, we're mostly exposed to extreme and exaggerated viewpoints that are not above distorting a few things in order to get their message across - and this article is no exception.

To pick one strategy used in these articles (again, on both ends of the opinion spectrum): comparing apples to oranges. In this case, it's the neuronal firing rate. The biological brain uses firing rate to encode values, but that's rarely done in silico outside biochemical research, because we have a better way of encoding values in computers.
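
To make the apples-to-oranges point concrete, here is a minimal sketch (Python, with illustrative names and made-up numbers; nothing here is from the article): a rate code spends an entire time window of noisy spikes to convey one scalar, while a typical ANN unit just carries that scalar directly as a float.

    import numpy as np

    rng = np.random.default_rng(0)

    def rate_coded_value(firing_rate_hz, window_s=0.1, dt=0.001):
        # Simulate a spike train: one Bernoulli draw per time step,
        # then decode by counting spikes over the window. The readout
        # is noisy, and you pay a whole window of time per scalar.
        steps = int(window_s / dt)
        spikes = rng.random(steps) < firing_rate_hz * dt
        return spikes.sum() / window_s  # decoded rate in Hz

    def float_coded_value(x):
        # What a typical ANN does instead: the "activation" is just a
        # number, carried exactly (up to float precision), no time axis.
        return np.float32(x)

    true_rate = 42.0  # the value one unit should convey, in Hz
    print("rate-coded readout:", rate_coded_value(true_rate))  # noisy, e.g. 30.0
    print("float readout:", float_coded_value(true_rate))      # exactly 42.0

Nothing hinges on the exact numbers; the point is only the contrast between the two encodings, which is precisely where the two camps below part ways.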

One side (in my opinion, reasonably) asserts that this is an implementation detail where it makes sense to model a functional equivalent instead of strictly emulating nature. This view has gained credence from the fact that ANNs do work in practice, and even more importantly, several different ANN algorithms seem to be fit for the job. A lot of people believe this bodes well for the "functional equivalence" paradigm, not only as it pertains to AI, but also as it relates to the likelihood of intelligent life elsewhere in the universe.

The other side asserts that implementation details such as the neuronal firing rate are absolutely crucial and cannot be deviated from without invalidating the whole endeavor. They believe (and I'm trying to represent this view as fairly as I can here) that these are essential architectural details which must be preserved in order to preserve overall function. And since it's not feasible to go this route in large-scale AI, the conclusion must be that AGI is impossible. A lot of influential people believe this, including Daniel Dennett if I recall correctly.

The article is very close to the latter opinion, but it goes one step further with the firing rate example: it never acknowledges the underlying assumption and jumps straight to attacking the feasibility of replicating the mechanism.


Well...

ANNs may turn out to have enduring usefulness, but it is more likely that better (more accurate, more efficient) tools will be found. I see no evidence that ANNs are optimal for the problem space they tackle, and non-optimal techniques tend to be quickly forgotten once bettered. Such is progress.

The only reason anybody seems to think ANNs have some kind of assured longevity is the magical word "neural" in the name.

So I agree. It would be wrong to conclude that "AGI cannot exist" on the basis of differences between ANNs and the human nervous system. On the other hand, if and when AGI does happen, ANNs may not have a major role.



