
I can't agree with the dismissiveness of this comment, and frankly I find its tone out of line and not in the spirit of Hacker News.

There are insights from studying the brain that do indeed apply. Some researchers may not glean anything from such studies, and some may. I have no doubt that as neural networks get more and more powerful, we will continue to find more ways they are similar to the brain, and to apply things we've learned about the brain to them.

I certainly prefer to see people making comparisons of neural networks to the brain than the old "it's just a glorified autocomplete" line and the like.

Relax.



No one disagrees that we might discern insights if we understood how our brain is wired. The problem is that the current state of neuroscience is so flawed in its approach that its findings don't look to be of any use. They don't even understand how a 900-neuron worm's nervous system works, but are more than happy to tap half a billion dollars from unsuspecting politicians by saying they'll map the human connectome. Go read the BRAIN Initiative proposal [1] to see how out of touch with reality the scientists in this field are. I agree with OP that sharp criticism of the entire field is fully warranted.

1. https://braininitiative.nih.gov/sites/default/files/document...


What are you talking about? Is this Konrad Kording's shitposting alt??? This reeks of naivety.

I certainly have many critiques of the methods used in neuroscience right now (as a working neuroscientist), but to reduce those to the conclusion that the entire project of neuroscience is hopeless is absurd. We understand certain things quite well, actually, and it's not at all obvious what "understanding" at a larger scale would even look like. It is very possible that the brain is irreducibly complex, and that the model you would need to construct to describe it would itself be so complex as to be useless in providing insight. Considering that the brain is by far the most complex object we know of, I think we're doing pretty well.

Furthermore, there is quite a lot of disagreement about the utility of connectomics. Outside of the extremists (Sebastian Seung and his ilk), no one thinks that connectomics is going to be the key that brings earth-shattering insight. It's just another tool. There is already a complete connectome for part of the drosophila brain (privately funded, btw), which is in daily use in many fly labs. It tells you which neurons are connected to which. Incredibly useful. Not earth-shattering.

Also, you might want to measure the neuroscience funding you deem wasteful against the tens of billions NASA is spending to send humans (and not robots) back to the moon for "the spirit of adventure". The Cold War's over. Robots will do just fine for the moon.


Can you please elaborate on what great strides the field of neuroscience has made in the past 30 years?

From where I stand, I can't see anyone giving a clear explanation of anything our brain does or does not do in a disease. The only novel treatment that has come out seems to have been to stick a rod into the brain and zap it, which just magically treats a lot of diseases we still don't understand even a bit.

This is not even starting to discuss how little we have learned about how the brain's algorithms work. I'm still waiting to understand why pyramidal neurons were somehow groundbreaking. We found some neuron that fires when you walk to a place; why wouldn't we find one?

And what are you saying about the fly connectome again? Do we have exact names for every neuron in the fly brain and its verified connectome for every neuron?

Last I checked, the worm connectome has been available in intricate detail for decades, and scientists still haven't properly decoded the algorithms in that system. In fact I know every lab trying to figure that out now; I wrote proposals on the topic myself. Everyone else has apparently decided it's not sexy enough to work with worms, so they have just leaped to more complex systems with no basic understanding. I'm not the only one saying this. Sydney Brenner said as much in an editorial. But the field was too busy doing I don't know what to listen.

Brenner, S. & Sejnowski, T. J. Understanding the human brain. Science 334, 567 (2011).

I remember sauntering into the occasional neuroscience talk during my UT Southwestern PhD and occasionally hearing some professor brag that the majority of one of their PhD students' job was segmenting a single neuron across thousands of EM images, or something like that. Surely that's a sign this field needs revision?


> And what are you saying about the fly connectome again? Do we have exact names for every neuron in the fly brain and its verified connectome for every neuron?

The onus isn't on me to justify the existence of an entire field to you. The claim that neuroscience has not made great strides in the last 30 years is an extraordinary one, and that's all on you. But it especially doesn't help your case that if you had googled "fly connectome" you would have seen that the first result is a complete connectome of a larva and the third result is the tour de force from Janelia that produced an adult connectome. With names and verified connections. There is even a Wikipedia article for the drosophila connectome!

> I remember sauntering to the occasional neuroscience talk during my ut southwestern PhD and occasionally hearing some professor brag about how the majority of one of their PhD’s jobs was to segmenting a single neuron in the thousand EM images or something. Surely that’s a sign this field needs revision?

And if you had gone on to actually read the hemibrain connectome paper, you would have gained some appreciation for the gargantuan achievement it was. It took hundreds of person-years to generate ground truth by segmenting neurons by hand, to develop the ML techniques required to automatically segment the rest (an extremely difficult problem), and then to validate the automatic segmentations. Not to mention the insane effort of acquiring a half-petabyte EM image of a single fly at sub-synaptic resolution in the first place.

I gotta hand it to you though, the position of naivety you've delivered your middlebrow dismissal from is truly impressive in magnitude.


Agreed. Reading the GP's comment, it feels like it's from bizarro world. It's the computer scientists who have been claiming that neural networks resemble the human brain; they even fucking named them neural networks, for christ's sake! That could be excused as naive hubris in the 1980s; it's utter delusion now.

A surface review of the neuroplasticity literature alone should free anyone of the illusion that "neural networks" bear even a passing resemblance to biological neurons, something covered in neuroscience 101 and widely internalized by its practitioners. The BS grant writing and PR that scientists have to participate in hardly reflects the state of the art of the science itself.

The irony is that machine learning methods are a perfect fit for neuroscience, and for biology in general, which generates reams of data so multidimensional that manual analysis is intractable. What we're seeing now is the crest of the academic hype cycle, which, if the history of bioinformatics is anything to go by, means that it will take years if not decades for the field to understand and fully utilize ML.


Actually, it was neuroscientists who developed the models nowadays used for machine learning: the McCulloch-Pitts neuron model introduced in 1943, which led to Frank Rosenblatt's perceptron, introduced in 1958. Machine learning algorithms mostly still use those models, while computational neuroscience has progressed to much more complicated neuronal models.
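The McCulloch-Pitts model is simple enough to sketch in a few lines. Here is an illustrative Python rendering (a toy sketch of the idea, not historical code): binary inputs, fixed weights, and a hard threshold.

```python
# A McCulloch-Pitts neuron: binary inputs, fixed weights, hard threshold.
def mcp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds it computes basic logic gates:
AND = lambda a, b: mcp_neuron([a, b], [1, 1], 2)
OR = lambda a, b: mcp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mcp_neuron([a], [-1], 0)

assert [AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
assert [OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 1]
assert NOT(0) == 1 and NOT(1) == 0
```

What Rosenblatt's perceptron added was precisely what this model lacks: a rule for learning the weights from examples instead of setting them by hand.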


It's typical of the arrogant, borderline anti-scientific attitude of a non-negligible fraction of the HN hive mind, i.e. if it came out of academia it must be a waste of time.


As another working neuroscientist, thank you. And cheers.


No I think these comments are quite necessary. People need to stop making these comparisons because they have absolutely no grounding in how brains actually work. There are bad ideas that should be dismissed.


Neural networks are absolutely based on a very simplified model of how brains work. Specific NN architectures are in turn based on specific parts of the brain (e.g. Convolutional Neural Networks are based on the visual cortices of cats/frogs).


Nah, they're arbitrary function approximators that caught a lucky break. CNNs rose to prominence because natural scene statistics are translation invariant and convolutions can be computed efficiently on GPUs. And now that we have whole warehouses of GPUs, the current mood in DL is to stop building the symmetries of your dataset into the model (which is insane, btw) and use brute force.

The tenuous connection DL once had to neuroscience (perceptrons) is a distant memory.
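The translation-symmetry point is easy to demonstrate numerically. Here is a toy NumPy sketch (my illustration, not from the thread) showing that a 1-D convolution commutes with shifting its input, which is why one filter can detect the same feature anywhere:

```python
import numpy as np

def conv1d(x, k):
    """'Valid' 1-D cross-correlation, the core op of a CNN layer."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)  # a toy 1-D "image"
k = rng.standard_normal(3)   # a filter (would normally be learned)

# Shifting the input by one sample shifts the output by one sample,
# so the same filter responds to the same pattern wherever it appears.
y = conv1d(x, k)
y_shifted = conv1d(np.roll(x, 1), k)
assert np.allclose(y_shifted[1:], y[:-1])  # equal away from the boundary
```

This equivariance is the symmetry the comment above says current practice is abandoning in favor of brute force.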


A fabricated retelling of the past, given that we didn't start using GPUs for this type of compute until the turn of the millennium.


If you want to talk about history: these things were invented using a 1950s understanding of neuroscience, then promptly discarded until the ML people figured out how to make them useful.


AlexNet was the turning point for DL.


Why do you say that? Deep Learning was accelerating well before that (I would argue it has been accelerating for its entire existence).

AlexNet was a state-of-the-art image recognition net for a (relatively) brief amount of time. It wasn't the first CNN to use GPU acceleration, and it was quickly eclipsed in terms of ImageNet performance.

Regardless, I think bringing up AlexNet kinda invalidates your initial point. Although yes, it turns out that the two were a great match, CNNs and modern GPUs were clearly developed independently of each other, as evidenced by the many, many iterations of both before they were combined.


Is this Schmidhuber's alt? Sure, they existed before, but AlexNet was where it really took off. Just look at the number of citations. Right paper, right time. CNNs were uniquely suited to the hardware at the time, because of their efficiency due to symmetry and their suitability to GPGPU computing. Not because of their history.


You're saying the study has no grounding in how brains work? I'd think a more reasonable conclusion would be that the neuroscientists involved have no grounding in how artificial neural networks work.

It seems the whole point is to bring in additional details of how brains work that they think may be relevant to artificial NNs.


Artificial neural networks are the closest working model of a brain we have today.

Lots of graph nodes, with weighted connections, performing distributed computation (mainly hierarchical pattern matching), learning from data by gradually updating weights, using selective attention (and/or recurrence, and/or convolutional filters).

Which of the above is not happening in our brains? Which of the above is not biologically inspired?

In fact this description equally applies to both a brain and GPT4.
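The "learning from data by gradually updating weights" item in the list above can be made concrete with a toy gradient-descent sketch (an illustration of the artificial side only, with no claim about how biology does it):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: the target is a fixed linear function of the input.
true_w = np.array([2.0, -1.0])
X = rng.standard_normal((100, 2))
y = X @ true_w

w = np.zeros(2)  # the "connection weights", initially blank
lr = 0.1         # small learning rate: updates are gradual
for _ in range(200):
    grad = X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= lr * grad                     # nudge the weights downhill

assert np.allclose(w, true_w, atol=1e-3)  # weights converge to the target
```

Every deep net, GPT4 included, is trained by an elaboration of this loop; whether anything analogous happens in cortex is exactly what the rest of the thread disputes.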


Many organisms have just a handful of neurons yet exhibit complex behavior that would be impossible under the weighted-connections model. Not to mention single-celled organisms that exhibit the ability to navigate.

The model can be the closest working model, but that doesn't mean it is complete. It's very likely that cells can store memories/information independent of weights.


We can’t do that not because our mathematical neurons are too simple. We can’t do that because we don’t know the algorithms those biological neurons are running.

Do you see the difference?


There is of course a difference between the two things you say. They're both the reason we can't recreate the brain in software though.


There are two separate goals: to simulate the brain in software, and to understand brain algorithms. They overlap, but they are still distinct, and appeal to different groups of people. Neuroscientists want to understand detailed brain operations; they are primarily interested in the brain itself. AI researchers want to understand intelligence; they are primarily interested in higher brain functions (e.g. reasoning, attention, short/long-term memory, emotions, motivations, goal setting, etc.).

We can't (fully) recreate the brain in software partly because we don't know enough, and partly because it's too computationally complex. For example, we can't simulate an entire modern CPU at the transistor level, even though we know how each transistor works and what each transistor does in the CPU, because each transistor requires a detailed physical model with hundreds of parameters. It's simply not computationally feasible on current supercomputers. The brain is even less feasible to simulate if we want to accurately model each individual neuron in it, even if we knew exactly how each one works.

But the second goal is much more feasible, and we have made great progress simply by scaling up simple known algorithms which approximate some information-processing functions of the brain (mainly pattern matching/prediction and attention). I can talk to GPT4 today just like I talk to other humans, and by the way, this is only possible because out of all the AI/ML algorithms people have tried over the last 70 years, the most brain-like one has won (ANNs). If we want to make further progress in AI, or if we want GPT5 to be more human-like (not sure we do), we don't necessarily need to simulate the brain at a neuronal level; we simply need to understand a little more about higher-level brain functions. Today, we (ML researchers) might actually benefit more from studying psychology than neuroscience.


> Many organisms have just a handful of neurons yet exhibit complex behavior that would be impossible given the weighted connections model.

That's rather a bold claim given that artificial neural networks are universal function approximators.


Impossible given that number of neurons.

It's perhaps not terribly surprising that it becomes possible with unlimited width or depth (or an arbitrarily complex activation function).

https://en.wikipedia.org/wiki/Universal_approximation_theore...
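The constructive flavor of the theorem is easy to see in a toy sketch: with enough sigmoid "bumps" in a single hidden layer, you can pin a smooth function down to any precision. The NumPy snippet below (my illustration; the bump count and steepness are arbitrary choices) builds such a one-hidden-layer net by hand for sin(x):

```python
import numpy as np

# Target: approximate sin(x) on [0, 2*pi] with one hidden layer of sigmoids.
f = np.sin
a, b, n_bumps = 0.0, 2 * np.pi, 200
steep = 500.0  # steeper sigmoids behave more like ideal step functions

def sig(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -50, 50)))

edges = np.linspace(a, b, n_bumps + 1)
centers = (edges[:-1] + edges[1:]) / 2

def net(x):
    # Hidden layer: a pair of sigmoids per interval forms a localized "bump";
    # output layer: weight each bump by the target value at its center.
    x = np.asarray(x, dtype=float)[..., None]
    bumps = sig(steep * (x - edges[:-1])) - sig(steep * (x - edges[1:]))
    return bumps @ f(centers)

xs = np.linspace(a + 0.05, b - 0.05, 1000)
max_err = np.max(np.abs(net(xs) - f(xs)))
assert max_err < 0.05  # more bumps -> smaller error
```

Note the catch: this construction needs the target function in hand to place the bumps and set the output weights. It says nothing about recovering the weights by training on samples of an unknown function.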


It's incredible to me how widely this is misunderstood.

The universal function approximator theorem only applies for continuous functions. Non-continuous functions can only be approximated to the extent that they are of the same "class" as the activation function.

Additionally, the theorem only proves that for any given continuous function, there exists a particular NN with particular weights that can approximate that function to a given precision. Training is not necessarily possible, and the same NN isn't guaranteed to approximate any other function to some desired precision.

It seems pretty obvious to me that most interesting behaviors in the real world can't be modelled by a mathematical function at all (that is, for each input having a single output); if we further restrict to continuous functions, or step functions, or whatever restriction we get from our chosen activation function.


> The universal function approximator theorem only applies for continuous functions. Non-continuous functions can only be approximated to the extent that they are of the same "class" as the activation function.

Yes, and?

> Training is not necessarily possible

That would be surprising, do you have any examples?

> and the same NN isn't guaranteed to approximate any other function to some desired precision.

Well duh. Me speaking English doesn't mean I can tell 你好[0] from 泥壕[1] when spoken.

> It seems pretty obvious to me that most interesting behaviours in the real world can't be modelled by a mathematical function at all (that is, for each input having a single output)

I think all of physics would disagree with you there, what with it being built up from functions where each input has a single output. Even Heisenberg uncertainty and quantised results from the Stern-Gerlach setup can be modelled that way in silico to high correspondence with reality, despite the result of testing the Bell inequality meaning there can't be a hidden variable.

[0] Nǐ hǎo, meaning "hello"

[1] Ní háo, which google says is "mud trench", but I wouldn't know


> Yes, and?

It means that there is no guarantee that, given a non-continuous function f(x), there exists an NN that approximates it over its entire domain within some precision p.

> That would be surprising, do you have any examples?

Do you know of a universal algorithm that can take a continuous function and a target precision, and return an NN architecture (number of layers, number of neurons per layer) and a starting set of weights for an NN, and a training set, such that training the NN will reach the final state?

All I'm claiming is that there is no known algorithm of this kind, and also that the existence of such an algorithm is not guaranteed by any known theorem.

> Well duh. Me speaking English doesn't mean I can tell 你好[0] from 泥壕[1] when spoken.

My point was relevant because we are discussing whether an NN might be equivalent to the human brain, using the Universal Approximation Theorem to try to decide this. So what I'm saying is that even if "knowing English" were a continuous function and "knowing French" were a continuous function, so that by the theorem we know there are NNs that can approximate either one, there is no guarantee that there exists a single NN which can approximate both. There might or might not be one, but the theorem doesn't promise one must exist.

> I think all of physics would disagree with you there, what with it being built up from functions where each input has a single output.

It is built up of them, but there doesn't exist a single function that represents all of physics. You have different functions for different parts of physics. I'm not saying it's not possible a single function could be defined, but I also don't think it's proven that all of physics could be represented by a single function.


> It means that there is no guarantee that, given a non-continuous function function f(x), there exists an NN that approximates it over its entire domain withing some precision p.

And why is this important?

> Do you know of a universal algorithm that can take a continuous function and a target precision, and return an NN architecture (number of layers, number of neurons per layer) and a starting set of weights for an NN, and a training set, such that training the NN will reach the final state?

> All I'm claiming is that there is no known algorithm of this kind, and also that the existence of such an algorithm is not guaranteed by any known theorem.

I think so: the construction proof of the claim that they are universal function approximators seems to meet those requirements.

Even better: it just goes direct to giving you the weights and biases.

> My point was relevant because we are discussing whether an NN might be equivalent to the human brain, and using the Universal Approximation Theorem to try to decide this. So what I'm saying is that even if "knowning English" were a continuous function and "knowing French" were a continuous function, so by the theorem we know there are NNs that can approximate either one, there is no guarantee that there exists a single NN which can approximate both. There might or might not be one, but the theorem doesn't promise one must exist.

I still don't understand your point. It still doesn't seem to matter?

If any organic brain can't do $thing, surely it makes no difference either way whether or not that $thing can or can't be done by whatever function is used by an ANN?

> It is built up of them, but there doesn't exist a single function that represents all of physics. You have different functions for different parts of physics. I'm not saying it's not possible a single function could be defined, but I also don't think it's proven that all of physics could be represented by a single function.

I could point you to this: https://www.youtube.com/watch?v=PHiyQID7SBs

But that would be unfair, given the QM/GR incompatibility.

That said, ultimately I think the onus is on you to demonstrate that it can't be done when all the (known) parts not only already exist separately in such a form, but also, AFAICT, we don't even have a way to describe any possible alternative that wouldn't be made of functions.


> And why is this important?

Since we know non-continuous functions are used in describing various physical phenomena, it opens the gate to the possibility that there are physical phenomena that NNs might not be able to learn.

And while piecewise-continuous functions may still be OK, fully discontinuous functions are much harder.

> I think so: the construction proof of the claim that they are universal function approximators seems to meet those requirements.

Oops, you're right, I was too generous. If we know the function, we can easily create the NN, no learning step needed.

The actual challenge I had in mind was to construct an NN for a function which we do not know but can only sample, such as the "understand English" function. Since we don't know the exact function, we can't use the method from the proof even to construct the network architecture (since we don't know ahead of time how many bumps there are, we don't know how many hidden neurons to add).

And note that this is an extremely important limitation. After all, if the UFA theorem were good enough, we wouldn't need DL or different network architectures for different domains at all: a single hidden layer is all you need to approximate any continuous function, right?

> If any organic brain can't do $thing, surely it makes no difference either way whether or not that $thing can or can't be done by whatever function is used by an ANN?

Organic brains can obviously learn both English and French. Arguably GPT-4 can too, so maybe this is not the best example.

But the general doubt remains: we know humans express knowledge in a way that doesn't seem contingent upon that knowledge being a single continuous mathematical function. Since the universal function approximator theorem only proves that for each continuous function there exists an NN which approximates it, this theorem doesn't prove that NNs are equivalent to human brains, even in principle.

> That said, ultimately I think the onus is on you to demonstrate that it can't be done when all the (known) parts not only already exist separately in such a form, but also, AFAICT, we don't even have a way to describe any possible alternative that wouldn't be made of functions.

The way physical theories are normally defined is as a set of equations that model a particular process. QM has the Schrödinger equation or its more advanced forms. Classical mechanics has Newton's laws of motion. GR has the Einstein field equations. Fluid dynamics has the Navier-Stokes equations. Each of these is defined in terms of mathematical functions, but they are different functions. And yet many humans know all of them.

As we established earlier, the UFA theorem proves that some NN can approximate one function. For 5 functions you can use 5 NNs. But you can't necessarily always combine these into a single NN that can approximate all 5 functions at once. It's trivial if they are simply 5 easily distinguishable inputs which you can combine into a single 5-input function, but not as easy if they are harder to distinguish, or if you don't know that you should model them as different inputs ahead of time.

By the way, there is also an example of a pretty well known mathematical object used in physics that is not actually a proper function - the so-called Dirac delta function. It's not hard to approximate this with an NN at all, but it does show that physics is not strictly speaking limited to functions.

Edit to add: I agree with you that the GP is wrong to claim that the behavior exhibited by some organisms is impossible to explain if we assumed that the brain was equivalent to an (artificial) neural network.

I'm only trying to argue that the reverse is also not proven: that we don't have any proof that an ANN must be equivalent to a human/animal brain in computational power.

Overall, my position is that we just don't know to what extent brains and ANNs correspond to each other.


> Lots of graph nodes

Neurons are not connected by a simple graph, there are plenty of neurons which affect all the neurons physically close to them. There are also many components in the body which demonstrably affect brain activity but are not neurons (hormone glands being among the most obvious).

> with weighted connections

Probably, though we don't fully understand how synapses work

> performing distributed computation (mainly hierarchical pattern matching)

This is a description of purpose, not form, so it's irrelevant.

> learning from data by gradually updating weights

We have exactly zero idea how biological neural nets learn at the moment. What we do know for sure is that a single neuron, when alone, can adjust its behavior based on previous inputs, so the only thing that is really clear is that individual neurons learn as well; it's not just the synapses with their weights that modify behavior. Even more, non-neuron cells also learn, as is obvious from the complex behaviors of many single-celled organisms, but also of some non-neuron cells in multicellular organisms. So potentially, learning in a human is not completely limited to the brain's neural net; it could include certain other parts of the body (again, glands come to mind).

> using selective attention (and/or recurrence, and/or convolutional filters).

This is completely unknown.

So no, overall, there is almost no similarity between (artificial) neural nets and brains, at least none profound enough that they wouldn't share with a GPU.


What does this comment add to the discussion?


I dunno. My comment complained about the parent comment not adding positively to the discussion. And gave at least a bit of support for that complaint.

Would you have preferred I emulate your style, and complain while providing no support for my complaint?

Ok.


Being positive is not a requirement of commenting on HN, but you should comment with something that is substantive, so yes I do think you shouldn't have commented at all. Tone policing is cringe.


I don't like tone-policing in general. But when I opened this post, the negative comment we're talking about was the top comment. That makes me much more sympathetic to someone calling out the cynicism.


Exactly what are you doing here then?

But hey I guess I can do this too. How's this? Using cringe as an adjective is cringe.


> But hey I guess I can do this too.

It sucks, doesn't it?



