Well, my son is a meat robot who's constantly ingesting information from a variety of sources including but not limited to youtube. His firmware includes a sophisticated realtime operating system that models reality in a way that allows interaction with the world symbolically. I don't think his solving the |i+1| question was founded in linguistic similarity but instead in a physical model / visualization similarity.
So -- to a large degree "bucket of neurons == bucket of neurons" but the training data is different and the processing model isn't necessarily identical.
I'm not necessarily disagreeing as much as perhaps questioning the size of the neighborhood...
Heh, I guess it's a matter of perspective. Your son's head is not made of silicon, so in that sense it is a large neighborhood. But if you put them behind a screen and only see the output, then the neighborhood looks smaller. Maybe it looks even smaller a couple of years in the future. It certainly looks smaller than it did a couple of years in the past.
There are thousands of structures and substances in a human head besides neurons, at all sorts of commingling and overlapping scales, and the neurons in those heads behave much differently and with tremendously more complexity than the metaphorical ones in a neural network.
And in a human, all those structures and substances, along with the tens of thousands more throughout the rest of the body, are collectively readied with millions of years of "pretraining" before processing a continuous, constant, unceasing multimodal training experience for years.
LLMs and related systems are awesome and an amazing innovation that's going to impact a lot of our experiences over the next decades. But they're not even in the same galaxy as almost any living system yet. That they look like they're in the neighborhood is because you're looking at them through a very narrow, very zoomed telescope.
Even if they are very different (less complex at the neuron level?) to us, do you still think they’ll never be able to achieve similar results (‘truly’ understanding and developing pure mathematics, for example)? I agree that LLMs are less impressive than it may initially seem (although still very impressive), but it seems perfectly possible to me that such systems could in principle do our job even if they never think quite like we do.
True. But a human neuron is more complex than an AI neuron by a constant factor. And we can improve constants. Also, you say years like it's a lot of data, but they can run RL on ChatGPT outputs if they want; isn't that comparable? But anyway, I share your admiration for the biological thinking machines ;)
The sun also outperforms a fusion reactor on Earth by only a constant factor. That alone doesn't mean much for our prospects of matching its power output.
> human neuron is more complex than an AI neuron by a constant factor
the constant can still be out of reach for now: something like 100T synapses in the brain vs 100B parameters in ChatGPT. And the brain may also involve some quantum mechanics, for example, which would make the complexity difference not constant but, say, exponential.
> and also brain can involve some quantum mechanics
A neuroscientist once pointed this out to me when illustrating how many huge gaps there are in our fundamental understanding of how the brain works. The brain isn't just a series of direct electrical pathways; EMF transmission/interference is part of it too. Unmodeled quantum effects are pretty much a guarantee.
To continue on this: LLMs are actually really good at asking questions, even about cutting-edge research. Often, I believe, convincing the listener that they understand more than they do.
LLMs consume training data and can then be asked questions. How different is that from your son watching YouTube and then answering questions?
It's not 1:1 the same, yet, but it's in the neighborhood.