Recently having become a father has made me think a lot about general intelligence. Seeing my son get excited about his 'world state changing' gave me an idea. What if the main thing that holds us back is the reliance on cost functions? Human, and to some extent animal, intelligence is the only intelligence we know about. If that's what we want to emulate, why don't we try modelling emotions as the basic building blocks that drive the AI forward? The way I understand neural nets, we have so far basically modelled the neurons and given them something to do. My hunch is that brain chemistry is what's actually driving us forward, so what if we model that as well? Instead of serotonin, endorphins etc. we could also look at it at a higher level, akin to Pixar's Inside Out: joy, fear, sadness, disgust, anger, and I would add boredom.
Let's stay with video games for a bit. What if we treat joy as 'seeing the world change', graded by the degree of indirection from our inputs (the longer the effects cascade, the more joy they yield)? Maybe give it a preference for certain color tones and sounds, because that's also how games hint at whether what we're doing is good or not. Boredom is what puts us on a timer: too many repetitions of the same thing and the AI gets bored. Fear and disgust come out of evolutionary processes, so it might be best to add a GA in there that couples success with some fear-like emotion. Anger, well, maybe wait with that ;-).
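To make that a bit more concrete, here's a toy sketch of what such an emotion-driven reward signal could look like. Everything here (class name, weights, thresholds) is invented for illustration, not a real implementation:

    class EmotionDrive:
        # Toy sketch: emotions as reward terms instead of a single cost function.

        def __init__(self, boredom_limit=20):
            self.visit_counts = {}         # state -> how often we've seen it
            self.boredom_limit = boredom_limit
            self.fear = 0.0                # could be tuned by a GA, as suggested above

        def joy(self, cascade_depth):
            # 'seeing the world change', graded by how far our input cascades
            return 0.1 * cascade_depth

        def boredom(self, state):
            # repetition timer: too many visits to the same state turns the signal negative
            n = self.visit_counts.get(state, 0) + 1
            self.visit_counts[state] = n
            return -1.0 if n > self.boredom_limit else 0.0

        def drive(self, state, cascade_depth):
            # the combined emotional signal that stands in for the usual task reward
            return self.joy(cascade_depth) + self.boredom(state) - self.fear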
Edit: Oh, and for the love of god, please airgap the thing at all times...
IIRC, DeepMind is also working on goal functions like this, to get their Atari-playing RL-based AI to seek more data about the world even when it does not immediately help achieve the main goal function (a high score).
Novelty-seeking behavior probably evolved because there are just not enough immediate rewards in our world to teach us everything that is necessary to reproduce [0]. Thus the brain rewards itself for exploring new things, with the collateral effect that we are interested in art and can find intrinsic motivation in all kinds of things (science, work, hobbies etc.).
[0] Which does not mean that we are here to maximize the number of our babies. We aren't fitness maximizers ourselves; we are just adaptation executors of genetic code that has necessarily been shaped by such goals (since the alternative to reproduction is not reproducing, i.e. going extinct). In other words: we are free to do whatever we want!
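For what it's worth, this idea shows up in RL as an intrinsic "curiosity" bonus added on top of the sparse environment reward. A minimal count-based sketch (the weight and the state encoding are placeholders, not DeepMind's actual method):

    from collections import defaultdict
    import math

    visit_counts = defaultdict(int)

    def novelty_bonus(state, beta=0.1):
        # pay 1/sqrt(n) on the n-th visit: brand-new states reward best,
        # familiar ones approach zero -- the agent "rewards itself for exploring"
        visit_counts[state] += 1
        return beta / math.sqrt(visit_counts[state])

    def shaped_reward(extrinsic, state):
        # total signal = sparse world reward + intrinsic novelty reward
        return extrinsic + novelty_bonus(state)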
A kid's emotions are actually tightly linked to cost (or rather: fitness) functions: it gets fascinated by things it just barely cannot do. Somehow it "knows" that it could learn them and is drawn to them. It gets bored by things it can already do, and frustrated by matters that are too hard. I think there's a pattern: emotions are a device for steering the system as a whole in a direction that increases its (or rather its genes') chance of survival - finding a mate, finding food, adapting to the environment. They are one of the devices for increasing our chance to create offspring, even in places where we don't expect them. For example, there is a study where people around the world were asked in detail what kind of art they would find most beautiful (there's a TED talk about it): basically, across the globe, from Greenland to the Sahara, it was a landscape with lush greens and a waterhole - a place where food would be abundant.
This system is highly adaptive. Look at how beauty ideals have changed over the centuries: the 17th-century "Rubens type" signalled fitness in a way that we would call "overweight" today, and a skinny model from today wouldn't have drawn the attention of Rubens' contemporaries. So perhaps we have an innate mechanism for recognizing fitness within the local context, and we are drawn to it.
I think one problem with computer scientists working on this might be that they are often not self-aware about their own emotions. Perhaps we should have more painters and fashion designers amongst us to understand the topic.
To me this model feels a bit too simplistic. If you only look at how children learn their abilities, then yes, absolutely. Where it breaks down for me is in the interaction with play partners. The joy he seems to get from playing together doesn't seem fully explainable through pure evolutionary steering / fitness functions, but it's hard to put my finger on what's missing from the picture. An example is his joyful giggling when I do something unexpected. You can see the tug of war between fearfulness and joyfulness: at the beginning, when they become sensitive to playful behavior, it makes them afraid, but more and more this is replaced with pure joy, which also shows a trust relationship. So to me it seems the curiosity goes beyond just what the child can achieve him/herself in the near future; it's also a curiosity about and joy in observing the world, and more importantly, what the caregivers are doing. Everything new is exciting, and much of it doesn't seem to be something that could have been selected for directly. So there seems to be some emergent behavior that comes from the interaction of evolved chemistry/signalling and the actual cognitive functions.
Unknown things are to be avoided until feedback shows that there's no threat, which makes aversion costly. Then simple observation is the lowest-cost option until there are fewer novelties. The extra energy expended on interaction is then offset by the gains in feedback. Eventually even that peters out, and finding something else to do makes more sense.
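You could read that progression as a tiny cost/benefit state machine. A speculative sketch, with arbitrary thresholds:

    def stance(threat, novelty, energy_gain):
        # Pick the cheapest response to an unknown object; thresholds are arbitrary.
        if threat > 0.5:
            return "avoid"        # aversion, until feedback rules out danger
        if novelty > 0.5:
            return "observe"      # watching is the lowest-cost way to learn
        if energy_gain > 0.2:
            return "interact"     # costs energy, but pays off in feedback
        return "move_on"          # novelty exhausted: find something else to do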
>What if the main thing that holds us back is the reliance on cost functions? ... why don't we try modelling emotions as the basic building blocks that drive the AI forward?
There are theories that intelligence comes about from relatively simple processes that generate complex structures. If we can model these simple processes and throw increasing amounts of computing power at them, perhaps we can actually get to something we agree is intelligent. This is largely done through cost functions: steering the structure in a sensible direction when we can. Now, I think this approach may very well be a dead end on the road to general AI. At the least, I think we're nowhere near it in our current direction. But it's taking us in interesting directions.
Now, what is emotion? My impression is that emotion is potentially far more complex, abstract, and ill-defined than intelligence. At the least, we see it from a super biased perspective because our brain is good at lying to us. Much like we never really see our own nose even though we're ALWAYS looking right at it, maybe our brain is really good at hiding emotions. Like an old friend of mine who was probably clinically depressed but didn't realize it for months. This is why I think modeling emotions would be really difficult.
My guess is whenever we figure out intelligence (50 years from now?) it will be much easier to figure out how emotion can come out of that intelligence. Maybe it will even be emergent - for example, the AI is smart enough to realize something is wrong, so it feels fear. It realizes things are going well for it, so it is happy. Etc.
It's an interesting thought, but I have trouble thinking about intelligence as a consequence of emotions rather than the other way around. I've always thought of emotions like "sadness" and "love" as the words we use to describe brain states that are the obvious result of our having intelligence.
Kenneth O. Stanley and Joel Lehman have a great book out ("Why Greatness Cannot Be Planned") on measuring novelty and using it to search large parameter spaces for interesting behaviors.
I read an article in the late '90s that implemented game AI a bit like that.
The bots would have a number of emotions, and certain events would affect different emotion-levels. Like, a hit/miss from a bot against another would increase/decrease the "confidence" meter, and getting hit by an enemy would increase the "fear" meter.
The total state of all meters would drive the high-level behavior, like fight/flee and weapon choice.
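From memory, the structure was something like this (a reconstruction of the idea, not the article's actual code; the numbers are invented):

    class Bot:
        def __init__(self):
            self.confidence = 0.5
            self.fear = 0.0

        def on_hit_enemy(self):
            self.confidence = min(1.0, self.confidence + 0.1)

        def on_miss(self):
            self.confidence = max(0.0, self.confidence - 0.1)

        def on_take_damage(self):
            self.fear = min(1.0, self.fear + 0.2)

        def decide(self):
            # the total state of all meters drives the high-level behavior
            if self.fear > self.confidence:
                return "flee"
            return "fight" if self.confidence > 0.3 else "take_cover"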
There has been extensive research on modeling emotions. Here is one professor's page as an example (http://web.eecs.umich.edu/~emilykmp/), but there are others. Apple recently acquired a machine learning startup that mainly focused on emotion.
> Recently having become a father has made me think a lot about general intelligence. [...] why don't we try modelling emotions as the basic building blocks that drive the AI forward
Because, among many other reasons, an AI going through the "terrible two(minute)s" could decide to destroy the world, or simply do so by accident. We will have a hard enough time building AI that doesn't do that when we set that specifically as our goal, let alone trying to "raise" an AI like a child.
> Edit: Oh, and for the love of god, please airgap the thing at all times...
There are multiple factions when it comes to AI, and neither position seems disprovable to me, i.e. whether AI will save us or be our doom. At the opposite end of the spectrum I'd put David Deutsch [1]. My position is that if such a singularity is possible, we probably can't avoid it, but it's probably possible to nudge it in a good direction by being careful during research. According to Deutsch, the problem of keeping AI on a good track is the same as keeping humans on a good track, since modelling ourselves is the only way we know of to build a general intelligence. So if we can succeed in building a stable society (which we sort of have, at least locally), then we might also succeed in building a general AI that acts in our interests.
I'd argue that if it's possible, it has great potential both ways: AI provides one of the most universal solutions to a wide range of problems humanity faces, while simultaneously posing an existential threat of its own if it goes badly.