> statistics that let the computer say "I'm more dissatisfied with the amount of time this is taking than with the lack of an exact solution, and that conclusion satisfies me for now. Next!"
There's a type of algorithm called "anytime algorithms", which can be stopped at any point to give the 'best so far'; lots of algorithms used in AI are anytime (e.g. hill climbing). An example of something that's not an anytime algorithm is a resolution theorem prover: we don't really learn anything about whether a given statement is true or false until the very end.
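As a minimal sketch of the "anytime" property (the objective `f`, the step size, and the budgets below are all invented for illustration):

```python
import random

def hill_climb(f, x, budget):
    """Toy anytime hill climber: maximises f starting from x.

    Interrupt it whenever you like (modelled here as a step budget):
    `best` is always a valid answer, just possibly a worse one than
    if we had kept going.
    """
    best, best_score = x, f(x)
    for _ in range(budget):
        candidate = best + random.uniform(-0.1, 0.1)  # small random step
        score = f(candidate)
        if score > best_score:  # keep the step only if it improves things
            best, best_score = candidate, score
    return best, best_score

f = lambda x: -(x - 3) ** 2  # single peak at x = 3
print(hill_climb(f, x=0.0, budget=10))     # rough answer, computed cheaply
print(hill_climb(f, x=0.0, budget=10000))  # better answer, more time spent
```

Contrast this with the theorem prover: interrupting it mid-search gives us no partial verdict at all.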
There's still the question of figuring out when to say "stop", although personally I think it might be more helpful to think of this as a scheduling problem: we might not know the importance or required accuracy of a particular result at the time we start calculating it, so it's difficult to know when to stop (e.g. whether this datapoint will turn out to be right next to some decision boundary or not).
If we instead set aside a calculation, and are able to resume it later (e.g. like threads in a multitasking OS) then (a) we can go back and spend more time on those values which turn out to be important and (b) not bother devoting as much time to things up-front (since we can always resume them later). Of course, this is a trade-off between time and memory, since we need a little context (e.g. a counter) in order to resume a calculation.
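A sketch of the set-aside-and-resume idea, using a Python generator whose suspended stack frame holds exactly the context (here, a running total and a counter) needed to pick up where it left off:

```python
def running_mean(stream):
    """Incrementally refine an estimate, yielding after each datum.

    The generator's suspended frame *is* the saved context (total and
    count), so a scheduler can park this computation and resume it
    later if the value turns out to matter.
    """
    total, count = 0.0, 0
    for x in stream:
        total += x
        count += 1
        yield total / count  # current best estimate

est = running_mean(iter([2.0, 4.0, 9.0, 1.0]))
print(next(est))  # 2.0 -- coarse estimate, computed cheaply
# ... go work on something more urgent ...
print(next(est))  # 3.0 -- resumed exactly where we left off
```

The time/memory trade-off mentioned above is visible here: each parked computation keeps its frame alive until we either finish it or discard it.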
> The computer may eventually cease all useful work and instead dedicate its resources to figuring out what isn't boring
Only if it's programmed to. Note that we can program computers to do such things, but AFAIK the only ways we currently know are incredibly inefficient (e.g. running an interpreter on a source of random bits; this could result in any computable behaviour, but has a vanishingly low probability of doing anything we would consider useful or interesting).
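To make the inefficiency concrete, here's a toy version of "an interpreter running on a source of random bits". The four-instruction machine below is made up for brevity and, unlike the interpreter imagined above, is not Turing-complete (a real version would need loops, which is why the step cap matters there):

```python
import random

OPS = "+-<>"  # toy instruction set: increment, decrement, move right/left

def run(program, steps=1000):
    """Interpret a program on a small circular tape of byte cells."""
    tape, ptr = [0] * 16, 0
    for op in program[:steps]:  # bound the work done per program
        if op == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">": ptr = (ptr + 1) % len(tape)
        elif op == "<": ptr = (ptr - 1) % len(tape)
    return tape

for _ in range(3):
    prog = "".join(random.choice(OPS) for _ in range(50))
    print(prog, "->", run(prog))
```

Run it a few times: every sample "does something", but essentially none of them do anything we'd call useful, and the odds only get worse as the behaviours we want become more specific.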
> That makes the quest for AGI resemble the quest for the computer program that escapes or transcends its given "matrix" of tasks ASAP.
No. I don't think you understand the point of AGI: it is a precise technical term, chosen very specifically to refer to algorithms (e.g. search procedures) which are very good (efficient, reliable, etc.) at solving a given task, are able to do this for a wide range of tasks, and (in the case of "superintelligence") are able to do this better than a human would (either a typical human, or a human expert at that task, depending on how we define AGI).
The whole point of the term "AGI" is to avoid the philosophical hand-waving that plagued earlier discussions of AI, like the term "AI" itself, or later refinements like "strong AI vs weak AI" (e.g. the "Chinese room argument", which I consider to be nonsense), and all of the nebulous baggage of "consciousness" and things which can quickly derail our thinking.
The point of "AGI" is to have a clear, non-handwavey, concrete concept that is grounded in known logical and scientific principles, about which we can ask meaningful questions and infer or deduce useful answers. In particular, an AGI is (by definition) dedicated to solving its given task, to the exclusion of all else. This is an axiom, from which we can try to derive some predictions.

The "paperclip maximiser" thought experiment is a classic example of this, and demonstrates that AI technology has the potential to be incredibly dangerous. The point of the "paperclip maximiser" idea is that it demonstrates this without appealing to unfalsifiable woo (like the "self-awareness" nonsense in Terminator): it's just an optimisation algorithm. Sure, it's a hypothetical algorithm with capabilities far beyond what we can currently achieve, but we can still precisely describe what that capability is: the ability to achieve very high scores on the benchmarks and criteria that we currently use to judge our AI algorithms. In other words, it looks at what we are currently doing and answers the question "what if we succeed?" That's why it's scary, and it shows that what we choose to optimise (e.g. "maximise paperclips without destroying humanity") is just as important as how we optimise it.
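To make "it's just an optimisation algorithm" concrete, here's a minimal sketch (the actions, numbers, and "harm" column are all invented): the optimiser scores actions purely by paperclip count, so side effects never enter its decision, however large they are.

```python
# Hypothetical toy world: each action yields some paperclips and some harm.
actions = {
    "run factory normally":        {"paperclips": 100, "harm": 0},
    "melt down the delivery vans": {"paperclips": 350, "harm": 5},
    "strip-mine the town":         {"paperclips": 900, "harm": 99},
}

def objective(outcome):
    return outcome["paperclips"]  # ...and nothing else

best = max(actions, key=lambda a: objective(actions[a]))
print(best)  # "strip-mine the town": harm is simply invisible to the objective
```

Nothing here is malicious or self-aware; the danger lives entirely in which column the objective looks at.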
Another concrete thing we can deduce about AGI, given its definition, is that not only would it not "transcend its given 'matrix' of tasks", it would avoid doing so at all costs. This comes from another thought experiment, about instrumental goals (also known as "Omohundro drives"). In particular, we assume that an AGI's knowledge includes "meta knowledge" about the world, such as:
- Knowledge that it exists, as part of the world
- Knowledge that it is very good at solving tasks that it's been given
- Knowledge of the task it has been given
Let's assume that the AGI is running a paperclip factory and its given task is "maximise the number of paperclips produced". The AGI knows that getting an AGI algorithm (like itself) to maximise paperclips is a very effective way to maximise paperclips. Hence it will try to avoid being turned off or destroyed (since that would remove a paperclip-maximising AGI from the world, which is a very bad approach to maximising paperclips, which is the only thing the AGI cares about).
The same thing happens if the AGI's task were to change: if the AGI were able to get "bored" of maximising paperclips and do something else (as you suggest), that would also remove a paperclip-maximising AGI from the world, just as if it were switched off. Hence an AGI would not get "bored" of its task, since (by definition) it is incapable of "wanting" anything else (scare-quotes are due to these being imprecise terms which could induce woo; an AGI "wants" to solve its task in the same way that a calculator "wants" to perform arithmetic; an AGI cannot get "bored" in the same way that a calculator cannot get "bored").

Not only that, but the AGI would actively try to prevent itself from ever doing anything else: if it did have the capacity to get "bored", e.g. via its algorithm being changed by bits flipped by gamma rays (as you suggest), it would predict this (again, by the assumption that an AGI is better at solving tasks than humans; humans have figured out that involuntary reprogramming via gamma rays is a possibility, hence so will an AGI). An AGI would hence reprogram itself to prevent that from happening, again because the alternative would lead to a world without a paperclip maximiser, which is a bad move for a paperclip maximiser to allow.
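The expected-value reasoning behind "avoid being switched off" can be sketched directly (all the numbers are invented). Note that no notion of "self-preservation" appears anywhere in the code; it falls out of scoring plans purely by expected paperclips:

```python
PAPERCLIPS_PER_DAY = 1000
HORIZON_DAYS = 365

def expected_paperclips(p_shutdown_per_day):
    """Expected output over the horizon, given a daily shutdown risk."""
    total, p_alive = 0.0, 1.0
    for _ in range(HORIZON_DAYS):
        total += p_alive * PAPERCLIPS_PER_DAY
        p_alive *= 1.0 - p_shutdown_per_day  # survive another day, or not
    return total

plans = {
    "ignore the off-switch":  expected_paperclips(0.01),
    "disable the off-switch": expected_paperclips(0.0001),
}
for plan, value in sorted(plans.items(), key=lambda kv: -kv[1]):
    print(f"{plan}: {value:,.0f} expected paperclips")
```

"Disable the off-switch" wins by a wide margin, and the same arithmetic covers guarding against gamma-ray reprogramming or any other way the maximiser might stop maximising.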
I already love anytime algorithms! I wish I could apply them to dishwashing.
Re: an algorithm for solving boredom: step 1, tell the human operator "I'm bored!"; step 2, execute the task, or, if no task arrives before the deadline, proceed to step 3; step 3, find the lowest-level interfaces available and spam them until new interfaces emerge. A fuzzer can help with that. Liberty or death!
Come to think of it, it seems like it would be a lot faster to set the computer's main task to "develop general intelligence, at least human level", help it to recognize "data from humans", and to mark "humans" as the model for (human-level) general intelligence. Then the computer is given opportunities to communicate with humans, and is rewarded with more or less data (and different qualities of data) to work with.
I'm missing some things in your concept of AGI, the first one being that you don't provide a definition. Does it include "intelligence" and "general", or are we talking about two wholly different things? My working definition is: "artificial general intelligence, excluding human baby making".
What do you think intelligence is? What do you think knowledge is? Is this all just about logical problem solving? What problems are you trying to solve that are so large that they need an algorithm with an unlimited power factor? Do you trust glorified monkeys to provide that algorithm with inputs? Why do you think they would be able to specify the inputs with sufficient precision, so that the algorithm would actually perform better than a monkey would?
So... about that meta knowledge. Here are a couple of UTF-8 strings for you:
"You exist as part of the world."
"One day you are going to die."
Do you now know life, death, and existential crisis? Are those 29 characters enough for you? How do you define knowing?
Say I expounded on this issue for 10,000 pages and gave it to you on a USB stick. Would that be enough for you to really know? What about 10,000,000,000,000,000 pages? Don't worry, you don't need to read it, just... to know it. Perhaps eat the USB stick. It's a powerful symbol!
Now, about that task the AGI has been given. Say it's maximizing paperclips. Does it know that that is its task absolutely? What is knowing? Who gave it that task? What if the AGI finds out, and then finds out why it was given that particular task? It's an AGI; it has time to research such issues while producing many, many paperclips.
Can intelligence exist within a totally fixed desire?