- The halting problem is decidable for systems with finite memory. That's simple enough; a deterministic program with finite memory must either halt or repeat a previous state. Now the question is, is there a faster way to determine the outcome than running the program? That's a complexity problem, one of those where the average case is much easier than the worst case. Many NP-hard problems are like that.
- The same scaling issue applies to the "Chinese room", which is an extremely inefficient solution to the problem. The question is how much can you improve on exhaustive enumeration.
Then it gets hard. I thought it was established that quantum computing is not going to allow brute-force search (using multiple universes?) on problems like factoring. Is that wrong?
Though the halting problem may be decidable for universal machines with finite memory, we still don't know what the program that iterates the most states before halting looks like. The issue is further clouded when you allow a program to edit its own code, i.e. no code/data separation. In fact, compared with a machine that separates code and data, the same machine without that separation will be able to run for longer before halting, iterating more states.
Now, this is a hunch based on research I have been working on: I believe that the programs that iterate the most states before halting have symmetry in time. The structure would mimic V [1].
Here's how the programs would operate:
Iterate all possible configurations for a finite subset of the memory.
Then iterate all possible ways to iterate.
Then iterate all possible ways to iterate all possible ways.
In writing the program, one would repeat the nesting until the program's memory requirements cannot support a halting condition. And then dial it back until they do.
The naive procedural method of a bunch of nested for loops will fail spectacularly. Static code is simply an inefficient use of memory space. That is what makes writing this necessarily polymorphic/self-modifying emulation of V in time and memory a challenge that remains unsolved. But I think we are close.
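To make the blow-up concrete, here is a toy count in Python. The 3-bit memory and everything else here are my own illustration of the nesting idea, not the actual construction being hinted at above:

    from itertools import product

    n = 3  # a toy machine whose entire memory is n bits

    # Level 1: iterate all possible configurations of the memory.
    configs = list(product([0, 1], repeat=n))
    print(len(configs))  # 2**n = 8

    # Level 2: "all possible ways to iterate" = every deterministic successor
    # function, i.e. every mapping from configurations to configurations.
    # There are (2**n) ** (2**n) of them (8**8 = 16,777,216 already for n = 3),
    # so we only count them instead of materialising them.
    print(len(configs) ** len(configs))

    # Level 3 would range over mappings of those mappings, which is why a
    # static pile of nested for loops exhausts memory long before it exhausts
    # the idea.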
The amount of nesting would be a measure of the memory requirements necessary to iterate X configurations, a kind of constant representing the language's specific relationship between code and data, an efficiency of sorts. This constant would be determined by the language, but more so by the limits of computation in general.
If there were an algorithm that produced the busy beaver for a computer of a certain size, you could run it to see how long it would run and then compute the busy beaver function. The busy beaver function grows faster than any computable function, so that's not possible.
You couldn't use that to compute the busy beaver function, as the busy beaver function asks about Turing machines, which have unlimited tape. You can solve it if you have a finite amount of tape, but you'd never know whether one of the machines that tried to exceed the limit would have produced a longer output before eventually halting.
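Here is a rough Python sketch of that caveat, under an invented toy encoding (a transition table keyed by (state, symbol); the helper name and the example machine are mine). With a hard tape bound, every run can be classified as halting, looping (a configuration repeated), or hitting the limit, and only the last case is inconclusive for the true busy beaver:

    def run_bounded(transitions, tape_cells, max_steps=100_000):
        """Run a toy Turing machine on a tape of `tape_cells` cells, starting in
        state "A" with the head at cell 0, and report what happened."""
        tape = [0] * tape_cells
        state, head = "A", 0
        seen = set()
        for step in range(max_steps):
            config = (state, head, tuple(tape))
            if config in seen:
                return "LOOPS", step          # pigeonhole: this run never halts
            seen.add(config)
            if (state, tape[head]) not in transitions:
                return "HALTS", step          # no rule for this situation: halt
            symbol, move, state = transitions[(state, tape[head])]
            tape[head] = symbol
            head += 1 if move == "R" else -1
            if head < 0 or head >= tape_cells:
                return "HIT_LIMIT", step      # inconclusive for the unbounded machine
        return "GAVE_UP", max_steps

    # A tiny 2-state machine in this toy encoding; with only 4 cells it walks
    # off the edge of the tape, which is exactly the inconclusive case.
    machine = {
        ("A", 0): (1, "R", "B"),
        ("A", 1): (1, "L", "B"),
        ("B", 0): (1, "L", "A"),
        ("B", 1): (1, "R", "A"),
    }
    print(run_bounded(machine, tape_cells=4))  # ('HIT_LIMIT', 2)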
- The halting problem is decidable for systems with finite memory. That's simple enough; a deterministic program with finite memory must either halt or repeat a previous state. Now the question is, is there a faster way to determine the outcome than running the program? That's a complexity problem, one of those where the average case is much easier than the worst case. Many NP-hard problems are like that.
Are you sure about that one? A system with finite memory can compute procedural, fractal, infinite equations, and as long as it loses the state it started from, it can progress infinitely back and forth this way. Thus even a limited system can compute eternally, and a program trying to prove this to be a correct program with a finite halting state will follow it eternally into the abyss of computation.
I think memory includes all program state, so at some point in the fractal computation you're bound to hit a memory state you've seen before - and since that state contains all that determines the program evolution, you can stop here and say that it won't finish.
If I remember Cantor's infinity theorem correctly, you can recombine sets to basically prove that the resulting set is bigger than the original sets.
Now the naive assumption is that to recombine a set, you have to have the original sets "stored" somewhere. But assuming you have enough computation time, you can travel lightly, i.e. have tail-call-eliminated recursive threads go back to the origins of such a set, following a formula, and recompute both values.
Thus, you could have infinite reasonable work to do, with finite computation power?
No, it isn't unsound; it is an application of the pigeonhole principle. A finite memory and a finite set of states imply that there is some finite number of configurations (call it N) that the computer can be in. Once you let the computer go N+1 steps, there must be at least one repeated configuration, which might be a halting configuration or not.
In a memory-bounded Turing machine, when we see that on step Y we repeated the configuration from step X, we also know that the tape and head on step Y+1 will be identical to step X+1. This is because the computation is entirely driven by the configuration: the machine's internal state, the symbols on the tape, and where the head is currently located. It is fully deterministic.
By induction, since it only took a finite number of steps (n) to go from X back to the same configuration at Y, we know that at step Y+n we will have repeated the computation loop again.
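A minimal sketch of that pigeonhole argument in Python (the particular update rule is an arbitrary stand-in, not anything from the thread): iterate any deterministic step function over N possible configurations for N+1 steps and a repeat is forced.

    N = 10  # number of possible configurations

    def step(state):
        # an arbitrary deterministic update over the finite space {0, ..., N-1}
        return (3 * state + 7) % N

    seen = set()
    state = 0
    for _ in range(N + 1):
        if state in seen:
            print("repeated configuration:", state)
            break
        seen.add(state)
        state = step(state)
    else:
        raise AssertionError("impossible: the pigeonhole principle guarantees a repeat")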
While we could use programs that change their instructions based on the data, they are still equivalent in power to a Turing machine, so the halting problem would still be decidable for them under the restriction of finite memory.
Yes, I know the proof by induction basically proves that the program loops; the logic there holds.
But if you create what is basically a glider gun, calculating pi without ever storing the numbers it works on or its results, it will still be busy indefinitely, so from finite space alone, halting cannot be proven.
PS: I found the flaw in my thinking. Even in such a stateless function, the numbers in the result would eventually get so big that they would fill the space on the tape, thus limiting the computation and allowing one to prove, via induction on the size of the number, whether the program halts. Where there is computation, something needs storage, even if it is just a result.
Another way to see this is that a computer with limited memory can be modeled as a finite state machine. For every possible arrangement of memory and other internal state, you have a node. Include the finite input as part of the state too. If your computer is deterministic, every state leads inevitably to one next state.
If your computation runs infinitely and there are finite states it can hit, it must hit the same state twice. Since there's only one thing it can do from that state, it will just repeat the same thing it did on its first visit to the state and eventually come back. So the machine is in an infinite loop.
This is true even if you remove the output storage from your model. You could have external storage and label each state transition with what it writes to the output. This tells us that either the output is finite or it's eventually periodic. The digits of an irrational number are not eventually periodic, so a finite machine can only output all the digits of a rational number.
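That last claim is easy to see with long division, which is a finite-state process: it only ever remembers the current remainder, which lives in {0, ..., q-1}, so once a remainder repeats, the digit stream repeats too. A small Python sketch (the helper name is my own):

    def decimal_digits(p, q, count):
        """Yield `count` digits of the fractional part of p/q by long division,
        carrying nothing between steps except the current remainder."""
        remainder = p % q
        for _ in range(count):
            remainder *= 10
            yield remainder // q
            remainder %= q

    print(list(decimal_digits(1, 7, 12)))  # 1/7 = 0.142857142857... (period 6)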
I don't think that's quite the right point: even with an infinite write-only output tape, you'd still need infinitely many states to output pi. How else would your machine know how far along it is?
The original thought was that it wouldn't need to know; it would get the operands on a tail-call-eliminated, recursive, need-to-know basis. Very slow, but very space efficient: basically lazily recompute them every time they're needed.
Even with tail calls, and purely functional code, it can be shown like this: imagine you see a purely functional, tail-called, all-state-as-argument call foo(A, B, REST-OF-STATE). Later on, you encounter the same call with exactly the same values - foo(A, B, REST-OF-STATE). At this point you know that the program looped, since you've proven (through observation) that the call to foo(A, B, REST-OF-STATE) will eventually yield another call to foo(A, B, REST-OF-STATE).
This is essentially equivalent to detecting cycles in a directed graph where each node has an outdegree of 1. If you hit the same node twice, you've found a cycle.
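For an outdegree-1 graph you don't even need to remember every node you've visited: Floyd's tortoise-and-hare walks the graph at two speeds and must meet inside any cycle it reaches. A sketch, where `step` stands in for "apply the pure, tail-called function to the whole program state" (the function name and the example update rule are my own toys):

    def reaches_a_cycle(start, step, max_steps=1_000_000):
        """Floyd's cycle detection: returns True once the two walkers meet."""
        slow = fast = start
        for _ in range(max_steps):
            slow = step(slow)         # one step per iteration
            fast = step(step(fast))   # two steps per iteration
            if slow == fast:
                return True           # the walkers met inside a cycle
        return False                  # gave up within the step budget

    # Iterating x -> (x*x + 1) mod 255 lives in a finite space, so it must cycle.
    print(reaches_a_cycle(2, lambda x: (x * x + 1) % 255))  # True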
It boils down to the question of how densely you can encode, retrieve, and write information, and this can be very dense.
Conway's compression comes to mind, where a simple timing description and some start states can be used to describe a very complex outcome.
The problem is that for every function computing an irrational number, you have to feed the output back in as input again.
Imagine if it were otherwise: a completely lightweight pi function, traveling for eternity, for which nothing in the universe could determine whether it's going in a loop or heading towards the circle constant.
The method of proof of P != NP will undoubtedly tell us something deeply fundamental about computation, and therefore logic, and therefore philosophy. That complexity theory has proven so difficult thus far is in itself philosophically interesting.
Seems directly related to philosophy via the Busy Beaver[1] function. Every statement about all integers, e.g. Goldbach's conjecture, can be mapped to a predicate P(n). P(n) is either true for all integers or fails at some least n. This least n must be less than BB(program_size(P)).
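For concreteness, here is roughly what that mapping looks like for Goldbach in Python (a hedged illustration with invented helper names and a demo-only search limit; the argument above concerns the unbounded search): the statement becomes a search for the least counterexample, and the search halts exactly when the statement is false.

    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    def is_goldbach_sum(even):
        """P(n) for Goldbach: the even number is a sum of two primes."""
        return any(is_prime(a) and is_prime(even - a) for a in range(2, even // 2 + 1))

    def least_counterexample(limit=None):
        """Halts (returns n) iff some even n >= 4 up to `limit` is not a sum of
        two primes; with limit=None it runs forever exactly when Goldbach holds."""
        n = 4
        while limit is None or n <= limit:
            if not is_goldbach_sum(n):
                return n
            n += 2
        return None

    print(least_counterexample(limit=10_000))  # None: no counterexample below 10,000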
We can look at the predicates that don't fail, and take their negation also, ~P(n), which clearly does fail at n=1. These predicates are all P such that P(1) is true or ∄n P(n) is true. They take up a percentage of the space of all predicates (analog of Chaitin's constant[2]). Meaning that in the distribution of all possible predicates, a certain proportion are just (∞,1) and (1,∞) mappings, while the rest of the predicates have mappings bounded by their program size. It seems that because of the pigeonhole principle and the mappings already "wasted" on (1,∞) and (∞,1), the rest of our distribution gets restricted to bounded levels of expressive power.
The literal allowance of statements that are false trivially, or true for everything, limits the power of all of the remaining statements that now have to position themselves as finitary AND confined by that proportionality.