Schmidhuber has forgotten to cite Leibniz. The results in this document are nothing more than a trivial corollary of the principle of metaphysical efficiency, not to mention pre-established harmony, whose most general forms Leibniz discovered in 1710 and which encompass not only the construction of the physical universe but also morality. See https://plato.stanford.edu/entries/leibniz-evil/.
Leibniz did invent binary arithmetic, didn't he? And if I'm not mistaken, his inspiration for it came from the I Ching, which was one of the original founding documents of ancient Chinese philosophy (i.e. yin and yang => 0 and 1).
"Several of the Great Programmer’s universes will feature another Great Programmer who programs another Big Computer to run all possible universes. Obviously, there are infinite chains of Great Programmers. If our own universe allowed for enough storage, enough time, and fault-free computing, then you could be one of them."
Loved that line.
I do not understand the main point of the article. Perhaps someone can explain?
"On the
other hand, computing just one particular universe’s evolution (with, say, one particular instance of noise), without computing the others, tends to be very expensive, because almost all individual universes are incompressible, as has been shown above. More is less!"
It seems the point is that the program to compute all universes is short, while the program to compute one particular universe is long. But surely runtime is greater in the former?
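If I've understood the construction, the trade-off can be sketched like this - a toy dovetailing loop of my own (Python, with bitstrings standing in for programs), not the paper's actual setup:

    from itertools import count, product

    def all_bitstrings():
        """Yield every finite bitstring: '', '0', '1', '00', '01', ..."""
        yield ""
        for length in count(1):
            for bits in product("01", repeat=length):
                yield "".join(bits)

    def dovetail(run_one_step, rounds):
        """Round r: admit one new 'program' and give every admitted program
        one more step of compute. run_one_step(p, r) is whatever interpreter
        you plug in; here the programs are just bitstrings."""
        programs, gen = [], all_bitstrings()
        for r in range(1, rounds + 1):
            programs.append(next(gen))   # admit one new program per round
            for p in programs:
                run_one_step(p, r)       # every admitted program advances

    # Stub interpreter: just record which program got compute time in which round.
    trace = []
    dovetail(lambda p, r: trace.append((r, p)), rounds=4)
    print(trace)  # [(1, ''), (2, ''), (2, '0'), (3, ''), (3, '0'), (3, '1'), ...]

The loop stays a few lines long no matter how many universes it eventually touches; what it gives up is exactly runtime, which seems to be the trade that "more is less" is making.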
>"Several of the Great Programmer’s universes will feature another Great Programmer who programs another Big Computer to run all possible universes. Obviously, there are infinite chains of Great Programmers."
This is comparable to the metaphor of Indra's Net, as described by Alan Watts:
>"Imagine a multidimensional spider's web in the early morning covered with dew drops. And every dew drop contains the reflection of all the other dew drops. And, in each reflected dew drop, the reflections of all the other dew drops in that reflection. And so ad infinitum. That is the Buddhist conception of the universe in an image."
I'm taking a shot in the dark, but is the point that our state is "finite", and that given the current state there are infinitely many universes computable from it?
Or, for that matter, his "Quantum Computing since Democritus" book. He covers some deep stuff, like the presence of free will from the perspective of computational complexity theory, and it is a fascinating read.
"More than 2000 years of European philosophy dealt with
the distinction between body and soul. The Great Programmer does not care.
...
From the view of the Great Programmer, though, such bitstring subpatterns may be entirely irrelevant. There is no need for Him to load them with “meaning”."
Interesting points, especially to follow up the recent 'is matter conscious?' article on HN.
Schmidhuber often seems to be more of a philosopher who happens to have made significant practical contributions to machine learning. The 'Great Programmer' is intentionally tongue-in-cheek, but for the agnostics out there, it's worth considering that regardless of what creator/process/fundamental mathematical requirement (if any) is responsible for the existence of our universe, it is unlikely that life on Earth (or perhaps in general) is relevant to It.
I find all of this really hard to take seriously when we work on the assumption that the design of the universe could be understood by a human. Why would something so small relative to the whole assume that its thoughts were high enough to comprehend the entire universe?
I love philosophy, but let's actually spend our time and energy asking questions that do have meaning, because even a small meaning to us is more important than attempting a task we are so ill-equipped for and that has no meaning to us.
I can't help but read titles like this piece and link them back to the anti-political and "above it all" attitudes that technical folk often take: the belief that if they just eliminate meaning, emotion, and all the things humans complicate pure logic with, and apply logic in the right way, it will somehow do some good. It doesn't matter if any of this is right or wrong - it has no relevant meaning to humans. Without meaning, it's a truly pointless task.
So far we're doing a decent job of understanding the design of the universe. Most things you interact with follow the rules we figured out so well that you'd be really hard pressed to find an experiment to find a difference between prediction and reality. That makes many people confident that we can figure out the missing pieces as well.
It depends on whether you think economics and sociology are fundamental to the physics of the universe or merely emergent phenomena. It's quite possible to understand all the fundamental rules of a system while having great difficulty working out all the consequences of those rules at a macroscopic level.
> It can be shown that there is no algorithm that can generate the shortest program for computing arbitrary given data on a given computer [2, 9, 1]. In particular, our physicists cannot expect to find the most compact description of our universe.
This seems wrong to me. There exists no general algorithm to find the shortest program to compute arbitrary data, but surely we can find the shortest program for some specific pieces of data.
For any given amount of matter and energy, there is a maximal number of possible states. So for any given computer existing in our universe there is a finite number of accessible states, given that the expansion of the universe creates a horizon beyond which we cannot read back data, which limits the accessible volume.
Couple this with the fact that you cannot create an algorithm that compresses every input string, and for any given computer there are input strings that require more state to represent than that computer can contain. Even if you allow output over time, for any such computer there are input strings sufficiently incompressible that finding their shortest possible representation requires more intermediate state than the computer can hold - and that can be true even if a sufficiently large computer would be able to find patterns that make the data very compressible.
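A quick back-of-the-envelope check of that counting argument (my own toy numbers, nothing from the article):

    def counts(n: int, k: int):
        strings_of_len_n = 2 ** n
        descriptions_shorter_than_n = 2 ** n - 1           # sum of 2**i for i < n
        descriptions_up_to_n_minus_k = 2 ** (n - k + 1) - 1  # length <= n - k
        return strings_of_len_n, descriptions_shorter_than_n, descriptions_up_to_n_minus_k

    n, k = 64, 10
    total, shorter, much_shorter = counts(n, k)
    print(f"{total} strings of length {n} bits")
    print(f"only {shorter} candidate descriptions shorter than {n} bits")
    print(f"at most {much_shorter} of those strings compressible by {k}+ bits "
          f"(~{much_shorter / total:.4%})")

There are 2^n strings of length n but only 2^n - 1 descriptions shorter than n bits, so some string of every length has no shorter description, and only a vanishing fraction can be compressed by more than a few bits.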
In fact, that is what the article is pointing out: the universe may be a lot more regular than it appears, but we may never be able to determine what the patterns are, because it may be impossible to hold or compute over enough state to identify them.
It is also possible that creating such a minimal program turns out to be easy, but the article also points out that the vast majority of possible universes are incompressible.
For us to be able to compute the most compact description of our universe, we 1) need to be in a compressible universe (otherwise the description will subsume the totality of the universe itself), and 2) need to be in one where the shortest possible description can be computed with the intermediate states it is possible to contain within the observable universe.
As the article points out, without additional knowledge of what type of universe we are in, the odds are firmly against us in that respect.
Our brain is a computer. That fact may or may not offer useful insights on the properties of brains that we care about.
Likewise, our brains are engines, converting energy from one form to another, and hence governed by thermodynamics. Again, that fact may or may not be useful for investigating properties of the brain we care about.
Our brains are also matter, meat, networks, made of atoms, and squishy. These are all facts, they may or may not be useful.
Or our brain is a telegraph, the world wide web, an orchestra, a theater, a tv, millions of mindless bots, etc.
We use metaphors to help understand things we lack a literal explanation for, or for which the full explanation is complex and still being worked out (neuroscience, biology, chemistry).
See Marr's three-levels hypothesis. If we "run" an algorithm able to find a program to compute arbitrary data, then that general algorithm must exist. We (just) need to copy it and implement it in silicon.
I have some (radically) different opinions on what would make for a useful simulation, and questions about decisions you made -- would you be open to discussing the project in abstract with people who didn't necessarily want to join yours, but have a general interest in the topic?
Why Turing completeness? In particular, you're not going to be able to realize the execution of an arbitrary Turing machine, so why not restrict to a subset you can actually compute and do your modeling in that non-complete framework?
I don't think you can complete the torus in x, y, t (as there's no way to close the time dimension), so why not use a structure that is dimensionally symmetric? Similarly, hexagonal prism tiling is strange -- why not a tiling that is x, y, t symmetric?
Do you actually even want it to be "imperative" time, where you're stepping the simulation in clock cycles? Why not look for 3D universes that satisfy a set of constraints? (This might let you use a symmetric torus.)
I sort of get the lack of boundary, but why not simulate that in an "infinite" space rather than a closed one? Wouldn't this allow a more tractable simulation?
Why conservation laws? How do you plan to enact them?
If you have a discussion page, it might work better to talk about it there, since I think some of these questions get fairly involved.
1) Turing completeness is necessary for intelligent life. The Church-Turing thesis [1] initially defined Turing machines in terms of what a person could do with endless paper and pencils. Effectively, we are Turing machines that consume energy in order to remain stable, compute, and reproduce. We are, of course, possibly more than that -- but Turing completeness is a base property that seems like it should be met. With the right information storage/movement principles, any calculation should be possible, which would trivially realize Turing completeness without it being contrived -- it would be natural emergence.
2) The hexagonal tessellation on the torus is just the space, not time. Time is the standard cellular-automaton clock tick: at t+1, every cell gets replaced with a new value, atomically. Every cell's new value depends upon its 6 immediate neighbors and itself. This is not only parallelizable but also preserves locality (see the sketch after this list).
3) 3D universes can come later. They are much more complex.
This universe will be built iteratively. If 2D doesn't pan out, or if it seems like 3D would improve it in some way, then it's not out of the question.
4) An infinite space without gravity has issues with conservation laws. A particle can fly off into space, never interact, and basically be "lost" as an outlier. With respect to our universe, the possible topologies are an infinite Euclidean plane, like you mention, or a boundaryless topology so large that it appears flat at small scales. The torus isn't being used now anyhow. Since I restarted the project, I ended up going with a flat hexagonal tessellation that wraps at the edges like an Asteroids game. It's much simpler for now.
5) Conservation laws are necessary. In our universe, the speed of light is the speed of causality - it's the speed of information movement. Yet the same thing that moves at the maximum speed of the universe also provides the potential to keep things stable and at rest. This duality between stationary and moving information is why computation can happen. Too chaotic, and computation is easy to do but results are hard to retain. Too simple, and computation is hard to do but results are easy to retain. One needs a middle ground between these two. In any case, conservation laws make sure that no information storage is lost or gained - and that is effectively conservation of energy. It keeps computation requirements linear in the initial abstraction of mass the universe is started with (a toy sketch of the update and a conservation check follows below the list).
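To make (2) and (5) a bit more concrete, here is a minimal sketch (Python, with a placeholder rule of my own - not the actual rules I'm using) of a synchronous hex update on a wrapping grid, plus a check that the rule conserves total "mass":

    import numpy as np

    W, H = 32, 32                    # grid size; wraps in both directions
    # Axial-coordinate offsets of the 6 hex neighbours.
    HEX_NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def step(grid):
        """One synchronous tick: every cell replaced at once, no in-place writes."""
        new = np.empty_like(grid)
        for q in range(W):
            for r in range(H):
                centre = grid[q, r]
                neigh = [grid[(q + dq) % W, (r + dr) % H] for dq, dr in HEX_NEIGHBOURS]
                new[q, r] = rule(centre, neigh)
        return new

    def rule(centre, neigh):
        # Placeholder: each cell takes the value of its (-1, 0) neighbour,
        # i.e. contents just drift one hex per tick. Any rule that permutes
        # cell contents conserves the total; real rules can use centre too.
        return neigh[1]

    rng = np.random.default_rng(0)
    grid = rng.integers(0, 4, size=(W, H))
    total_before = int(grid.sum())
    for _ in range(100):
        grid = step(grid)
    assert int(grid.sum()) == total_before, "conservation violated by this rule"
    print("total 'mass' conserved over 100 ticks:", total_before)

The placeholder rule is just a drift; the point is the shape of the loop: synchronous replacement, six-neighbour locality, and an invariant you can assert every tick.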
Comments are welcome on the blog post. I will be putting up a github repository shortly but the code is changing very rapidly at the moment due to design decisions, so I don't wish to put it up yet -- it would just be a disservice to those interested.
"Turing completeness is necessary for intelligent life." Sure, but it would be extremely difficult to write a simulator with "interesting feedback" that is not turing complete ...
"In our universe, the speed of light is the speed of causality" In your simulation the step function is causality. I would not be too concerned if the simulator was not strict in conservation. As a matter of fact, the interesting parts of our universe have the property that they are in the middle of an entropy flow.
Just some random advice, maybe you don't find it useful, but I would not focus too much on these things ...
Energy is an abstraction dealing with information states. So one has to first define what the instigator of state change is... and what its state is. The whole thing is a self-referential nightmare if you don't conserve. Any kind of constant bias will take hold once vast numbers of compute cycles pass. But I don't necessarily disagree with pseudo-randomness like hidden cyclic variables and other types of statistical macroscopic conservation without strict micro-conservation.
"whole thing is a self referential nightmare if you don't conserve"
Indeed and that is probably best to focus on. Conservation, should serve that goal first. And I think you will see that as you implement things you can let go of strict versions.
1) But we don't have arbitrary amounts of paper or time.
2,3) Your structure is 2+1D, which is a kind of 3D. Comments about 3D objects were talking about spacetime regions, not space regions.
2b) Do you want locality? I'm not convinced spatial locality is how our universe operates. Also, non-locality in Minecraft interacting with the update engine is what caused a lot of seemingly non-deterministic effects. I guess it makes simulating easier, though.
4) Asteroids is a torus, just a flat torus. I had actually assumed that's what you meant in the first place. (Also, you left out hyperbolic spaces.)
We don't, but neither do universal simulations. Maybe I should have said finite-state Turing machines. With a vast enough universe, the finite number of states won't matter much when it comes to producing life. What fraction of the information in the universe is our bodies? And if you go further down to simpler, smaller species, the fraction gets even smaller. It's tiny.
2) Yes, basically a flat torus. Not the most beautiful topology.
3) Yes, I think locality is the right approach at least for starting out. This project is iterative so nothing is set in stone.
Thank you for the questions and comments. I guess I left quite a bit out. And right now, I'm in the middle of a lot of design decisions regarding fusion and decay. It's not easy to build a universe, even a simple one.
That's really cool. I see how you sort of built in decay through age.
I think the golden information principles for emergent life are easier to find than biological ones, but of course the requirements of space, time, etc. are much larger when one's simulation is at a lower level of abstraction. It really is a question of level of emergent behavior versus level of base behavior. Even QM can be emergent from deterministic automata. [1]
I have tried a few approaches to not having the simulator provide the copying function. But so far no luck. Do you have any concrete ideas?
I would suggest it needs to be a simulation in which self-organisation is possible. But also catalysis and self-catalysation (autocatalysis) [1]. From there it is a small step to self-copying and thus ... evolution.
Reproduction for RNA/DNA and cells, and later multicellular life, is an emergent property of some heavy-handed universal laws.
I would suggest forgetting about reproduction and focusing on principles of information storage and movement. Computation is really the cornerstone of reproduction. In any large enough space that allows computation, computers will pop up that copy themselves. By chance. They will then dominate if their resource requirements allow them to. That's the ticket. Emergent reproduction.
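As a crude illustration of what "looking for emergent copiers" could mean - using Conway's Game of Life purely as a stand-in for whatever the simulated physics ends up being, and with pattern recurrence as only a very weak proxy for genuine self-reproduction:

    import numpy as np

    def life_step(g):
        """Conway's Game of Life on a wrapping grid (stand-in physics)."""
        n = sum(np.roll(np.roll(g, dq, 0), dr, 1)
                for dq in (-1, 0, 1) for dr in (-1, 0, 1)
                if (dq, dr) != (0, 0))
        return ((n == 3) | ((g == 1) & (n == 2))).astype(np.uint8)

    def patches(g, k):
        """All k x k patches (top-left corner -> bytes), skipping near-empty ones."""
        H, W = g.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = g[i:i + k, j:j + k]
                if p.sum() >= 3:
                    yield (i, j), p.tobytes()

    rng = np.random.default_rng(1)
    g = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)
    k, dt = 5, 30
    seen = {patch: pos for pos, patch in patches(g, k)}   # patches in the initial soup

    for _ in range(dt):
        g = life_step(g)

    # Flag any non-trivial patch from the soup that reappears somewhere else later.
    copies = [(seen[patch], pos) for pos, patch in patches(g, k)
              if patch in seen and seen[patch] != pos]
    print(f"{len(copies)} patches reappeared elsewhere after {dt} steps")

Obviously common debris matches trivially, so a real detector would have to demand larger, rarer structures, but it shows the flavour: don't build copying into the rules, search for it.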
"computers will pop up that copy themselves" That is interesting idea. But do they pop up because the simulator provides the compute? Or because the simulator allows for the construction of structures that can compute?
I don't think you mean the first, as that is not too interesting for your goal. The latter is unlikely. How many possible configurations are there, and how many of those can compute? Let alone compute + copy? The universe will end before such a random entity appears.
Gravity causes matter to self-organise into globes with disks. Charged molecules + surface tension cause crystallisation. Organic molecules support self-catalysation.
From such organising principles there is a chance self-copying arises before the universe ends. Or before your EC2 budget runs out ;)
Yes, both. The sim provides the dynamics that allow computers to exist. At a basic level, even simple phenomena are computations. RNA polymerase performs a computation when it copies. And if we go really low level, pretty much everything is a computation - of course, not all computations are equally great for life.
You are assuming this universe ends, in a type of heat death or singularity (choose your poison). Not if gravity isn't part of the dynamics. And even then, the universe could be a cyclic one. [1]
Most likely EC2 is not the right approach. I have worked with CUDA and OpenCL, and buying a bunch of beefy Teslas or the new Titans would be more cost-effective.
Every simulator of any kind can be described as "compute". The question is, can the simulated environment itself support a new level of compute? And if so, how independent is that compute?
Redstone in Minecraft supports compute, but is not fully independent. That is, the "laws of physics" of Minecraft include special rules for redstone.
RNA polymerase is fully independent, the "laws of physics" do not have special rules for RNA.
So that last thing is the interesting bit in your effort, correct?
Now you seem to hope that the second-level compute comes about randomly. But for that to do anything before the end of time, the ratio between:
random states that compute / all possible random states
must be reasonable. And even then, one could argue that the way those random states come about should be nothing special in the "laws of physics", or it is still "cheating".
What I am trying to point out is that, in our universe, the big bang was a shitload of energy in a low-entropy state. Due to gravity it started self-organising. And due to many other self-organising effects, we got to self-copying ...
I am well aware. The simulation runs on a Turing machine... a computer. That's different from Turing completeness. Turing completeness means a Turing machine can be built inside the simulation.
Fully independent? Nothing is fully independent of the universe it resides in.
Cheating doesn't mean much. Cheating is making AI from the top down, if that's the case, because it's the extreme opposite end of emergence. And plenty of work is being done on that now. So if you manage to get life by "cheating", fuck it. It's life. It's a milestone.
There are other ways to avoid total chaos besides gravity. There just has to be a delicate balance between stability of structure and the ability to destroy structure and extract potential in order to compute, i.e. make new structure.
Yes, it's an ambitious thing. But it's a question of whether life is a product of information and computation, or a product of more specific principles. Who really knows.
We are talking past each other because we are focused on slightly different things.
"Fully independent? Nothing is fully independent of the universe it resides in."
Indeed. But there are levels of independence. That is, do the laws of the simulation 1) merely provide an environment in which there can be Turing completeness, or 2) is some aspect of that Turing completeness explicit in those laws? Again, redstone (2) vs RNA (1).
And in a similar vein, 3) is the world initialised with some low-entropy state that is merely left to run, or 4) is the simulator constantly generating random states?
If you can manage 1 and 3, that would be amazing. Anything less, might still be great, or nothing special. It will really depend on the exact nature of the simulation.
And I would wager that 1 and 3 can only be reached by simulating some sort of entropy flow that causes self-organisation + self-catalysation.
But maybe a much more compute-rich universe, as you seem to be planning (?), might work. You should surely try if you have ideas. My comments are by no means meant to discourage you!
---
Note that from the perspective of entropy, there are 4 levels of organisation:
a) direct, the structure is the end result of high entropy (for gravity this is globes)
b) indirect, the structure is caused by the flow but is not the end result (for gravity this is disks; meandering rivers, crystals or organic molecules are other examples)
c) self-catalysation, (for lack of a better term) like indirect, but the structure grows more than linearly because more structure creates more structure [1]
d) self-copying, like self-catalysation, but in discrete space/time steps
Somewhere after (c) you would get self-selection of the faster/better copiers ... which might lead to something that falls squarely within a recognisable definition of life.
"But its a question of whether life is a product of information and computation, or a product of more specific principles. Who really knows."
That is an interesting point. What would those more specific principles be? And maybe looking at life from the perspective of information and computation is not the most useful approach?
(That is why I said Turing completeness is not so important to focus on; most simulations like the ones we are talking about here are trivially Turing complete.)
"Who really knows" :) our universe is surely 1 and 3, and life is at least 3 emergent system steps away from its most fundamental laws.
Thanks for that link. And the breakdown. I agree with everything you say here. You have a great handle on the task at hand. Perhaps you'd like to exchange ideas / collaborate? Can you shoot me an email at andy at scrollto.com?
In some respects, but consciousness remains an unknown with regard to it.
I am defining life as intelligent computing machines that remain stable through consumption of their environment. Whether or not the life created is conscious is a deep question.
I wonder who supplied the Great Programmer with the Turing machine in the first place. What separates Turing machines from mincing machines in that context? And an important premise is an implied equivalence between an infinite sequence of states and infinite states. "Before" the creation of the world there is no "time" per se. So how is the Turing machine, where sequence plays a pivotal role, supposed to work in that exo-time context?
> Since every change has a cause, the origin of the chain of causes and effects must be a cause without cause (first cause), which is a primary source of motion without motion.
The thought is logical, but there are also cosmological theories that do not presuppose the existence of a "first unmoved mover".
Hardly a strange coincidence. How strange is it for posts to get at least 42 comments at some stage in their existence, and for this to be one of them?