1) Turing completeness is necessary for intelligent life. The Church-Turing thesis [1] grew out of Turing's definition of his machines in terms of what a person could do with endless paper and pencil. Effectively, we are Turing machines that consume energy in order to remain stable, compute, and reproduce. We are, of course, possibly more than that -- but Turing completeness is a base property that seems like it should be met. With the right information storage/movement principles, any calculation should be possible, which would realize Turing completeness naturally rather than by contrivance -- it would be natural emergence.
2) The hexagonal tessellation on the torus is just the space. Not time. Time is the standard cellular-automaton clock tick, where at t+1 every cell is replaced with a new value, atomically. Every cell's new value depends on its 6 immediate neighbors and itself. This is not only parallelizable but also preserves locality.
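A minimal sketch of one such synchronous tick, assuming the hex grid is stored as a rectangular array in "odd-r" offset coordinates that wraps at the edges; the update rule here (sum of self plus 6 neighbors, mod 2) is a placeholder of my own, not the project's actual rule:

```python
# Neighbour offsets (row, col) depend on row parity in an odd-r hex layout.
EVEN_ROW = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
ODD_ROW = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]

def tick(grid):
    """One atomic step: every new value is computed from the OLD grid only."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for r in range(h):
        offsets = ODD_ROW if r % 2 else EVEN_ROW
        for c in range(w):
            total = grid[r][c]  # the cell itself...
            for dr, dc in offsets:  # ...plus its 6 hex neighbours,
                total += grid[(r + dr) % h][(c + dc) % w]  # wrapping toroidally
            nxt[r][c] = total % 2  # hypothetical placeholder rule
    return nxt
```

Since `nxt` is built entirely from the old grid, the rows (or any partition of cells) can be computed in parallel, which is the parallelizability point above. Note that wrapping an odd-r layout is only geometrically seamless when the grid height is even.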
3) 3D universes can come later. They are much more complex. This universe will be built iteratively. If 2D doesn't pan out, or it seems like 3D would improve it in some way, then it's not out of the question.
4) An infinite space without gravity has issues with conservation laws. A particle can fly off into space, never interact, and basically be "lost" as an outlier. With respect to our universe, the possible topologies are an infinite Euclidean plane, as you mention, or a boundaryless topology so large that it appears flat at small scales. The torus isn't being used now anyhow. Since I restarted the project, I've gone with a flat hexagonal tessellation that wraps at the edges like an Asteroids game. It's much simpler for now.
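A tiny illustration of why the Asteroids-style wrap avoids the "lost particle" problem: with modular coordinates on a W x H grid (a flat torus), a particle with constant velocity can never escape, and its orbit closes after at most W * H steps. The names and sizes here are illustrative, not from the project's code:

```python
W, H = 12, 8  # hypothetical grid dimensions

def step(pos, vel):
    """Advance one tick; modular arithmetic is the edge wrap."""
    x, y = pos
    dx, dy = vel
    return ((x + dx) % W, (y + dy) % H)

start = (3, 5)
pos = start
for t in range(1, W * H + 1):
    pos = step(pos, (1, 1))
    if pos == start:
        break
# with velocity (1, 1) the orbit closes after lcm(12, 8) = 24 steps
```

On an infinite plane the same particle would drift away forever; here it must revisit its starting cell, so nothing leaks out of the bookkeeping.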
5) Conservation laws are necessary. In our universe, the speed of light is the speed of causality -- it's the speed of information movement. Yet the same thing that moves at the maximum speed of the universe also provides the potential to keep things stable and at rest. This duality between stationary and moving information is why computation can happen. Too chaotic, and computation is easy to do but results are hard to retain. Too simple, and computation is hard to do but results are easy to retain. One needs a middle ground between the two. In any case, conservation laws make sure that no information storage is lost or gained -- and that is effectively conservation of energy. It keeps computation requirements linear in the initial abstraction of mass the universe is started with.
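The conservation property can be made concrete with a toy 1-D wrapped universe. The rule below (every non-empty cell passes one unit of "energy" to its neighbor) is a stand-in of my own, but the invariant it demonstrates is the point: because the update only moves units and never creates or destroys them, the total is constant at every tick:

```python
def drift(cells):
    """One tick: each non-empty cell sends one unit to the next cell, wrapping."""
    n = len(cells)
    nxt = list(cells)
    for i, e in enumerate(cells):
        if e > 0:
            nxt[i] -= 1          # unit leaves this cell...
            nxt[(i + 1) % n] += 1  # ...and arrives one cell over
    return nxt

universe = [3, 0, 1, 0, 2, 0]  # arbitrary initial "mass"
total = sum(universe)
for _ in range(10):
    universe = drift(universe)
    assert sum(universe) == total  # conservation holds at every tick
```

The total work per tick is proportional to the number of cells, which is fixed by the initial configuration -- the "linear in the initial abstraction of mass" claim above, in miniature.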
Comments are welcome on the blog post. I will be putting up a GitHub repository shortly, but the code is changing very rapidly at the moment due to design decisions, so I don't wish to put it up yet -- it would just be a disservice to those interested.
"Turing completeness is necessary for intelligent life." Sure, but it would be extremely difficult to write a simulator with "interesting feedback" that is not Turing complete ...
"In our universe, the speed of light is the speed of causality" In your simulation the step function is causality. I would not be too concerned if the simulator were not strict about conservation. As a matter of fact, the interesting parts of our universe have the property that they sit in the middle of an entropy flow.
Just some random advice, maybe you don't find it useful, but I would not focus too much on these things ...
Energy is an abstraction dealing with information states. So one has to first define what the instigator of state change is, and what its state is. The whole thing is a self-referential nightmare if you don't conserve. Any kind of constant bias will take hold as vast numbers of compute cycles pass. But I don't necessarily disagree with pseudo-randomness like hidden cyclic variables and other kinds of statistical macroscopic conservation without strict micro conservation.
"whole thing is a self-referential nightmare if you don't conserve"
Indeed, and that is probably best to focus on. Conservation should serve that goal first. And I think you will see that, as you implement things, you can let go of strict versions.
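One way to read the "hidden cyclic variables" idea above, sketched with names of my own: a fixed cycle of biases that sums to zero is injected each tick, so the total is NOT conserved tick-to-tick, yet the books balance exactly once per full cycle -- macroscopic conservation without strict micro conservation:

```python
CYCLE = [2, -1, 3, -4, 1, -1]  # hidden cyclic variable; biases sum to zero

def noisy_tick(total, t):
    """Each tick perturbs the total by the current phase of the hidden cycle."""
    return total + CYCLE[t % len(CYCLE)]

total = 100
for t in range(len(CYCLE)):
    total = noisy_tick(total, t)
    # mid-cycle, total drifts away from 100...
# ...but after one full cycle it is restored exactly
```

A simulator could use this kind of scheme to relax strict per-tick conservation while still preventing the constant bias from accumulating over vast numbers of compute cycles.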
1) But we don't have arbitrary amounts of paper or time.
2,3) Your structure is 2+1D, which is a kind of 3D. Comments about 3D objects were talking about spacetime regions, not space regions.
2b) Do you want locality? I'm not convinced spatial locality is how our universe operates. Also, non-locality in Minecraft interacting with the update engine is what caused a lot of seemingly non-deterministic effects. I guess it makes simulating easier, though.
4) Asteroids is a torus, just a flat torus. I had actually assumed that's what you meant in the first place. (Also, you left out hyperbolic spaces.)
We don't, but neither do universal simulations. Maybe I should have said finite-state Turing machines. With a vast enough universe, the finite # of states won't matter much when it comes to producing life. Consider what fraction of the information in the universe our bodies are -- or, going down to simpler, smaller species, the fraction gets even smaller. It's tiny.
2) Yes, basically a flat torus. Not the most beautiful topology.
3) Yes, I think locality is the right approach at least for starting out. This project is iterative so nothing is set in stone.
Thank you for the questions and comments. I guess I left quite a bit out. And right now, I'm in the middle of a lot of design decisions regarding fusion and decay. It's not easy to build a universe, even a simple one.
[1] https://en.wikipedia.org/wiki/Church–Turing_thesis