A simple test of whether the Universe is a hologram (or a computer)

8 points by humanarity about 10 years ago
Assume the speed of light (and the uncertainty principle, or the product of the Planck constant and the speed of light) is the processing speed limit of the computer which simulates the universe.

Also assume that the universe has finite computational resources. Therefore resources have to be diminished in one location when they are in demand in another location.

The simulation of a complex event, such as some highly involved, very low redundancy, fast-moving and extensive event (like a very large, very fast collision between two highly involved structures; there are better examples), will result in significant local load on the universe computer.

Given our two assumptions above, we have a measurable result:

Either the speed of light (or the Planck constant) will be diminished in that local region, or they or their product will be diminished in another region.

If the diminishment is local (or within a testable neighbourhood), this can be tested.

If the diminishment is not in a testable neighbourhood, perhaps other experimental constructions will work.
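As a rough sketch of what the proposed test would reduce to (function names and numbers here are hypothetical, not anything from the post), one could compare a speed-of-light value measured near a high-load event against the reference value and ask whether the dip exceeds the combined measurement uncertainty:

```python
# Hypothetical sketch of the comparison proposed above: given a
# speed-of-light measurement taken near a "high-load" event and a
# reference value, ask whether the local value is diminished by more
# than the combined measurement uncertainty.

C_REFERENCE = 299_792_458.0  # m/s, defined value of c

def significance_of_dip(c_local: float, sigma_local: float,
                        sigma_reference: float = 0.0) -> float:
    """Return the dip below the reference c in units of combined sigma.

    A large positive value would indicate a locally diminished c;
    values near zero are consistent with no effect.
    """
    combined_sigma = (sigma_local ** 2 + sigma_reference ** 2) ** 0.5
    return (C_REFERENCE - c_local) / combined_sigma

# Example: a local time-of-flight measurement with 1 km/s uncertainty.
print(significance_of_dip(c_local=299_792_455.0, sigma_local=1_000.0))
# ~0.003 sigma: no detectable dip at this precision.
```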

16 comments

johnloeber about 10 years ago
"Assume the speed of light (or ...) is the processing speed limit for the computer which simulates the universe."

Wholly unfounded assumption. If the universe is a simulation, there's no reason why any particular constant would be the processing speed limit.

Indeed, even talking about a *processing speed limit* is probably an incoherent concept when considering the question of whether the universe is a simulation. If our universe is a simulation, then some other entity is simulating our universe. We know absolutely nothing about the world of that entity, and have no grounds to conclude or assume anything whatsoever about the machinery of the simulator.

The egregious mistake in this "test" is the implicit assumption that the machinery some entity (in a universe we know nothing about) would use to simulate our universe would be a computer of the same basic architecture as the one sitting in your bedroom.
DaFranker about 10 years ago
There are so many wrong assumptions in this OP, most of which the OP didn't state explicitly, that for once I don't even feel like breaking it down into chunks:

> [quote of the whole thing]

No. When you run Dwarf Fortress on a slower computer, or do something in it that taxes your computer resources more, the dwarves don't suddenly sit up and take notice that something went faster in-game than it was supposed to. Everything just takes longer to calculate, at once, _including_ the dwarves' perceptions and any (in-simulation) tools they might use to measure this.

QED.
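This point can be made concrete with a toy loop; a minimal sketch (the `time.sleep` call stands in for host load, and all names are illustrative) showing that an in-simulation clock counts ticks, not wall-clock time:

```python
import time

def run(ticks: int, host_delay_s: float) -> int:
    """Advance a toy simulation; in-sim time is just the tick count."""
    in_sim_clock = 0
    for _ in range(ticks):
        time.sleep(host_delay_s)  # host slows down under "load"...
        in_sim_clock += 1         # ...but in-sim time still advances one tick
    return in_sim_clock

fast = run(ticks=100, host_delay_s=0.0)
slow = run(ticks=100, host_delay_s=0.01)
print(fast == slow)  # True: identical in-sim elapsed time either way
```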
monk_e_boy about 10 years ago
I would assume the frames per second would drop, and since we are the ones being simulated, we wouldn't notice. Our clocks would still read the same, no matter how 'fast' or 'slow' the simulation is running. Whether terms like fast and slow even make sense to the machine simulating us is another question; a linear, single-direction timeline could be a construct they are simulating. Maybe they live in a universe with 3D time or something equally different.
dooptroop about 10 years ago
Ooh, I wanna play too!

And as more and more classical information in one region of space has to be simulated (alluding to wave-function collapse), less and less information can be known about distant matter and events, which looks like universal expansion to our mortal eyes, but it's actually our instance scaling down.
brudgers about 10 years ago
There's a linguistic problem. Assume there are real-computers [or real-holograms]. Inside the simulation the word "computers" refers to simulated-computers. Even if we invent the term "real-computer" inside the simulation, its fully qualified referent is simulated/real-computer. Global variables aren't global to the extent their referent persists across system boundaries.

Sysadmin privileges are required to configure the same name to the same value across systems. Once a proof assumes there is a sysadmin, we have to rely on faith along with our reason.
Mosix1975 about 10 years ago
I don't think this would be testable, because in the region of diminished resources, time would also be diminished (slowed down, but not necessarily stopped).

Slowing time in the diminished-resource region would also slow any local observer, making his observations seem normal. If other resources are diminished, that will also interfere with observation.

Einstein's work on simultaneity also prohibits a single observer from observing locally and in another region at once (i.e., there is no God-like view).

It appears that physics has a built-in mechanism for keeping 'observers' and 'the observed' in check.
paulvs about 10 years ago
When you reach for your phone, there's always a time when your hand is only halfway there, so theoretically it'll never get there (Zeno's paradox). Since we know from practice that it is possible to grab your phone, the Great Computational Machine that runs the universe must be using floating-point numbers with a finite precision that rounds to the nearest whole integer when your hand is about to arrive.

Clearly, proof that we live inside a computer simulation ;)
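Playing along with the joke: in IEEE-754 doubles the halving really does terminate, because the remaining distance eventually falls below the precision of the running total. A minimal sketch:

```python
# Zeno's halving in double precision: the hand reaches the phone in a
# finite number of steps once the half-step rounds the total up to 1.0.
hand, phone = 0.0, 1.0
steps = 0
while hand < phone:
    hand += (phone - hand) / 2  # cover half the remaining distance
    steps += 1
print(steps, hand == phone)  # about 54 steps, True
```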
socket0 about 10 years ago
As these measurements and observations are done inside the system, they're completely unreliable. Every time the computer clock ticks, a second goes by. It doesn't matter how fast it's ticking to an observer watching the screen; for code on the inside, a tick means a second. The outside observer would notice the processor struggle to render all those details, slowing everything down to a crawl, but inside the code every tick will still signal a second.
Lannister about 10 years ago
Since we are making assumptions, I will make the assumption that each point in the universe is capable of calculating all possible events taking place at that point. As such, there need not be a drawing of resources from one place to another. Just as each cell contains the DNA for our bodies, so each point in the universe shall be capable of calculating all possible events occurring at that particular point.
thebezet about 10 years ago
"Also assume that the universe has finite computational resources. Therefore resources have to be diminished in one location when they are in demand in another location."

The "universe computer" could easily be many times larger than our universe and have enough resources to simulate everything, everywhere.
tacone about 10 years ago
> "Also assume that the universe has finite computational resources. Therefore resources have to be diminished in one location when they are in demand in another location."

With all due respect, it's much more likely the universe is multi-core, highly parallel, and enforces proper resource allocation.
ajuc about 10 years ago
You are mistaking internal time for external time in the simulation.

Even if the universe runs on a Turing machine doing one instruction per external year for one external year, and 1000 instructions per second for the next external year, you won't notice the difference from the inside.

Only observers from outside will notice.
shultays about 10 years ago
The real universe might have a higher speed of light and thus better CPU power than we can imagine. Our speed of light might be just a define in a header file so the simulation can run at reasonable speeds on that supercomputer.
minerb50 about 10 years ago
It is as if you're trying to run a benchmark on a system: pushing the limits until frames start dropping.
chrisBob about 10 years ago
That seems perfectly reasonable. Let me just crash a few galaxies together and see what happens.
humanarity about 10 years ago
"Unfounded assumption." That's why they're called assumptions: so we don't have to found them. We don't have to, and we also can, so let's found some of them.

"No reason the speed of light would be the processing speed." No reason it isn't, and actually there are some reasons it is. The SOL limits the rate of information propagation (ignoring quantum entanglement, which may be like two or more particles being initialized with a shared key to the Universe memcache). The Planck constant limits the amount of information. These two things provide clear limits on how fast and how much information can propagate, which is a reason that can contribute to a choice to assume that the SOL or the PC or their product tracks inherent or imposed computation limits of the Universe computer.

"The simulation speed (from the POV of observers in the simulation) and the simulator speed (from the POV of observers outside the simulation) are unrelated, because even if the computer was suspended, we would not notice, because time also would have stopped." If the effect is global, this would be correct. If I pause the Universe computer, then no one notices they've stopped, because their noticing has also stopped. If I rewind and refresh from a backup, then no one notices they've gone back Groundhog Day style, except if observer memories are stored separately from the main Universe state, then someone's information can persist between refreshes (as happens in Groundhog Day and Edge of Tomorrow). So if the Universe computer has one processing loop, one core, and that slows down, then everything slows down, and no one notices.

However, what if different regions each do their own processing and then update each other by exchanging photons (and maybe operating on shared memcache, if you want to get quantum)? In this case a local slowdown will not be observed globally, meaning that it can be observed in a simple manner, the same way that relativistic time dilation is observed. Synchronize two watches, send one observer to the event region with one watch, and keep the other watch here. When the other observer returns, measure the time difference (correcting for any effects induced by velocity or gravity). Is there some left over? Is there some slowdown as a result of the observer having been present in a region where computation had to slow to maintain precision (Planck constant) because there was so much going on? Or was precision sacrificed (Planck constant) for speed? What optimization choices were made in that part of the simulation? If time slows we can measure it, if the SOL slows we can measure it (with a watch whose movement is a laser bouncing between mirrors), and if Planck changes we can also measure it. So if there are local optimization choices being made, these can be measured, and the experimental construction proposed remains a workable one. There is evidence that constants have changed over time (perhaps as the creators made optimizations?) and change over regions (perhaps due to run-time optimization choices, as we are proposing to test here). One untestable (because it can't separate matter interaction from computation) intuitive hypothesis for why the SOL varies per medium is that there's far less computation to be done as photons go through a vacuum and interact with nothing than when they go through a dense material and interact with many things.

"Any measurements of time distortion done inside the system would be unobservable." Actually this seems not to be the case, even with past experiments. Time dilation can be measured when it results from local effects (such as SOL travel or gravity), and these experiments have validated the theory of relativistic time dilation. Watches going out of sync because of time dilation is a testable phenomenon. Evaluating the theory of time dilation due to localized resource constraints will be similarly testable.

"External time is not internal time; any slowdown will be unobservable." Not if the effects are local, with different regions making their own optimization choices. We can send a watch to the region of the high-load event, and when it returns we can see if it slowed down relative to its twin here.

"The real universe may be more than capable of simulating without slowdown; the constraints may be artificial, to keep the simulation in check." Exactly, it might. Whether they are inherent or imposed limits of the Universe computer, if the effects occur locally we can test them.

"Benchmarking the universe." Yes.

"Crash a few galaxies together." Well, yes. Just observe when this happens and figure out a way to use the data we already have for those events to test the theory of the Planck constant or the SOL being diminished by these effects.

Taking it further

What if the gravity effects from which we hypothesised the existence of dark matter were really just local resource constraints on the SOL or processing speed, resulting from optimization choices when large objects like galaxies are doing something load-heavy?

What if gravity itself is an optimization? The more gravity you have, the fewer things you have to calculate, because the more you restrict movement, the fewer possible system microstates there are. Broadly, infinite gravity is a black hole with 0 observable microstates, while 0 gravity is open space, with infinite possible microstates. Gravity could then change from place to place based on optimization choices, explaining the anomalous dark-matter observations by assuming the effects resulted from changes in gravitation rather than extra matter.

However, all these consequences are just theorizing. What we have is a theory that is testable.

So should we despair that "nothing is real"? Hold your horses. Even if such an effect were validated by experiment, it's possible that the Universe computer and simulation is simply an analogy for some physical principle at work. If that's true, it's still a neat analogy. After all, all our theories are really just analogies to help us think about and model things in the real world.
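The watch-comparison protocol described above reduces to a small calculation: predict the conventional relativistic lag of the traveling watch, subtract it from the measured lag, and treat any residual as the candidate "load" signal. A minimal sketch, assuming only first-order corrections and purely illustrative numbers (the `measured` value is a placeholder a real experiment would supply):

```python
import math

C = 299_792_458.0  # m/s

def predicted_lag(proper_time_s: float, v_m_per_s: float,
                  delta_phi_j_per_kg: float) -> float:
    """First-order lag of the traveling watch behind its stay-at-home twin:
    velocity dilation plus the gravitational-potential difference
    (traveler minus home, in J/kg; more negative = deeper = slower)."""
    velocity_term = 0.5 * (v_m_per_s / C) ** 2
    gravity_term = -delta_phi_j_per_kg / C ** 2
    return proper_time_s * (velocity_term + gravity_term)

year = 3.156e7  # seconds
conventional = predicted_lag(year, v_m_per_s=1.0e4, delta_phi_j_per_kg=-1.0e6)
measured = conventional  # placeholder: the actual watch comparison goes here
residual = measured - conventional
print(f"residual anomaly: {residual:.3e} s")  # nonzero beyond the error bars
                                              # would be the hypothesized effect
```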