“Unfounded assumption.” That’s why they’re called assumptions: we don’t have to found them. We don’t have to, but we also can, so let’s found some of them.

“No reason speed of light would be processing speed.” No reason it isn’t, and actually there are some reasons it is. The SOL limits the rate of information propagation (ignoring quantum entanglement, which may be like two or more particles being initialized with a shared key to a Universe memcache). The Planck constant limits the amount of information. These two things provide clear limits on how fast and how much information can propagate, which is a reason to assume that the SOL, or the PC, or their product tracks the inherent or imposed computation limits of the Universe computer.

“The simulation speed (from POV of observers in simulation) and the simulator speed (from POV of observers outside the simulation) are unrelated, because even if the computer was suspended, we would not notice, because time also would have stopped.” If the effect is global, this is correct. If I pause the Universe computer, then no one notices they’ve stopped, because their noticing has also stopped. If I rewind and restore from a backup, then no one notices they’ve gone back, Groundhog Day style, unless observer memories are stored separately from the main Universe state, in which case someone’s information can persist between restores (as happens in Groundhog Day and Edge of Tomorrow). So if the Universe computer has one processing loop, one core, and that slows down, then everything slows down and no one notices.

However, what if different regions each do their own processing and then update each other by exchanging photons (and maybe operate on a shared memcache, if you want to get quantum)? In that case a local slowdown is not global, which means it can be observed in a simple manner, the same way relativistic time dilation is observed. Synchronize two watches, send one observer to the event region with one watch, and keep the other watch here. When the observer returns, measure the time difference, correcting for any effects induced by velocity or gravity: is there some left over? Is there some slowdown as a result of the observer having been in a region where computation had to slow to maintain precision (Planck constant) because there was so much going on? Or was precision sacrificed (Planck constant) for speed? What optimization choices were made in that part of the simulation? If time slows we can measure it; if the SOL slows we can measure it (with a watch whose movement is a laser bouncing between mirrors); if the Planck constant changes we can measure that too. So if there are local optimization choices being made, they can be measured, and the proposed experimental construction remains a workable one. There is evidence that constants have changed over time (perhaps as the creators made optimizations?) and that they vary across regions (perhaps due to run-time optimization choices of the kind we are proposing to test here). One untestable (because it can’t separate matter interaction from computation) but intuitive hypothesis for why the SOL varies per medium is that there is far less computation to be done when photons go through a vacuum and interact with nothing than when they go through a dense material and interact with many things.
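As a concrete illustration of the twin-watch comparison above, here is a minimal sketch of the bookkeeping, assuming a simplified weak-field scenario; the function, the trip parameters, and the “measured” number are hypothetical placeholders, not data from any real experiment.

```python
import math

# Sketch of the proposed twin-watch test. Predict the ordinary relativistic
# offset between the travelling watch and the stay-at-home watch, subtract it
# from what was actually measured, and check whether any unexplained residual
# is left over -- the quantity a "local computational load" effect would show
# up in. The home watch is treated as an idealized distant, stationary
# reference; its own motion and potential are ignored in this sketch.

C = 299_792_458.0   # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2

def predicted_offset(trip_time_s, speed_m_s, mass_kg, distance_m):
    """Estimated (travelling watch - home watch) offset in seconds.

    speed_m_s            : cruise speed of the travelling watch
    mass_kg, distance_m  : mass of, and distance to, the body dominating
                           gravity at the event region (hypothetical values)
    """
    velocity_term = math.sqrt(1.0 - (speed_m_s / C) ** 2) - 1.0   # kinematic dilation
    potential_term = -G * mass_kg / (distance_m * C ** 2)         # weak-field gravitational dilation
    return trip_time_s * (velocity_term + potential_term)

# --- hypothetical inputs, for illustration only ---
trip_time = 3.0e7           # about a year of coordinate time, in seconds
measured_offset = -4.6e-4   # what the returned watch "showed" (made up)

expected = predicted_offset(trip_time,
                            speed_m_s=3.0e4,    # 30 km/s cruise
                            mass_kg=2.0e30,     # roughly one solar mass near the event
                            distance_m=1.5e11)  # roughly 1 AU away from it

residual = measured_offset - expected
print(f"expected relativistic offset: {expected:+.3e} s")
print(f"measured offset:              {measured_offset:+.3e} s")
print(f"unexplained residual:         {residual:+.3e} s")
# A residual consistent with zero means ordinary relativity accounts for the
# whole difference; a significant leftover is what this proposal looks for.
```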
“Any measurements of time distortion done inside the system would be unobservable.” Actually, this does not seem to be the case, even for past experiments. Time dilation can be measured when it results from local effects (such as near-SOL travel or gravity), and those experiments have validated the theory of relativistic time dilation. Watches going out of sync because of time dilation is a testable phenomenon. A theory of time dilation due to localized resource constraints would be similarly testable.

“External time is not internal time, any slowdown will be unobservable.” Not if the effects are local, with different regions making their own optimization choices. We can send a watch to the region of the high-load event, and when it returns we can see whether it slowed down relative to its twin here.

“The real universe may be more than capable of simulating without slow down, the constraints may be artificial to keep the simulation in check.” Exactly, it might be. Whether the limits of the Universe computer are inherent or imposed, if the effects occur locally we can test them.

“Benchmarking the universe.” Yes.

“Crash a few galaxies together.” Well, yes. Just observe when this happens and figure out a way to use the data we already have for those events to test the theory that the Planck constant or the SOL is diminished by these effects.

Taking it Further

What if the gravity effects from which we hypothesised the existence of dark matter were really just local resource constraints on the SOL or on processing speed, resulting from optimization choices made when large objects like galaxies are doing something load-heavy?

What if gravity itself is an optimization? The more gravity you have, the fewer things you have to calculate, because the more you restrict movement, the fewer possible system microstates there are (see the note at the end for this intuition in standard notation). Broadly, infinite gravity is a black hole with zero observable microstates, while zero gravity is open space, with infinitely many possible microstates. Gravity could then vary from place to place based on optimization choices, explaining the anomalous dark-matter observations by attributing the observed effects to changes in gravitation rather than to extra matter.

However, all of these consequences are just theorizing. What we have is a theory that is testable.

So should we despair that “nothing is real”? Hold your horses. Even if such an effect were validated by experiment, it’s possible that the Universe computer and its simulation are simply an analogy for some physical principle at work. If that’s true, it’s still a neat analogy. After all, all our theories are really just analogies that help us think about and model the real world.
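The note promised above: the gravity-as-microstate-reduction intuition restated in Boltzmann’s notation. This is only a restatement of the intuition as given, not a derivation; the limiting cases are the ones asserted in that paragraph, and the symbols are the standard ones.

```latex
% Boltzmann entropy: the information needed to pin down one microstate
% out of \Omega accessible ones.
\[
  S = k_B \ln \Omega
\]
% The intuition as stated: stronger gravity => tighter confinement
% => smaller \Omega => less to compute. The asserted limits:
\[
  \Omega \to 1 \;\Rightarrow\; S \to 0 \quad \text{(the "infinite gravity" case)},
  \qquad
  \Omega \to \infty \;\Rightarrow\; S \to \infty \quad \text{(open space)}.
\]
```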