It isn't strictly necessary to invoke QM to make sense of this, though QM and GR have their place in the detailed formulation of various functions.<p>With a good read and a little work, one finds that one's experience of the "world" (whatever that is; fortunately, we don't need to know; cf. below) can be summarized as:<p><pre><code> Xi = Σj Pji Ij
Ai = Di(Xi)
Iij = ΣPijAi
</code></pre>
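A minimal sketch of the three update rules above, assuming a toy population of three agents and made-up probability weights (the damped-echo decision function and all initial values here are invented for illustration):

```python
import random

random.seed(0)
n = 3  # toy population: you plus two others

# P[j][i]: probability weight with which j's impact reaches i (invented values).
P = [[random.random() for _ in range(n)] for _ in range(n)]

# Placeholder decision function Di: just a damped echo of experience.
def D(x):
    return 0.5 * x

# Initial impacts Ij, i.e. "the world" as a space of actions.
I = [1.0, 0.2, -0.4]

for step in range(5):
    X = [sum(P[j][i] * I[j] for j in range(n)) for i in range(n)]  # Xi = Σj Pji Ij
    A = [D(X[i]) for i in range(n)]                                # Ai = Di(Xi)
    I = [sum(P[i][j] * A[i] for i in range(n)) for j in range(n)]  # impacts for next round
```

The loop just closes the feedback: impacts shape experience, experience drives actions, actions become the next round of impacts.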
Your experience (Xi) is the probability-weighted sum of the impacts (Ij) of all actions, yours and others', a.k.a. "the world", which we model as a space of actions; this includes your digestion, my writing this, your reading this, the temperature in your surroundings, what happened to each of us yesterday, etc.<p>Your actions, Ai, result from a decision function (Di) that takes your experience-space as its only input; the decision function is arbitrarily complex, but likely has order less than O(n**c), since you make decisions in real time (dithering is still a decision, in this sense; you are choosing to continue to attempt to choose to act)... ...unless we are a tightly coupled simulation in NP space and achieve momentary "consciousness" (whatever that means) whenever the simulation reaches "consensus" (but that's overly complicated, so let's use Occam's Razor and stick with the simpler model... ...at least until it fails).<p>In this case, the limit on "c" would be set by your wetware: evolutionarily, we are optimized for decisions with order less than<p><pre><code> O(n**c)
</code></pre>
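The "eaten by tigers" constraint can be sketched as a deadline on deliberation: the decision function may keep refining its answer, but must return whatever it has when the budget expires (a hypothetical illustration; `decide`, `score`, and the budget are all invented):

```python
import time

def decide(options, score, budget_s=0.01):
    """Evaluate options until the time budget runs out; any answer beats none."""
    deadline = time.monotonic() + budget_s
    best, best_score = options[0], float("-inf")  # default: act on *something*
    for opt in options:
        if time.monotonic() > deadline:
            break  # out of time: the tiger doesn't wait for the optimum
        s = score(opt)
        if s > best_score:
            best, best_score = opt, s
    return best

# With a generous budget, deliberation finds the true optimum (42 here).
choice = decide(range(100), lambda x: -(x - 42) ** 2, budget_s=1.0)
```

Shrink the budget and the quality of the returned option degrades gracefully instead of the caller hanging, which is the point: dithering past the deadline is itself a (fatal) choice.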
Any other decisions get us eaten by tigers (because we took too long). The better our wetware, the more complex and thorough Di can be while still producing results in time to keep us alive.<p>This means our basic decision-making apparatus is short-term optimized, suggesting that many of our decisions about long-term effects will be "bad", i.e., sub-optimal, because we decide for survival and fitness, not for optimal, or even good, long-term consequences.<p>Your impact on anything, including yourself (i) and everything else (Σj), Iij, is the probability-weighted sum of your actions.<p>This is an even simpler view than the article expresses: he suggests it when discussing that positing the existence of a world, W, is unnecessary, but then digresses to a theory of conscious agents... ...which is also unnecessary, but which may be illuminating.<p>In the above formulation, a world, W, is replaced with probability-weighted mappings of actions (of one's self and of others) onto impacts (on one's self and on others). These probability mappings may be arbitrarily complex; determining their order is a real poser....<p>At the very least, any Ij that takes more than time T to "reach" you does not impact your current Xi (though it may impact future Xi).<p>Interesting. The probability mapping could have a time-based (light cone?) component, or the simple formulation could be replaced with one involving Xif(Xic, Ijic).
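That time-based component could be sketched as a propagation delay on each impact: an Ij "arrives" only after its delay has elapsed, so it shapes future experience rather than the current Xi (a hypothetical sketch; the tuple layout is invented):

```python
def experience_at(t, impacts):
    """impacts: (emitted_at, delay, weight, value) tuples; only arrived ones count."""
    return sum(w * v for (t0, d, w, v) in impacts if t0 + d <= t)

impacts = [
    (0, 1, 0.5, 2.0),  # nearby event: arrives at t=1
    (0, 5, 0.9, 4.0),  # distant event: arrives at t=5
]
early = experience_at(2, impacts)  # only the nearby impact has reached you
late = experience_at(6, impacts)   # both have
```

The `t0 + d <= t` filter is exactly the light-cone-ish cutoff: events outside your reachable past contribute nothing to the current Xi, only to later ones.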