Carroll is a big advocate of the Many Worlds Interpretation, so it's nice to see some other interpretations getting a decent hearing in this article, as well as a decent treatment of some academic concerns about MWI. That's a testament to his humility as a researcher.

I must admit I struggle with the MWI Born rule derivations based on rational credence. I don't see why proving that one ought to assign credence in such and such a way is sufficient to prove that that's the way nature *is*. It feels too much like deriving an "is" from an "ought", although in a slightly different way than what Hume objected to!
This is related to his research. His latest podcast goes into it more deeply (there is a transcript): https://www.preposterousuniverse.com/podcast/2019/09/09/63-solo-finding-gravity-within-quantum-mechanics/

The goal is to derive emergent spacetime and gravity from quantum mechanics.

Some features of their theory (a toy numerical illustration of the entanglement point follows after the list):

* Finite-dimensional Hilbert space. Quantum field theory gets the boot.

* Spacetime is degrees of freedom entangled in such a way that a semi-classical spacetime geometry emerges. Things are local because they are entangled, not the other way around.

* Spacetime expands because initially unentangled degrees of freedom become entangled with the rest of the universe.
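To make the second bullet a bit more concrete, here is a minimal numpy sketch, entirely a toy of my own and not Carroll's actual construction: in a finite-dimensional Hilbert space, the entanglement between two degrees of freedom can be quantified by the von Neumann entropy of a reduced state, and the emergent-spacetime program reads high entanglement as "these degrees of freedom are close".

    import numpy as np

    # Toy sketch (not the actual construction): "closeness" of two degrees
    # of freedom read off from their entanglement, measured by the von
    # Neumann entropy of the reduced density matrix.

    def von_neumann_entropy(rho):
        """Entropy -Tr(rho log2 rho) of a density matrix, in bits."""
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]          # drop numerical zeros
        return float(-np.sum(eigvals * np.log2(eigvals)))

    def reduced_state(psi, dim_a, dim_b):
        """Partial trace over subsystem B of a pure state on A (x) B."""
        m = psi.reshape(dim_a, dim_b)
        return m @ m.conj().T

    # Maximally entangled pair (Bell state): each half carries 1 bit of
    # entanglement entropy -- maximally "close" in the emergent geometry.
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    print(von_neumann_entropy(reduced_state(bell, 2, 2)))     # -> 1.0

    # Product state: zero entanglement -- maximally "far apart".
    product = np.kron([1, 0], [0, 1]).astype(complex)
    print(von_neumann_entropy(reduced_state(product, 2, 2)))  # -> 0.0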
*In contrast with frequentism, in Bayesianism it makes perfect sense to attach probabilities to one-shot events, such as who will win the next election, or even past events that we're unsure about.*

Does frequentism really require actually performing the experiment? Or is imagining doing the experiment good enough? I would say

    »Candidate X will win the next election with a probability of Y percent.«

is just a shorthand for

    »The following sets of states and possible evolutions of those states are
    compatible with my knowledge about the world, and in Y percent of the cases
    candidate X wins the next election.«

which seems not too different from a coin flip, where the different outcomes are also due to imperfect knowledge of the initial state. The difference is that it is easy to sample the set of initial states for a coin flip: just flip a coin repeatedly, and human imperfection will vary the initial state slightly each time. Sampling the initial states of an election in the same way is obviously not possible, and I admittedly have no real clue how people arrive at a meaningful number in practice. A similar example seems to be the probability of rain at some place some time in the future, in which case it is possible to sample the set of initial states by running a weather model repeatedly.
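A minimal sketch of that "imagined ensemble" reading of the coin flip, with toy dynamics I made up rather than any serious physical model: sample initial states compatible with our imperfect knowledge, evolve each one deterministically, and report the fraction of evolutions that end in heads.

    import numpy as np

    # Frequentism over an imagined ensemble: P(heads) is the fraction of
    # initial states, compatible with what we know about the flip, whose
    # deterministic evolution lands heads up. Toy dynamics: the coin spins
    # freely at rate omega for an airborne time t.

    rng = np.random.default_rng(0)
    n = 1_000_000

    omega = rng.normal(40.0, 5.0, n)   # spin rate in rad/s, imperfectly known
    t = rng.normal(0.5, 0.05, n)       # airborne time in s, imperfectly known

    half_turns = np.floor(omega * t / np.pi)
    heads = (half_turns % 2 == 0)      # even number of half-turns -> heads up

    print(f"P(heads) ~ {heads.mean():.3f}")  # close to 0.5 for broad priors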
Aren't space and time considered continuous?

What about the Planck distance then? What's that all about?

It seems to me that on a microscopic level and at small time scales, a small change in input will lead to a small change in output. This is certainly true in classical mechanics, but what about quantum mechanics? Are the quanta the result of a continuous process? Can a subatomic particle wind up on Mars, exceeding the speed of light, with a certain probability?

HERE is what bothers me: the instability of certain physical problems (a small change in input leads to large changes in output, like where a pencil is going to fall if stood on its tip). How can this happen if the composition of continuous functions is continuous?

In mathematics we have abstractions such as real numbers and infinite sequences of functions that can converge to discontinuous and even really weird functions in the limit. But in the real world it seems that we have some sort of minimum, like the Planck distance, or simple measurement error, that precludes us from reversing a process after a certain point. Maybe THAT is where unstable problems on the macro scale come from?

Pilot Wave Theory seems to say that everything is deterministic and the uncertainty in quantum mechanics comes from us being unable to observe the process that leads to the result. But PWT requires us to abolish the idea of locality, which to me is a special case of continuity.

Anyway, can someone please explain this to me, as it regards quantum mechanics? Leslie Lamport's paper caused a big watershed moment for me and I'm still reeling from it:

https://lamport.azurewebsites.net/pubs/buridan.pdf
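On the pencil specifically, here is a small sketch using the standard linearized inverted-pendulum approximation (the constants are my own choices): the state at any finite time is a continuous function of the initial tilt, but errors grow exponentially, so "which side does it fall on" behaves like a discontinuous function of the initial condition for every practical purpose.

    import numpy as np

    # Linearized pencil on its tip: theta'' = (g/L) * theta, so a tilt
    # grows like theta(t) ~ theta0 * exp(lam * t) with lam = sqrt(g/L).
    # Continuity is never violated; the instability just amplifies any
    # uncertainty in theta0 exponentially fast.

    g, L = 9.81, 0.1                       # gravity, pencil length (SI)
    lam = np.sqrt(g / L)

    for theta0 in (1e-9, 1e-9 + 1e-15):    # initial tilts 1e-15 rad apart
        t_fall = np.log(1.0 / theta0) / lam  # time until theta ~ 1 rad
        print(f"theta0 = {theta0:.16e} -> fallen after ~{t_fall:.3f} s")

    # The fall time varies continuously with theta0, but the *side* the
    # pencil falls on flips at theta0 = 0, so any measurement error at all
    # makes the discrete outcome unpredictable.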
I'm not an expert, but one of the most exciting realizations I've had over the last few years is just how close quantum theory is to various "ordinary" kinds of probability theory, including Kolmogorov's classical theory. Probability theory is not so boring for me anymore.

Quantum physics has inspired so much work in other fields! Check out Khrennikov's work for examples: https://scholar.google.com/citations?user=wdhkzPMAAAAJ&hl=en

I don't agree with Khrennikov's interpretation of quantum mechanics (he's a realist, whereas I tend to appreciate the more "mystical"-feeling interpretations of quantum mechanics), but his and others' work on the connections between quantum physics and classical probability theory, as well as on non-physics applications of quantum-theoretic tools, is crazy thought-provoking.
If you'd like an intuitive introduction to the actual technical details of quantum probability: https://www.math3ma.com/blog/a-first-look-at-quantum-probability-part-1
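As a taste of how close the two theories sit, here is a minimal numpy sketch in the spirit of that post (the encoding and numbers are mine): embed a classical joint distribution as a diagonal density matrix, and the classical marginal falls out as the quantum partial trace.

    import numpy as np

    # A classical joint distribution p(x, y) embedded as a diagonal density
    # matrix; the classical marginal agrees with the quantum partial trace.

    p = np.array([[0.1, 0.2],
                  [0.3, 0.4]])          # joint distribution p(x, y), sums to 1

    p_x_classical = p.sum(axis=1)       # classical marginal: sum out y

    rho = np.diag(p.flatten())          # 4x4 diagonal density matrix, trace 1
    rho4 = rho.reshape(2, 2, 2, 2)      # indices (x, y, x', y')
    rho_x = np.einsum('xyzy->xz', rho4) # partial trace over the y factor

    print(p_x_classical)                # [0.3 0.7]
    print(np.diag(rho_x))               # [0.3 0.7] -- same marginal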
There's something that has deeply irked me for many years about these MWI probability constructions: the largely glossed-over fact that a sort of non-local numerical awareness and computation within the wave function is needed to construct the numbers of branches in the proper ratios required to keep the MWI universe self-consistent. Additionally, this number of branches is incomprehensibly larger than what you'd get by splitting the universe once per quantized event, and it results in unwieldy levels of duplication of identical branches.

The reason is that if we take the least probable outcome of a given wave function and say "this highly improbable branch occurs once", we are immediately contradicted, because the second least probable outcome is virtually certain to stand in a non-integer ratio to the first. So we grant the wave function numerical factoring / self-resolving capabilities, and instead let the least and second least probable branches each occur however many times is necessary to keep the counts whole integers in the correct relative ratio. But that only resolves two of the possible outcomes, and the third least probable outcome is almost certainly not in an integer ratio to the first or the second, so we must again multiply the branch counts of the first two to restore consistent integer ratios across all branch types. As you follow this up through all the possible outcomes, expressing their probabilities as whole-integer counts of quantum outcomes essentially requires a massive computation of common denominators all the way up. Further, even the least probable outcome will require an incomprehensible number of duplicate branches, and the most probable outcomes an even more innumerable count still.

The only way I can see to escape this madness with MWI is to give up on the notion of truly separate branches and instead treat these "many worlds" as a stream of overlapping world-ish-nesses in which discrete outcomes don't actually even exist, but then you have seeming contradictions with observable discreteness, and it's not clear it's truly even MWI anymore.

Disclosure: I'm not a physicist, and it's quite plausible that I don't know what I'm talking about.
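A toy sketch of that bookkeeping worry, with made-up rational probabilities (if any ratio of probabilities is irrational, no common denominator exists at all): realizing the outcome probabilities as whole-integer branch counts forces a common denominator, the lcm of all the individual denominators, and that count blows up as outcomes accumulate.

    from fractions import Fraction
    from math import lcm

    # Whole-integer branch counts require every probability to be a multiple
    # of 1/N for a single N: the lcm of all the denominators. Toy rational
    # probabilities; real amplitudes generically have irrational ratios, for
    # which no such N exists at all.

    probs = [Fraction(1, 7), Fraction(2, 11), Fraction(3, 13), Fraction(1, 17)]
    probs.append(1 - sum(probs))        # pad so the distribution sums to 1

    n = lcm(*(p.denominator for p in probs))
    print(f"need {n} branches total")   # lcm(7, 11, 13, 17, ...) grows fast
    for p in probs:
        print(f"  outcome with prob {p}: {p * n} identical branches")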