I've been studying how the brain works since college, which is now 20+ years. One thing a professor said back then really sticks out -- "we always assume that the brain works the same way as the most advanced computing of the time". In the 50s we assumed the brain worked like a telephone switching system. In the 80s we assumed the brain worked like a bunch of transistors.<p>In the 90s we finally switched it up and developed computing paradigms based on human biology, applying neural nets to brain science. Then someone had the idea to create a Bayesian NN.<p>With each iteration we get closer to explaining how the brain works, though the hard part is showing that each new model really is more workable given our knowledge of the biology of the brain.<p>But we still have no idea how we get from brain biology to rational (and irrational) thought. I'd be excited if we solved this within my lifetime.
Whenever this comes up, I think about the conjunction fallacy <a href="https://en.m.wikipedia.org/wiki/Conjunction_fallacy" rel="nofollow">https://en.m.wikipedia.org/wiki/Conjunction_fallacy</a>: the observation that human subjects seem to assign higher probability to joint events than to a single event. This is odd because the probability of two events occurring together (the conjunction) is always less than or equal to the probability of either event on its own.<p>How does the Bayesian brain hypothesis deal with this fallacy? It seems to me that nothing based on classical probability can explain it. So either the observation that humans assign higher probability to joint events is wrong, or human decision making isn't exactly probabilistic (in the classical sense; I can't rule out exotic probabilistic approaches).<p>EDIT: Several folks have commented that the conjunction fallacy can be explained away by arguments based on interpretation and semantic issues. Indeed, the original Linda problem was susceptible to these issues. However, researchers have since tried to study the effect more carefully and it seems to persist. One example I'm aware of is <a href="https://link.springer.com/article/10.3758/BF03195280" rel="nofollow">https://link.springer.com/article/10.3758/BF03195280</a>, where the authors used unambiguous language and a betting paradigm but still found the effect. Again, this is most likely not foolproof; regardless, I do not think the fallacy can be trivially explained away as an effect of ambiguous language.
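For anyone who wants to see why classical probability rules this out, here is a minimal sketch in Python (toy numbers of my own, not taken from the linked papers): no joint distribution can rank a conjunction above either of its conjuncts, while a simple averaging/"representativeness" heuristic of the kind sometimes proposed as an alternative can.

    import itertools
    import random

    # Classical probability: for any joint distribution over two binary events,
    # P(A and B) can never exceed P(A) or P(B).
    def check_conjunction_bound(trials=10000):
        outcomes = list(itertools.product([0, 1], repeat=2))
        for _ in range(trials):
            weights = [random.random() for _ in outcomes]
            total = sum(weights)
            p = {ab: w / total for ab, w in zip(outcomes, weights)}
            p_a = p[(1, 0)] + p[(1, 1)]
            p_b = p[(0, 1)] + p[(1, 1)]
            p_ab = p[(1, 1)]
            assert p_ab <= min(p_a, p_b) + 1e-12
        return True

    # A purely hypothetical "averaging" judgment rule: score a conjunction by
    # how typical each conjunct seems on average. Such a rule can rank
    # "feminist bank teller" above "bank teller", which probability cannot.
    def averaging_judgment(typicality_a, typicality_b):
        return (typicality_a + typicality_b) / 2

    print("conjunction bound holds:", check_conjunction_bound())
    bank_teller = 0.1   # "Linda is a bank teller" seems atypical
    feminist = 0.9      # "Linda is a feminist" seems very typical
    print("averaging score for the conjunction:",
          averaging_judgment(bank_teller, feminist))   # 0.5 > 0.1

The second half is only meant to show how easily the fallacy falls out of a non-probabilistic scoring rule, which is why the observation puts pressure on strictly Bayesian accounts.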
I'm currently reading <i>The Book of Why</i> by Judea Pearl, and I think he makes the claim that our brains are Bayesian and then some.<p>I haven't finished yet (he gets into artificial intelligence in the next few chapters), but he seems to claim that our brains perform computations involved in causal inference that go beyond Bayesian inference, such as subconsciously constructing causal diagrams and using them for causal inference, including asking questions about counterfactual scenarios.<p>Would be interested to hear if anyone else who has read this book can help elaborate on this.
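For concreteness, here is a minimal sketch of the distinction the book is built around: conditioning on X is not the same as intervening on X. The probability tables below are made up for illustration and are not from the book; the interventional quantity is computed with the backdoor adjustment over the confounder Z.

    # Toy structural model: a confounder Z influences both X and Y, and X also
    # causally influences Y (Z -> X, Z -> Y, X -> Y). All numbers are made up.
    P_Z = {0: 0.5, 1: 0.5}                              # P(Z=z)
    P_X1_given_Z = {0: 0.2, 1: 0.8}                     # P(X=1 | Z=z)
    P_Y1_given_XZ = {(0, 0): 0.1, (0, 1): 0.3,          # P(Y=1 | X=x, Z=z)
                     (1, 0): 0.5, (1, 1): 0.7}

    def p_xz(x, z):
        """Joint probability P(X=x, Z=z) under the model above."""
        px = P_X1_given_Z[z] if x == 1 else 1 - P_X1_given_Z[z]
        return P_Z[z] * px

    def p_y_given_x(x):
        """Observational P(Y=1 | X=x): Z is not held fixed, so it leaks in."""
        num = sum(p_xz(x, z) * P_Y1_given_XZ[(x, z)] for z in (0, 1))
        den = sum(p_xz(x, z) for z in (0, 1))
        return num / den

    def p_y_do_x(x):
        """Interventional P(Y=1 | do(X=x)) via backdoor adjustment over Z."""
        return sum(P_Z[z] * P_Y1_given_XZ[(x, z)] for z in (0, 1))

    print("P(Y=1 | X=1)     =", round(p_y_given_x(1), 3))   # 0.66, confounded
    print("P(Y=1 | do(X=1)) =", round(p_y_do_x(1), 3))      # 0.60, causal effect

Counterfactual questions ("would Y have been 1 had X been 0 for this individual?") need even more than this -- they require the underlying structural equations, which is Pearl's third rung.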
My understanding of Bayesian thinking is that we form new beliefs as a function of our prior <i>beliefs</i> plus new evidence. That's my interpretation of the article.<p>Another way of thinking is to form new beliefs as a function of prior <i>evidence</i> plus new evidence, e.g., by setting all old beliefs aside and starting afresh with the complete body of available evidence.<p>It's not clear to me that these two methods produce the same results, though I can't think of a counterexample. But to the extent that our brains remember both beliefs and evidence, we are probably not fully Bayesian.
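In the idealized case the two methods do agree: with exact Bayesian updating and conditionally independent observations, the posterior is a sufficient summary of all past evidence, so "prior belief + new evidence" and "all evidence at once" land in the same place. A minimal sketch (my own toy Beta-Bernoulli coin example, not from the article):

    def update_beta(alpha, beta, flips):
        """Conjugate Bayesian update of a Beta(alpha, beta) belief about P(heads)."""
        heads = sum(flips)
        tails = len(flips) - heads
        return alpha + heads, beta + tails

    flips = [1, 0, 1, 1, 0, 1, 1, 1]   # 1 = heads, 0 = tails
    prior = (1.0, 1.0)                  # uniform prior over the coin's bias

    # Method 1: sequential -- fold each flip into the current belief, then
    # discard the raw evidence.
    belief = prior
    for f in flips:
        belief = update_beta(*belief, [f])

    # Method 2: batch -- keep all the evidence and update the original prior once.
    batch = update_beta(*prior, flips)

    print("sequential posterior:", belief)   # (7.0, 3.0)
    print("batch posterior:     ", batch)    # (7.0, 3.0)

Where the two can come apart is when the earlier updates were only approximate, or when you later revise the model and want to re-score the old evidence under it -- which is plausibly what people actually do, and a good reason for brains to keep the evidence around rather than just the belief.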
See also the book <i>Surfing Uncertainty</i> (<a href="https://www.amazon.com/Surfing-Uncertainty-Prediction-Action-Embodied/dp/0190217014/" rel="nofollow">https://www.amazon.com/Surfing-Uncertainty-Prediction-Action...</a>), which makes a detailed and convincing case that many brain subsystems approximate Bayesian updating.
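The recurring formal move in that literature is the precision-weighted prediction error: a belief shifts toward the sensory evidence in proportion to how precise (reliable) that evidence is relative to the prediction. A minimal sketch of that Gaussian update with made-up numbers (this is just the standard conjugate update, not code from the book):

    def gaussian_update(prior_mean, prior_var, obs, obs_var):
        """Combine a Gaussian prior (prediction) with a noisy observation."""
        prior_precision = 1.0 / prior_var
        obs_precision = 1.0 / obs_var
        # Kalman-style gain: how much to trust the prediction error.
        gain = obs_precision / (prior_precision + obs_precision)
        prediction_error = obs - prior_mean
        post_mean = prior_mean + gain * prediction_error
        post_var = 1.0 / (prior_precision + obs_precision)
        return post_mean, post_var

    # Confident prior, noisy sensory evidence: the belief barely moves.
    print(gaussian_update(prior_mean=0.0, prior_var=0.1, obs=1.0, obs_var=1.0))
    # Vague prior, precise sensory evidence: the belief jumps most of the way.
    print(gaussian_update(prior_mean=0.0, prior_var=1.0, obs=1.0, obs_var=0.1))

The two calls show the qualitative behaviour the book leans on: a confident prior barely moves in the face of noisy input, while a vague prior is dominated by precise input.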
Are our brains Bayesian?<p>1990s Internet debater #1: "The plural of anecdote is not data"<p>2010s Internet debater #2: "True, but it does make a good Bayesian prior"
This is actually an idiotic question. Molecular biology does not employ the concept of a number (or any concepts whatsoever, in principle), since that requires an intelligent observer (to form the abstract notion of a number), which is not available at that level.<p>"Concrete", real-world biological systems, including our brains, do however employ frequencies and "weights", which correspond to the width of a pathway and require no numbers or counting.<p><i>Mother Nature Does Not Count</i> (and it does not compute any probabilities, of course). Higher-level intellect does.<p>Math and logic are superimposed on the Universe and Nature. It is naive to assume that the Universe and Nature are based on math and logic, as the ancient Greek views supposed.<p>Yes, there is a basis behind Modus Ponens - under certain conditions, having Hs and Os inevitably leads to H2O, because the universe has its laws, and because it has laws it produces certain regularities and patterns which can be matched and even generalized by an intelligent observer - but it is not the other way around.