Bayes' Theorem tells us that the quest for certain knowledge, which drove a great deal of science and philosophy in the pre-Bayesian era (before about 1990, when Bayesian methods started to gain real traction in the scientific community), is much like the alchemist's quest for the secret of transmutation: it is simply the wrong goal to have, even though it generated a lot of interesting and useful results.<p>One of the most important consequences of this is noted by the article: "Confirmation and falsification are not fundamentally different, as Popper argued, but both just special cases of Bayes’ Theorem." There is no certainty, even in the case of falsification, because there are always alternatives. For example, the apparently superluminal neutrinos didn't prove special relativity false, although they did provide some evidence against it. But the alternative hypothesis that the researchers had made a mistake turned out to be much more plausible.<p>Bayesian reasoning--which is plausibly the only way of reasoning that will keep our beliefs consistent with the evidence--cannot produce certainty. A certain belief is one that has a plausibility of exactly 1 or 0, and those values are only asymptotically approachable by applying Bayes' rule. Such beliefs would be immune to any further evidence for or against them, no matter how strong, essentially because Bayesian updating is multiplicative and anything times zero is still zero.<p>There is a name for beliefs of this kind, which to a Bayesian are the most fundamental kind of error: faith.
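<p>A minimal illustration of that last point (the helper function and the numbers are my own, not from the article):<p><pre><code>  # Bayes' rule for a hypothesis H and evidence E:
  #   P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))
  def update(prior, p_evidence_if_true, p_evidence_if_false):
      numerator = p_evidence_if_true * prior
      return numerator / (numerator + p_evidence_if_false * (1 - prior))

  print(update(0.5, 0.9, 0.1))    # an open mind moves with the evidence: 0.9
  print(update(0.0, 0.99, 0.01))  # a prior of exactly 0 never moves: 0.0
  print(update(1.0, 0.01, 0.99))  # a prior of exactly 1 never moves: 1.0
</code></pre>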
I recently started to think about connections between Bayes' Theorem and fuzzy logic:<p><a href="http://sipi.usc.edu/~kosko/Fuzziness_Vs_Probability.pdf" rel="nofollow">http://sipi.usc.edu/~kosko/Fuzziness_Vs_Probability.pdf</a><p>Also, from Wikipedia on fuzzy logic: "Bruno de Finetti argues[citation needed] that only one kind of mathematical uncertainty, probability, is needed, and thus fuzzy logic is unnecessary. However, Bart Kosko shows in Fuzziness vs. Probability that probability theory is a subtheory of fuzzy logic, as questions of degrees of belief in mutually-exclusive set membership in probability theory can be represented as certain cases of non-mutually-exclusive graded membership in fuzzy theory. In that context, he also derives Bayes' theorem from the concept of fuzzy subsethood. Lotfi A. Zadeh argues that fuzzy logic is different in character from probability, and is not a replacement for it. He fuzzified probability to fuzzy probability and also generalized it to possibility theory. (cf.[10])"
Here are a few tangentially related things that may be of interest:<p>(i) MacKay's book on Information Theory, Inference, and Learning Algorithms: <a href="http://www.inference.phy.cam.ac.uk/itila/" rel="nofollow">http://www.inference.phy.cam.ac.uk/itila/</a><p>(ii) Probability Theory As Extended Logic: <a href="http://bayes.wustl.edu/" rel="nofollow">http://bayes.wustl.edu/</a><p>(iii) Causal Calculus: <a href="http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/" rel="nofollow">http://www.michaelnielsen.org/ddi/if-correlation-doesnt-impl...</a><p>(iv) I recall reading a pretty good blog post a year or two ago that described how to implement some kind of Bayesian token recognition thing to parse screen captures from some database (or something roughly like that). The gist of the approach was like this:<p>1. define a model expressing that certain combinations of neighbouring tokens are more likely to occur than others
2. approximate the full Bayesian inference problem as MAP inference
3. the resulting combinatorial optimisation problem could be encoded as a relatively easy mixed integer program
4. easy mixed integer programs are very tractable for commercial solvers such as CPLEX or Gurobi, and sometimes even for the open-source COIN-OR CBC<p>At the time I found the idea fascinating, as I was working with LPs/MIPs and had some interest in Bayesian inference, but hadn't figured out that the former could provide a way to computationally tackle certain approximations of the latter (a toy sketch of what I mean follows below).<p>I cannot for the life of me find the link again for this.
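<p>For what it's worth, steps 2-4 can be made concrete with a tiny chain of tokens: each token gets one of two labels, an (imaginary) recognition model supplies per-token probabilities, neighbouring tokens prefer to share a label, and the MAP assignment is found with a 0/1 program handed to CBC via PuLP. Every name and number below is invented for illustration; this is not the original post's code.<p><pre><code>  import math
  import pulp

  tokens = range(4)
  labels = ["digit", "letter"]

  # unary terms: per-token recognition probabilities from some imaginary model
  p = {(0, "digit"): 0.7, (0, "letter"): 0.3,
       (1, "digit"): 0.6, (1, "letter"): 0.4,
       (2, "digit"): 0.4, (2, "letter"): 0.6,
       (3, "digit"): 0.2, (3, "letter"): 0.8}
  # pairwise term: neighbouring tokens prefer to share a label
  same_label_bonus = math.log(2.0)

  prob = pulp.LpProblem("map_inference", pulp.LpMaximize)
  x = {(i, k): pulp.LpVariable("x_%d_%s" % (i, k), cat="Binary")
       for i in tokens for k in labels}
  y = {(i, k): pulp.LpVariable("y_%d_%s" % (i, k), cat="Binary")
       for i in tokens[:-1] for k in labels}  # y[i,k]=1 iff tokens i and i+1 both get k

  # maximise the log-posterior: unary log-probabilities plus pairwise bonuses
  prob += (pulp.lpSum(math.log(p[i, k]) * x[i, k] for i in tokens for k in labels)
           + pulp.lpSum(same_label_bonus * y[i, k] for i in tokens[:-1] for k in labels))

  for i in tokens:
      prob += pulp.lpSum(x[i, k] for k in labels) == 1  # one label per token
  for i in tokens[:-1]:
      for k in labels:
          prob += y[i, k] <= x[i, k]                    # linearise the AND of
          prob += y[i, k] <= x[i + 1, k]                # x[i,k] and x[i+1,k]
          prob += y[i, k] >= x[i, k] + x[i + 1, k] - 1

  prob.solve(pulp.PULP_CBC_CMD(msg=False))
  print([k for i in tokens for k in labels if x[i, k].value() > 0.5])
  # -> ['digit', 'digit', 'letter', 'letter']
</code></pre>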
“Seeing the world through the lens of Bayes’ Theorem is like seeing The Matrix. Nothing is the same after you have seen Bayes.”<p>I'm pretty sure this is an instance of cognitive bias.
My biggest issue with Bayes' Theorem as a method of making everyday decisions is that it assumes the ability to accurately assess the underlying likelihoods of events taking place, especially on-the-fly.<p>I would even argue that it's actually providing a <i>false</i> sense of precision because the sig figs are oftentimes not correctly represented.
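<p>To illustrate the false-precision worry (all numbers invented for the example): quoting the middle answer below to three significant figures hides how much it depends on guesses that could easily have been made slightly differently.<p><pre><code>  # Bayes with inputs guessed on the fly: prior "about 1%", test "roughly 90% reliable"
  p_mid  = 0.90 * 0.010 / (0.90 * 0.010 + 0.10 * 0.990)  # ~0.083
  # Nudge those guesses only a little and the posterior moves by a factor of ~3 either way:
  p_low  = 0.85 * 0.005 / (0.85 * 0.005 + 0.15 * 0.995)  # ~0.028
  p_high = 0.95 * 0.020 / (0.95 * 0.020 + 0.05 * 0.980)  # ~0.279
  print(p_mid, p_low, p_high)
</code></pre>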
> the Standard Model of particle physics explains much, much more than thunderstorms, and its rules could be written down in a few pages of programming code.<p>As a programmer who doesn't know advanced math, I'd really like to see that code, in literate form.
I recently needed a Bayes classifier[1] in a couple of projects, so I wrote a service that exposes one through an API. You can set up your prior set and then get predictions against that set.<p>I haven't gone through the trouble of making it suitable for public consumption yet. Would anyone be interested in consuming such a service?<p>[1]: <a href="https://en.wikipedia.org/wiki/Naive_Bayes_classifier" rel="nofollow">https://en.wikipedia.org/wiki/Naive_Bayes_classifier</a>
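<p>For anyone curious what sits behind such a service: a generic multinomial naive Bayes classifier with Laplace smoothing fits in a couple of dozen lines of Python (this is a sketch of the general technique, not the service's actual code):<p><pre><code>  import math
  from collections import Counter, defaultdict

  class NaiveBayes:
      def fit(self, docs, labels):
          self.word_counts = defaultdict(Counter)   # per-label word frequencies
          self.label_counts = Counter(labels)       # the "prior set"
          for doc, label in zip(docs, labels):
              self.word_counts[label].update(doc.split())
          self.vocab = {w for c in self.word_counts.values() for w in c}
          return self

      def predict(self, doc):
          def log_posterior(label):
              counts = self.word_counts[label]
              total = sum(counts.values()) + len(self.vocab)   # Laplace denominator
              prior = self.label_counts[label] / sum(self.label_counts.values())
              return math.log(prior) + sum(
                  math.log((counts[w] + 1) / total) for w in doc.split())
          return max(self.label_counts, key=log_posterior)

  nb = NaiveBayes().fit(["cheap pills now", "meeting at noon"], ["spam", "ham"])
  print(nb.predict("cheap pills today"))  # -> spam
</code></pre>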
Speaking of Bayes, there's a great book by Allen B. Downey, 'Think Bayes' <a href="http://www.greenteapress.com/thinkbayes/" rel="nofollow">http://www.greenteapress.com/thinkbayes/</a>, available as a free PDF or (if you wish to support the author, which I did) as a paperback from Amazon.<p>It teaches Bayes' theorem accompanied by Python code examples, which I found really useful.
This is excellent and finally prompted me to ask how to use Bayes more in my life: <a href="https://news.ycombinator.com/item?id=9782767" rel="nofollow">https://news.ycombinator.com/item?id=9782767</a>
I'm in the middle of designing and building a system which uses Bayesian models.<p>One thing that struck me early is that while Bayes itself is rock solid, like arithmetic, when you go to apply it, the results live or die on the quality of the models and on the relevance/realism of the evidence used to train them. GIGO.<p>But once you do have a good, relevant, signal-producing model, using it is a bit like doing a multi-dimensional lookup, or a function call. Conceptually easy to understand and, in many cases (depending, of course, on the details), cache-friendly.
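<p>To make the lookup point concrete (a toy sketch with made-up feature names, not the system in question): for a small discrete evidence space you can precompute the posterior for every evidence combination once and then serve predictions as plain table lookups.<p><pre><code>  import itertools

  evidence_values = {"age_band": ["young", "old"], "clicked": [True, False]}

  def posterior_from_model(age_band, clicked):
      # stand-in for real inference over a trained model
      return 0.8 if (age_band == "young" and clicked) else 0.2

  # Precompute once: a dense table keyed by the evidence tuple
  table = {combo: posterior_from_model(*combo)
           for combo in itertools.product(*evidence_values.values())}

  print(table[("young", True)])  # serving-time "inference" is a dict lookup: 0.8
</code></pre>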