I don't think this guy understands the debate. A quick summary:<p>If you think statistics is a big toolbox, where some of the tools give different answers that are better or worse in various ways, and you can just take out whatever tool you like, you're a frequentist.<p>If you think that there's such a thing as a correct probability estimate, and that all coherent reasoning is required to arrive at consistent answers regardless of which path was taken to reach them, you're a Bayesian. From this perspective, a "confidence interval" isn't a tool that's useful on some occasions, it's just plain crazy and wrong, like a weather forecaster who only tells you the probability that it's raining here <i>xor</i> in Narnia. Sure, the forecast is generated by a process that's sorta related to the correct answer, but by manipulating the imaginary land of Narnia you can make the forecast basically anything. With Bayesianism there are no degrees of freedom in the likelihood ratio you report. See <a href="http://xkcd.com/1132/" rel="nofollow">http://xkcd.com/1132/</a>.<p>It doesn't do any good to appeal to the idea that Bayesian methods are just one tool in the toolbox. Only frequentists think in terms of toolboxes in the first place.<p>Also, Bayes's Rule is tautologically equivalent to Bayes's Theorem. There's more wrong here, but in the meantime, color me unimpressed.
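To spell out that last equivalence (a standard identity, nothing specific to the linked article): applying Bayes's Theorem to two competing hypotheses and dividing one application by the other cancels the P(E) term and gives exactly the odds form usually called Bayes's Rule:

```latex
P(H_i \mid E) = \frac{P(E \mid H_i)\, P(H_i)}{P(E)}
\quad\Longrightarrow\quad
\underbrace{\frac{P(H_1 \mid E)}{P(H_2 \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H_1)}{P(H_2)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{likelihood ratio}}
```

Which is also why the likelihood ratio you report has no degrees of freedom: once the two hypotheses are fixed, it is pinned down by the data alone.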
<i>The Goal of Bayesian Inference: Quantify and manipulate your degrees of beliefs. In other words, Bayesian inference is the Analysis of Beliefs.</i><p>Bayesian inference is no more about beliefs than logic is (or any scientific inference, really). "M(G) AND C(M) => C(G)" can be rendered as "If you believe that glass is a metal, and you believe that metals are good conductors, then you should also believe that glass is a good conductor". Scientists omit the "if you believe" for conciseness.<p>Some subjective Bayesians will tell you that their job is to produce the above. Then they're done. "You said you believe that glass is a metal, so I put that into my Bayesian inference procedure, and it says that you should also believe that glass is a good conductor."<p>But this is not what science is about! Obviously, "glass is a conductor" strongly contradicts empirical data. We have to challenge every assumption, and possibly change models!<p>This is why smart Bayesians check the fit of their model, and I would strongly recommend Gelman's <i>Induction and Deduction in Bayesian Data Analysis</i> to any statistician interested in that perspective. It places Bayesianism squarely in the paradigm of traditional scientific analysis.<p><a href="http://www.rmm-journal.de/downloads/Article_Gelman.pdf" rel="nofollow">http://www.rmm-journal.de/downloads/Article_Gelman.pdf</a>
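For a concrete picture of the model checking Gelman argues for, here is a minimal posterior predictive check in Python; the data, the conjugate normal model, and the choice of test statistic are all assumptions made up for the sketch, not anything taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data for the sketch: 50 hypothetical measurements.
y = rng.normal(loc=5.0, scale=2.0, size=50)
n = y.size

# Assumed model: y ~ Normal(mu, sigma^2) with sigma known, prior mu ~ Normal(0, 10^2).
sigma = 2.0
prior_mean, prior_sd = 0.0, 10.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + y.sum() / sigma**2)

# Draw posterior samples of mu, then replicated datasets y_rep from the fitted model.
mu = rng.normal(post_mean, np.sqrt(post_var), size=4000)
y_rep = rng.normal(mu[:, None], sigma, size=(4000, n))

# Posterior predictive p-value for a chosen test statistic (here, the sample maximum).
T_obs = y.max()
T_rep = y_rep.max(axis=1)
print("posterior predictive p-value for max(y):", (T_rep >= T_obs).mean())
```

A p-value near 0 or 1 says the model cannot reproduce that feature of the data, which is the cue to challenge the assumptions and change the model.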
I agree with much of this, and tend to be fairly ecumenical/pragmatic in my own choice of tools, but there are two things that lead to the "identity statistics" that are only briefly covered here, I think.<p>One is the entire philosophical debate, e.g. at least some Bayesians think arguments against the coherence of frequentist statistics are damning enough to make it questionable whether the methods should be considered rigorous statistics at all (admittedly this is basically the hardline view) [1].<p>The other is that it's not always agreed when it's appropriate to look for coverage versus to analyze beliefs, partly due to the philosophical debate, and partly because often what you ultimately want is a <i>decision</i>, and there are arguments for whether you should base decisions on frequentist-coverage machinery, or on belief-update machinery. For example, to move slightly afield from bounding a parameter, let's say we want an estimate of the region in which bombs are likely to fall. This can be formulated in frequentist statistics as a tolerance interval, with two decision thresholds, one for how many bombs we want to bound, and one for how confident we want to be in the bound: we want an interval that includes at least x% of the population with y% confidence, e.g. that with 99% confidence we'll bound 99% of bombs [2]. On the other hand, it can be formulated as a question about belief: essentially, we want to find the range in which we believe (for some suitably conservative definition of belief) we are going to find falling bombs, which Bayesian predictive statistics looks at.<p>[1] One famous/infamous such argument: <a href="http://en.wikipedia.org/wiki/Likelihood_principle" rel="nofollow">http://en.wikipedia.org/wiki/Likelihood_principle</a><p>[2] I wrote a bit on why tolerance intervals should really be a more prominent part of the frequentist toolbox: <a href="http://www.kmjn.org/notes/tolerance_intervals.html" rel="nofollow">http://www.kmjn.org/notes/tolerance_intervals.html</a>
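To make the tolerance-interval formulation concrete, here is a minimal sketch for a roughly normal population using the Howe approximation to the two-sided tolerance factor; the data and the normality assumption are made up for illustration, and this is not taken from the linked note:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=100.0, scale=15.0, size=30)   # hypothetical impact distances
n = x.size

coverage, confidence = 0.99, 0.99                # bound 99% of bombs, with 99% confidence
z = stats.norm.ppf((1 + coverage) / 2)
chi2 = stats.chi2.ppf(1 - confidence, df=n - 1)  # lower-tail chi-square quantile
k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)    # Howe (1969) two-sided factor

mean, sd = x.mean(), x.std(ddof=1)
lower, upper = mean - k * sd, mean + k * sd
print("tolerance interval: [%.1f, %.1f]" % (lower, upper))
```

The Bayesian counterpart would instead report an interval from the posterior predictive distribution of the next observation, which is the belief-based formulation described above.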
<i>sigh</i> Here we go again. Inferential statistics is unified by a field called decision theory, which is the mathematical formulation of how you choose a "good" mapping from the set of possible outcomes of your experiments to a set of possible decisions.<p>Bayesian and frequentist are interpretations of probability theory, and they are not the only ones (nor are all interpretations even concerned with formalizing a notion of "chance"). They are not necessary to statistics.
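A sketch of the textbook decision-theoretic framing (my notation, not the parent's): a decision rule maps possible data to actions, a loss function scores actions against the unknown state, and the different schools differ mainly in how they summarize the resulting risk:

```latex
\delta : \mathcal{X} \to \mathcal{A}, \qquad
R(\theta, \delta) \;=\; \mathbb{E}_{X \sim P_\theta}\!\left[ L\big(\theta, \delta(X)\big) \right]

\text{minimax:}\ \ \delta^{\ast} = \arg\min_{\delta} \max_{\theta} R(\theta, \delta)
\qquad\qquad
\text{Bayes:}\ \ \delta^{\ast} = \arg\min_{\delta} \int R(\theta, \delta)\, \pi(\theta)\, d\theta
```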
<a href="http://en.m.wikipedia.org/wiki/Bayes_theorem#Bayes.27s_rule" rel="nofollow">http://en.m.wikipedia.org/wiki/Bayes_theorem#Bayes.27s_rule</a><p>What is the difference between Bayes Rule and Bayes Theorem?