This is an OK overview, but it's missing a key point. Classical statistics can be viewed as treating the statistic of interest as fixed but unknown. However, it's better thought of as the parameter being fixed, but set in whatever way is most troublesome for your estimation procedure. Statistical analysis (when done carefully) represents a conservative view of how likely you are to be mistaken.

Bayesian statistics abandons this worst-case approach, instead opting for an average-case analysis. Here we average over all the possibilities, weighting their relative merit by the prior. The analysis is always a little bit conservative (thus the connection to regularization), but it is never "worst case" in the way that classical statistics is.

Lots of the other talk in this article is not really about classical vs. Bayesian statistics at all. Both methodologies are perfectly happy working with more complicated, hierarchical models. Both approaches have plenty of work dealing with regularization, and both will suffer if you mis-specify your model. The fact that a Bayesian analysis is less likely to "crash" under model mis-specification can be thought of as just as much of a drawback as a benefit.
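To make the "average over all the possibilities, weighted by the prior" point concrete, here is a minimal numpy sketch (the Beta(2, 2) prior and the 7-successes-out-of-10 data are arbitrary illustrative choices, not anything from the article): the posterior mean is a prior-weighted average over a grid of candidate parameter values, and it sits slightly closer to the prior than the MLE does, which is the mild regularization effect mentioned above.

```python
# Posterior mean as a prior-weighted average over candidate parameter values.
# Assumes only numpy; the prior and the data below are made-up illustrations.
import numpy as np

k, n = 7, 10                                  # observed successes / trials (fixed data)
theta = np.linspace(0.001, 0.999, 999)        # grid of candidate parameter values

prior = theta * (1 - theta)                   # Beta(2, 2) prior, unnormalized
likelihood = theta**k * (1 - theta)**(n - k)  # binomial likelihood (up to a constant)
posterior = prior * likelihood
posterior /= np.trapz(posterior, theta)       # normalize over the grid

mle = k / n                                   # classical point estimate: 0.700
post_mean = np.trapz(theta * posterior, theta)  # prior-weighted average: ~0.643

print(f"MLE = {mle:.3f}, posterior mean = {post_mean:.3f}")
```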
I particularly like the way the key difference between frequentist and Bayesian statistics is described: in the frequentist model, the parameter is fixed and the data is random, while in the Bayesian model, the data is fixed and the parameter is random. Since, as the author points out, the data is what is fixed in real life, the Bayesian model intuitively makes more sense.
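To see that distinction in code rather than words, here is a tiny sketch (numpy/scipy only; the true theta, sample size, and observed count are all made up): the first block takes the frequentist view and holds the parameter fixed while the data vary across repeated experiments, and the second block takes the Bayesian view and holds the one observed dataset fixed while putting a distribution over the parameter.

```python
# The "what is random?" distinction, in two short blocks.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20

# Frequentist view: the parameter is fixed, the data are random.
# Repeat the experiment many times and look at the spread of the estimator.
theta_true = 0.6
estimates = rng.binomial(n, theta_true, size=10_000) / n
print("sampling distribution of the estimate:",
      np.quantile(estimates, [0.025, 0.975]))

# Bayesian view: the data are fixed, the parameter is random.
# Condition on the single dataset actually observed and look at the posterior.
k_observed = 13                                              # the one dataset in hand
posterior = stats.beta(1 + k_observed, 1 + n - k_observed)   # uniform prior
print("95% credible interval for theta:", posterior.interval(0.95))
```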
I will ask a possibly naive question, but... can anyone recommend a good "doing empirical science the Bayesian way" or "Bayesian quantitative methods" type of book? I'm mostly looking for something that can "replace those p-values for undergrads". Use of R, Python (or PSPP/SPSS if need be) would be appreciated.

My knowledge is pretty limited; I have heard of BEST but happily t-test away on a daily basis. Essentially what I'm looking for is an "I know t-tests and ANOVA and use them regularly... how would I switch all that to a Bayesian approach?"

Does the book I'm looking for even exist, or would my best bet be reading the BEST paper (and the author's website/YouTube video)?
Edit: It looks like the second edition of "Doing Bayesian Data Analysis" by Kruschke is a good fit. It does have a dedicated chapter on null hypothesis testing (vs. MCMC).
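For anyone else making the same switch, here is roughly what a BEST-style two-group comparison looks like in code. This is only a sketch against the PyMC and ArviZ APIs as I understand them; the data are fabricated and the prior scales are illustrative rather than the canonical BEST choices.

```python
# Rough BEST-style replacement for a two-sample t-test (after Kruschke).
# Fake data; prior scales are illustrative assumptions, not the paper's exact ones.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(1)
group1 = rng.normal(101.0, 10.0, size=30)   # e.g. treatment scores
group2 = rng.normal(100.0, 12.0, size=30)   # e.g. control scores

pooled = np.concatenate([group1, group2])
pooled_mean, pooled_sd = pooled.mean(), pooled.std()

with pm.Model():
    # Broad priors on each group's mean and spread.
    mu1 = pm.Normal("mu1", mu=pooled_mean, sigma=10 * pooled_sd)
    mu2 = pm.Normal("mu2", mu=pooled_mean, sigma=10 * pooled_sd)
    sigma1 = pm.HalfNormal("sigma1", sigma=10 * pooled_sd)
    sigma2 = pm.HalfNormal("sigma2", sigma=10 * pooled_sd)
    # Shared normality parameter; Student-t tails make the comparison robust to outliers.
    nu = pm.Exponential("nu_minus_one", 1 / 29) + 1

    pm.StudentT("obs1", nu=nu, mu=mu1, sigma=sigma1, observed=group1)
    pm.StudentT("obs2", nu=nu, mu=mu2, sigma=sigma2, observed=group2)

    # The quantity that replaces the t-test: the posterior difference of means.
    pm.Deterministic("diff_of_means", mu1 - mu2)

    idata = pm.sample(2000, tune=1000, chains=4, random_seed=1)

# Instead of a p-value, inspect the posterior of the difference directly.
print(az.summary(idata, var_names=["diff_of_means"]))
```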
"However, all practitioners in data science and statistics would benefit from integrating Bayesian techniques into their arsenal" - that this wasn't already the case is news to me. Perhaps bioinf is less mainstream than I thought.
If one wanted to learn more about the math behind this, especially the "Bayesian Regression is a Shrinkage Estimator" section, what's a good place to start?
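For context, the identity I'm trying to build intuition for seems to be the following (my notation, assuming a Gaussian likelihood and a Gaussian prior on the coefficients):

```latex
% Minimal sketch, assuming y = X\beta + \varepsilon with \varepsilon \sim N(0, \sigma^2 I)
% and the prior \beta \sim N(0, \tau^2 I).
\hat{\beta}_{\mathrm{Bayes}}
  = \mathbb{E}[\beta \mid y]
  = \left( X^\top X + \lambda I \right)^{-1} X^\top y,
  \qquad \lambda = \frac{\sigma^2}{\tau^2}
```

As far as I can tell, a flat prior (large tau squared) recovers ordinary least squares, while a tighter prior shrinks the coefficients toward zero, exactly like ridge regression with penalty lambda. I'd still appreciate a pointer to a text that develops this carefully.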
The clean and crisp look of the site is stunning. The fact that there's no busy sidebar cramping up the content makes a huge difference.

Does anyone know if it's an available theme, or if it was custom built?
Are Bayesian classifiers useful with any size data set, or is there a "threshold" amount of data that you need in order for Bayesian classifiers to be useful/work/be effective?