I noticed a shift in my attitude toward strong priors when I switched from academia to industry, and have only recently realized why.<p>When doing an analysis in an academic setting, the goal is to get a paper past reviewers to be published. The reviewers are adversaries trying to disprove your work (at best they offer helpful critique; at worst they're bad-faith nit-pickers looking for any excuse to reject). If you did a Bayesian analysis in this setting, the mean reviewers would just point to the priors and say "you can't justify that choice, REJECT".<p>But in industry, there are no reviewers serving as adversarial gatekeepers. You may present analyses to a skeptical audience, but if they disagree with your priors, you work <i>with</i> them to come up with a mutually agreeable model, because you're all on the same team.
There's also the fact that a prior is really hard to explain to someone else. By definition, it's the unexplainable starting point!<p>Yet when I lay out fairly tight Bayesian reasoning, there's always that one person sucking the life out of the entire conversation with "Wait, can you go back to that first number? How did you arrive at that?" It's an unanswerable question, because any attempt to answer would have to start from another, more fundamental prior!<p>Sometimes this person is reasonable and I can say, "Ah, we can try a different starting point. What's your prior?" But often enough the person gets stuck on the idea of subjective probability and everything derails.<p>When it comes to important decisions, I've started hiding the prior with smoke and mirrors to redirect attention away from it.
Priors these days are not just subjective but useful; the OP is about the problem of choosing the best priors. Good options are informative priors (1) and regularizers (2). For example, choosing a Laplace distribution as the prior for the unknown parameters is equivalent to the LASSO, which is a well-known way of obtaining sparse models with few nonzero coefficients. In (2) there is an example in which a prior suggests a useful regularization method for regression. In (3) the author discusses prior modeling.<p>(1) <a href="https://en.wikipedia.org/wiki/Prior_probability#Informative_priors" rel="nofollow">https://en.wikipedia.org/wiki/Prior_probability#Informative_...</a><p>(2) <a href="https://skeptric.com/prior-regularise/index.html" rel="nofollow">https://skeptric.com/prior-regularise/index.html</a><p>(3) <a href="https://betanalpha.github.io/assets/case_studies/prior_modeling.html" rel="nofollow">https://betanalpha.github.io/assets/case_studies/prior_model...</a>
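To make the Laplace/LASSO correspondence concrete, here's a minimal sketch in Python (assuming numpy, scipy, and scikit-learn; the data and the prior scale b are made up for illustration). The MAP estimate under a Gaussian likelihood with a Laplace prior minimizes (1/(2σ²))‖y − Xβ‖² + (1/b)‖β‖₁, which is proportional to sklearn's Lasso objective (1/(2n))‖y − Xβ‖² + α‖β‖₁ when α = σ²/(n·b):

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 200, 5
    X = rng.normal(size=(n, p))
    beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0])  # sparse ground truth
    sigma = 1.0                                       # known noise scale
    y = X @ beta_true + rng.normal(scale=sigma, size=n)

    b = 0.1  # Laplace prior scale (an illustrative choice)

    # Negative log-posterior: Gaussian likelihood plus Laplace prior on beta.
    def neg_log_posterior(beta):
        return (0.5 / sigma**2) * np.sum((y - X @ beta) ** 2) \
               + np.sum(np.abs(beta)) / b

    map_est = minimize(neg_log_posterior, np.zeros(p),
                       method="Nelder-Mead",
                       options={"xatol": 1e-8, "fatol": 1e-8}).x

    # Matching LASSO penalty: alpha = sigma^2 / (n * b) makes the
    # two objectives proportional, hence the same minimizer.
    lasso = Lasso(alpha=sigma**2 / (n * b), fit_intercept=False).fit(X, y)

    print("MAP  :", np.round(map_est, 3))
    print("LASSO:", np.round(lasso.coef_, 3))

The generic optimizer won't land on exact zeros (the objective is non-smooth), but it gets close to the LASSO coefficients, which do; the point is just that the "subjective" prior scale b and the "objective" regularization strength α are the same knob.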
I still have no clue what Gelman is saying about anything ever, and this post is no exception. He seems like a great guy in interviews and presentations, but anything he writes or talks about is highly non-specific.
Le Cam had interesting comments on Bayesian statistics.<p><a href="https://errorstatistics.com/2013/11/18/lucien-le-cam-the-bayesians-hold-the-magic/" rel="nofollow">https://errorstatistics.com/2013/11/18/lucien-le-cam-the-bay...</a>