As a mathematician, that article was a little embarrassing to wade through.<p>While there are valid criticisms of the way statistics are sometimes misused in science, pretty much every one of them comes down to a lack of understanding of how statistical models work - scientists reaching for a familiar test and following a formula they were taught. I can't blame them too much - understanding the true purpose and nature of statistical models is HARD (my recommended step one: become a Bayesian). What we need is for more people to recognise when they don't have that understanding and work with somebody who does.<p>What Briggs seems to have done, though, is decide that because <i>HE</i> doesn't understand statistical inference and modelling, statistics are bunk. Taking a simplistic definition of "trend" like "the second-half average is higher than the first" and turning it into a boolean yes/no answer is the statistical equivalent of being an anti-vaxxer.<p>The most frustrating thing, though, is that all the alternative definitions of "trend" he offers can actually be expressed as statistical models! The issue is that once you express those definitions/tests for "trend" as models, you see that the statements each model makes about the underlying system are very problematic.<p>TL;DR - Briggs doesn't understand statistical modelling, and has therefore concluded that his home-rolled tests are just as good.
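To make that concrete, here is a minimal sketch (Python, with synthetic data invented purely for illustration, not Briggs's actual series or method) contrasting the boolean "second-half average is higher" test with a simple least-squares fit that also reports how uncertain the estimated trend is:

    # Sketch only: synthetic data, assumed for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    t = np.arange(n)
    y = 0.02 * t + rng.normal(0.0, 1.0, n)   # weak trend buried in noise

    # Naive boolean test: collapses everything into yes/no, discards uncertainty.
    naive_trend = y[n // 2:].mean() > y[:n // 2].mean()

    # Simple linear model: slope estimate plus a standard error, so we can see
    # how much the data actually constrain the "trend".
    X = np.column_stack([np.ones(n), t])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid_var = np.sum((y - X @ beta) ** 2) / (n - 2)
    slope_se = np.sqrt(resid_var * np.linalg.inv(X.T @ X)[1, 1])

    print("naive boolean trend:", naive_trend)
    print(f"fitted slope: {beta[1]:.4f} +/- {slope_se:.4f}")

The boolean test returns the same confident yes/no whether the slope is well constrained or drowned in noise; even this crude model at least makes that uncertainty visible.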
This is a very interesting piece to read. The amazing thing to me is how smart people can look at the same evidence and come to very different conclusions. To get the feel of this piece, you really need to read it along with the comments.<p>The author of the piece is a very talented statistician, with a PhD in Mathematical Statistics (I hesitate to think of what other kind of statistics someone might study?), a Masters in Atmospheric Physics, and a BA in Meteorology. Without question, he is extremely well qualified on paper to write the post he did: <a href="http://wmbriggs.com/blog/?page_id=1085" rel="nofollow">http://wmbriggs.com/blog/?page_id=1085</a><p>The main critical commentator is David Appell. He's a freelance writer who has been concentrating on and reporting about global warming issues for many years. He's got a PhD, Masters, and BA in Physics, has interviewed most of the main scientists in the field multiple times, and understands the issues as well as or better than anyone else writing about them in the mainstream press: <a href="http://davidappell.com/" rel="nofollow">http://davidappell.com/</a><p>And it appears that the two completely disagree about practically every line in the article!<p>Without doubting the credentials or understanding of either, I can't shake the feeling that one is following the data wherever it leads, and the other could rationalize any result under the sun without being knocked off a single one of his talking points. And I'm sure that others can read the piece and the comments and have exactly the opposite reaction.<p>What is it about their prose that can generate in me this amount of certainty and trust? Somehow, one is able to signal to me that they share my worldview, and thus I'm willing to trust them on the details I'm unfamiliar with. The other loses my trust within the first paragraph. At some level I know that neither of these responses can be fully trusted, and yet I can't shake the feeling that one can be trusted and the other cannot. How can this be?
Wow, the extensive comment section is eye-opening to me: I had no idea how little consensus exists about the validity of models, line-fitting, and prediction.