The purpose of the Marcott et al. study was to reconstruct what happened during the last 11,300 years; we already know what happened during the last 100 years from the instrumental record. (The uptick at the end of their graph is, in any case, almost certainly a byproduct of misaligned proxies.)<p>What's more, the FAQ quotes directly from the article; it offers no new caveats. In particular, from both the article and the FAQ:<p>> Without filling data gaps, our Standard 5×5 reconstruction (Figure 1A) exhibits 0.6°C greater warming over the past ~60 yr B.P. (1890 to 1950 CE) than our equivalent infilled 5° × 5° area-weighted mean stack (Figure 1, C and D). However, considering the temporal resolution of our data set and the small number of records that cover this interval (Figure 1G), this difference is probably not robust.<p>It's disingenuous to imply, as the blogger does, that this clarification was tacked on after the fact.<p>Finally, there's this from the blog:<p>> If your methods can’t resolve data points in a given period of time, then DON’T REPRESENT DATA POINTS IN THAT GIVEN PERIOD OF TIME.<p>Marcott et al. clearly state (<i>in the paper</i>) that essentially zero variability is preserved at 300-year periods, about 50% at 1000-year periods, and nearly full variability at 2000-year periods. The blogger takes this to mean that the last 2000 years should be arbitrarily trimmed from the graph. By that logic, we might as well keep trimming along the x-axis until nothing is left.
This is a ridiculous critique. We have thermometer data for the last 100 years; we don't need to look at rocks for that part. Chopping off the hockey stick would itself be disingenuous: it would mean ignoring accurate data.