Thanks. I've been trying to find a good way to understand the accuracy of Silver's predictions.

What's a fair benchmark? This article offers up a "coin flip" for each state, computing that such a coin flip would have a Brier score of 0.25. (The Brier score is the mean squared error between the predicted probability of an event and its outcome, 1 if it occurred and 0 if not. If a coin flip is the model, each state's result of 1 or 0 is off by 0.5, so the mean squared error is (1/51) * 0.25 * 51 = 0.25.)

But... that seems like too generous a benchmark. Take the simple model: "assume 100% likelihood that state X will vote for the same party as it did in 2008." Deeply red and blue states will almost certainly vote the same way again, so that takes the non-battlegrounds out of the equation.

With this model, there would have been only 2 errors out of 51, for a Brier score of 2/51 ≈ 0.039, beating Intrade and the poll average computed in this article by a wide margin.

After working through this, I'm still impressed by Silver and the other quant predictions. But I'm more concerned about media that rely too much on reporting a single poll's result as "news" rather than as part of a larger tapestry.

Then again, it's the maligned media polls that are the raw input to Silver and the other models. Unless the media keeps funding the polls, the quality of these more advanced models will suffer.
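To make the arithmetic concrete, here's a rough Python sketch of those two benchmark scores. The only real inputs are the 51 contests (50 states plus DC) and the fact that two states flipped relative to 2008; everything else is illustrative scaffolding, not anyone's actual model.

    def brier(predictions, outcomes):
        # Mean squared difference between predicted probability and outcome (1 or 0).
        return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

    n_states = 51  # 50 states plus DC

    # Coin-flip benchmark: 50% for each state. The score is 0.25 no matter
    # what the outcomes are, since (0.5 - 1)^2 = (0.5 - 0)^2 = 0.25.
    print(brier([0.5] * n_states, [1] * 26 + [0] * 25))  # -> 0.25

    # "Same party as 2008" benchmark: probability 1.0 that each state repeats
    # its 2008 result. In 2012 only two states flipped, so the event
    # "state repeats its 2008 result" occurred in 49 of the 51 contests.
    repeats_2008 = [1] * 49 + [0] * 2
    print(brier([1.0] * n_states, repeats_2008))  # -> 2/51, about 0.039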
Note how NPR is one of the most right-biased in this result. It's pretty evident from years of listening that the NPR staff are generally progressives and lean left personally. So I think this result exemplifies how genuinely 'fair and balanced' NPR really is.
I would like to see them add in David Rothschild at Yahoo[1], who's an expert in scoring rules and prediction markets and whose February (!) predictions were almost exactly on the money.

[1] http://news.yahoo.com/blogs/signal/
This compares top lines. I think a comparison of turnout model accuracy would be more informative. Most of the models that erred predicted that the 2012 turnout would lean less Democratic than the 2008 turnout, based on the 2010 mid-term turnout and a (mis)perceived dampening of enthusiasm among Democrats and increased enthusiasm among Republicans. Based on exit polling, there was a drop-off of 7 million white voters, and I don't think anyone predicted that.
Does anybody know more about YouGov's methodology? On the face of it, I'm suspicious of their very low margin of error, which seems substantially better than any other poll out there, but you can't deny that their polling was accurate.

Another thing that looks odd on that graph: the given polling numbers from Washington Times/Politico/Monmouth/Newsmax/Gravis/Fox/CNN/ARG all look *identical* despite their differing margins of error (which suggests their source data is different). What's going on there?
The article doesn't mention Sam Wang, whose confidence level for an Obama win was 99%.

http://election.princeton.edu/
Actually the most accurate 2012 election pundit was Drew Linzer (http://votamatic.org/). Provided Florida goes Obama's way, he correctly predicted the electoral college: Obama 332, Romney 206.
Slate Magazine found two other pundits who were as accurate as Silver.

http://www.slate.com/articles/news_and_politics/politics/2012/11/pundit_scorecard_checking_pundits_predictions_against_the_actual_results.html
Great article, but I disagree with the colouring on the first graph: if reality was within the poll's margin of error, I don't think it should be coloured, because that implies a bias that (probably) isn't actually there.
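Concretely, the rule I'd want is something like the sketch below: leave a poll grey when its miss falls inside its own margin of error, and only colour the ones that actually missed. The pollster names, margins, and the rough 3.9-point national result are made-up or approximate, just to show the idea.

    # Hypothetical polls: (pollster, predicted Obama margin, margin of error).
    polls = [
        ("Poll A", 1.0, 3.5),
        ("Poll B", -2.0, 2.5),
        ("Poll C", 6.0, 2.0),
    ]
    actual_margin = 3.9  # roughly the final national popular-vote margin

    for name, predicted, moe in polls:
        error = predicted - actual_margin
        if abs(error) <= moe:
            colour = "grey (within its own margin of error)"
        elif error > 0:
            colour = "blue (overestimated Obama)"
        else:
            colour = "red (underestimated Obama)"
        print(f"{name}: error {error:+.1f}, coloured {colour}")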