Thanks. I've been trying to find a good way to understand the accuracy of Silver's predictions.<p>What's a fair benchmark? This article offers up a "coin flip" for each state, computing that such a coin flip would have a Brier score of 0.25. (The Brier score is the mean squared error between the outcome (1 or 0) and the predicted probability of that outcome. If a coin flip is the model, the prediction for every state is 0.5, so each state's outcome of 1 or 0 is off by 0.5, a squared error of 0.25. The mean over 51 contests is (1/51) * 0.25 * 51 = 0.25.)<p>But... that seems too generous a benchmark. Take the simple model: "assume 100% likelihood that state X will vote for the same party as it did in 2008." That locks in the deeply red and blue states, taking the non-battlegrounds out of the equation.<p>With this model, there would have been errors in only 2 of 51 contests. Each miss contributes a squared error of 1, so this simple, lazy model achieves a Brier score of 2/51 ≈ 0.039, handily beating Intrade and the poll average computed in this article.<p>After working through this, I'm still impressed by Silver and the other quant predictions. But I'm more concerned about media that rely too much on reporting a single poll's result as "news" rather than as part of a larger tapestry.<p>Then again, it's those maligned media polls that are the raw input to Silver's model and the others. Unless the media keep funding the polls, the quality of these more advanced models will suffer.
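For anyone who wants to check the arithmetic, here's a minimal sketch of the two benchmarks. The outcome vectors are placeholders (any 0/1 labeling with the stated number of misses gives the same score), not real 2012 state data:

```python
def brier(predictions, outcomes):
    """Mean squared error between predicted probability and 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

N = 51  # 50 states + DC

# Coin-flip benchmark: predict 0.5 everywhere. Every contest is off by 0.5,
# so the score is 0.25 regardless of the actual outcomes.
outcomes = [1] * 26 + [0] * 25          # placeholder split
print(brier([0.5] * N, outcomes))       # 0.25

# Lazy benchmark: predict with 100% certainty that each state repeats 2008.
# If that's wrong in exactly 2 of 51 contests, each miss costs (1 - 0)^2 = 1.
lazy = [1.0] * N                        # "certain to match 2008"
repeat_2008 = [1] * (N - 2) + [0] * 2   # 2 states flipped
print(round(brier(lazy, repeat_2008), 3))  # 0.039
```

The lazy model's score is just (number of misses) / 51, which is why locking in the safe states makes it so hard to beat.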