><i>There’s less to fight about when polls show similar results. But that doesn’t necessarily mean they’ll turn out to be more accurate. Instead, that consensus may reflect herding — pollsters suppressing results that they deem to be outliers, out of fear of embarrassment.</i><p>And there goes my entire faith in the process. Deciding which polls to release based on the results is absurd. If the media is going to focus so much on polls (and see my other comment for what I think about that), then it should only source them from pollsters that share <i>all</i> results, not just the ones the pollster deemed "correct" because they fit its expectations.<p>Nate Silver's strategy for dealing with it isn't encouraging either:<p>><i>In fact, you should trust a pollster more if it’s willing to publish the occasional “outlier.” Clinton probably isn’t winning Colorado by 13 percentage points right now or losing Pennsylvania by 6 points. But the fact that Monmouth and Quinnipiac are willing to publish such results is a sign that they’re letting their data speak for itself.</i><p>It sounds like he's admitting he doesn't even know what each pollster's practices are. For all we know, a pollster could be so sure of a Hillary or Trump lead that it doesn't publish the outliers in the <i>other</i> direction. How can Nate Silver aggregate these results with any certainty whatsoever?
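<p>To make concrete why that matters for aggregation, here's a toy simulation (my own sketch, not anything resembling FiveThirtyEight's actual model; the true margin, the noise level, and the suppression rule are all made-up assumptions). One pollster quietly shelves every result showing the "wrong" candidate ahead, and the published average drifts away from the truth even though every individual poll was an honest random draw:

    # Toy illustration of one-sided herding; all numbers are invented.
    import random

    random.seed(0)

    TRUE_MARGIN = 2.0   # assumed true Clinton lead, in points
    NOISE_SD = 3.0      # assumed sampling error of a single poll
    N_POLLS = 10_000

    all_polls, published = [], []
    for _ in range(N_POLLS):
        result = random.gauss(TRUE_MARGIN, NOISE_SD)
        all_polls.append(result)
        # The herding pollster suppresses any poll showing a Trump lead,
        # but happily publishes large Clinton leads.
        if result >= 0:
            published.append(result)

    print(f"True margin:                {TRUE_MARGIN:+.1f}")
    print(f"Average of all polls:       {sum(all_polls) / len(all_polls):+.1f}")
    print(f"Average of published polls: {sum(published) / len(published):+.1f}")
    print(f"Share of polls suppressed:  {1 - len(published) / len(all_polls):.0%}")

<p>With those made-up numbers, roughly a quarter of the polls get shelved and the published average overstates the lead by more than a point, and an aggregator who only sees the published polls has no way to correct for that without knowing the suppression rule.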