Data-viz rant ...<p>There's a word cloud about halfway down the post that shows words frequently used in comments on "rumor" posts. Words that are darkly shaded are associated with true rumors, and words that are lightly shaded are associated with false rumors.<p>I've never been into word clouds as a data visualization tool, but if you do use a word cloud, and you're using color to communicate something relevant about the data, PLEASE do not use a monochrome gray gradient, as this post has done. It's really difficult to tell whether "government" is slightly darker or lighter than "certainly," for instance.<p>A blue-red gradient would have worked a lot better in this case, in my humble opinion.
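For what it's worth, the diverging blue-red mapping is easy to sketch without any plotting library. The truth scores below are purely hypothetical; a real word cloud would feed RGB values like these to its renderer:

```python
def diverging_color(score):
    """Map a truth score in [-1.0, 1.0] to an RGB tuple.

    -1.0 (strongly "false" rumor) -> pure blue  (0, 0, 255)
     0.0 (ambiguous)              -> white      (255, 255, 255)
    +1.0 (strongly "true" rumor)  -> pure red   (255, 0, 0)
    """
    if score < 0:
        t = -score  # strength of the "false" association
        return (int(255 * (1 - t)), int(255 * (1 - t)), 255)
    t = score       # strength of the "true" association
    return (255, int(255 * (1 - t)), int(255 * (1 - t)))

# Hypothetical word scores: "government" slightly true-leaning,
# "certainly" slightly false-leaning -- trivially distinguishable
# by hue, unlike two nearby shades of gray.
print(diverging_color(0.2))   # reddish
print(diverging_color(-0.2))  # bluish
```

The point of a diverging scale is that sign is carried by hue (blue vs. red) and magnitude by saturation, so "which side is this word on?" is readable at a glance.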
It was hard for me to find again, so I'll leave this here in case anyone else is interested. The psychology behind a lot of how rumors form and perpetuate falls under herd behavior, availability cascades, and information cascades. Pretty much all of it is based on a group of people deciding that something they all agree on is important, such as a significant threat or a particularly good outcome. Everything from public policy to stock market prices to internet memes and media scandals seems to be decided this way.<p><a href="http://en.wikipedia.org/wiki/Herd_behavior" rel="nofollow">http://en.wikipedia.org/wiki/Herd_behavior</a> <a href="http://en.wikipedia.org/wiki/Availability_cascade" rel="nofollow">http://en.wikipedia.org/wiki/Availability_cascade</a> <a href="http://en.wikipedia.org/wiki/Information_cascade" rel="nofollow">http://en.wikipedia.org/wiki/Information_cascade</a>
This is really interesting; I remember reading something similar a while ago and having the same thought then as I did today. I wonder how this would look if they segmented the data by education level.<p>When I look through my social media news feed (or read BuzzFeed posts on the hilariously dumb things that have been said on the Internet), I see a very big difference in what gets posted, even just between those who are college-educated and those who are not. Certainly this is no guarantee. I'm sure I've shared stuff that turned out to be fake, and even reputable news agencies make mistakes. But to me the data would be a[nother] compelling argument for better and more accessible education--to stop the damn rumors! (My real goal is to put Snopes out of business.)
Very interesting indeed. I think it raises many more questions than it answers about human behavior and the infectiousness of rumors in social media. I'd be particularly interested to see an analysis of whether sharing a false rumor has any effect on the reshare rate of a user's future posts. In other words: is a user's influence or perceived reliability (as measured by the relative rate of reshares of his/her future posts) diminished following the initial share of a false rumor? Reduced reshare rates could be a positive reflection of an increasingly skeptical and better-informed user community. Consistent reshare rates would be...a less optimistic sign.
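One way to operationalize that question, as a rough sketch: compare a user's mean reshare count before and after their first false-rumor share. The share log and numbers below are entirely hypothetical:

```python
from statistics import mean

# Hypothetical per-user share log: (timestamp, is_false_rumor, reshares)
shares = [
    (1, False, 40), (2, False, 35),
    (3, True, 50),   # first false rumor shared here
    (4, False, 10), (5, False, 12),
]

# Index of the first false-rumor share (assumes one exists)
first_false = next(i for i, (_, is_false, _) in enumerate(shares) if is_false)

# Mean reshares of posts before vs. after that share
before = mean(r for _, _, r in shares[:first_false])
after = mean(r for _, _, r in shares[first_false + 1:])

print(before, after)  # a drop would suggest diminished perceived reliability
```

A real analysis would of course need many users, controls for post topic and follower growth, and a significance test, but the before/after contrast is the core of the question.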
I wonder how close this matches up to the "unsustainable growth" graphs from yesterday: <a href="https://news.ycombinator.com/item?id=7662841" rel="nofollow">https://news.ycombinator.com/item?id=7662841</a><p>edit: also, in light of "A Batesian Mimicry Explanation of Business Cycles" <a href="https://news.ycombinator.com/item?id=7634628" rel="nofollow">https://news.ycombinator.com/item?id=7634628</a> , could this be a good basis for a bubble investment model?
Seems like it would be easy to follow the identified Snopes links and parse the true/false verdict they give, in order to attach a verified or debunked label (linking to Snopes) to every share. Now that would be useful. All this data mining is interesting, but it just tells us what we already know (people share lots of rumors that are frequently false). Why not do something about it?
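The parsing step might look something like the sketch below. The assumption that the page text carries a rating in the form "Rating: True/False/Mixture" is entirely hypothetical; real Snopes pages would need sturdier scraping, and the fetched text would come from whatever HTTP client you use:

```python
import re

def snopes_verdict(page_text):
    """Extract a rough verdict from the text of a fact-check page.

    Hypothetical assumption: the page contains a rating word such as
    "True", "False", or "Mixture" following "Status:" or "Rating:".
    Returns "Unknown" when no such pattern is found.
    """
    m = re.search(r"(?:Status|Rating):\s*(True|False|Mixture)",
                  page_text, re.IGNORECASE)
    return m.group(1).capitalize() if m else "Unknown"

# Usage on hypothetical page snippets:
print(snopes_verdict("... Rating: FALSE ..."))  # the label for a debunked share
print(snopes_verdict("no rating on this page"))
```

With a function like this, each share that links to a fact-check page could carry a machine-generated "verified" or "debunked" badge rather than just a raw URL.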