The Filter Bubble: Algorithms as Gatekeepers

42 points by acrum almost 14 years ago

4 comments

mhb almost 14 years ago
Pariser's best point is that we should be aware that what we are seeing is actually being filtered, but I think he is overstating the cost of personalized filtering and understating its benefit.

There is too much information available to view it unfiltered, and it's unclear what that would even mean. If Google returned the same search results to everyone, that wouldn't mean the results were unfiltered: Google would still have determined what to return based on what other people wanted to see. You would just be seeing things through the same filter as everyone else. Why is that better? If you think it is, consider that the reason there is so much trivial crap on TV is that the same people who watch it are the ones generating the aggregate filter.

Old media was already filtered. Just because everyone reads the same NY Times doesn't mean it doesn't have a ____ slant, or that its subscribers aren't subject to that confirmation bias. How many people who subscribe to the NYT also subscribe to Reason magazine to get the libertarian viewpoint on current events?

One way this could get better is for the personalizing algorithm to become smarter. Instead of showing you conservative viewpoints because you click on conservative opinions more than liberal ones, a smarter filter would make a higher-level assessment of you. If you also click on some liberal ones, maybe it concludes something about you rather than something trivial about what you look at: maybe it concludes that you like to hear other viewpoints, and it modifies your filter to better suit that conclusion.
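
To make that last idea concrete, here is a minimal sketch in Python of the two filters being contrasted: one that ranks purely by past click counts, and one that makes the higher-level inference that a cross-leaning click history signals an appetite for variety. The click log, the "leaning" labels, and every function name here are invented for illustration, not any real recommender's API.

    from collections import Counter

    def naive_filter(click_log, candidates):
        """Rank candidates by how often the user clicked that leaning before."""
        clicks = Counter(item["leaning"] for item in click_log)
        return sorted(candidates, key=lambda c: clicks[c["leaning"]], reverse=True)

    def higher_level_filter(click_log, candidates):
        """If the clicks span several leanings, infer that the user wants
        variety, and rank the least-clicked leanings first instead."""
        clicks = Counter(item["leaning"] for item in click_log)
        seeks_variety = len(clicks) > 1
        def score(c):
            base = clicks[c["leaning"]]
            return -base if seeks_variety else base  # invert for variety-seekers
        return sorted(candidates, key=score, reverse=True)

    log = [{"leaning": "conservative"}, {"leaning": "conservative"},
           {"leaning": "liberal"}]
    items = [{"title": "A", "leaning": "conservative"},
             {"title": "B", "leaning": "liberal"},
             {"title": "C", "leaning": "libertarian"}]
    print([i["title"] for i in naive_filter(log, items)])         # ['A', 'B', 'C']
    print([i["title"] for i in higher_level_filter(log, items)])  # ['C', 'B', 'A']

The point is only that the same click log supports two very different rankings, depending on what the algorithm is willing to infer from it.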
mhb almost 14 years ago
Suggest reading the original instead of this commentary on it: http://www.nytimes.com/2011/05/23/opinion/23pariser.html
spacemanaki almost 14 years ago
I'm really curious to read any discussion about this on HN, because the other times Eli Pariser's TED talk and book have been posted, there hasn't been much discussion. It's been circulating in my family (mostly among non-technical members) and stirred up some amount of alarm. I'm skeptical, mainly because the evidence doesn't seem that damning.

Has anyone read the book? Is there significant evidence that this is a serious issue? I completely believe that this is beginning to happen, and I would believe that FB does it, since I don't spend enough time on the news feed to notice if it were happening. But the example in the TED talk and in the NYTimes column linked to by mhb is a Google search for "Egypt" earlier in the year, with two people seeing drastically different results: "Two people who each search on Google for 'Egypt' may get significantly different results, based on their past clicks."

Aren't there other reasons for two people being served different results by Google? I've often read that they do heavy A/B-style testing, which seems like it could explain some of the discrepancies.

So what does HN think? Is there stronger evidence here than a comparison of a few Google searches?
Comment #2586687 not loaded
ericd almost 14 years ago
Hm, if people estimate the popularity of something by the relative frequency with which they encounter it, and if they end up seeing only stuff that's tailored to them, then they may start believing that everything they like is mainstream and "correct" as a result. People may become less tolerant of stuff far outside of their comfort zone and end up becoming more polarized, which is not what I had imagined would be the effect of the internet.
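
That feedback loop is easy to sketch. In the toy simulation below, the feed over-represents whatever leaning the user already favors, the user reads frequency in the feed as popularity, and their preference drifts toward what looks mainstream. The squared weighting, feed size, and drift rate are all arbitrary assumptions made for illustration, not a claim about how any real feed works.

    import random

    random.seed(0)
    leanings = ["red", "blue"]
    preference = {"red": 0.55, "blue": 0.45}  # user starts mildly red-leaning

    for step in range(10):
        # Personalization over-samples the already-favored leaning
        # (squaring the preference exaggerates the majority).
        weights = [preference[l] ** 2 for l in leanings]
        feed = random.choices(leanings, weights=weights, k=200)

        # The user estimates popularity from encounter frequency in the feed...
        perceived_red = feed.count("red") / len(feed)

        # ...and drifts toward whatever looks mainstream.
        preference["red"] = 0.7 * preference["red"] + 0.3 * perceived_red
        preference["blue"] = 1.0 - preference["red"]
        print(f"step {step}: feed is {perceived_red:.0%} red, "
              f"preference now {preference['red']:.2f}")

Under these assumptions the initial mild lean amplifies over the run: the feed makes the favored leaning look more popular than it is, and the perceived popularity feeds back into the preference.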