A great recommendation engine should incorporate explicit feedback from users.<p>For example, Netflix knows what I've watched and how I've rated things. But it doesn't let me tell it things like "I categorically dislike horror movies", so it sometimes recommends those.<p>It's trying to guess something that I'd rather just tell it.
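<p>For what it's worth, explicit negative feedback is also easy to act on once you have it. A minimal sketch, with a hypothetical `Item` type and `score` function (an illustration, not Netflix's actual pipeline): the model can rank however it likes, but a hard filter removes blocked genres before anything is shown.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Item:
        title: str
        genres: frozenset

    def recommend(candidates, score, blocked_genres, k=10):
        """Rank candidates by the model's predicted score, but hard-filter
        anything in a genre the user has explicitly ruled out."""
        allowed = [item for item in candidates
                   if not (item.genres & blocked_genres)]
        return sorted(allowed, key=score, reverse=True)[:k]

    # Even if the model scores a horror film highest, an explicit
    # "no horror" preference removes it before ranking.
    # (Predicted scores below are made up for the example.)
    catalog = [Item("The Shining", frozenset({"horror"})),
               Item("Spirited Away", frozenset({"animation", "fantasy"}))]
    predicted = {"The Shining": 4.6, "Spirited Away": 4.2}
    print(recommend(catalog, score=lambda i: predicted[i.title],
                    blocked_genres=frozenset({"horror"})))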
Interestingly, I've found an opposite effect -- people tend to oversell their favorite movies, restaurants, etc., so a person's strong recommendation will often lead to disappointment, not because the movie/food isn't good but because my expectations were set too high.
I think the 'black swan' is important in recommendations as well. There's stuff that will blow your mind because it connects, in some unexpected way, to things you already think about or have done. It might not be relevant at all to most people who read the same stuff as you do, so it's very unlikely to be recommended to you. Recommendations increase 'groupthink': you find interesting stuff, but not a lot that's outside your comfort zone. Humans are still better than machines at connecting the dots and giving recommendations. I guess it's difficult to teach a machine to make great surprises?
The author writes as if, when a 'real person' makes a recommendation, they aren't just making a prediction from data. The difference is that we can see how the algorithm works, whereas our picture of what the human brain does when it thinks about what someone might like is very murky.<p>And even if trusting a recommendation causes a placebo effect that increases the perceived quality (as others have pointed out, it seems plausible that a hyped recommendation may cause disappointment instead), why on Earth would we trust a process we <i>don't</i> understand, but mistrust a process we <i>do</i>?
<i>At the same time, it's hard for me to trust algorithmic recommendations. I don't trust them because I know they're from a computer, and I find it difficult to believe that a computer program can "know" who I am.</i><p>Determining something that's likely to suit your tastes based on your browsing/buying/watching/reading/etc. habits isn't "knowing" you and doesn't require such a thing. I don't know if this is some sort of accidental anthropomorphism or what, but I've seen this in more than one place and it just doesn't follow.
This article made me laugh; let's review.<p>Computer predictions seem to work, but only get you halfway there. I don't trust them because they are from a computer, but if I wrote the program myself I would trust it 100 times less. As with baseball, guessing the home team will win works at least half the time. Our recommendation engine works, and people spend 4 times longer reading the articles we recommend.<p>Conclusion: computer-based recommendation engines are 50% placebo.
The only thing that ultimately matters to me as far as recommenders (whether human or algorithmic) are concerned is whether their recommendations turn out to be valuable to me in some way.<p>That's how you build trust: by making good recommendations. Everything else is secondary.
> All additional information will make your prediction better<p>I don't think this is true... I find that there's a point where additional information becomes noise, and can make your prediction less useful.
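<p>A quick way to see this: pad a small dataset with pure-noise features and watch held-out accuracy drop. A rough sketch using scikit-learn (assumed installed; the data is synthetic and exact numbers will vary):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 200
    signal = rng.normal(size=(n, 2))                   # 2 genuinely informative features
    y = (signal[:, 0] + signal[:, 1] > 0).astype(int)  # label depends only on those 2
    noise = rng.normal(size=(n, 200))                  # 200 irrelevant "extra information" features

    for X, label in [(signal, "2 informative features"),
                     (np.hstack([signal, noise]), "same 2 + 200 noise features")]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
        print(f"{label}: mean CV accuracy = {acc:.2f}")

The extra columns carry no information about the label, yet the model fits them anyway and generalizes worse. More information only helps if it's actually signal.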