When I started learning about Bayesian statistics years ago, I was fascinated by the idea that a statistical procedure might take data in a form like "94% positive out of 85,193 reviews, 98% positive out of 20,785 reviews, 99% positive out of 840 reviews" and give you an objective estimate of which seller is more reliable. Unfortunately, over time it became clear that no such magic bullet exists: for the procedure to give you some estimate of who is the better seller, YOU have to supply a rule for how to discount positive reviews based on their count (in the form of a prior). And if you try to cheat by encoding "I don't really have a good idea of how important the number of reviews is", the statistical procedure will (unsurprisingly) respond with "in that case, I don't really know how to re-rank them" :(
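A minimal sketch of what this looks like with the classic Beta-Binomial model, using the three review counts above (positives rounded from the percentages). The specific prior parameters are my own illustrative assumptions, not a recommendation:

```python
# (positive reviews, total reviews), rounded from the percentages above
sellers = {
    "A": (80081, 85193),   # ~94% of 85,193
    "B": (20369, 20785),   # ~98% of 20,785
    "C": (832, 840),       # ~99% of 840
}

def posterior_mean(pos, n, a, b):
    """Posterior mean of a Beta(a, b) prior updated with pos positives out of n."""
    return (pos + a) / (n + a + b)

def rank(a, b):
    """Sellers ordered best-first under a Beta(a, b) prior."""
    return sorted(sellers,
                  key=lambda s: posterior_mean(*sellers[s], a, b),
                  reverse=True)

# Nearly flat prior ("I don't know how much review counts matter"):
# the posterior essentially reproduces the raw percentages.
print(rank(1, 1))      # ['C', 'B', 'A'] -- same order as the raw ratings

# Opinionated prior (an assumed example): every seller starts with 1000
# pseudo-reviews at 90% positive, so 840 real reviews barely move the estimate.
print(rank(900, 100))  # ['B', 'C', 'A'] -- the small-sample seller drops
```

The point is visible in the two calls: the near-flat Beta(1, 1) prior leaves the raw percentage ranking untouched, while the ranking only changes once you commit to a concrete opinion (here, 1000 pseudo-reviews at 90%) about how much a small review count should be distrusted.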