I was browsing the Whoop subreddit when I noticed a suspicious chain of comments in a format I've seen before: a product with an oddly generic name is linked using Markdown, and while the comment reads as vaguely human-like, the commenter's profile makes clear that the product is all they ever comment about. Here is an example of what I mean:<p>https://www.reddit.com/r/whoop/comments/1ebvrld/whos_actually_paying_60_for_bands/lew1age/<p>The comment seems real enough on its own, but this is the user's profile:<p>https://www.reddit.com/user/AtmosphereNational96<p>My question is this: how can social media companies possibly counter this level of astroturfing? And what can the average consumer do to defend themselves against this propaganda war?
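The tell described above (every comment from the account links the same product) can be approximated mechanically. Here's a minimal sketch of that heuristic in Python: given an account's comment texts, find the most frequently linked domain and measure what fraction of comments mention it. The function name, the example comments, and the `acmeband.example` domain are all hypothetical; a real check would pull comment history from Reddit's API and pick a flagging threshold empirically.

```python
# Sketch of a single-product-account heuristic (assumed approach, not any
# platform's actual detection logic).
import re
from collections import Counter

def link_concentration(comments):
    """Return the fraction of comments that mention the account's
    most frequently linked domain. Values near 1.0 suggest an account
    that exists to push one product."""
    domains = []
    for text in comments:
        # Pull the domain out of bare or Markdown-wrapped URLs.
        for m in re.findall(r'https?://(?:www\.)?([^/\s\)]+)', text):
            domains.append(m.lower())
    if not domains:
        return 0.0
    top_domain, _ = Counter(domains).most_common(1)[0]
    linking = sum(1 for t in comments if top_domain in t.lower())
    return linking / len(comments)

# Hypothetical comment history in the style described in the post.
comments = [
    "I switched to the AcmeBand and never looked back: https://acmeband.example/shop",
    "Honestly [AcmeBand](https://acmeband.example) fixed this for me.",
    "Has anyone tried https://acmeband.example/deals ?",
]
print(link_concentration(comments))  # 1.0: every comment links the same domain
```

Of course, this only catches the laziest operations; an astroturfer who mixes in ordinary comments or rotates domains dilutes the score, which is part of the asymmetry problem.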
Authentication is a bitch.<p>There have been several waves of effort to "raise awareness," and they have all fared about as well as Musk's most recent: <a href="https://www.cnn.com/2022/04/28/tech/elon-musk-authenticate-all-real-humans/index.html" rel="nofollow">https://www.cnn.com/2022/04/28/tech/elon-musk-authenticate-a...</a><p>Perhaps we just have to accept that things we can't smell are most likely not real, and assess what we see on the internet in that light.<p>Did you believe an energy drink would give you wings when it was the company making it saying so? Why would it be more credible coming from some random handle on the net?
Why would they?<p>1. Astroturfing is marketing, and in general people want information about products and services.<p>2. To be good, moderation must be done by hand. That's expensive, and platform users don't want to pay for it. Moderation done on the cheap produces false positives and false negatives, which feel like injustices and piss off users.<p>3. Astroturfing is asymmetric. Astroturfers have strong financial incentives to keep bypassing filters, platforms have large attack surfaces, and patching every possible small hole creates friction for good-faith use.<p>Usenet is still with us in spirit. Good luck.