This might be overly reductive, but I read this as "right" tweets and figures generating more collective user engagement than "left" tweets and figures, while the underlying algorithm(s) remain blind to the substance - political or otherwise - of the message.<p>The proposed change would then introduce conscious political bias to ensure an even representation of the spectrum... but I wonder how we define "even"? Perhaps we could base it on users' constituencies?
I don’t see the claims from Twitter matching the headline The Guardian is running. Twitter’s internal research says that certain tweets are organically more popular and engaging and therefore more likely to be shown, not that the algorithm has an <i>ideological</i> bias. If you search for the word “bias” in Twitter’s post, you’ll find no support for The Guardian’s editorialization. Here’s an important excerpt, which notes that more analysis is needed to understand whether the amplification is unnatural relative to user interactions:<p>> Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it. Further root cause analysis is required in order to determine what, if any, changes are required to reduce adverse impacts by our Home timeline algorithm.<p>This study also leaves out a very important consideration, which is the impact of Twitter’s moderation along political lines. I suspect that’s where the true bias lies, and given that Twitter’s content policies reflect progressive ideology, it is very likely that any such bias leans left.<p>Twitter’s original blog post: <a href="https://blog.twitter.com/en_us/topics/company/2021/rml-politicalcontent" rel="nofollow">https://blog.twitter.com/en_us/topics/company/2021/rml-polit...</a>
Emergence[1] happens. This is a cautionary point about automating societal processes. In the extreme case, we could arrive at SkyNet.<p>[1] <a href="https://en.m.wikipedia.org/wiki/Emergence" rel="nofollow">https://en.m.wikipedia.org/wiki/Emergence</a>
I skimmed through the study [1], and the methodology seems to be as follows:<p>They define a metric called the "amplification ratio" for a set of tweets. Roughly, for a specific set of tweets, it is the ratio of their "reach" in the sample of users with the ML timeline (treatment, 4% of global users) to their reach in the sample of users with the chronological timeline (control, 1% of global users). For a sample of users, the reach of a set of tweets is defined as the share of the sample who encounter at least one tweet in the set. (The amplification metric is actually shifted so that 0% corresponds to a ratio of 1, i.e. equal reach.)<p>Then they took a sample of right-wing and left-wing politicians and media outlets and calculated these amplification ratios (for individual accounts, for the left and right groups taken as a whole, etc.). Generally, this "amplification" metric was larger for right-wing accounts (or groups of accounts).<p>I think the use of that metric for measuring bias is misleading, though, in the sense that it does not account for the fact that Twitter users are mostly left-wing [2], and this significantly affects the metric they have chosen.<p>Assume that I'm a left-winger who does not follow any right-wing politicians. Then probably any sensible algorithm that includes tweets in my timeline from accounts I do not follow will increase the right-wing "amplification" metric, as it is enough for it to show me just one tweet from a right-winger. If my understanding is correct, their measure of amplification is far too sensitive. 
(and it is worse when applied to measure the reach of a larger group of accounts, as an encounter with a single tweet from any member of the group counts as reach for the whole group)<p>[1]: <a href="https://cdn.cms-twdigitalassets.com/content/dam/blog-twitter/official/en_us/company/2021/rml/Algorithmic-Amplification-of-Politics-on-Twitter.pdf" rel="nofollow">https://cdn.cms-twdigitalassets.com/content/dam/blog-twitter...</a><p>[2]: There are many studies on this, e.g. this one by Pew Research Center: <a href="https://archive.md/iEJaq" rel="nofollow">https://archive.md/iEJaq</a>
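To make the sensitivity point concrete, here is a toy sketch of the reach/amplification computation as I understand it from the paper - made-up data and names, not the study's actual code. Note how a single right-wing tweet surfaced to otherwise left-leaning users is enough to multiply the group's "reach":

```python
# Toy sketch of the "reach"/"amplification" metric described above.
# Data, tweet ids, and sample sizes are invented for illustration.

def reach(sample, tweet_set):
    """Share of users in the sample who encountered at least one
    tweet from tweet_set (a single encounter is enough)."""
    hits = sum(1 for seen in sample if seen & tweet_set)
    return hits / len(sample)

def amplification(treatment, control, tweet_set):
    """Ratio of reach in the ML-timeline sample (treatment) to reach
    in the chronological sample (control), shifted so 0% == parity."""
    return reach(treatment, tweet_set) / reach(control, tweet_set) - 1.0

# Each user is represented by the set of tweet ids they encountered.
right_tweets = {"r1", "r2"}
control = [{"l1"}, {"l2"}, {"r1"}, {"l3"}]                 # 1/4 reached
treatment = [{"l1", "r1"}, {"l2", "r2"}, {"r1"}, {"l3"}]   # 3/4 reached

# The ML timeline showed one extra right-wing tweet to two left-leaning
# users, and the group's "amplification" jumps to +200%.
print(f"{amplification(treatment, control, right_tweets):+.0%}")  # +200%
```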