Twitter says its algorithm is biased toward RIGHT-WING content, says it will study results of ‘problematic’ review
Twitter has said that its algorithms amplify content from right-wing politicians and news outlets more than others, but it does not know why. The platform described the findings of a recent internal review as “problematic.”
The study analyzed millions of tweets posted by elected officials in seven countries – Canada, France, Germany, Japan, Spain, the UK, and the US – between April and mid-August 2020. It found that, with the exception of Germany, content posted by the political right received more exposure on the platform than left-wing political entities’ content.
The internal review also examined hundreds of millions of tweets with links to content from news outlets during the same period – but not tweets directly shared by the primarily US media groups themselves. This revealed that conservative news portals benefited from greater algorithmic amplification as well.
Twitter noted that it didn’t categorize news outlets as left-leaning or right-leaning itself, but relied on classifications from “independently curated” third parties.
The study compared the platform’s ‘Home’ timeline, where it says its 200 million users see algorithm-tailored tweets based on their preferences, with a chronological timeline, where posts are simply shown in reverse-chronological order. One finding was that politicians’ tweets were generally amplified more by the algorithmic timeline, particularly those of right-wing politicians.
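The comparison described above boils down to an amplification ratio: how much more visible a group’s tweets are in the algorithm-ranked timeline than they would be in a purely chronological one. The following sketch is a hypothetical illustration of that idea, not Twitter’s actual methodology; all figures and names are invented.

```python
# Hypothetical sketch of the amplification metric described above:
# compare a group's tweet impressions in the algorithm-ranked Home
# timeline against a reverse-chronological baseline.
# All numbers below are invented for demonstration.

def amplification_ratio(algo_impressions: int, chrono_impressions: int) -> float:
    """Ratio > 1.0 means the algorithmic timeline surfaces the
    group's tweets more than the chronological baseline would."""
    return algo_impressions / chrono_impressions

# Invented example figures for two groups of politicians' tweets
right_ratio = amplification_ratio(algo_impressions=1_500_000,
                                  chrono_impressions=1_000_000)
left_ratio = amplification_ratio(algo_impressions=1_200_000,
                                 chrono_impressions=1_000_000)

print(f"right: {right_ratio:.2f}x, left: {left_ratio:.2f}x")
```

A ratio above 1.0 for both groups, but higher for one, is the pattern the review reported: amplification for all political content, skewed further for the right.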
In a blog post on Thursday, Rumman Chowdhury, Twitter’s software engineering director, and Luca Belli, a machine learning researcher, termed the findings “problematic” and indicated that changes might be required to “reduce adverse impacts” of the Home timeline algorithm.
“Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it,” the post said, noting that the company needed “further root cause analysis” to explain why there was a bias in amplification.
There is also no ‘master algorithm’ of Twitter – your experience is the function of an algorithmic system. Even if we find algorithmic bias in our root cause analyses – we need to sleuth where it’s coming from and figure out what we can do. 7/n pic.twitter.com/wMFFwOBKSO
— Rumman Chowdhury (@ruchowdh) October 21, 2021
In April, Twitter had announced a plan, dubbed the Responsible Machine Learning Initiative, to study the fairness of its algorithms and whether they lead to “unintentional harms.”
The findings of its analysis directly contradict the claims of many US conservatives, who say Twitter is biased against them. They cite numerous actions taken by the platform, including the banning of former president Donald Trump following the January 6 Capitol riot and alleged ‘shadow banning’ of conservative figures’ profiles.
Conservatives have also complained about Twitter’s non-transparent content moderation practices.
In 2018, Twitter CEO Jack Dorsey said he “fully admits” that the site’s employees share a left-leaning bias, but insisted that as a platform it was not doing anything to amplify one political ideology or viewpoint over the other.
Twitter’s algorithmic biases – whatever they actually are – are also having ramifications for the platform abroad. Later this year, draft legislation is expected to be introduced in Russia’s parliament that will enable the government to regulate and restrict algorithms like those employed by Twitter, Facebook, YouTube, and others.
Following Russian parliamentary elections last month, former president Dmitry Medvedev accused these algorithms of “blatant interference” and said they were distorting the conversation online and furthering the interests of other nations.
Similar legislation is being mooted in the European Union and the US. In April, the EU proposed regulations on artificial intelligence that would aim to “minimize the risk of algorithmic discrimination” while a 2019 US proposal requiring Big Tech to audit their machine-learning systems for “bias and discrimination” and give oversight power to the US Federal Trade Commission is still pending approval.