US-based microblogging and social media service Twitter has introduced a “responsible machine learning” initiative, under which it will carry out algorithmic fairness assessments on the social media platform.
According to the California-based company, the initiative aims to increase transparency around its use of artificial intelligence and to address “the potential harmful effects of algorithmic decisions.”
The move comes amid rising concern about the algorithms that power online services, which critics argue can amplify violent or terrorist content and reinforce racial and gender bias.
“Responsible technological use includes studying the effects it can have over time,” said a blog post by Ms. Jutta Williams and Ms. Rumman Chowdhury of Twitter’s ethics and transparency team.
“When Twitter uses machine learning (ML), it can impact hundreds of millions of tweets per day, and sometimes, the way a system was designed to help could start to behave differently than was intended,” stated Twitter.
The initiative calls for “taking responsibility for our algorithmic decisions” with the aim of ensuring “equity and fairness of outcomes,” according to the researchers.
The company added: “We’re also building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them. We’re currently in the early stages of exploring this and will share more soon.”
Twitter’s ML Ethics, Transparency, and Accountability (META) team is a dedicated group of researchers, data scientists, and engineers looking into these ML-related challenges.
Ms. Williams and Ms. Chowdhury noted that the team would be sharing what it learns with outside researchers “to improve the industry’s collective understanding of this topic, help us improve our approach, and hold us accountable.”