Social media users have stumbled upon discrepancies in how Twitter’s image previews display people with different skin tones, reopening a debate over whether computer programmes – particularly algorithms that “learn” – reflect or amplify real-world biases such as racism and sexism.

The problem first came to light when education technology researcher Colin Madland posted about how the video-calling software Zoom cropped out the head of a black acquaintance on the other side of a call, seemingly unable to detect it as a human face. When Madland posted a second photo combination in which the acquaintance’s face was visible, Twitter’s image-cropping algorithm appeared to show Madland’s own face in the preview instead.

Madland himself is white.

Soon, several users replicated the seemingly discriminatory way Twitter prioritizes faces. In one of the most widely shared tweets, posted by cryptography engineer Tony Arcieri, Twitter showed only the face of Republican senator Mitch McConnell, who is white, as the preview of a composite image that also included former US President Barack Obama, who is Black.

A Twitter spokesperson acknowledged the problem and said the company was looking into it. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’re looking into this and will continue to share what we learn and what actions we take,” this person told HT.

Twitter’s chief design officer Dantley Davis responded to some of the tweets, noting variations in how the system behaved when the images were further manipulated. Davis also linked to an older blog post by Twitter engineers that detailed how the auto-cropping feature works. The feature uses neural networks, a machine learning approach that loosely mimics how the human brain processes data, to decide which part of an image to show in a preview.
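
Twitter’s engineering blog has described the cropping model as predicting “saliency” – the parts of an image a viewer is most likely to look at first – and centring the preview on the highest-scoring region. The sketch below is a minimal illustration of that idea in Python; `saliency_fn`, the crop dimensions and the other names are hypothetical stand-ins, not Twitter’s actual code.

```python
import numpy as np

def crop_preview(image: np.ndarray, saliency_fn, crop_h: int, crop_w: int) -> np.ndarray:
    """Return a crop_h x crop_w preview centred on the most salient pixel.

    `image` is an H x W x 3 array; `saliency_fn` stands in for a neural
    network that scores every pixel by how likely a viewer is to look at it.
    """
    saliency = saliency_fn(image)  # H x W map of per-pixel scores
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Clamp the window so the crop stays inside the image bounds.
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

If the saliency model systematically scores lighter faces higher than darker ones, a crop chosen this way will reproduce that preference in every preview, which is the behaviour users reported.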

Multiple groups of researchers have found that such technologies, which usually rely on artificial intelligence, are prone to reflecting sociological biases in addition to flaws in their design.

“Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices – the coded gaze – of those who have the power to mould artificial intelligence,” said the authors of the Gender Shades project, which analysed 1,270 images to create a benchmark for how accurately three popular AI programmes classified gender. 

The researchers used images of lawmakers from three African and three European countries and found that all three programmes classified white male faces most accurately, followed by white women. Black women were the most likely to be incorrectly classified, the MIT-led team reported in its 2018 paper.
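
Benchmarks such as Gender Shades work by disaggregating accuracy across demographic subgroups rather than reporting a single overall number. Below is a minimal sketch of that kind of evaluation in Python; the record format is invented for illustration and is not the project’s actual data schema.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    `records` is an iterable of (group, predicted_label, true_label) tuples;
    the fields are illustrative, not the Gender Shades dataset format.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

# A large gap between groups is the kind of disparity the benchmark exposes.
sample = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
]
print(accuracy_by_group(sample))  # {'lighter_male': 1.0, 'darker_female': 0.5}
```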

“Whatever biases exist in humans enter our systems and even worse, they are amplified due to the complex sociotechnical systems, such as the Web. As a result, algorithms may reproduce (or even increase) existing inequalities or discriminations,” said a research review note by Leibniz University Hannover’s Eirini Ntoutsi and colleagues from multiple other European universities. 

This, they added, could have implications for applications in which AI-based technology such as facial recognition is used for law enforcement and health care.

An American crime risk-profiling tool, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), was found to have a bias against African-Americans, the authors noted as an example. “COMPAS is more likely to assign a higher risk score to African-American offenders than to Caucasians with the same profile. Similar findings have been made in other areas, such as an AI system that judges beauty pageant winners but was biased against darker-skinned contestants, or facial recognition software in digital cameras that overpredicts Asians as blinking.”