The Coded Gaze.

Last year, a Canadian student experimenting with Twitter’s algorithm noticed something worth discussing: when he posted a photo of himself and a darker-skinned friend, the platform’s image-cropping algorithm repeatedly selected his face, rather than his friend’s, to show in feed previews. This sparked widespread curiosity and led many Twitter users to run their own experiments with the algorithm. Below is an example of another such experiment:

These experiments kept surfacing evidence of Twitter’s algorithmic bias. Researchers then tested the algorithm across a diverse range of people, ethnicities, and genders, and found evidence that Muslims, disabled people, and older people faced this discrimination as well.

It is fascinating and alarming at the same time to see automated systems train on existing data and, in doing so, reinforce social biases already present in society. What we see here is an actualisation of people’s concepts and ideas. Parham Aarabi, a professor at the University of Toronto and director of its Applied AI Group, says that unintended racism is not unusual: “Programs that learn from users’ behaviour almost invariably introduce some kind of unintended bias.”

Algorithmic bias, or ‘the coded gaze’ as Joy Buolamwini has termed it, can lead to exclusionary experiences for entire communities as a consequence of discriminatory practices. These cases highlight a broader problem in the tech industry.

According to Chabert, as cited in Pasquinelli, “Algorithms have been around since the beginning of time […]. Algorithms are simply a set of step by step instructions, to be carried out quite mechanically, so as to achieve some desired result.” In modern times the algorithm has become an AI tool (machine learning), but it still does not exist in a vacuum. Machine-learning algorithms undergo a kind of ‘training’: they are exposed to large amounts of data of whatever kind is available, and they learn to make predictions and judgements according to the patterns they find there. In Twitter’s particular case, the same machinery that shows us personalised advertisements based on our likes and dislikes on the platform has also learned, through data and collective online behaviour, to internalise prejudices that would never have been written into the system intentionally. In simpler words, it is a reflection of our society.
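To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. The group labels, the ‘engagement’ log, and the scoring rule are all invented for illustration and have nothing to do with Twitter’s actual cropping model; the point is only that a system which learns from skewed historical data will reproduce that skew even though no biased rule was ever written down.

```python
from collections import defaultdict

# Hypothetical "historical engagement" log. The imbalance here stands in for
# skewed training data; the records and numbers are invented for illustration.
history = [
    {"group": "lighter_skin", "clicked": True},
    {"group": "lighter_skin", "clicked": True},
    {"group": "lighter_skin", "clicked": False},
    {"group": "darker_skin",  "clicked": True},
    {"group": "darker_skin",  "clicked": False},
    {"group": "darker_skin",  "clicked": False},
]

def train(records):
    """Learn a per-group click rate from the logged data."""
    clicks, counts = defaultdict(int), defaultdict(int)
    for r in records:
        counts[r["group"]] += 1
        clicks[r["group"]] += int(r["clicked"])
    return {g: clicks[g] / counts[g] for g in counts}

def choose_crop(model, faces):
    """Pick the face with the highest learned score. No rule here mentions
    skin tone, yet the skewed training data decides the outcome."""
    return max(faces, key=lambda g: model[g])

model = train(history)
print(model)  # {'lighter_skin': ~0.67, 'darker_skin': ~0.33}
print(choose_crop(model, ["darker_skin", "lighter_skin"]))  # 'lighter_skin'
```

Nothing in `choose_crop` refers to skin tone; the preference emerges entirely from the imbalance in the training log, which is exactly the sense in which these systems reflect the society that produced the data.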

Joanna Bryson, a computer scientist at the University of Bath and a co-author of research in this area, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

A deeper exploration is required into the historical and social conditions that lead to the reinforcement of such prejudices. The connection between algorithmic bias and tropes of white supremacy (the belief that white people are superior to other races and should therefore dominate them; in a modern context, the maintenance of the power and privilege held by white people), as well as age-old hegemonic notions, is obvious and cannot be ignored, as these are directly reflected in the design of the technology.

It has been pointed out before that because these platforms have become such an important part of our daily lives, we start to believe that the information provided to us is depoliticised or neutral, which is simply not the case. Just because a system is accurate does not mean it is ethical or fair. These systems can be biased according to who builds them, how they are developed, and ultimately who uses them, especially since the technology often operates in a corporate black box: we frequently do not know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works.

Completely eradicating algorithmic bias may sound impossible, but we need to start somewhere, and the first step can be transparency and accountability. While Twitter has apologised, the platform faced no real repercussions. These corporations should be fully transparent about what data they use to train their algorithms.