It’ll be fun, they said…

I have been using Instagram for several years now. In fact, it is the first app I open in the morning. Like many others, I also interact with the platform by generating and circulating content online.

Today I woke up, scrolled through my Instagram feed while sipping my coffee, liked a lot of pictures, and watched some videos on my Explore page. I also saw some ads: one was from an app called Vinted (a platform where people can sell their clothes to give them a new life, like thrifting), the next was from Amazon, and another was from getir_uk (an ad I have seen multiple times a day). These ads are indicative of my online behaviour. It is what marketers call ‘personalised retargeting’.

I use Instagram for multiple things: to record and archive my life in images, to share posts and videos, and to interact with friends back home and in London. While my profile is private, I tend to share many personal details on the app. But this doesn’t stop advertisers from tracking my online behavioural patterns. I, as a user, generate information that is constantly collected, examined, sold, and presented back to me in the form of targeted advertisements.

As the researcher Matthew P. McAllister has pointed out, “Social media offer businesses new, boundary-pushing opportunities to tap into people’s online activity by collecting enormous amounts of personal information and seamlessly integrating advertising and social networks.” Users, formerly valued only as consumers, are now part of the production process. On these social media platforms, consumers become producers of content, integral to the platform’s existence. It is worth mentioning that users participate in these activities voluntarily and rather enjoy posting and engaging online. But this productivity is transformed into profit by these media giants.

“The sites then capitalize on the time users spend participating in the communicative activity.”

Matthew P. McAllister

Now that value is generated by content creation, users also produce a new commodity form, known as the ‘cybernetic commodity’. Dallas Smythe was the first to identify the role the audience plays for media companies. “The notion of double commodification speaks to the dual role of social media users: a source of free labour as well as providers of information that is sold for profit or used in the process of profit generation. This practice reflects larger patterns of capitalist exploitation, under which general social relations are increasingly becoming productive.” Engaging with social media sites has therefore been conceptualised as free labour, or unpaid work time – it provides monetary value.

Consumption and production have blurred into each other because of user-generated content. We call it labour not only because we are creating free content, but also because we are creating ‘information commodities’.

These sites set cookies, which track what you read and how you behave across the web. These tools enable advertisers to personalise targeting: they can learn what you view online and deliver a related advertisement in real time, tailored to your location, income, shopping interests, and so on. This surveillance culture exists to enable behavioural targeting – an intrusion into privacy, repackaged and sold as capital.
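To make that concrete, here is a rough sketch (in Python, with entirely made-up cookie IDs, sites, and ad inventory) of what the ad-selection step of a retargeting pipeline might look like. Real systems are vastly more complex, but the logic – log behaviour against a cookie, then match it to an ad – is the same:

```python
# A minimal sketch of cookie-based retargeting on the ad-delivery side.
# Everything here (names, categories, the ad inventory) is hypothetical.

from collections import Counter

# Events an ad network might have logged against one cookie ID,
# gathered from tracking pixels embedded across many sites.
tracked_events = [
    {"cookie_id": "abc123", "site": "vinted.example",  "category": "second-hand fashion"},
    {"cookie_id": "abc123", "site": "grocery.example", "category": "grocery delivery"},
    {"cookie_id": "abc123", "site": "grocery.example", "category": "grocery delivery"},
]

# Hypothetical ad inventory, keyed by interest category.
ad_inventory = {
    "second-hand fashion": "Vinted: give your clothes a second life",
    "grocery delivery":    "Getir: groceries in minutes",
}

def pick_ad(cookie_id: str) -> str:
    """Serve the ad matching this cookie's most frequent tracked interest."""
    interests = Counter(
        e["category"] for e in tracked_events if e["cookie_id"] == cookie_id
    )
    top_interest, _ = interests.most_common(1)[0]
    return ad_inventory[top_interest]

print(pick_ad("abc123"))  # -> "Getir: groceries in minutes"
```

No wonder the Getir ad follows me around all day: I am the one who fed it the data.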

We as users/labourers do not have any control over how this data is used, which indicates the significant power imbalance at work. It makes me think about how I am not rewarded, in any way, for the commodity that I produce. This process of value extraction through the commodification of data is a fair demonstration of platform capitalism. We scroll and post for leisure, but it is actually work – rewarded with nothing more than a couple of likes.

The capture of productive activity online reflects the condition of value extraction in contemporary capitalism, where work seeps into leisure time and leisure time becomes work, where autonomous communicative creation and alienation overlap, and, critically, where processes of commodification extend beyond the traditional workplace and wage-labour relationship, extracting value from ever-widening aspects of our lives.

The Coded Gaze.

Last year a Canadian student was experimenting with Twitter’s algorithm, and what he noticed was worth discussing. He observed that Twitter’s image-cropping algorithm consistently selected his face, rather than his darker-skinned friend’s, from a photo of the two of them to show in previews on people’s feeds. This fuelled the curiosity of countless Twitter users and led many to run their own experiments with the algorithm.

These experiments kept confirming Twitter’s ‘algorithmic bias’. Researchers went on to test a diverse range of people – across ethnicities, genders, and ages – and found evidence that Muslims, disabled people, and older people faced this discrimination as well.
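Twitter’s cropper reportedly relied on a ‘saliency’ model – a network trained on eye-tracking data to predict where people look first – and the crop simply followed the model’s highest score. The sketch below is a toy version of that idea; the saliency function is a crude stand-in (it just scores brighter pixels higher), which is itself a handy illustration of how a skewed signal produces skewed crops:

```python
# A minimal sketch of saliency-based cropping. `saliency_model` is a
# stand-in: real systems use a neural network trained on eye-tracking
# data, and that training data is exactly where bias can creep in.

import numpy as np

def saliency_model(image: np.ndarray) -> np.ndarray:
    # Stand-in for a learned model: treat brighter pixels as more
    # "salient". A skewed scoring signal means a skewed crop.
    return image.astype(float) / 255.0

def crop_around_saliency(image: np.ndarray, size: int = 2) -> np.ndarray:
    """Centre a fixed-size crop on the most salient pixel."""
    saliency = saliency_model(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(y - size // 2, 0), image.shape[0] - size)
    left = min(max(x - size // 2, 0), image.shape[1] - size)
    return image[top:top + size, left:left + size]

# Toy 6x6 greyscale "photo": whatever region the model scores highest
# wins the crop; everyone else in the photo is simply cut out.
image = np.random.randint(0, 256, (6, 6))
print(crop_around_saliency(image))
```

Swap the stand-in for a model trained on biased eye-tracking data and the crop inherits that bias, with no racist line of code anywhere in sight.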

It is fascinating and alarming, at the same time, to see automated systems train on existing data and reinforce the social biases already present in society. What we see here is people’s concepts and prejudices made concrete in code. Parham Aarabi, a professor at the University of Toronto and director of its Applied AI Group, says that unintended racism isn’t unusual: “Programs that learn from users’ behaviour almost invariably introduce some kind of unintended bias.”

Algorithmic bias, or ‘the coded gaze’ as Joy Buolamwini has coined it, can lead to exclusionary experiences and discriminatory practices for entire communities. These cases highlight a broader problem in the tech industry.

According to Chabert, in Pasquinelli, “Algorithms have been around since the beginning of time […]. Algorithms are simply a set of step by step instructions, to be carried out quite mechanically, so as to achieve some desired result.” In modern times the algorithm is a machine-learning tool, but it doesn’t exist in a vacuum. Algorithms undergo ‘training’ of sorts: they are exposed to large amounts of data (of almost any kind) and learn to make predictions and judgements according to the patterns they notice. In Twitter’s case, the same system that shows personalised advertisements based on our likes and dislikes has also learned, through data and collective online behaviour, to internalise prejudices that would never have been written into it intentionally. In simpler words, it is a reflection of our society.
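Here is a minimal, entirely synthetic sketch of that mechanism. Nobody writes “prefer group 1” into the code; the skew sits in the historical labels, and an off-the-shelf classifier dutifully learns it:

```python
# A minimal sketch of how a model "internalises" bias it was never
# explicitly given. The data, features, and numbers are all synthetic:
# we bake a historical skew into the training labels and watch a
# standard classifier reproduce it.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# One legitimate feature (say, engagement) and one sensitive attribute
# (0 or 1) that should be irrelevant to the outcome.
engagement = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased "ground truth": group 1 was historically favoured, so the
# recorded outcomes are skewed even at identical engagement levels.
label = (engagement + 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.4

model = LogisticRegression()
model.fit(np.column_stack([engagement, group]), label)

# The model assigns real weight to the sensitive attribute: the
# prejudice was in the data, and the algorithm made it operational.
print(dict(zip(["engagement", "group"], model.coef_[0].round(2))))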

Joanna Bryson, a computer scientist at the University of Bath and a co-author of the research, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

A deeper exploration is required into the historical and social conditions that lead to the reinforcement of such prejudices. The connection between algorithmic bias and tropes of white supremacy (the belief that white people are superior to other races and should therefore dominate them; in a modern context, the maintenance of the power and privilege held by white people), as well as age-old hegemonic notions, is obvious and can’t be ignored, as they are directly reflected in technology design.

It has been pointed out before that, because these platforms have become such an important part of our daily lives, we start to believe that the information being provided to us is depoliticised or neutral, which is absolutely not the case. Just because it is accurate doesn’t mean it is ethical or fair. These systems can be biased depending on who builds them, how they’re developed, and ultimately who uses them, especially since this technology often operates in a corporate black box. We frequently don’t know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works.

Completely eradicating algorithmic bias sounds impossible, but we need to start somewhere. The first step can be complete transparency and accountability. While Twitter apologised, the platform faced no repercussions. These corporations should be completely transparent about the data they use to train their algorithms.