
It Begins in the Code – That's Where AI Must be Scrutinised

To hold Artificial Intelligence accountable, it must be scrutinised at the level of design and not after users and society have faced its ill effects.
Midjourney, Public domain, via Wikimedia Commons.

As today’s popular narratives about technology suggest, we are well into the Artificial Intelligence age, and calls for better regulation of AI are only growing louder. However, it is important to ask – are we asking the right questions about AI regulation? Are our calls for regulation cognisant enough of the larger politics of all modern technologies, not just AI?

Recent viral trends, in which masses of people willingly uploaded their images into AI tools, seem to suggest otherwise. The saree trend on Google’s Gemini and the earlier Studio Ghibli trend on OpenAI’s ChatGPT encouraged users to mindlessly upload their photos to AI systems without considering the privacy risks, while creating a FOMO (fear of missing out) factor that made people eager to copy their friends.

There appears to be a large blind spot in public understanding of AI systems, and corporate marketing strategies are promoting a kind of digital illiteracy, pushing users towards an increasingly limited grasp of how these systems work while encouraging them to incorporate these tools into, and depend on them in, every part of everyday life.

Because they are programmed to validate every single idea of the user, no matter how dangerous, AI platforms such as ChatGPT have encouraged obsessions with conspiracy theories, dangerous medical misinformation and suicidal thoughts, which have led to several deaths. Despite this, corporate leaders continue to push for the wide adoption of these technologies, even as their harms become increasingly serious.

To understand the workings of these systems and how they impact users, we need to look deeper into how we understand technology itself – using the theory of affordance, first introduced by American psychologist James J. Gibson. Any tool, be it a chair, a gun or a computer, must enable or constrain a particular kind of behaviour to be considered a tool at all. Every tool, whether intentionally or unintentionally, must change something. This act of enabling or constraining particular behaviours will always be political in nature, reflecting the worldview of its creators.


No tool can thus be ‘neutral’. Different tools open up as well as foreclose different possibilities and options – which are termed affordances. When it comes to modern technological systems, looking at them through the lens of affordances reveals a wealth of intentional design decisions in play that impact billions of people around the globe every second.

When someone becomes addicted to using an AI tool, therefore, our question needs to reach all the way to why that tool was designed to be addictive in the first place – opening up a debate about potentially rejecting certain tools altogether.


For example, OpenAI has recently released an app called Sora, which is supposed to make it easy for users to generate AI videos and share them on social media. We need to ask why such an app was designed, who benefits from it, why it needs to exist, what purpose it serves, what new behaviours it will encourage and what possibilities it will discourage.



Fundamentally, the questions asked need to go down to the level of code and software design, and they must be asked before any particular piece of software is released to the world. Designers of AI tools must be held accountable for their downstream impacts instead of offloading the costs and consequences onto users and society at large.


In his book Digisprudence: Code as Law Rebooted, legal scholar Laurence Diver argues for developing a system of legitimacy for code itself that takes into account the impacts of affordances in technology design. Diver’s main argument is that any mechanism or code with the power to shape people’s actions and behaviours must be subject to the same scrutiny as law.

Laws have a time gap, between the introduction and the enforcement of a Bill, for example, and this gives people the time and the right to interpret, contest or criticise them. Diver calls this a ‘hermeneutic gap’. But when code is designed and deployed in the name of ‘innovation’, it is implemented directly, without any such procedure and without users having any authority to question its legitimacy. They have barely any option but to accept unwanted ‘innovations’.

This practice has worsened in the AI era, as mostly unnecessary ‘AI-enabled’ features are added to every technology, from TVs to phones to social media apps – though many of them receive that label purely as a marketing strategy.

Despite AI being widely popularised, its workings have remained deliberately opaque to most people – while a handful of powerful tech companies profit from it, society struggles with the social and political consequences.

To avoid accepting technology as “just the way things are”, Diver says, there should be ideals that code must satisfy at the time of production, rather than relying on a post hoc remedy model. Because code is opaque, usually only the most blatantly illegitimate designs are ever identified, which is why people still get trapped by dark patterns, or deceptive design methods, developed by commercial enterprises such as food delivery apps.


Owing to this, AI is ‘accepted’ unwillingly by the majority; despite its drastic effects on them, it continues to grow. This is not just because people do not recognise its harms, but because of the fantasies it creates in their minds. The possibilities for awareness, and strategies to counter its misimplementation, thus appear limited, even if they are not. Technology scholar Meredith Whittaker has called it “a tool which creates a perception that there is always a new future guaranteeing the present trajectory, making it hard to not use”.

Affordances thus need to be given legal recognition, as present AI regulation continues to lean towards ‘balancing’ innovation and regulation, even if the former has endless negative impacts on society.

Tech law scholars Rebecca Crootof and B.J. Ard strongly support this precautionary approach, arguing that a better-safe-than-sorry mindset is preferable because the risks involved with technology are so probable, so significant and, for the most part, irreversible.

Most tech-neutral laws focus only on consequences, trying to develop post hoc solutions as remedies for users. A different imagination of digital rights is therefore needed, one that focuses on the stage at which code is produced, not merely executed. Such a vision of tech law would classify the nature of technological affordances, examine the range of behaviours they enable and disable, and weigh the consequences of each.

To help build a more just and equitable AI future, companies and developers must be held accountable when their products start going wrong, such as when affordances unexpectedly enable problematic behaviour, or, indeed, when their innovations head in the wrong direction from the outset, having been designed only to exploit and manipulate users.

Ramsha Sartaj is a law graduate from Aligarh Muslim University.

This article was first published on November 1, 2025, at 2:00 pm.

