
Is the European Union's AI Act Attempting a Magna Carta for the Digital Age?

A look at the draft Act as an important cultural text of our time for what it seeks to do, and how it fits in with a larger emphasis on rights and risks to those rights. 
Illustration: The Wire, with Canva.

On December 9, 2023, the world’s first comprehensive legislation on AI was provisionally agreed upon by the European Parliament and the Council. As it awaits ratification by the member states to become an Act, to be implemented in 2026, its legal ramifications require, naturally, more discussion by cybersecurity and legal scholars. Flaws will no doubt be pointed out, and weaknesses identified that need to be addressed.

That said, it is worth looking at the draft Act and related documents as an important cultural text of our time for what it seeks to do, and how it fits in with a larger emphasis on rights and risks to those rights. 

The draft defines AI as:

“software that is developed … [and can,] for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with…”

The insistence on the AI as fulfilling or seeking to fulfil ‘human-defined objectives’ is crucial, because it assumes that AIs are created and run on aims and purposes that are human in origin. This means the programming can be subject to the same biases and prejudices as humans, as critics such as Ruha Benjamin have noted, and safeguards have to be put in place to prevent discrimination and harm to groups and individuals.

The most striking aspect of the ‘Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts’ (the proposal was amended so extensively that the draft expanded from 108 pages to 350!) is the twin poles around which its assumptions and suggestions revolve: rights and risks. The very first objective stated in this document is to ‘ensure that AI systems placed on the [European] Union market and used are safe and respect existing law on fundamental rights and Union values…’

In its opening statement the ‘Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ declares:

“The purpose of this Regulation is to promote the uptake of human centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation and improving the functioning of the internal market.” [Emphasis in the original]

That paragraph more or less encapsulates the vision: trustworthy, human-centric AI that is subject to the rule of law and cannot be allowed to disrupt foundational principles of rights. This emphasis on rights is reiterated across the document, with a particular inflection on ‘the fundamental rights of natural persons’, a phrase the draft is particularly keen on.

The proposed Act seeks to encourage the production of and innovation in AI technologies, but casts them within a risk-and-rights framework:

“By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, this Regulation supports the objective of promoting the AI made in Europe, of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.”

This emphasis on rights and risk is very welcome indeed as the Act seeks to become a Magna Carta for the digital era wherein, as numerous scholars have pointed out, AI threatens to encroach upon rights. The draft document itself makes this threat clear when it writes:

“Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.”

The draft document categorises AI in terms of its potential for ‘low’ or ‘high’ risk, and the forms of AI that belong to the latter category are important for their potential to directly or indirectly impinge on rights. Later it defines what it means by high-risk AI:

“it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons.”

In the list of high-risk AI, the draft regulations seek a ban on ‘systems with the objective to or the effect of materially distorting human behaviour, whereby physical or psychological harms are likely to occur’. It goes on to define these systems in some detail:

“neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces insofar as they are materially distorting the behaviour of a natural person in a manner that causes or is likely to cause that person or another person significant harm.”

That is, human behaviour cannot be subject to AI control – the stuff of sci-fi, of course – and any such interventionary systems should be banned. 

Moving from behaviour to the identification of group behaviour, the document makes a case for protecting both individuals and members of groups where an AI system may “perceive or exploit vulnerabilities of individuals and specific groups of persons due to their known or predicted personality traits, age, physical or mental incapacities, social or economic situation”.

Proceeding from this awareness of the exploitation of vulnerabilities, the draft regulations then worry about AI being used to foster or reinforce discrimination against social groups:

“AI systems that categorise natural persons by assigning them to specific categories, according to known or inferred sensitive or protected characteristics are particularly intrusive, violate human dignity and hold great risk of discrimination. Such characteristics include gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited”.

Clearly then, the AI Act in its draft – the final version has not been published yet – is keenly alive to the social and cultural dimensions, from discrimination to social sorting to privacy. 

This makes it topical, necessary and in line with the most dominant – at least in the Global North, let us face it – discourse of the age: that of Human Rights. It also shows an awareness of historical wrongs and systemic injustices that have disenfranchised particular groups:

“When improperly designed and used, such systems can be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.”

The European Union Flag. Photo: Pexels

In this awareness of historical and current risks, and in its concern over rights, the draft Act is in tune with what commentators like Safiya Umoja Noble predict:

“Artificial intelligence will become a major human rights issue in the twenty-first century. We are only beginning to understand the long-term consequences of these decision-making tools in both masking and deepening social inequality.”

The draft document echoes Noble and others when it writes: “AI systems providing social scoring of natural persons for general purpose may lead to discriminatory outcomes and the exclusion of certain groups… Such AI systems should be therefore prohibited.”

Social sorting even for law enforcement purposes is a grave risk, and AI which makes this easy – and carries, let us emphasise, the same racial, sexist and ethnic biases that humans possess and exercise (the ‘human-defined objectives’ which the draft invokes very early) – should be prohibited:

“AI systems used by law enforcement authorities or on their behalf to make predictions, profiles or risk assessments based on profiling of natural persons or data analysis based on personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of persons for the purpose of predicting the occurrence or reoccurrence of an actual or potential criminal offence(s) or other criminalised social behaviour or administrative offences, including fraud prediction systems, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.”

Even the use of CCTV and surveillance footage ‘to create or expand facial recognition databases’ should be prohibited, according to the Act, since such databases ‘add to the feeling of mass surveillance and can lead to gross violations of fundamental rights, including the right to privacy’.

The draft Act also worries about how the now ubiquitous biometric identification and databasing determine access to public spaces, and so wants these regulated:

“The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces is particularly intrusive to the rights and freedoms of the concerned persons… The use of those systems in publicly accessible places should therefore be prohibited.” 

Here it has accounted for the right to privacy and the right to public spaces. 

The AI Act also worries that the systems will monitor humans as never before: “AI systems intended to be used … to detect the emotional state of individuals should be prohibited.”

The Act then lists all the rights that must be guarded from what it terms ‘high-risk’ AI:

“the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, right to education, consumer protection, workers’ rights, rights of persons with disabilities, gender equality, intellectual property rights, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights”.

It then identifies domains in which the deployment of AI, without regulation and appropriate safety measures, can produce harm: education, employment, health care services, law enforcement processes where it may ‘affect the lives or the fundamental rights of individuals’, and the ‘fields of migration, asylum and border control management’.

When we think of how already existing systems have produced additional forms and degrees of discrimination, the Act’s concerns appear legitimate. To take just one example, systems such as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) were built, studies demonstrate, to make judicial processes more objective and therefore, by implication, more just. Law enforcement authorities administered a questionnaire to people who had been arrested, and their responses were fed into a computer. COMPAS then ‘predicted’ how likely the person was to commit a crime in the future. These scores and predictions were submitted to the judiciary, ostensibly grounding rulings in data. The ironic horror was that it resulted in African Americans receiving longer jail sentences than white persons.
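How such a pipeline operates is worth making concrete. Below is a minimal, hypothetical sketch in Python of a COMPAS-style weighted questionnaire score; the field names, weights and threshold are invented for illustration, since the actual COMPAS model is proprietary. It shows how a score that looks ‘objective’ simply inherits whatever bias sits in its inputs.

```python
# A deliberately simplified, hypothetical risk-scoring pipeline in the
# COMPAS mould. Field names, weights and the threshold are invented for
# illustration; the real COMPAS model is proprietary and unpublished.

def risk_score(answers: dict) -> float:
    """Weighted sum over questionnaire answers -> a recidivism 'risk' score."""
    weights = {
        "prior_arrests": 0.5,    # if arrest counts reflect over-policing,
                                 # the score inherits that bias wholesale
        "unemployed": 0.4,
        "unstable_housing": 0.3,
    }
    return sum(weights[field] * answers.get(field, 0) for field in weights)

def classify(answers: dict, threshold: float = 2.0) -> str:
    """The court sees only this label, not the inputs that produced it."""
    return "HIGH RISK" if risk_score(answers) >= threshold else "LOW RISK"

# Two people with identical behaviour; person_b lives in a heavily policed
# neighbourhood, so more of their conduct has been recorded as arrests.
person_a = {"prior_arrests": 1, "unemployed": 1, "unstable_housing": 0}
person_b = {"prior_arrests": 4, "unemployed": 1, "unstable_housing": 0}

print(classify(person_a))  # LOW RISK  (score 0.9)
print(classify(person_b))  # HIGH RISK (score 2.4)
```

Two people who behave identically can thus receive different labels purely because one input – recorded arrests – reflects who gets policed rather than who offends.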

The draft Act is thus alert to biases in the datasets and the process of databasing, and calls for ‘specific attention to the mitigation of possible biases in the datasets, that might lead to risks to fundamental rights or discriminatory outcomes for the persons affected’. Here it again implicitly signals machine bias that stems from ‘human-defined objectives’.

Machine biases such as COMPAS’s – and those of data-based predictive systems generally, whether in migration and crime control or in calculations of creditworthiness – emerge not from the machines but from their programming by humans. As Meredith Broussard puts it in Artificial Unintelligence:

“When you believe that a decision generated by a computer is better or fairer than a decision generated by a human, you stop questioning the validity of the inputs to the system. It’s easy to forget the principle of garbage in, garbage out—especially if you really want the computer to be correct. It’s important to question whether these algorithms, and the people who make them, are making the world better or worse.”

§

The draft document in itself isn’t trying to wind us up and make us even more paranoid than we are (if we are not paranoid, then something is wrong with us, ain’t that so?). Its focus, as it states, is to alert us to the ‘reasonably foreseeable misuse of the system’, which is what literary and cultural texts, particularly sci-fi and apocalyptic works, do. This is why the draft document is a major cultural document of the age.

If, as commentators fear, ‘technological redlining’ (Safiya Umoja Noble’s term for digital racial profiling) is widespread and rights will be increasingly encroached upon, then the AI Act, albeit restricted for now to the European Union, goes a considerable way toward assuaging the fears, managing the anxieties and regulating what AI can be permitted to do.

We are aware of surveillance, from WhatsApp to commercial service providers to the tracking of Aadhaar and bank accounts – much of it enabled by an Orwellian state that powers not just targeted advertising but also misinformation and an infocalypse of hate addressed to specific groups. AI, as the documents cited suggest, will amplify this scenario many times over. Whether governments across the world choose to acknowledge, and employ, the risks-and-rights mode as the EU Act does, or merely sit back and allow more inequities to enter the system, remains to be seen.

Pramod K. Nayar teaches at the University of Hyderabad.
