Artificial Intelligence, Real Consequences: Rethinking Accountabilities in AI-related Litigations


Can computers think? Can submarines swim? 

With the proliferation of Artificial Intelligence (AI), our existence grows more fascinating, and stranger, by the day. AI was originally conceived as a mere tool to replicate human intelligence and create efficiency, and it can fairly be said to have achieved that purpose. The average person's life is arguably easier today, yet AI simultaneously burdens the judiciary and policymakers with novel dilemmas. Its applications, and the relative autonomy with which it now operates, blur the lines of accountability, particularly when things go wrong.

So the next time your chatbot defames someone, your med-tech software produces an erroneous diagnosis, or your car's autopilot harms innocent pedestrians, courts will still struggle to determine who, or what, should take the blame.

The harms resulting from an AI's malfunction are often assumed to be purely technological, such as the loss of personal data. However, the ease with which AI is being integrated into cars, surgical robots and the like is alarming in terms of its liability implications.

Burdens and ethical dilemmas

Consider a scenario where you own the newest self-driving car. The vehicle, whilst in autopilot mode, crashes into another vehicle; a court is now tasked with determining liability and damages. On one view, despite not being in control of the wheel, you are after all the owner of the vehicle and should be held liable.

Conversely, since the AI was in control of the car at the time of the accident, the manufacturer could also bear the brunt. This scenario underscores the persistent ambiguities in existing liability frameworks, leaving difficult questions for judges and lawmakers to answer.

Liability decisions become even more complex as developers attempt to build moral choices into their AI systems. The supposed moral choices rest with the 'artificial human', but the burden of making those choices ultimately falls on the developers designing these systems. Extending the example above, should the AI-controlled car hurt innocent pedestrians to save its passengers, or crash into a tree and hurt its own passengers while sparing the pedestrians? Car manufacturers are, in effect, forced to programme answers to the 'Trolley Problem' for when the AI unfortunately ends up in such a situation, as the sketch below illustrates.
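To make that concrete, here is a minimal, purely illustrative sketch of what 'programming an answer' to the Trolley Problem might look like. The class, function names, candidate manoeuvres and harm scores are all hypothetical assumptions for illustration; no manufacturer's actual decision logic is shown here.

```python
from dataclasses import dataclass


@dataclass
class Manoeuvre:
    """A hypothetical evasive option available to the driving software."""
    name: str
    harm_to_passengers: float   # assumed scale: 0.0 (none) to 1.0 (severe)
    harm_to_pedestrians: float


def choose_manoeuvre(options: list[Manoeuvre],
                     pedestrian_weight: float,
                     passenger_weight: float) -> Manoeuvre:
    """Pick the option with the lowest weighted expected harm.

    The two weights are the developer's 'answer' to the trolley problem:
    weighting pedestrians more heavily prioritises bystanders over
    passengers, and vice versa. That value judgment is fixed in code
    long before any accident occurs.
    """
    return min(
        options,
        key=lambda m: (pedestrian_weight * m.harm_to_pedestrians
                       + passenger_weight * m.harm_to_passengers),
    )


if __name__ == "__main__":
    options = [
        Manoeuvre("swerve into tree", harm_to_passengers=0.7, harm_to_pedestrians=0.0),
        Manoeuvre("brake in lane", harm_to_passengers=0.1, harm_to_pedestrians=0.6),
    ]
    # With pedestrians weighted more heavily, the car sacrifices its own passengers' safety.
    print(choose_manoeuvre(options, pedestrian_weight=1.5, passenger_weight=1.0).name)
```

However the weights are chosen, someone has chosen them, which is precisely why the moral burden cannot simply be delegated to the 'artificial human'.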


The advent of AI ought to make us contemplate revamping existing liability frameworks. For adversities that do not involve AI-based systems, there is a clear chain of accountability and a wrongdoer's responsibility is easily determined. If a defect appears in a specific component, liability is typically assigned to the party responsible for that component.

Blurring the lines

With AI technologies, however, three distinct features come into play and dissolve the boundaries of accountability.

Firstly, an AI system can learn, adapt and improve by itself. It can alter its own conduct, making its behaviour unpredictable. Secondly, its decisions hinge upon 'black box' algorithms, which uncover hidden relationships within datasets that lie beyond human comprehension; one cannot precisely determine why or how a particular outcome was reached. Thirdly, the output of an AI system is not solely attributable to the entity distributing the technology. Several players handle specific processes: providing datasets to train the algorithm, designing the algorithm, integrating it, and compiling all these functions into a single piece of software.

These blurred lines of accountability make it extraordinarily difficult to establish a causal link between the harm caused and the fault of the AI involved. Consumers face the hurdle of establishing this link, since an AI system usually cannot explain exactly how it arrived at an outcome. Consequently, the European Union's AI Liability Directive (“Liability Directive”) aims to create a presumption of precisely this causal link.

The Liability Directive, proposed in September 2022, lays the groundwork for liability claims over damage caused by an AI system. Article 3 of the directive is merely procedural, in so far as it empowers courts to order the disclosure of evidence about AI systems alleged to have caused damage. Article 4, however, is a significant regulatory first. Although the provision retains the claimant's duty to establish a causal link, it creates a presumption of the link between the AI system provider's fault and the damage caused to the claimant. This presumption is nonetheless rebuttable, balancing the procedural rights of both parties to the dispute.
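To see how the presumption changes the litigation mechanics, here is a loose, purely illustrative model of the burden shifting described above. The field names and the reduction of the directive's conditions to three booleans are my own simplifying assumptions, not a statement of Article 4's precise requirements.

```python
from dataclasses import dataclass


@dataclass
class CaseRecord:
    """A drastically simplified view of what each side has shown in court."""
    claimant_proved_provider_fault: bool   # e.g. breach of a relevant duty of care
    claimant_proved_damage: bool           # harm actually suffered by the claimant
    provider_rebutted_causation: bool      # provider shows its fault did not cause the output


def causal_link_established(case: CaseRecord) -> bool:
    """Roughly: once fault and damage are shown, the causal link between the
    provider's fault and the harmful AI output is presumed, unless the
    provider rebuts it. Without such a presumption, the claimant would also
    have to reconstruct the system's 'black box' reasoning on their own."""
    if not (case.claimant_proved_provider_fault and case.claimant_proved_damage):
        return False  # the claimant still carries the basic burden of proof
    return not case.provider_rebutted_causation


# Example: fault and damage shown, no rebuttal offered, so the presumption stands.
print(causal_link_established(CaseRecord(True, True, provider_rebutted_causation=False)))
```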

The European Union's Artificial Intelligence Act, California's AI Safety Bill and other relevant frameworks (proposed as well as enacted) levy penalties upon the distributors of AI systems that cause such harm. However, courts need not wait for legislation to be passed in order to hear such claims. The common law rule of negligence makes the negligent wrongdoer liable to compensate the injured party in damages. This tort law doctrine rests on the tenet that parties are obliged to conduct themselves with due care. Plaintiffs must prove their case to the 'preponderance of evidence' threshold, that is, the claim must be shown to be more likely true than not. Whether an AI system provider has exercised the requisite due care would be judged by the thoroughness of its diligence in designing, testing, training and maintaining its AI program.
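Expressed as a formula, and merely as a restatement of the ordinary civil standard of proof rather than anything specific to the frameworks discussed here, a plaintiff succeeds on a factual claim when:

\[
P(\text{claim is true} \mid \text{evidence presented}) > 0.5
\]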

Interestingly, most industry use cases of AI systems are merely assistive. By a straightforward extension of the negligence rule, it would therefore be the users of AI systems who are held liable for causing harm. The frameworks, however, set this aspect aside and make the 'distributors' of AI systems liable for fault, while classifying the individuals who actually operate these systems as 'end-users', who can still be sued in negligence. The ordinary consumer could therefore sue an AI system provider for causing harm, but litigating such disputes is far more complicated.

Building on the earlier hypothetical, suppose the consumer trusts their self-driving car to park itself. The AI hallucinates and hurts nearby pedestrians, and litigation is initiated against the car manufacturer. In response, the manufacturer shifts the burden of liability onto the third-party AI software integrated into the car, which was allegedly in control of the vehicle at the time. The consumer has, in effect, already accepted this allocation of blame, having constructively taken notice of the indemnity clause in the terms of service agreement with the car manufacturer.

EU’s revised product liability directive: Mostly bane

In the above scenario, the EU's Revised Product Liability Directive 2024 (“Revised PLD”) is another pivotal regulatory development. It allows third parties responsible for specific defective components to be held liable under the directive. In other words, it strengthens the position of the car manufacturer and helps shift the liability burden onto the third-party developer responsible for the AI software.

Following the EU's lead, Brazil's Artificial Intelligence Act adopts a similar approach, exempting AI system providers once they establish that the harm resulted from a defective third-party component. There is, however, an inherent tension in the Revised PLD's allocation of burdens. The regulation treats any modification to a third-party component as removing the safe harbour accorded to car manufacturers. In other words, a mere customisation effort by the car manufacturer would relieve the third-party AI software developer of any liability and hold the manufacturer liable instead. Courts are yet to delineate the scope of 'modifications' under the Revised PLD, and striking the right balance will be a challenging pursuit.


A thorough reading of the directive reveals that it is fraught with inequities against consumers. The Revised PLD not only endorses shifting the burden for a defect onto a third party, but also provides defendants with additional grounds for disclaiming liability.

AI hallucination underpins one prominent liability exemption seemingly arising from the directive. The regulation allows AI system providers to escape liability for 'a defect which did not exist at the time it was placed on the market.' An AI system's ability to learn and adapt through its black box algorithms makes it impossible to determine whether a defect really existed at the time the system was 'placed on the market.'

It must be underscored that AI is capable of 'hallucinating' even when designed and implemented with meticulous caution. For technologies capable of producing vague and unpredictable outputs, claiming this exemption and arguing that the defect did not exist at the time of marketing is a straightforward, almost effortless, endeavour for defendants.

The ease with which defendants can invoke the exemptions illustrated above raises significant concerns about their accountability. California's AI Safety Bill seems a step in the right direction, as it holds indemnity clauses imposed by AI distributors void as a matter of public policy, but this affords only minimal safeguards in equity. Future legislation across the globe ought to refine such exemptions and ensure that consumers are not left vulnerable in the face of rapidly advancing AI systems. The current frameworks desperately require an overhaul: the future might just belong to AI, but the responsibility still remains human.

Bharat Manwani is a student at GNLU with a keen interest in Litigation and Technology Laws.
