
New Delhi: Supreme Court Justice B.R. Gavai has expressed apprehension about the use of artificial intelligence (AI) in the judiciary, saying that as a machine, it lacks the human emotions and moral reasoning needed to grasp the nuances of legal disputes.
Speaking at a conference organised by the Supreme Court of Kenya, Justice Gavai acknowledged the benefits of AI in easing the administrative burden of case management and in scheduling cases effectively. However, he warned of the risks inherent in over-dependence on AI, as it can generate fake citations and fabricated facts.
“Relying on AI for legal research comes with significant risks, as there have been instances where platforms like ChatGPT have generated fake case citations and fabricated legal facts,” Justice Gavai said, as quoted by LiveLaw.
He added, “While AI can process vast amounts of legal data and provide quick summaries, it lacks the ability to verify sources with human-level discernment. This has led to situations where lawyers and researchers, trusting AI-generated information, have unknowingly cited non-existent cases or misleading legal precedents, resulting in professional embarrassment and potential legal consequences.”
The Supreme Court judge, who is next in line to become the Chief Justice of India in May, said the integration of AI in the judiciary “must be approached with caution” so that it serves as an aid, not a replacement for the human mind.
“AI is increasingly being explored as a tool to predict court outcomes, sparking important debates about its role in judicial decision-making. This raises fundamental questions about the very nature of justice. Can a machine, lacking human emotions and moral reasoning, truly grasp the complexities and nuances of legal disputes? The essence of justice often involves ethical considerations, empathy, and contextual understanding—elements that remain beyond the reach of algorithms. The integration of AI in the judiciary, therefore, must be approached with caution, ensuring that technology serves as an aid rather than a replacement for human judgment,” he said.
Justice Gavai’s statement comes after a curious recent case in which a tax tribunal order reportedly cited fictitious court judgments before being hastily withdrawn, raising suspicions about the potential use of generative AI.
On December 30, the Bengaluru bench of the Income Tax Appellate Tribunal (ITAT) passed an order involving the Buckeye Trust, Karnataka. According to a report by Mint, the order relied on three Supreme Court judgments and one Madras High Court judgment – all of which were either fake or inaccurate.
The four judgments cited were K Rukmani Ammal vs K Balakrishnan (1973) 91 ITR 631 (Madras High Court), S Gurunarayana vs S Narasimhulu (2004) 7 SCC 472 (Supreme Court), Sudhir Gopi v Usha Gopi (2018) 14 SCC 452 (Supreme Court) and 57 ITR 232 (Supreme Court). The report said that two of these did not exist, while the other two led to completely different cases in the official record.
The order was withdrawn in January citing ‘inadvertent errors’.
Mint reported that the fictitious citations led to concerns that the tax department’s representative may have used an AI tool to prepare his arguments. However, neither the CBDT nor the ITAT has issued an official statement on the case.