India’s Schools Are Embracing AI. But Without Guardrails, There's a ‘Hallucination' Crisis
Santosh Koshy Joy
The exponential rise of Generative Artificial Intelligence (GenAI), spearheaded by tools like ChatGPT and Gemini, has swept into every corner of modern life, and nowhere is its disruption more profound, and potentially perilous, than in India’s school education system. Governments at every level are racing to find a balanced path to the use of AI in Indian education.
Earlier in October, a seven-member committee on AI formed by the Gujarat government held its first meeting with the director of the Gujarat Council of Educational Research and Training (GCERT) to assess the state’s accelerated plans to adopt AI in its schools. Other states may follow suit.
These government efforts have not come out of nowhere. They follow OpenAI, the US-based developer of ChatGPT, announcing the opening of its first office in India this year. The company said it will expand its footprint with a clear objective, scope and timeline, calling the effort the India AI Mission, which aims “to build an ecosystem for trusted and inclusive AI.”
Another US-based company, NVIDIA, a world leader in AI computing, announced a partnership with India’s Reliance in 2023. Giants like Google and Microsoft have invested heavily in AI in India. An AI hub in Visakhapatnam is the latest in the series, while the Union Minister for Electronics and Information Technology has remarked that India is uniquely positioned to drive the next wave of AI-led transformation. But are our schools and their teachers ready for it?
The enthusiasm for AI adoption among Indian educators is palpable and understandable. India is home to the world's largest digital user base for many AI platforms, and the tools promise an immediate, if illusory, solution to chronic teacher workload issues.
Surveys, such as one conducted by the Centre for Teacher Accreditation (CENTA) this year, show that over 70% of teachers across India now use AI tools, with nearly 60% employing them specifically for lesson planning. For an overburdened teacher managing large class sizes, generating a standards-aligned quiz or a detailed lesson summary in minutes is a powerful lure, allowing them to supposedly “gain time for other work”.
While the National Education Policy (NEP) 2020 celebrates digital tools as a potential equaliser and a force multiplier for a nation grappling with the sheer scale of its educational demands, from the Himalayan valleys to the shores of Kerala, the unchecked adoption of AI in classrooms has birthed a silent, insidious crisis: the AI hallucination.
This phenomenon, where AI confidently presents utterly fabricated information as fact, is being silently introduced into millions of lesson plans, quizzes, and study guides by an increasingly over-reliant teaching community, creating a profound challenge to factual integrity and critical thinking that the current regulatory landscape is entirely unequipped to handle. The very foundation of what Indian students are taught is now at risk of being undermined by a ghost in the machine.
This wholesale outsourcing of content creation by teachers to AI has an immediate and catastrophic side effect. Since a GenAI model is fundamentally a statistical engine that predicts the next plausible word rather than verifies objective truth, it frequently fabricates data, cites non-existent studies, or merges disparate facts into coherent-sounding nonsense – the very definition of a hallucination.
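The statistical nature of that prediction can be shown with a toy model. The sketch below is a minimal, entirely illustrative bigram model in Python; the two-sentence "corpus", including its deliberately wrong second sentence, is invented for this example and stands in for the next-word statistics a far larger model learns. It shows why such a predictor reproduces whatever is plausible in its training data rather than what is true.

```python
import random
from collections import Counter, defaultdict

# Toy "training data": one correct sentence and one fabricated one,
# invented purely for this illustration.
corpus = (
    "the mughal empire was founded by babur in 1526 . "
    "the mughal empire was founded by akbar in 1556 ."
).split()

# Count which word follows which: a bigram model, a minimal stand-in
# for the next-word statistics a large language model learns.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return rng.choices(words, weights=counts, k=1)[0]

rng = random.Random(0)
# Starting from "by", the model is equally happy to emit either founder:
# it tracks plausibility in the data, not truth.
samples = {next_word("by", rng) for _ in range(50)}
print(sorted(samples))
```

Nothing in the model distinguishes the true claim from the fabricated one; both were in the data, so both are "plausible next words", which is exactly why confident falsehoods emerge.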
When a teacher, seeking efficiency, copies an AI-generated note on the Mughal empire or a complex algebraic problem without rigorous verification, these confident falsehoods are immediately formalised as classroom truth, imprinted permanently on the minds of children and creating an educational time-bomb. The real-world impact of this teacher over-reliance is subtle yet destructive, particularly in a system already battling foundational learning deficits.
While direct, headline-grabbing court cases of AI-induced factual errors in Indian schools are scarce owing to the diffused nature of the misuse, the cognitive and academic evidence validates the systemic danger. Firstly, core academic integrity is compromised. As research on the cognitive impact of GenAI demonstrates, heavy student reliance on AI is correlated with lower brain connectivity, reduced executive function, and poorer memory recall. When teachers, too, begin to rely on AI to generate their content, they tacitly endorse this cognitive outsourcing. They are no longer teaching the critical skill of source verification; instead, they are presenting content whose origin is inherently opaque and unreliable.
For a country aiming to foster a generation of innovators as envisioned by the NEP 2020, this erosion of foundational reasoning is a severe handicap, especially considering that the national assessments already show alarmingly low proficiency levels in core subjects like mathematics and social sciences. The content generated might sound eloquent, but it is often factually unsound, embedding errors that are difficult for both the teacher and the student to detect because of the AI's persuasive, confident voice.
Moreover, the unchecked importation of AI-generated content into the curriculum introduces profound biases and cultural inaccuracies. The large language models teachers rely on are predominantly trained on vast datasets reflecting the values, histories, and dominant narratives of the Global North.
When these models are prompted to generate material for an Indian history class or a social science topic, they risk subtly or overtly embedding racial, cultural, or gender biases. A 2024 consultation in New Delhi involving UNESCO and the Union government stressed that current AI models often reflect dominant world views, which can contradict India's own commitment to diversity and inclusion.
The teacher, unaware of the subtle biases or the cultural irrelevance, simply accepts the generated text, turning the classroom into an unwitting vector for the propagation of potentially skewed global perspectives that bypasses the oversight of national curriculum boards. The problem, therefore, shifts from simple inaccuracy to a crisis of educational sovereignty and cultural representation.
This complex challenge is compounded by a deep policy vacuum in the Indian school ecosystem. The rapid development of AI has created a severe regulatory ‘pacing problem’. The technology is advancing far faster than the government’s ability to legislate and establish control measures.
While the NEP 2020 promotes digital learning and AI awareness, it lacks the fine-grained, enforceable mandates required to address the specific peril of hallucinations and over-reliance. There are no mandatory national frameworks or institutional policies dictating the "responsible use" of GenAI for content creation.
The contrast is stark: a new textbook must pass through numerous rounds of expert and government scrutiny, but an AI-generated lesson plan, potentially riddled with errors, can be deployed instantly, without any formal checks or balances. This regulatory inertia leaves a critical accountability gap.
If a student fails a public examination because the study material provided by the teacher was generated by a hallucinating AI, where does the fault lie? Is it the teacher, the school administration who allowed the tool, or the company that provided the flawed model? In the absence of clear policy, the burden of verification and the risk of failure are unfairly placed on the individual teacher and, ultimately, the student.
The urgency of this crisis demands a clear and multi-pronged national response, drawing lessons from global efforts to integrate AI responsibly. India must move beyond mere guidelines to implement enforceable policy pillars that prioritise human agency and factual integrity.
Firstly, a nationwide commitment to mandatory ‘AI vetting training’ for teachers is essential. This must be a structured professional development programme, potentially resulting in a ‘Digital Vetting Certification’, focused on the limitations of GenAI: the mechanics of hallucination, error detection, and source-triangulation techniques. As numerous global policy bodies, including UNESCO, strongly recommend, teachers cannot be expected to manage a technology they do not fully understand. The training should aim to make the teacher the “human in the loop”, not a mere copy-paster.
Secondly, the adoption of a formal “human vetting” protocol is required for all AI-generated content. Any material intended for direct student instruction, including lesson plans, test questions and summary notes, must be clearly labelled as “AI-Assisted Content”. This simple tagging mechanism introduces transparency and, critically, places legal and professional accountability back on the human educator. This mirrors the principles of the proposed European Union AI Act, which champions a human-centric approach to AI, insisting on oversight and traceability for high-impact applications.
Moreover, the "Chain-of-Thought Prompting" technique, where the AI is forced to explain its reasoning, should be taught as a standard verification process to expose logical flaws and factual errors before content leaves the teacher’s desk.
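As a sketch of what such a verification prompt could look like in practice, a teacher-facing tool might wrap an AI draft as below. The prompt wording and the three-step checklist are illustrative assumptions for this example, not an established standard.

```python
# Illustrative sketch only: the prompt wording and the checklist are
# assumptions for this example, not a standardised vetting protocol.

def vetting_prompt(draft: str) -> str:
    """Wrap an AI-generated draft in a prompt that forces the model to
    show its reasoning claim by claim before the teacher accepts it."""
    return (
        "Before finalising, reason step by step:\n"
        "1. List every factual claim in the text below.\n"
        "2. For each claim, name the source you would cite, or say 'unsure'.\n"
        "3. Flag any claim you cannot trace to a source.\n\n"
        f"Text:\n{draft}"
    )

# Hypothetical draft a teacher might paste in for checking.
draft = "The Mughal empire was founded by Akbar in 1526."
prompt = vetting_prompt(draft)
print(prompt)
```

Forcing the model to enumerate claims and sources does not guarantee accuracy, but it surfaces unverifiable statements, the ones most likely to be hallucinated, for human review.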
Thirdly, AI Literacy must be immediately and systematically integrated into the Indian school curriculum. Students must be taught how AI works, why it hallucinates, and how to be critically skeptical of its output. This is about building a new form of digital and media literacy from the ground up, teaching students to spot deepfakes and fabricated citations.
Global frameworks for AI in Education, such as those recommended by the World Economic Forum, stress the need to equip students to be informed consumers, not just passive recipients, of AI-generated information. This proactive measure not only mitigates the risk of hallucination but also aligns perfectly with the NEP 2020’s goal of fostering future-ready critical thinkers.
Finally, there must be strong policy guardrails for EdTech procurement for Indian schools. India’s regulatory bodies, such as the Ministry of Education, CBSE and UGC must define AI tools used for direct content generation and assessment as "high-risk" applications.
EdTech vendors supplying these tools to schools must be mandated to meet stringent, transparent accuracy and bias-mitigation standards, with penalties for systemic failure. This is a crucial step towards implementing a responsible AI ecosystem, ensuring that the drive for efficiency does not override the fundamental right to accurate, unbiased education.
India’s digital transition in education is at a critical inflection point. AI offers the potential to personalise learning and streamline administrative tasks, but its current, unregulated integration into content creation by overworked teachers is creating a systemic risk of factual error and cognitive erosion through the phenomenon of hallucination. The challenge is no longer about whether to use AI but how to govern it.
By adopting swift, mandatory, and globally informed policies on teacher training, content verification, and student literacy, India can transform the ghost in the syllabus from a silent crisis into a tool for responsible, high-quality education, safeguarding the future of millions of students against the perils of confident misinformation. The time for a robust, accountable, and human-centric AI policy for school education in India is not in the future, it is now.
Santosh Koshy is a doctoral scholar with the Faculty of Social Sciences, University of Delhi. His work investigates the impact of AI on learning systems.
This article went live on 29 October 2025 at 7:24 pm.
