
Green Light for AI, Orange for Rights

The 'India AI: Governance Guidelines' curbs rights for so-called innovation.
Photo: https://betterimagesofai.org/

On 5 November 2025, MeitY released the India AI Governance Guidelines, 2025. These guidelines are a blueprint for how AI will be deployed in India. The document repeats familiar ideas about “Responsible AI” but adds a new core principle of “innovation over restraint”, which places speed of deployment ahead of precaution and rights-based safeguards. The India AI Governance Guidelines Report relies heavily on voluntary self-regulation and existing laws rather than creating enforceable obligations, including for high-risk uses of AI by the state in welfare, policing, and digital public infrastructure. Many concerns that IFF and others raised during consultations remain unaddressed, including the need for (i) a constitutional grounding, (ii) stronger protections against surveillance and discrimination, and (iii) binding duties for high-risk AI systems.

Background

When the Ministry of Electronics and Information Technology (MeitY) unveiled the India AI Governance Guidelines in November 2025, it framed them as a “balanced, agile, flexible, pro-innovation” blueprint for safe, inclusive and trusted AI in India. The Guidelines are meant to steer how AI is built and deployed across sectors, from finance and health to welfare and law enforcement, and to support India’s long-term vision of “Viksit Bharat” by 2047. For us at the Internet Freedom Foundation (IFF), this is also a constitutional moment when these choices will shape how AI interacts with the fundamental rights to (a) equality, (b) free expression and (c) privacy for years to come. Our earlier submissions to MeitY on the draft AI governance framework pushed for a rights-anchored, enforceable model rather than one that relies mainly on voluntary ethics.

The final Guidelines are built around seven “sutras” that are said to define India’s approach.

MeitY also makes an explicit policy choice not to propose a dedicated AI law right now, but to lean on existing frameworks such as the Information Technology Act, 2000 (IT Act), the Digital Personal Data Protection Act, 2023 (DPDPA), consumer laws and other sectoral regulations, adding new rules only where gaps are apparent.

This innovation-first posture is reinforced by the way the Guidelines treat compliance. Both the draft and the final India AI Governance Report treat voluntary and technical measures such as transparency reports, model cards and third-party audits as tools to build trust. This “trust but verify later” model stands in tension with global evidence that self-regulation in fast-moving, high-profit sectors often delays or dilutes accountability rather than strengthening it. In theory, the Guidelines seem like a step towards building sustained state capacity on AI. In practice, however, the proposed governance bodies are all envisaged within MeitY’s administrative ambit, without a statutory guarantee of independence, clear enforcement powers, or mandated representation for civil society and affected communities.


Analysis

From a digital rights perspective, the most consequential doctrinal innovation is the principle of “Innovation over Restraint”, the third sutra (Part 1 of the Guidelines). The Guidelines state that “all other things being equal, responsible innovation should be prioritised over cautionary restraint”. In effect, this establishes a presumption in favour of deployment when regulators face uncertainty or competing interests. Global debates on AI governance often juxtapose the EU’s precautionary model, where high-risk or “unacceptable risk” systems are restricted or banned ex ante, with more permissive approaches that favour “permissionless innovation” subject to ex post correction. Our analysis notes that the Indian framework explicitly aligns with the latter: harms are expected to be addressed when they are “real and specific”, not anticipated and prevented.

This raises the risk that citizens, especially those dependent on the State for essential services, will become de facto test subjects for high-risk deployments in (i) welfare, (ii) policing, and (iii) digital public infrastructure. The Guidelines attempt to temper this innovation-first stance by repeatedly invoking “responsible innovation” and “trust”. But in operational terms, responsibility is largely channelled through voluntary measures. Both the draft and final Guidelines endorse tools such as transparency reports, “model cards”, third-party audits, red-teaming, and incident databases as ways to build trust.


The final Guidelines go further in specifying these as “alternative mechanisms” for accountability: self-certifications, internal policies, peer review, and technical safeguards, rather than immediate legal obligations. They envisage a sequence in which firms (a) first adopt voluntary commitments, (b) publish red-teaming results or impact assessments, and (c) subject themselves to (i) public, (ii) peer and even (iii) parliamentary scrutiny; only if this proves inadequate over the next 9-12 months would MeitY consider converting some of these into mandatory requirements. IFF’s submission to MeitY had warned that this heavy reliance on self-regulation is structurally ill-suited to protecting rights. We had pointed to global scholarship showing that high-level ethical principles such as transparency, accountability, and fairness often remain aspirational without statutory backing, concrete enforcement tools, and independent oversight.

A second structural concern is the weak constitutional anchoring of the Guidelines. IFF’s submission argues that AI governance in India must be grounded explicitly in the Constitution, particularly Articles 14, 19 and 21, and the Preamble’s commitment to justice, liberty, equality and dignity. The Draft Report had briefly acknowledged this by referencing NITI Aayog’s earlier work on “Responsible AI” and its engagement with constitutional morality. The final Guidelines, however, avoid framing AI harms as potential violations of fundamental rights. Instead, they rely on a vocabulary of ethics and policy, “people first”, “fairness & equity”, “trust is the foundation”, and refer to fundamental rights only indirectly, for instance by mentioning that fairness should avoid discrimination or that safety is important for society. This has practical implications: failures of fairness or transparency come to be treated primarily as lapses in “responsible innovation” rather than as legal wrongs.


The institutional architecture proposed by the Guidelines illustrates this tension between technocratic governance and rights-based oversight. The Draft Report recommended a multi-stakeholder advisory group headed by the Principal Scientific Adviser, supported by a technical secretariat, to adopt a whole-of-government approach. The final Guidelines make this more concrete, recommending the creation of an AI Governance Group (AIGG) as a high-level inter-ministerial body, a permanent Technology & Policy Expert Committee (TPEC) for domain expertise, and a strengthened AI Safety Institute (AISI) operating on a “hub-and-spoke” model. This is a welcome move in terms of building state capacity and coordination. Yet, as IFF’s analysis notes, these bodies are all envisaged within MeitY’s administrative orbit, without an independent statutory mandate, guaranteed resources, or clear powers to halt or modify harmful deployments, particularly by the State itself.


There is also no commitment to multi-stakeholder composition beyond general references to involving industry and experts; structured representation for civil society, affected communities, or constitutional bodies is not built into the design. The Guidelines’ techno-legal approach raises further questions for digital rights. Both the draft and final documents emphasise using digital artefacts, governance technology layers (techno-legal) and automated compliance tools to manage the complexity and speed of AI ecosystems. For example, they suggest that consent-like artefacts (DEPA) could be used to establish immutable identities for actors and track liability chains across the AI value chain. In principle, such tools can scale oversight in a fragmented ecosystem. But as we had cautioned, techno-legal solutions are not neutral: they encode particular assumptions about authority, traceability, and control.
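To make the idea of a “liability chain” less abstract, the sketch below shows one way such an artefact could be built: a tamper-evident chain of records in which each entry commits to the previous one. This is purely illustrative; the field names, actors and hashing scheme are our own assumptions, not anything DEPA or the Guidelines actually specify.

```python
# Minimal, hypothetical sketch of a tamper-evident "liability chain" for an
# AI value chain. All field names and actors are illustrative assumptions;
# this is not how DEPA or the Guidelines define such artefacts.
import hashlib
import json
from datetime import datetime, timezone


def record_hash(record: dict) -> str:
    """Deterministically hash a record so any later edit is detectable."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


def append_link(chain: list[dict], actor: str, role: str, action: str) -> None:
    """Append a new link whose hash commits to the previous link."""
    record = {
        "actor": actor,
        "role": role,  # e.g. developer / deployer / operator
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": chain[-1]["hash"] if chain else None,
    }
    record["hash"] = record_hash(record)
    chain.append(record)


# Illustrative value chain: a model developer, then a department deploying it.
chain: list[dict] = []
append_link(chain, "ModelCo", "developer", "released model v1.2 with model card")
append_link(chain, "Welfare Dept", "deployer", "integrated model into benefit triage")

# Verification: recomputing each hash exposes any retroactive modification.
for link in chain:
    recomputed = record_hash({k: v for k, v in link.items() if k != "hash"})
    assert recomputed == link["hash"], "chain record has been altered"
```

Even a scheme like this only records who did what; it does not, by itself, settle who is liable or give affected people a remedy, which is why enforceable legal obligations must sit behind the technology.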

When combined with digital public infrastructure such as Aadhaar, UPI and sectoral stacks, there is a real risk that AI-enabled governance will deepen surveillance and exclusion if not bounded by robust, enforceable human-rights safeguards. These risks are especially acute in public-sector AI. Both the Draft Report and the final Guidelines fail to fully acknowledge the risks of AI when it is used in areas such as welfare, policing, education and disaster management. The final Guidelines encourage all deployers, including public agencies, to implement grievance mechanisms and to provide explanations that are “understandable by design”. Yet they stop short of creating special obligations for State actors, despite IFF’s recommendation that government use of AI should be held to the highest constitutional standard and subject to enhanced transparency and oversight. For instance, there is no requirement for public registers of algorithms used in welfare or policing, no explicit right to a human alternative for essential services, and no dedicated ombuds or tribunal for algorithmic harms. The Guidelines favourably cite global examples such as New Zealand’s Algorithm Charter, but do not propose a binding equivalent for Indian government agencies.
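A public register of government algorithms need not be elaborate. As a minimal sketch of what one entry might record (every field name and value here is an illustrative assumption of ours, not a schema drawn from the Guidelines or the Algorithm Charter):

```python
# Hypothetical sketch of one entry in a public register of government
# algorithms; the fields are illustrative, not drawn from the Guidelines.
from dataclasses import dataclass, field


@dataclass
class AlgorithmRegisterEntry:
    system_name: str                 # public-facing name of the system
    department: str                  # agency responsible for the deployment
    purpose: str                     # what decision or process it supports
    decision_role: str               # "fully automated" or "human in the loop"
    data_sources: list[str] = field(default_factory=list)
    impact_assessment_url: str = ""  # link to a published impact assessment
    human_alternative: bool = False  # is a non-AI channel guaranteed?
    grievance_contact: str = ""      # where affected people can seek redress


entry = AlgorithmRegisterEntry(
    system_name="Benefit eligibility triage (illustrative)",
    department="Hypothetical welfare department",
    purpose="Prioritise manual review of welfare applications",
    decision_role="human in the loop",
    data_sources=["application form", "past disbursal records"],
    impact_assessment_url="https://example.gov.in/assessments/triage.pdf",
    human_alternative=True,
    grievance_contact="grievance@example.gov.in",
)
print(entry)
```

The point is not these particular fields but that disclosure would be standardised, published and mandatory, rather than left to each agency’s discretion.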

Transparency is another area where the distance between principle and practice remains wide. The Guidelines rightly identify transparency about the AI value chain as “foundational” for effective and proportionate governance, and they urge greater clarity on how developers, deployers and users interact across layers of data, models and applications. However, as IFF’s submission underscores, legal changes such as the amendment to Section 8(1)(j) of the RTI Act through the DPDPA have already weakened information rights, particularly when private contractors operate public systems. Further, AI projects like Digi Yatra, structured as private non-profit entities, have been declared outside the RTI Act’s scope, with trade secrets and corporate structure cited as justification. The Guidelines do not address these practical barriers to algorithmic transparency, nor do they propose mechanisms, such as statutory obligations to publish impact assessments, audit summaries or algorithmic registers, that would make transparency more than an internal best practice.

Finally, the Guidelines’ limited engagement with digital exclusion and structural discrimination is a missed opportunity. IFF warned that “digital-by-default” automation in essential services, without strong safeguards, will entrench the existing digital divide and reproduce caste, gender and regional inequalities at scale. The final text repeatedly affirms “AI for All”, calls for skilling and awareness programmes, and stresses “People First” and fairness in outcomes. But it does not specify concrete obligations such as mandatory testing for disparate impact along protected grounds, requirements for offline and non-AI alternatives in welfare delivery, or protections against proxy discrimination in scoring and profiling systems. In a context where biometric mismatches can already exclude people from food rations, these omissions are not abstract, and they determine who is seen, counted and served by automated systems.

It is not as if all is terrible with the India AI Governance Report. It improves on the draft’s language on content certification, which, amid the public panic around deepfakes, had clearly called for fingerprinting AI-generated content with identity markers. It also moves concretely beyond the draft by proposing an institutional architecture. And it repeatedly recognises that AI is a high-risk, probabilistic technology that can lead to harms. However, there is limited recognition, and no clear acceptance, of a rights-based framework.

Action

From our perspective, many of the key asks we placed before MeitY remain open. Taken together, the India AI Governance Report gives a clear green light to rapid AI deployment under a “techno-legal” model in which governance is embedded in digital architecture and existing laws, rather than structured through a new, rights-first statute. The Guidelines incorporate some of the ideas that IFF and other civil society organisations have pushed, with references to risk, fairness, accountability, human oversight and grievance redress, but they rarely convert these ideas into binding, enforceable obligations, especially for high-risk uses by the state. That is why we describe them as an industrial strategy that is more complete than the rights framework that should accompany it.

The work now moves outside the four corners of the India AI Governance Report. In the months ahead, IFF will continue to advocate for a statutory basis and the legal recognition of AI-based harms, as we develop our own understanding of what specific shape such a law should take. We will continue advocating against high-risk uses such as live facial recognition in public spaces and caste-based profiling, push for stronger transparency and RTI guarantees for all public sector algorithms and AI procurement, seek horizontally enforceable rights protections against algorithmic discrimination by private actors, and press for an explicit constitutional anchoring of AI governance. In our view, these are not barriers to innovation but necessary safeguards against technological determinism, so that the digital world serves our constitutional framework and the interests of the citizens of India.

This article was originally published by the Internet Freedom Foundation.

This article went live on 27 November 2025 at 4:35 pm.

