
India’s AI Advisory: Vague Clauses and Terms Don’t Help Anyone

Anwesha Sen
Mar 16, 2024
Regulations cannot be formulated and implemented in haste, nor can they be an afterthought.

The recently issued AI advisory from the Ministry of Electronics and Information Technology (MeitY), and the chaos that followed in the AI ecosystem, have made clear that policymakers need to actively involve various stakeholders in public consultations.

Startup founders took to various platforms to share their anxieties and call on MeitY to amend or repeal the advisory. Without diverse perspectives, regulations tend to be vague and confusing for industry, prolonging consultations and amendments and delaying compliance. Such advisories also complicate compliance, especially for the startup ecosystem. Public consultations need to be the starting point of the policy-making process, not an afterthought.

The two-page AI advisory issued on March 1, circulated as a Word document and not yet announced via an official press release, spread across various communication channels before news articles confirmed it. The preceding advisories merely emphasised existing clauses within the IT Act and Rules to help curb the spread of deepfakes. The latest advisory, however, comprises vague clauses and terms aimed at regulating “AI model(s)/LLM/Generative AI, software(s) or algorithm(s)” without laying out processes for compliance, and has therefore created confusion in the industry.

For instance, Clause 2(c) of the advisory asks “all intermediaries/platforms” to seek government approval before deploying “under-testing/unreliable” AI models, software, and algorithms on Indian users. But what is a “platform”? Which regulatory body “approves” models? What is the evaluation process? What counts as an unreliable model? No one knows.

The advisory was immediately interpreted, and rightfully so, as a return to the ‘license raj’, since it asks presumably everyone to seek government approval for their models. From the government’s previously hands-off approach to AI regulation, this advisory swings to the other extreme of over-regulation.

Following the backlash from the tech industry, the Minister of State for Electronics and IT, Rajeev Chandrasekhar, posted a clarification on X (formerly Twitter). He stated that the advisory is only for “significant platforms” and will not apply to startups. However, the advisory itself has not been updated to reflect these clarifications, putting the two at odds. Why are clarifications to official advisories being made on social media? What are the criteria for being “significant”? Again, no one knows.

The advisory contains further discrepancies and vague requirements. Clause 2(b) requires that intermediaries and platforms “do not permit any bias or discrimination or threaten the integrity of the electoral process”. However, as per research conducted by the AI Democracy Projects, current AI models respond inaccurately to prompts about elections approximately 50% of the time.

The models tested included Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2 and Mistral’s Mixtral. Since these models do not fully comply with the advisory and likely fall under the ambit of “significant platforms”, questions arise about penal consequences. According to Rajeev Chandrasekhar, while the advisory is not legally binding, it indicates the future of AI regulation in India. The advisory also states that non-compliance with provisions of the IT Act and Rules would result in penalties. As some of the content generated by such AI tools violates the harmful-content provisions of those regulations, will all these platforms be penalized? This again raises questions about the vague definitions of “intermediaries” and “platforms”, and whether these tools fall under those categories.

A move to rein in the AI industry to ensure the safe and ethical development of tools is welcome, but this advisory, and the precedent it sets for future regulations, is counterproductive. Overarching, vague regulations with heavy penalties for non-compliance would disincentivise companies and individuals from building their own AI tools and innovating. Developing regulations that balance the drive to innovate with the need for responsible technology requires open conversations with diverse stakeholder groups, such as developers, lawyers, social scientists, doctors and teachers from diverse socio-economic backgrounds. Multiple such communities and fora already exist and can be leveraged for this purpose.


Moreover, the regulatory requirements for ensuring responsible AI differ depending on the type of tool and the domain in which it is deployed. This follows from the risk-based approach adopted in the EU AI Act, as well as from global frameworks for safety-based software verification and validation (such as DO-178B/C). The regulations that would apply to high-risk tools, such as those used in healthcare, would not be relevant to a chatbot like ChatGPT. Similarly, healthcare tools and fintech tools carry different risks and requirements.

AI regulations that are to be applied across the industry should specify a minimum standard of safety that must be met, rather than overregulate emerging technologies. For higher levels of risk, additional regulations and guardrails specific to those risks can be put in place throughout the development lifecycle. Compliance and regulation are often viewed through the lens of penalties and legalese, but should instead be a means of ensuring that companies can comply with ease and develop safe technologies. This requires checklists to gauge compliance and continuous policy engagement with the ecosystem.

Apart from legal and compliance-related issues, there are a multitude of technical concerns with this advisory. It lacks details on the process for applying for government approval, on transparency in reporting the results of that process, and on methods for testing the reliability and “harmlessness” of models. The 15-day deadline for reporting compliance is also highly inadequate, especially when no one knows how to comply. Without clarity and amendments on these aspects, the advisory cannot be effectively implemented.

The industry’s backlash against this advisory underscores the need for policymakers to include the broader tech industry in the policy-making process, and vice versa: developers and designers, too, need to be aware of and actively engage in the consultations that are carried out.

A positive outcome of this advisory has been the awareness raised around the tech policy regime among AI developers and startups. This momentum can be used to draft effective regulations through multi-stakeholder consultations and comments from the industry to promote trust, safety, and responsible technology.

Regulation and the involvement of regulatory bodies form a continuous, iterative process alongside any technological development that has a large impact on society. Regulations cannot be formulated and implemented in haste, nor can they be an afterthought. Further, the processes of regulation and compliance verification should evolve to include diverse stakeholders, easing the formulation of effective and widely accepted regulations.

Anwesha Sen is a researcher and community organiser, associated with Hasgeek.
