What Happens to Article 19 as News Media Battles AI?
On being prompted to express its views on the freedom of the press, ChatGPT responded, “I facilitate the dissemination of knowledge and support journalistic processes.” While the response is courteous and forward-looking, it overlooks the complex pressures confronting Indian journalism today. Over 300 regional newspapers shut down during the pandemic, and more than 1,000 media and entertainment personnel were laid off between January and March 2025, with employers citing cost-cutting and restructuring. Simultaneously, national broadcasters like Aaj Tak have deployed multilingual AI anchors such as Sana – a shift that marks not mere technological assistance but a replacement of human editorial roles.
While the Delhi high court, in ANI v. OpenAI, grapples with whether traditional copyright law applies to AI scraping data from news websites without compensating news agencies, there are growing reasons to ask whether the issue runs deeper – whether AI’s use of journalistic content poses a threat to the fundamental rights of the press and the public.
While Article 19(1)(a) is a brief 12-word provision – “all citizens shall have the right to freedom of speech and expression” – its judicial amplitude, both qualitatively and quantitatively, has been extremely expansive. Of its many elements, the media’s freedom to report and the public’s right to know are among the most ground-breaking constitutional developments since independence. And AI poses a threat to both.
The autonomy of editors
In K.K. Birla v. Press Council of India (1975), the Delhi high court affirmed, “The editor of a newspaper has the right to gather the news, the right to select the news for inclusion in the newspaper, the right to print the news so selected, and then the right to comment or express his own views on all matters of public importance.”
Editorial judgment, therefore, is not just a matter of discretion; it is a constitutional entitlement.
Now that editorial tasks are being delegated to AI, media leaders have declared, “AI in newsrooms is not optional, it is existential”. Yet AI affects both the selection of stories and the manner in which they are presented. Recommendation algorithms prioritise stories based on engagement metrics, distorting the news presented and marginalising important stories. Generative AI can produce automated pieces based on what is trending and popular, without editorial oversight.
These outputs, generated from large data sets without transparency – the so-called black box problem – can be riddled with biases. They can shape and skew public opinion in particular ways. The lack of clarity about AI’s decision-making makes it difficult for editors to track these biases and override them.
One might argue that the voluntary adoption of AI amounts to a waiver of editorial autonomy. However, these decisions are increasingly shaped by market forces: owing to rising competition and the demand for faster content production, editors face pressure to use AI to stay relevant and meet commercial goals.
The implications of this are structural. Editorial discretion, a cornerstone of a free press, is being eroded at the level of planning meetings, news selection, and headline writing. AI now assists in drafting news reports, scripting bulletins, and optimising headlines for engagement metrics. These tasks, once handled by human editors, are increasingly driven by systems that prioritise click-through rates and trending keywords. The result is a shift from editorial judgment to algorithmic logic. This raises serious concerns when the underlying training data is shaped by patterns of past coverage that are already skewed by commercial and political pressures.
Consequently, unlike a reporter or editor who can be questioned, corrected, or held accountable, AI models offer no meaningful transparency about why they frame issues a certain way. The absence of legal standards for explainability in news-related AI compounds the problem.
Therefore, AI is not merely replacing human journalists; it is also curtailing their autonomy.
The knowledge of the public
The public’s right to know has been held to be part and parcel of the right to free speech, as per the ruling in People’s Union for Civil Liberties v. Union of India. Three practical manifestations of this right are noteworthy.
First, the public’s right to access true and correct information, as recognised by the Kerala high court in Public Eye v. Union of India in 2024.
Second, the public’s right to access a plurality of views and a range of opinions on all public issues, as recognised in The Secretary, Ministry of Information and Broadcasting v. Cricket Association of Bengal.
Third, journalists’ obligation to disclose sources when required by law enforcement agencies, as emphasised in a recent matter before the Delhi chief judicial magistrate.
AI tramples upon these rights in different ways.
First, it can inadvertently generate and spread false information. For example, Google’s AI-powered search claimed that astronauts met cats on the moon and that “Barack Obama was a Muslim president”. These AI “hallucinations” – where systems produce answers based on their training data without verifying their accuracy – undermine the public’s right to receive truthful information.
Second, it can be manipulated by political actors to control dominant narratives and amplify the voices of the powerful. For instance, in the 2024 election cycle, AI-generated deepfakes and memes were reportedly used to present distorted political facts and manipulate public opinion.

Third, the opacity of the large language models underlying AI tools, and of the processes they use to generate results, impairs the user’s right to know the source of information.
AI intensifies the velocity paradox – the idea that the faster information travels, the less accurate and more damaging it often becomes. A landmark study by the Massachusetts Institute of Technology, published in Science and reported on by the BBC, found that false news on Twitter was 70% more likely to be retweeted than the truth and reached its first 1,500 people six times faster. The most viral misinformation reached over 100,000 people, while verified news rarely passed the 1,000 mark. The reason? Novelty. As Professor Sinan Aral has explained, false stories are more surprising and emotionally charged, making people more likely to share them regardless of their truth.
This problem compounds when AI generates and disseminates such content. For instance, during the 2023 Karnataka state elections, deepfake videos targeting opposition leaders spread on social media within hours. By the time fact-checkers issued corrections, millions had already viewed the false content. In this warped ecosystem, traditional journalism, bound by verification and ethics, simply can't keep up. The result is a public fed not accurate information but algorithmically boosted “gossip,” as one psychologist in the MIT study noted. When surprise replaces truth and speed trumps accuracy, the right to know is gutted in real time.
If Article 19(1)(a) includes the right to receive information, then the tools that filter and present that information cannot remain beyond the scope of constitutional scrutiny. The law cannot protect speech while ignoring the architecture that now mediates its flow.
While the Delhi high court is adjudicating ANI v. OpenAI as a copyright dispute, with freedom of the press as one of its many offshoots, we must not lose sight of the fact that AI implicates press freedom in more direct ways – posing a challenge to the lifeblood of democracy, which rests in an autonomous press and an informed public.
Anchal Bhatheja is a research fellow at the Vidhi Centre for Legal Policy, New Delhi. Rahul Bajaj is a senior associate fellow (disability rights) at the Vidhi Centre for Legal Policy.