
From India to US, Forcing Proactive Policing of Online Content Is Censorship by Proxy 

Jul 01, 2020
Blame it on AI this time around?

The brouhaha over Twitter’s labelling of the US president’s tweets, which resulted in a theatrical Executive Order from Donald Trump, and, separately, Mark Zuckerberg’s dithering, couldn’t-care-less attitude, has restarted the conversation about social media platforms and the role they play in our hyperactive social lives.

Once again, as has become the norm, India went through several rounds of this debate before the Americans did. Despite the fact that most users of these platforms are outside the United States, their C-suites appear to pay attention only to demands from the US public or authorities. In India, everything is cloaked in feel-good stories about how these platforms have changed the lives of the poor.

Although it is open season to attack these large technology companies, governments’ attempts to censor need to be examined closely. After all, our constitutional rights are enforceable against the state, not these privately-owned companies.

In 2011, Kapil Sibal, then minister for communications and information technology, proposed the idea of pre-screening online content, whereby online platforms like Facebook and Google would be forced to screen for content deemed illegal. Citizens strongly objected on social media, and #IdiotKapilSibal trended on those very platforms. At the time, some of the most prominent opposition came from the BJP. The government was criticised for attempting to outsource censorship to the companies to avoid citizens’ wrath, and for circumventing constitutional limits on the restriction of free speech and expression.

As the French have taught us, plus ça change, plus c’est la même chose: the more governments change, the more they remain the same. The masters may have changed, but the desire to control runs across party lines. The draft Intermediaries Guidelines Rules, 2018, which propose amendments to the existing Intermediaries Guidelines Rules, 2011, mandate that intermediaries use automated filters to screen content. The rules require every intermediary to deploy “technology-based automated tools or appropriate mechanisms, with appropriate controls, for proactively identifying and removing or disabling public access to unlawful information or content”.

Also read: The Potential and Hurdles of Fighting Atrocities in the Age of Social Media

To put it simply, companies like Facebook, Google and Airtel, and even your local cyber cafe, are required to “proactively look for and then remove unlawful content”.

Just like the last time, the new rules rest on ambiguous definitions of what constitutes illegal content. The broad terms used to describe problematic content include “grossly harmful”, “harassing”, “blasphemous”, “defamatory”, “obscene”, “pornographic”, “paedophilic”, “libellous”, “invasive of another’s privacy”, “hateful, or racially, ethnically objectionable”, “disparaging”, “relating or encouraging money laundering or gambling”, and “otherwise unlawful in any manner whatsoever”.

The only notable difference between the new proposal and Sibal’s is the specific mention of automated tools to do the screening. The new rules expect content that even a court of law would find difficult to classify to be categorised and taken down by an automated tool.

Our policymakers have often displayed a soft corner for marketing jargon that presents machines and automated tools as a panacea for all that plagues the world. Their trust in algorithms, and in the mantra that “no humans will touch this”, is touching. And they are in august company. During the US congressional hearings on Cambridge Analytica, Mark Zuckerberg, founder and CEO of Facebook, referred to AI more than 30 times over ten hours of questioning. Artificial intelligence would solve Facebook’s most vexing problems, Zuckerberg insisted. He just could not say when, or how.

To be fair, Facebook and other companies have had some success using AI to find problematic content, but that success has been limited and unreliable. No AI currently on the market is trained well enough to understand the eccentricities of human speech: context, slang, dialect, satire or puns. As consumers of social media will know, whether content is problematic usually comes down to context.
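
To see why, consider a minimal sketch of the kind of keyword filter such rules implicitly assume is feasible; the blocked terms and sample posts here are invented purely for illustration.

```python
# A toy keyword-based content filter. The blocked terms and the
# sample posts are hypothetical, invented for illustration only.
BLOCKED_TERMS = {"attack", "bomb"}

def is_flagged(text: str) -> bool:
    """Flag a post if any of its words matches a blocked term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not BLOCKED_TERMS.isdisjoint(words)

posts = [
    "Police safely defused a bomb near the station.",  # lawful news report
    "Our team will attack this problem head-on.",      # harmless idiom
    "We will att4ck them at dawn.",                    # obfuscated threat
]

for post in posts:
    print(is_flagged(post), "->", post)

# Prints: True, True, False. The filter takes down two perfectly
# lawful posts and waves the actual threat through, because it
# matches words, not meaning.
```

Production classifiers are statistical rather than keyword-based, but they inherit the same blindness to context, only with fuzzier edges.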

No matter how attractive automated tooling appears, several studies have highlighted its limits. Although such tools are extensively used to take down copyright-infringing content, a 2017 study by Evan Engstrom and Nick Feamster showed why automated filtering does not work as advertised: the tools can process only a relatively narrow range of content files, all of them can be circumvented through encryption or basic file manipulation, and they are prohibitively expensive for smaller companies.
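
The circumvention finding is easy to demonstrate. Below is a minimal sketch of a fingerprint filter built on a cryptographic hash, with an invented blocklist; “basic file manipulation” here is nothing more than appending a single byte.

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of known infringing files.
BLOCKLIST = {hashlib.sha256(b"known-infringing-video-bytes").hexdigest()}

def is_blocked(upload: bytes) -> bool:
    """Check an upload against the fingerprint blocklist."""
    return hashlib.sha256(upload).hexdigest() in BLOCKLIST

original = b"known-infringing-video-bytes"
tweaked = original + b"\x00"  # basic file manipulation: one extra byte

print(is_blocked(original))  # True  -- the exact copy is caught
print(is_blocked(tweaked))   # False -- the altered copy sails through
```

Deployed systems such as YouTube’s Content ID use perceptual matching, which tolerates some manipulation, but re-encoding, cropping or encrypting a file can still defeat the match, which is precisely the study’s point.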

Also read: India Must Treat the Internet as a Public Utility During COVID 19, and After

‘Fair use’ or ‘fair dealing’, the use of copyrighted material for purposes like parody and satire, is difficult for even courts to decide on, let alone automated tools. A case in point is the recent automated takedown of user-uploaded videos of a temple festival in Kerala, flagged as infringing the copyright of a movie that features the percussion music of that same festival.

Another study, by the Washington D.C.-based Center for Democracy and Technology, found that natural language processing tools require clear, consistent definitions of the type of speech to be identified, and that screening social media content against poorly defined categories is unlikely to succeed. The study recommends that the use of automated content analysis tools to detect or remove illegal content should never be mandated in law, because it will inevitably lead to the removal of legal content.
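
The definitional problem can be made concrete too. In the sketch below, two equally defensible operationalisations of an undefined category like “grossly harmful” reach opposite verdicts on the same post; both keyword lists are invented for illustration.

```python
# Two hypothetical, equally plausible "definitions" of the undefined
# category "grossly harmful" -- the rules offer no guidance either way.
DEFINITION_A = {"kill", "destroy"}
DEFINITION_B = {"kill", "destroy", "riot", "protest"}

def is_grossly_harmful(text: str, definition: set) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not definition.isdisjoint(words)

post = "Farmers plan a peaceful protest outside parliament tomorrow."

print(is_grossly_harmful(post, DEFINITION_A))  # False -- the post stays up
print(is_grossly_harmful(post, DEFINITION_B))  # True  -- the post comes down
```

Whether a lawful announcement survives depends entirely on which engineer’s reading of the rule made it into the filter, which is exactly the arbitrariness the study warns against.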

The executive’s rule-making is not the only place such mandates have surfaced; another organ of our government has dealt with them in the past. The Supreme Court, in Shreya Singhal vs Union of India, recognised the problem of private censorship that arises when intermediaries are asked to determine the legality of content. The apex court clarified that intermediaries need to take down content only when ordered to do so by a court or an appropriate government agency; the decision on the legitimacy of any content, in other words, should not be left to intermediaries.

The new rules, by making intermediaries themselves decide on the legality of content and remove it, effectively go against the judgment of the apex court. And since compliance with the rules is a condition for intermediaries to enjoy safe harbour protection, the rules erode that protection.

The proposed rules do not distinguish between categories or sizes of intermediaries in mandating the use of automated filters. A platform like Facebook is quite different from an encrypted messaging service like WhatsApp, and both are different from a telecom service provider, although all fall under the definition of ‘intermediary’ in the Information Technology Act, 2000. This one-size-fits-all approach will force businesses ranging from shopping sites and blogging platforms to cab aggregators and review websites to install automated filters.

Also read: Is This the AI We Should Fear?

Onerous requirements placed on intermediaries will only discourage smaller players and stifle innovation, bolstering the position of dominant market players like Facebook and Google, who can afford expensive tooling and large armies of lawyers. They will also push many peer-to-peer applications used by the free and open source community afoul of the new intermediary liability regime.

The general atmosphere in India may not be conducive to exercising our freedom of speech and expression, but our constitution still guarantees us this right. As non-compliance with the Intermediaries Guidelines Rules will expose companies to legal risk, they will be forced to err on the side of caution and censor content that is perfectly legal, ending the golden era of the internet as the great democratiser of speech.

In the past few years, Indian tech policy has changed rapidly and arbitrarily in its attempt to serve two conflicting goals: encouraging innovation and keeping complete control over public discourse. History is witness to the fact that innovation happens in places where policies affecting businesses are certain and predictable, and where the freedom to think is expressly protected.

Regurgitating settled positions of law and holding companies ransom to arbitrarily developed policies protects neither businesses nor citizens. We would be better off concentrating our energies on making India the global destination for the privacy-protecting products the world needs and India can produce, instead of replicating the authoritarian model of China.

Mishi Choudhary is a technology lawyer with a law practice in New York and New Delhi. She is also the founder of SFLC.in, India’s first legal services organisation working on law, technology and policy. Prasanth Sugathan is the legal director at SFLC.in.
