Meta's Algorithms of Exclusion in Bihar
Today, Bihar will count votes in its assembly elections. But while candidates held rallies, social media became a site of a different kind of contestation, where Bihari Muslims were systematically dehumanised and presented as 'infiltrators' by official BJP accounts.
Against the backdrop of the Special Intensive Revision (SIR) of electoral rolls, the question of how platforms are being utilised – or rather, how they are allowing themselves to be utilised – remains pertinent. A look at Meta's platforms reveals a pattern of systematic hate speech that delegitimises minority rights while rationalising violence against Muslim identity and personhood. For years, we have documented Meta's indifference to its largest market. We have spoken about patterns of mobilisation, narrative building, and persistent vitriolic hate speech erasing Muslims from India's public life.
From the moment the Bihar SIR was announced, social media platforms provided the narrative that made the exclusion of Muslims seem not just acceptable but necessary. Through recurring frames depicting Muslims as “infiltrators” or demographic threats, these platforms cultivated a gradual public tolerance for the erosion of Muslim political rights. AI-generated and cartoon images and videos – depicting Muslims occupying Hindu homes while their rightful owners weep in despair, or portraying Muslims as non-human – were shared widely.

Meta's alignment with India's ruling Hindu nationalist regime has been documented and represents a pattern of platform capture by authoritarian-leaning governments. The company's relationship with the BJP extends beyond algorithmic failures to active political collaboration. Ankhi Das, Facebook's then-Public Policy Director for India and South Asia, was actively involved in Narendra Modi's campaign machinery. This wasn't an isolated incident of poor judgment – internal documents and whistleblower testimonies have repeatedly revealed systematic patterns in which Facebook's India operations declined to enforce hate speech policies against BJP politicians and affiliated accounts, even when content clearly violated stated guidelines. Facebook's India leadership has intervened to prevent action against BJP leaders posting inflammatory content, arguing that doing so would harm the company's business interests in the country.
Religious minorities, particularly Muslims, have borne the brunt of coordinated hate campaigns amplified by the platform's algorithms. Bihar is simply the logical continuation of a business model that prioritises proximity to power over democratic accountability in its largest global market.
Meta's algorithms actively reward hate content, allege tech experts and rights activists. Posts that generate outrage and deploy dehumanisation get more engagement, which the algorithm interprets as valuable content worth showing to more people. More reach translates directly into more advertising revenue. The business model creates financial incentives for exactly the kind of content that Meta's policies claim to prohibit.
Yet the policies are explicit. Meta's Tier 1 hate speech prohibitions specifically ban comparing people to animals, pathogens, or subhuman life forms. The company promises to protect "refugees, migrants, immigrants, and asylum seekers from the most severe attacks." The content we saw violates these standards in the clearest possible terms. Yet it remains online, accumulating millions of views. Pages we have already reported for identical violations continue operating without meaningful consequences.
This pattern should sound disturbingly familiar to anyone who followed Meta's role in Myanmar. There, Facebook became what UN investigators called "instrumental" in radicalising populations and inciting violence against Rohingya Muslims, contributing to what the UN characterised as genocide. The mechanics were identical: Muslims labelled as foreign infiltrators, systematic dehumanisation comparing them to animals and disease, calls for their identification and removal, all amplified by platform algorithms despite existing policies meant to prevent exactly this outcome.
After Myanmar, Meta claimed to have learned. But in India, Meta's largest market with over 500 million users, those lessons haven't translated to action. The hate speech we documented comes from verified accounts with millions of followers, reaches audiences larger than many countries' populations, and remains visible for months despite repeated reports.
What emerges is a clear pattern: when powerful political actors systematically violate hate speech policies, the platform allows it. Content moderation becomes a tool of power rather than protection. The rules still apply to activists documenting abuse, to journalists reporting on violence, to opposition voices challenging dominant narratives. But those same rules disappear when government officials and their allies need to dehumanise minority communities to advance their political objectives.
The implications reach far beyond Bihar's electoral boundaries. As India moves toward potentially implementing nationwide citizenship verification exercises, building on models like Assam's controversial National Register of Citizens, the digital dehumanisation of Muslims is creating the social permission structure that makes mass exclusion politically viable. What happens online doesn't stay online.
India is not some peripheral market where Meta can afford to experiment with lax enforcement. It represents the company's largest user base. When content moderation fails here, it affects more people than anywhere else in the world. Yet Meta has demonstrated across multiple Indian elections, from 2019's general election through 2020's Delhi riots to today's Bihar campaign, that political expediency consistently trumps policy enforcement.
We reached out to Meta for comment on November 6, 2025, but have received no response.
Update: A Meta spokesperson has responded to this report with the following statement and a background note, received by The Wire on the evening of November 14, 2025:
"Hateful conduct is not allowed on our platforms, and we have removed the content in the report that violates our policies." – A Meta spokesperson.
Background:
- Hateful Conduct policy: https://transparency.meta.com/policies/community-standards/hateful-conduct/
- We publish the Community Standards Enforcement Report on a quarterly basis to more effectively track our progress and demonstrate our continued commitment to making Facebook and Instagram safe and inclusive.
- In Q2 2025, 0.01% to 0.02% of views on Facebook showed hateful conduct violating content. In other words, of every 10K content views, an estimated 1-2 would contain hateful conduct – the vast majority of which was found and actioned by us before people reported it.
- In Q2 2025, about 0.02% of views on Instagram showed hateful conduct violating content. In other words, of every 10K content views, an estimated 2 would contain hateful conduct – the vast majority of which was found and actioned by us before people reported it.
Dr. Ritumbra Manuvie is a professor at the University of Groningen and Shreiya Maheshwari is a Senior Researcher at Foundation Diaspora in Action for Human Rights and Democracy.
This article went live on November 14, 2025, at 8:45 am.




