Is Agentic AI a Threat to the Indian Middle Class?

Automation’s spoils accrue to capital investors and a technocratic elite while seasoned clerical and customer‑facing executives face existential crisis.
M. Muneer
Jul 03 2025
Representational image. Photo: Unsplash
The monsoon clouds over India may give relief from the searing heat, but the cloud of agentic artificial intelligence (AI) over workplaces may wash away many jobs. The rainmakers of the Big 5 might position it as great for the future of work, but it is a phenomenon that threatens livelihoods, erodes the social contract and hollows out professional and personal identities.

McKinsey’s recent report, which envisions a harmonious symbiosis between human employees and autonomous digital labour, is alarming for India. The reality is far darker. Self‑directed AI agents portend a wholesale reordering of our socio‑economic fabric, with looming mass unemployment, widening inequity and fraying human dignity.

In a nation where roughly 38 crore people participate in the workforce, even a modest net job loss spells calamity. Moreover, 1.2 crore youth enter the job market every year. McKinsey’s global forecast – 23% of roles transformed, 83 million positions culled versus 69 million birthed – translates into millions of Indians scraped from payrolls as enterprises automate deterministic tasks.

Consider a tier‑1 banking back office, where reconciliation clerks face replacement by orchestration‑capable agents that ingest transaction feeds, flag anomalies, and autonomously execute corrective entries. When “digital labour” assumptions become head‑count targets, countless urban households will feel the wrenching shock of sudden joblessness, with scant social‑safety nets to catch the fall. 
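The reconciliation workflow described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any bank's actual system: the function name, record shapes and mismatch threshold are all invented for the example.

```python
def reconcile(ledger_entries, bank_feed):
    """Match booked ledger entries against a bank transaction feed and
    return the corrective entries an agent would execute autonomously."""
    ledger = {e["id"]: e["amount"] for e in ledger_entries}
    corrections = []
    for txn in bank_feed:
        booked = ledger.get(txn["id"])
        if booked is None:
            # Transaction missing from the ledger: the agent posts it
            # without waiting for a clerk to review the feed.
            corrections.append({"id": txn["id"], "action": "post",
                                "amount": txn["amount"]})
        elif abs(booked - txn["amount"]) > 0.01:
            # Amount mismatch: the agent flags the anomaly and books an
            # adjusting entry on its own.
            corrections.append({"id": txn["id"], "action": "adjust",
                                "amount": txn["amount"] - booked})
    return corrections
```

Each correction that falls out of this loop is work a reconciliation clerk once did by hand, which is precisely why "digital labour" assumptions turn so readily into head-count targets.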

The burgeoning middle class, propped up by service‑sector expansion, risks bifurcation. Automation’s spoils accrue to capital investors and a technocratic elite while seasoned clerical and customer‑facing executives face existential crisis. Fintech and app‑driven gigification have already disrupted stable earnings.

The increasing use of agentic AI

Agentic AI now menaces white‑collar bastions by scheduling interviews, curating candidate shortlists, and even drafting performance appraisals. This leaves behind a two‑tiered workforce: a privileged vanguard commanding digital frameworks, and a sprawling disenfranchised cohort consigned to precarious, low‑wage gigs or chronic unemployment.

From IT engineers debugging labyrinthine code to front‑line BPO executives defusing irate customer calls, our workforce has long prided itself on problem‑solving resilience. Agentic systems subsume routine decisions, relegating humans to passive overseers who click “approve” when an AI‑generated audit report passes muster.

As Wired notes, in “deterministic” environments such as ticket resolution or code refactoring, agentic AI is already displacing critical judgement. Over time, this transactional circuitry leaches creative rigour from our collective skillset with workers losing their sense of mastery and purpose, their identities reduced to algorithmic compliance.

Generative models trained on Anglophone corpora may systematically marginalise vernacular dialects; resume‑screening agents can inadvertently penalise candidates with non‑English names or atypical career trajectories.

In a country striving to democratise opportunity, the deployment of unchecked agentic AI in recruitment and performance management risks crystallising inequity. Without transparent audits, these systems will perpetuate entrenched disparities, awarding top roles to those who fit a narrow digital mould rather than reflecting our diverse talent reservoir.

Autonomous decision loops can imperil cyber‑resilience. An AI‑driven SOC (security operations centre) agent in a data centre may ingest outdated threat‑intelligence feeds, hallucinate non‑existent malware signatures, and trigger resource‑draining incident responses while overlooking genuine breaches. The Italian antitrust probe into DeepSeek shows that such hallucinations carry material legal and reputational risks. A single miscalibrated “automated cybersecurity agent” could cascade into outages or data leaks, even escalating into a national security incident.
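The failure mode above can be made concrete with a toy sketch: an agent that triages traffic against a stale, partly hallucinated signature list both raises a false alarm and misses a genuine breach. Every name here is invented for illustration; no real SOC tooling is this simple.

```python
def triage(traffic, signatures):
    """Split packets into (alerts raised, real threats missed) given a
    signature list the agent trusts without verification."""
    alerts = [pkt for pkt in traffic if pkt["payload"] in signatures]
    missed = [pkt for pkt in traffic
              if pkt["malicious"] and pkt["payload"] not in signatures]
    return alerts, missed

# A stale feed: one obsolete signature, one the model hallucinated.
stale_signatures = {"old_worm_2019", "hallucinated_sig"}

traffic = [
    {"payload": "hallucinated_sig", "malicious": False},  # benign, but alarmed
    {"payload": "novel_exploit", "malicious": True},      # real, but unseen
]

alerts, missed = triage(traffic, stale_signatures)
```

The agent burns incident-response resources on the benign packet while the actual exploit sails through, which is exactly the double failure the paragraph describes.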

Trust in India’s institutions hangs by a thread. When inscrutable “black‑box” agents make determinations on creditworthiness, scholarship eligibility, or even immigration interviews, accountability vanishes into layers of algorithmic opacity. Employees, wary of flawed AI outputs, routinely duplicate tasks, nullifying any purported gains in efficiency. This corrodes confidence in technology and leadership, breeding cynicism.

Social cohesion and psychological anchorage stem from a meaningful workplace for most Indians. As AI agents supplant human interactions – coaching, mentoring, conflict resolution – the workplace becomes a sterile transactional arena.

The resultant isolation and burnout echo through slum‑to‑silicon corridor pipelines, where displaced workers, lacking stable employment, spiral into despair. India’s threadbare social‑security schemes offer scant refuge, precipitating a mental‑health crisis of unprecedented scale.

Despite sporadic guidelines from the Digital India initiative and nascent data‑protection drafts, our regulatory efforts remain ill‑equipped to govern agentic autonomy. No binding mandate exists for AI explainability, bias mitigation audits, or safety‑critical testing. Without comprehensive legislation including enforceable audit trails, impact disclosures, and robust cybersecurity certifications, corporations will sprint to monetise AI agents, heedless of societal risk. The lessons of industrial revolutions are clear: technological proliferation sans ethical guardrails yields social upheaval.

Further, the carbon cost of agentic AI is no abstract footnote: training GPT‑class models emits hundreds of tons of CO₂. We are already grappling with acute air pollution and overburdened energy grids, and cannot absorb the exponential rise in data‑centre power draw.

As corporates spin up swarms of specialised agents for consumer personalisation or employee‑facing coaching, sustainability pledges fray and decarbonisation targets slip further.

Policymakers and business leaders must reject the facile binary of human versus agent. Instead, they should envisage a human‑centred ecosystem: invest in large‑scale reskilling for roles demanding EQ and moral discernment; mandate transparent AI audits to root out bias; enforce green‑compute standards to curb the climate toll.

Policymakers should expedite the Personal Data Protection Bill and AI Governance Framework, embedding rights to contest automated decisions and to demand algorithmic transparency.

The clock is ticking. India stands on the brink of an AI-induced upheaval, while the Big 5 consulting firms serenade CEOs with glossy slide decks and euphoric forecasts, selling agentic AI as the gospel of efficiency.

McKinsey and its ilk don’t just forecast the future; they manufacture it, peddling automation as a panacea while quietly eroding the very social contracts that underpin stable societies. In their world, cost centres must be culled, and digital labour never asks for a raise. This is not transformation but extraction, dressed up as innovation. When McKinsey tells you ‘let the agents work,’ just check they don’t mean ‘let the humans walk!’
