
AI’s Future Should Be Measured in Wisdom, Not Just Intelligence

In a country like India, where multiple layers of inequality and social vulnerability already shape access to finance, AI-driven decisions could exacerbate exclusion.
Rupam Roy
Aug 29 2025

In 1997, the world met Dolly the sheep, born the previous year as the first mammal cloned from an adult somatic cell – a feat hailed as a landmark in genetic science. Just as quickly as celebrations erupted, so did the ethical debates. Concerns over identity, human dignity and unchecked experimentation led to swift calls for regulation and bans on human cloning in many countries.

We are facing a similar ethical inflection point in the digital age, not with DNA, but with data. The rise of Artificial Intelligence, particularly generative AI, is leading us into an era where digital clones of human cognition, emotion and creativity are being created, deployed, and scaled without restraint. 

As was the case with Dolly, the exhilaration of technological triumph is now clashing with profound societal, ethical and philosophical concerns. Cloning, in its essence, is replication. Dolly represented the biological replication of life. Today’s AI represents the cognitive replication of human faculties like thinking, creating, problem-solving and even empathy. 

AI models can now write poetry, diagnose illnesses, predict market trends and even simulate personalities of deceased individuals through “digital resurrection.” It is no longer science fiction.

Much like cloning raised alarms about the commodification of life, AI is prompting concerns about the commodification of consciousness. Are we simply teaching machines to think like us, or are we slowly replacing the need for us to think at all?

After Dolly’s cloning, bioethicists like Leon Kass and Daniel Callahan warned that biotechnological advances could lead to a slippery slope, where life is increasingly viewed as something to be engineered, optimised and commodified. Today, similar warnings are being echoed by AI ethicists, who caution against reducing human identity and value to algorithmic functions and data patterns.

Stuart Russell, a prominent AI researcher, has consistently warned about the risks of developing artificial general intelligence (AGI) without ensuring proper value alignment, meaning that AI systems should operate in accordance with human goals and values. He stresses that “we don't yet know how to control machines more intelligent than us” and views this uncertainty as profoundly dangerous. 

After Dolly, human cloning was banned almost universally, not just because we couldn’t do it safely, but because we shouldn’t. That consensus was built on a deep respect for the uniqueness of human life. We face a similar moral imperative with AI: the recognition that human thought, emotion, and creativity should not be reduced to mere inputs in a model.

In the post-Dolly years, many nations imposed strict limitations on cloning research. But with AI, the race is largely unregulated. Tech companies harvest human content and behaviour at an unprecedented scale, collecting text, voice, images and biometric data and feeding it into systems that create human-like outputs.

Imagine if the biological materials used to clone Dolly were taken from millions of sheep without consent. That would be considered a violation of natural rights. Yet AI systems today are trained on copyrighted books, private conversations, medical records and creative works, often without permission or compensation. Data is the new DNA, and we’re watching it be extracted and replicated without any framework of dignity or fairness.

After Dolly, we asked, “What does it mean to be human?” That question is once again urgent. AI holds tremendous promise, but just like cloning, its unchecked use risks dehumanisation. We are already seeing AI-generated art replacing artists, AI-written news replacing journalists, and AI customer service replacing human interaction.

AI and banking

The financial sector offers a particularly sharp lens through which to examine these risks. There is growing interest in applying AI in banking, not merely as chatbots for routine queries, but in shaping lending criteria, credit scoring and portfolio decisions. Industry leaders, however, have urged caution.

OpenAI CEO Sam Altman, addressing financial regulators last month, warned of an “impending fraud crisis” that “terrifies” him. He is not alone; in a survey earlier this year, 80% of bank cybersecurity heads admitted they fear AI is arming fraudsters faster than banks can respond. 

The Reserve Bank of India recently unveiled its “Framework for Responsible and Ethical Enablement of Artificial Intelligence” (FREE-AI). Unfortunately, the framework reflects a top-down approach, lacking meaningful consultation on effective safeguards. 

The RBI’s ‘Seven Sutras’ for AI adoption – Trust, People-First, Fairness, Accountability, Understandability, Safety, and Innovation – are commendable in principle. Yet, unless these principles are translated into enforceable rights for customers and binding protections for workers, AI deployment risks amplifying vulnerabilities rather than reducing them. 

In a recent statement, the All India Bank Officers’ Confederation cautioned that, without a guaranteed right to human review, particularly in decisions affecting MSMEs, retail borrowers, and farmers, AI could deepen existing social inequalities.

In a country like India, where multiple layers of inequality and social vulnerability already shape access to finance, AI-driven decisions could exacerbate exclusion if left unchecked. The risks are not just technical, such as data breaches from sharing sensitive loan portfolios with AI systems, but also ethical, where corrupted datasets or hidden algorithmic biases produce discriminatory outcomes. 

Without strong guardrails, banks could hide behind the “invisible hand” of AI to justify lending practices that ignore fairness or social responsibility, shifting the blame to machines while deepening structural inequities.

Dolly taught us that life is not to be replicated lightly. The cloning debate forced us to confront deep questions about ethics, identity and human dignity. We must now summon that same seriousness with AI. 

If we fail to heed the lessons from Dolly, we may find ourselves not only creating digital clones but also surrendering our own uniqueness in the process. Let us not wait for the irreversible moment before we ask these critical questions. The future should not just be intelligent; it must also be wise.

Rupam Roy is the General Secretary of the All India Bank Officers’ Confederation.

This article was published on August 29, 2025, at 3:53 pm.
