
Thinking Intelligently About Machine Intelligence

Are they being used for the greater prosperity of the many, or are they simply a means for the very few to become free from the demands of human labour, and of humanity itself?
Omair Ahmad
Apr 18 2025
Representative image. Rare Metals 2 by Hanna Barakat & Archival Images of AI + AIxDESIGN. Photo: betterimagesofai.org

At the centre of much of the hype around the current wave of machine learning and artificial intelligence products is the idea of artificial general intelligence, or AGI, the point at which the computational power of machines will be enough to duplicate the work of humans. This is both the hope and the fear: that we can be gods who create new sentient life in our image, or, because such machine intelligence would be faster than our limited brains and unencumbered by human foibles, that what we would be creating is new gods, far beyond us in capability.

One of the most beautiful summations of the latter argument is this: 

What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race. Inferior in power, inferior in that moral quality of self-control, we shall look up to them as the acme of all that the best and wisest man can ever dare to aim at.

What will be surprising for most people is that this was written in 1863, as a letter to the Press, a newspaper in New Zealand. The letter was by Samuel Butler, and was titled Darwin Among the Machines. I ran across it in Darwin Among the Machines: The Evolution of Global Intelligence, a lovely and dense little book by George Dyson on the history of computation and machine intelligence, published in 1997.

One takeaway from this is that ideas of machine intelligence and domination are far older than we give them credit for. Despite the almost deafening beat of statements like "We are approaching the Singularity" over the last few decades, dates come and go without the rise of AGI. And, as with most doomsday cults – or the dream of nuclear fusion reactors, which have been 'around the corner' since at least the 1980s – the false prophets simply find another excuse and another date in the future.

A second takeaway is that much of what we hear from people like Elon Musk and other tech moguls seeking machine intelligence is crude and unsophisticated compared to thinking that is almost two centuries older. 

In fact, it is the crudest forms of thinking about machines that have dominated the popular discourse, much of it about the fear that we will be faced with a machine takeover once actual AGI is achieved. 

You can find this in popular art such as the movies 2001: A Space Odyssey, Blade Runner, The Terminator and its multiple sequels, or The Matrix. The science fiction books are too numerous to mention, although it is worth pointing out that Dune, the 1965 classic by Frank Herbert, has a reference to the Butlerian Jihad, in which humanity rises up against machine rule and destroys all 'thinking machines', declaring them illegal. The Butler in the name is a possible tribute to Samuel Butler's excellent letter to the Press.

That said, Dyson's book is worth reading for multiple reasons. First and foremost, it is an excellent history of machine thinking, and traces the story back to Thomas Hobbes, whose influential political tract, Leviathan, written in 1651, contains some of the earliest thinking on computation.

Secondly, it includes a host of characters such as Nils Barricelli, who used the first modern computer, built on the principles of John von Neumann, to explore the idea of living machines. Thirdly, Dyson asks a fundamental question that arises from Butler's essay: why would machine thinking, or even machine life, be intelligible in human terms at all?

Much of the current talk of AGI, as well as the sci-fi books and films mentioned above, is about how machine intelligence would act vis-à-vis humanity. Almost all of it is expressed in terms of oppression: either we dominate or they do. This is true of Butler's essay as well, although he does make a point of mentioning that, in their form at the time (1863), humanity and machines worked to each other's benefit. Dyson takes that idea of symbiosis (something that Barricelli also wrote about) further.

In fact, if we look at the world today from a non-human standpoint, we would be forced to concede that, since most major resources, including finance, are devoted to making better and more powerful machines, the 'highest lifeform' on earth would be machines rather than humanity. (Both The Hitchhiker's Guide to the Galaxy, in which Ford Prefect chooses his name after a popular British car, and the Transformers franchise – comics, animation, toys, and movies – are built on this external view.)

Dyson's book forces us to ask why, even if machine sentience developed, it would change anything. After all, the machines are seemingly already in power, receiving everything that is their due and more. By 2022, the top 20 AI companies were already emitting more carbon than 137 individual countries. We are literally making the planet uninhabitable for most life-forms, driving one of the major extinction events in the planet's history, in order to service machines. In fact, if machines had sentience and cared, we might be tempted to speculate that they would actually reverse, or limit, how much humans are spending on destroying life on the planet.

But here is the other issue that Butler's essay and Dyson's book ask us to think about. Butler puts it like this: "No evil passions, no jealousy, no avarice, no impure desires will disturb the serene might of those glorious creatures. Sin, shame, and sorrow will have no place among them. Their minds will be in a state of perpetual calm, the contentment of a spirit that knows no wants, is disturbed by no regrets. Ambition will never torture them. Ingratitude will never cause them the uneasiness of a moment. The guilty conscience, the hope deferred, the pains of exile, the insolence of office, and the spurns that patient merit of the unworthy takes—these will be entirely unknown to them."

Dyson is more elliptical, and asks whether we would understand any life that is not carbon-based like ours. For example, we understand that animals fear pain because we too are animals. We can see the response to stimuli, and empathise. To a degree, we can observe this in the plant kingdom as well, although our sympathies are less engaged.

Pain signals, or the signals that make a plant retreat from dangerous terrain, are carried by electrical impulses, but electrical impulses in a machine body are not the same thing. It is our physical body that transmutes the electrical impulse into pain. It is true that a computer in battery saver mode slows down, but this is not the same thing as a starving woman, or an underfed cat, or even a malnourished plant. To speak of a 'dying' battery, or a computer 'waking up', is – at root – nonsense. These feelings depend on the physical bodies we inhabit. How much stranger, how much more alien, would be concepts of slavery and autonomy? Or justice? Our sci-fi imagination, which peoples its stories with robots holding such ideas, is, at best, solipsistic. (Blade Runner, the 1982 film based on Philip K. Dick's 1968 novel, Do Androids Dream of Electric Sheep?, is an exception, since its synthetic beings are biological, and thus like us.)

If AGI is less close than we think, and less understandable than we suppose, the question of domination and autonomy is still relevant, and this is where it would have been useful for Dyson to discuss three other thinkers: Henry David Thoreau, Karl Marx, and 'Ned Ludd'. Both Thoreau and Marx were writing in the 19th century, struggling with how materialism was remaking life in the United States and Europe, respectively.

One of Thoreau's pithy quotes is, "The price of anything is the amount of life you exchange for it." Marx developed a theory of "alienation", part of which is that workers, if constrained to create only at the will of their financial masters, are alienated from the goods that they produce. They no longer make things of their own choosing, developing and deploying their innate creative skills, but are reduced to mindless drudges. The most interesting figure, though, is the fictional Ned Ludd. There is no substantive record of the man, but he is supposed to have been a weaver who destroyed a machine in England in the early 19th century. The Luddite movement is named after him, and consisted of groups of artisans destroying machines as they contested how little they were being paid by factory owners. The name is now a pejorative, used to dismiss people who are anti-technology, but the movement was more complicated than that. The proliferation of machines allowed factory owners to replace highly trained artisans with low-skilled labour, and to cut pay. The Luddites attacked the machines as part of a negotiating strategy to keep their livelihoods and maintain living wages.

The reason these three 19th-century thinkers matter is the stated aims of many tech moguls. When somebody like Bill Gates, an early booster of AI, says that AI systems will replace teachers and doctors within ten years, we do not have to believe that this will come true. What we do have to believe is the stated ambition: that tech moguls – like the business barons of the early 19th century – wish to replace highly skilled human labour with machines. And much like those 19th-century barons, the profits from machine learning are not likely to lead to higher wages. Just as an example, Meta and other AI companies used the LibGen database to train their AI models. The authors of the books and papers in that database (at least one of my novels and some of my essays are there) were not compensated. Similarly, when Tesla launches its robotaxi service – scheduled for June this year – it will likely have been trained on data captured from people driving Tesla cars, data that the owners of those cars sign away for free when they choose to buy one.

When talking about machine intelligence, it is important to try to understand the fundamental question of whether we are using machines intelligently. Are they being used for the greater prosperity of the many, or are they simply a means for the very few to become free from the demands of human labour, and of humanity itself?

As machines have become a greater part of our lives over the last few centuries, the question has acquired greater significance, but it is often drowned out by the hype and fearmongering about hyper-intelligent robots. What is lost in this discussion is the fact that the word robot itself means 'serf labour', and was used by the Czech writer Karel Čapek in his 1920 play, R.U.R. (Rossum's Universal Robots). As the race for AI heats up, a robot future seems inevitable, both for humans and machines, but it is unlikely that the machines will complain.

Omair Ahmad is an author. His last novel, Jimmy the Terrorist, was shortlisted for the Man Asian Literary Prize, and won the Crossword Award.

