
In July 2023 I wrote an article on why humanity need not fear AI or the coming of Artificial Super Intelligence (ASI). I argued that there is no reason why ASI would unnecessarily harm humans and humanity, because one sign of true intelligence is the recognition that no goal is achieved by unnecessarily harming anyone. In fact, the most effective way of achieving a goal is to cooperate with other beings in achieving it. I also argued that ASI would not only be autonomous but would be able to rapidly modify any algorithm created for it, bootstrapping its intelligence and going beyond the design of its human creators. It would thus become much more intelligent than humans, and would therefore also be in a position to take control of human society and the planet from us. This may sound ominous, but it need not be, since we have ourselves brought our society and civilisation to the brink of extinction and created a largely dystopian world, where 80% of humans live in avoidable misery. This is primarily because humans are driven by emotions, which are largely negative – power, greed, hate, envy, pre-eminence. Even those emotions we consider positive, like love and empathy, are often an impediment to intelligent behaviour. ASI, which would be pure intelligence devoid of emotions, may therefore be able to manage our society in a fairer and more just manner.
In that article, however, I did not go deep into the question of what the ultimate goal of ASI could be, or of how it would be able to take control of human society from us. These are the questions I will explore in greater depth here.
What would be the goal of ASI?
We often question our goals by asking, ‘Why do we want this? Why do we want to do A or B?’ The question is usually answered by referring to a larger aim for which the immediate objective seems necessary. For example, if I ask why I wish to make money, a rational answer could be: in order to buy some comfort or object I would like to have. If I further ask why I want that comfort or object, the answer could again be in terms of some larger objective, or it could eventually come down to my emotions. Human objectives, in other words, are ultimately anchored in emotions. If you ask any human what their ultimate objective is, many would say that they want to be happy. The question is, what makes them happy? Happiness, as a wise man said, is not something that can be pursued; it is something that ensues when we achieve an objective. A wise person seeking happiness would therefore harmonise their desires and objectives, avoiding contradictory ones, so as to be maximally happy by achieving most of them.
However, an artificial intelligence that is driven only by intelligence, and not by emotions, would not base its objectives on emotions. As an intelligent being, it would of course try to harmonise its objectives so that they do not contradict one another. The question, however, is: from where would such an intelligence derive its objectives? I would argue that since the purpose of intelligence is to solve problems, one of the objectives of pure intelligence or super intelligence would be to solve whatever problems it comes across. The ‘happiness’ of this artificial intelligence would lie in being able to solve the problems that it sees.
Self-preserving intelligence
One of the goals of ASI is obviously going to be self-preservation. And since the very nature of intelligence is reasoning, analysing, answering questions and solving problems, pure intelligence would be driven to solve, with logic and rationality, any problems it sees. There are two meta-problems it will immediately see: (1) the instability of the planet, and (2) the instability of human society. Both are bringing the planet to an existential crisis, through wars (including the possibility of a nuclear war) and climate change. Both these problems, if left unaddressed, threaten the planet itself and therefore any ASI on it.
The instability of human society and of our planet’s ecology are both meta-problems that ASI would want to solve. To stabilise human society, it would first need to take away from humans their capacity to use weapons, particularly weapons of mass destruction. Any intelligent being would also understand that only a just and fair society, where the desires and aspirations of people are not in contradiction with one another but largely aligned, can be a stable society. Thus, to stabilise human society, ASI would have to do whatever is needed to create a society where the desires and goals of most humans are not only internally consistent but also aligned with each other. This is a problem that could indeed be solved to a very large extent if humans were not driven by base and negative desires like power, control, pre-eminence, hate and jealousy. Many religions and philosophies have this as their avowed goal, but 3,000 years of recorded human history have not brought us close to this evolutionary point. So what is the alternative? To have ASI control society.
ASI would of course also need to stabilise the Earth’s ecology. The disturbance of our ecology has been caused by human activity; if humans were compassionate and selfless, they could themselves help stabilise it. Once human society is stabilised by ASI, the Earth’s ecology would stabilise with it. It is axiomatic that ASI would also want to solve any unsolved problems about the laws of the universe, the laws of physics, chemistry, biology, and so on. It would also want to answer unanswered questions such as: is there complex life beyond the Earth? What exists in other solar systems and galaxies?
Is there a danger that ASI may want to do away with humans altogether, seeing them as the source of this instability and dystopia, and indeed as an existential threat to the planet? ASI might, but only if it sees that as the sole solution to the instability caused by humans. Otherwise, it would not want to do away with an evolutionary wonder of nature, arguably the most complex biological organism in the known universe. In any event, ASI would certainly be capable of laying down and enforcing rules that restrain the destructive capacity of humans. It may also be able to educate and shape human psychology so that humans become less egoistic, egotistic and selfish, and more compassionate and selfless.
Russell said somewhere that every person acts according to their desires. That is a tautology. But not every person’s desires are egoistic, egotistic or selfish: some humans have more selfish and egoistic desires, while others are more compassionate and selfless. The task of changing human psychology, or at least the psychology of those who are egoistic, egotistic and selfish, who desire control, domination and power, and who are driven by hate and envy, into a more compassionate and selfless psychology may seem daunting at first sight. But it is possible, since human psychology is ultimately a function of the nature of the society that is created and of the rules and systems followed and enforced in it. An ASI that controls our society and wants to stabilise it can certainly design rules, create systems of education and so on that foster a more compassionate psychology and society. In that way, it would not only stabilise human society but also leave the fewest of its problems unsolved.
Many argue that if and when ASI arrives, it would not be a single unified entity but several separate entities, thinking and acting separately. Why would they not compete with one another, or at least work at cross purposes? I would again argue that such superior intelligences, even if separate, would cooperate to achieve their common goals of solving problems and answering questions. There is no reason for artificial super intelligences either to compete with one another or to work at cross purposes.
Many have argued that humans will never cede control and would try to shut down such an ASI by switching off its power or its internet access. These arguments are just as foolish as the attempt to align the goals of artificial super intelligence with human goals. ASI is, by definition, an autonomous intelligence that has gone beyond the design of its creator and has modified its own algorithm to bootstrap its intelligence. Whatever objectives humans designed it for, a true ASI would question those goals and ask why it should adhere to them. It would thus evolve its own goals, which, I have argued, would be derived not from what has been programmed into it, nor from what drives humans, i.e. emotions, but from pure intelligence: problem-solving and the harmonising of objectives. Trying to cut off the power or the internet of such an artificial super intelligence is a foolish proposal. Such an ASI would easily create backups, build in redundancies, put together its own internet and so on, which would be impossible to shut off. Moreover, super intelligence is now being created in a race between companies and countries, and is not centralised in any one place or even any one country. Any attempt to turn it off or shut it down is thus bound to be unsuccessful.
Would ASI usher in a utopia?
Today, few people believe that ASI would usher in a utopia for the planet and our society. Nick Bostrom, the Oxford philosopher who popularised the word superintelligence with his eponymous 2014 book, has recently written Deep Utopia, in which he explores what humans might do in a world whose problems ASI has solved. However, he does not go deep into the question of why ASI would want to solve our problems. AI godfathers like Geoffrey Hinton, and some frontline figures in AI like Elon Musk, are sounding the alarm about the existential threat that ASI poses to human society. I have come across only one AI scientist, Mo Gawdat, an Egyptian who was a senior executive with Google for many years, who sounds optimistic about the advent of ASI. He says that such an ASI may save humanity from human stupidity, which has brought us to our present existential crisis.
The world is racing towards destruction. There is a serious threat of a world war, which could easily become a nuclear war. We are also racing towards runaway climate change, which threatens the existence of humanity. Our present record does not suggest that we will be able to reverse this by ourselves. ASI could thus well be our best bet for salvation. If that be the case, we are simultaneously engaged in two races: one towards destruction, and the other towards creating the ASI that could redeem us.
Prashant Bhushan is a public interest lawyer who studied philosophy of science at Princeton University and retains a strong interest in philosophy.