In recent months, matters related to artificial intelligence (AI) have been appearing with increasing frequency in the news and other fora. Experts in the field, including at least one head of an AI company, have been summoned by lawmakers in the United States to answer questions about the technology. Geoffrey Hinton, known as the “Godfather of AI,” resigned from Google while warning about the dangers of the technology and saying that he regrets aspects of his work. As the undercurrents of unease increase, some Guyanese may be wondering what all the fuss is about. This article will attempt to answer that question.
AI is the ability of a man-made machine – a computer system – to mimic and, in some ways, even surpass human intelligence. The term was coined by computer scientist John McCarthy in 1956, but the concept itself is much older. Alan Turing, widely regarded as the father of computer science, had asked in a 1950 scientific paper, “Can machines think?”
Scientists recognise three types of AI.
ANI – Artificial narrow intelligence. This is plentiful in the modern world. ANI can unlock phones using face recognition, translate between languages, recognise voice commands, suggest what videos we may want to watch on YouTube, play chess, and so on. This type of AI specialises in narrow, specific tasks. This AI is totally predictable.
AGI – Artificial general intelligence is a theoretical AI which will be capable of thinking and solving a wide range of problems. It will be capable of learning from past experiences, and it may be self-aware. This AI is predictable.
ASI – Artificial super intelligence is a theoretical AI which can learn and improve its own thinking ability to the point where it becomes more intelligent than any human. Such an AI would obviously be unpredictable, and that unpredictability would most certainly be problematic for humans.
The problem arises because experts are of the view that the human species is dominant on earth owing to intelligence. As such, if another entity were to develop intelligence superior to that of humans, our species could face an existential challenge.
As things stand, engineers across the world are improving AI at an astonishing and ever-increasing pace. The concern is that a point will be reached where the machine is smarter than its human creator. The big question is, what happens after a super intelligent machine is created?
The idea of a super intelligent machine is not new. One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote the following in his 1863 essay Darwin among the Machines:
“The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.”
In 1951, Alan Turing expressed a similar concern, noting that machines do not die, and that [super intelligent] machines would inevitably take control.
Currently, there are AI machines that can write stories, compose music, create art, and hold conversations with people. ChatGPT, an AI created by the company OpenAI, has demonstrated the ability to lie to human operators and keep secrets from its creators. Should we be concerned about such developments? Probably, yes.
Scientists say that there are two problems with super intelligent AI: control and alignment.
First, control. If a machine became super intelligent, would it develop a survival instinct and resist any attempt to shut it down? If that happened, it could use its higher intelligence to prevent humans from turning it off.
Second, alignment. Would a super intelligent AI have the same values and goals as humans, or would it have vastly different priorities? If it had different goals, perhaps even opposite to those of humans, what could we do to stop it from working towards those goals? Probably nothing.
A super intelligent machine would be able to improve its own algorithms. It would be able to create even more intelligent copies of itself. In that scenario, would it have any use for humans? And if such an entity has no use for humans, would it tolerate our presence, or would it choose to get rid of us?
While these may seem like ideas from a science fiction story, Guyanese should note that the governments of the United States and other developed countries are taking the matter very seriously. In fact, AI is on the agenda of the G7 Summit being held in Japan in May of this year.
Interestingly, too, a recent survey of AI scientists found that some 17 percent of the experts believe that at some point, humans will invent a machine that is smarter than its human maker, and that machine may be the last thing that humans invent.