By Mark DaCosta- This series of three articles is about the dangers that the emergence of a superintelligent artificial intelligence (AI) machine would pose to humanity and other life forms. Part I explained the basics of such a device. Part II described the wide range of specific dangers such a machine would pose to life on earth. Finally, this article, Part III of the series, explores how such a technology is likely to evolve from our present-day computing devices; in other words, how humans are likely to get from where we are now technologically to the point of having a superintelligent AI machine in our midst. The information and opinions expressed in this series are based on academic papers, articles, and presentations by recognised researchers.
Swedish philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” Bostrom is the founding director of the Future of Humanity Institute at Oxford University. He is the acknowledged leading authority on the subject of superintelligent AI machines.
Researchers do not agree on how far humans are from creating a superintelligent machine. Some experts believe it could happen suddenly, and soon; others hold different opinions. In any case, it is useful, and certainly extremely interesting, to explore the potential steps that could lead to the emergence of superintelligent AI machines. This article examines a hypothetical roadmap towards the development of superintelligent AI. The roadmap is based on information and ideas expressed by experts in various disciplines, as well as the author's view, drawn from those many expert opinions, of the most likely pathway towards the emergence of a superintelligent machine.
The first step towards superintelligence is to develop a foundation of Artificial General Intelligence (AGI). AGI refers to AI systems that possess human-level cognitive abilities across various domains; that is, AI models that can understand, learn, and reason like humans across a wide range of problem-solving tasks in multiple intellectual disciplines.
Once AGI is achieved, the next step is to enhance its cognitive abilities. This involves expanding memory, accelerating processing speed, improving pattern recognition, and sharpening problem-solving skills. Reinforcement learning, in which an AI learns from the outcomes of its past actions, would be essential.
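For readers curious about what reinforcement learning looks like in practice, the toy example below is an illustrative sketch, not something drawn from this series or from any particular AI system. It uses Q-learning, a basic reinforcement-learning algorithm, to teach a simple agent to walk to the end of a five-square corridor by rewarding it only when it reaches the goal; every name and number in it is the author's own choice for illustration.

```python
# Toy Q-learning sketch: an agent learns, purely from rewards earned by its
# past actions, that moving right in a 5-square corridor leads to the goal.
import random

random.seed(0)

N_STATES = 5                 # squares 0..4; square 4 is the goal
ACTIONS = [+1, -1]           # move right or left
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core of reinforcement learning: update the value estimate for the
        # action just taken, based on the reward and the best future value.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# The learned policy: the preferred action in each non-goal square.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

After 200 practice runs the agent prefers "move right" (+1) in every square, having never been told the rule directly: it inferred it from rewards alone, which is the essence of the learning-from-experience idea mentioned above.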
Superintelligent AI machines would necessarily possess the ability to learn continuously and improve themselves iteratively. This step involves developing algorithms and mechanisms that allow AI systems to acquire new knowledge, adapt to changing environments, and refine their own learning architectures, for example by rewriting their own code to improve themselves.
Superintelligent AI machines would require access to vast amounts of data – such as is available on the internet – to make informed decisions and predictions. Access to such data would facilitate the emergence of such systems.
As AI systems become more intelligent, human operators and regulators would be naturally incentivised to embed ethical frameworks and to ensure that the machines' values align with human goals. Experts agree that superintelligent AI should be programmed with a set of core values and principles that align with human values. This step involves addressing concerns related to bias, fairness, privacy, and accountability to ensure responsible development.
To accelerate the emergence of superintelligent AI machines, researchers are likely to collaborate and share information.
Thinkers believe that, at this point, AI developers would confront the dangers that come with the technology. Researchers may then prioritise safety measures and risk-mitigation strategies to prevent unintended consequences. This includes designing fail-safe mechanisms and, of course, shutdown procedures, establishing AI governance frameworks, and conducting rigorous testing and validation.
The final step towards superintelligent AI may involve establishing a symbiotic relationship between humans and AI systems, leveraging the strengths of AI to augment human capabilities. This step may include direct interfaces between superintelligent machines and human brains. It should be noted that billionaire Elon Musk owns a company, Neuralink, which has stated that it may be ready to test chip implants in human brains.
As has been stated, experts differ on how far away we are from having superintelligent AI machines. Even so, many authorities on the matter agree that once superintelligent AI emerges, what happens next cannot be predicted or extrapolated from currently available data.