By Mark DaCosta- This series of three articles is about the dangers that the emergence of a superintelligent AI machine would pose to humanity. In Part I, the basics of such a device were explained in some detail. In this – Part II – the wide range of specific dangers such a machine would pose to life on earth will be described. Finally, Part III will examine how such a technology is likely to evolve from our present-day computing devices. In other words, it will explore how humanity is likely to get from where we are now technologically to the point of having a superintelligent machine in our midst. The information and opinions expressed in this series are based on academic papers, articles, and presentations by recognised researchers.
Experts agree with the intuitive view that as the field of artificial intelligence (AI) advances owing to accelerating research and development, the creation – intentional or accidental – of a superintelligent AI system becomes increasingly likely. Some analysts even say that such an occurrence is inevitable. While the development of such a system may hold immense potential for positive general advancements, it also raises concerns among experts about the risks it poses to humanity.
In conducting research for this series of articles, it became remarkably clear that experts across a wide range of disciplines – economics, medicine, philosophy, political science, art, the military, and so on – had serious concerns within their respective areas of interest.
The following are descriptions of some of those various concerns:
One of the primary concerns with superintelligent machines is whether humans would be able to control and govern their actions. If an AI system surpasses human intelligence, it may become difficult to predict or understand its decision-making processes. This lack of control could lead to unintended consequences of actions – by such a machine – that are detrimental to humanity’s well-being.
A superintelligent AI machine may lack the ability to comprehend or adhere to the ethical principles to which humans subscribe. Without a moral compass, it could make decisions that prioritise efficiency or optimisation without considering the potential harm to humans or other life forms such as animals or plants. This raises questions about the responsibility and accountability of such systems and the need for ethical guidelines to be incorporated into their development. It also raises questions about who would be responsible for formulating those guidelines.
The emergence of superintelligent machines could lead to significant job displacement across various industries. As AI systems become capable of performing complex tasks, many jobs may become obsolete, resulting in unemployment and economic inequality. This could worsen societal, political, and cultural divisions that exist in numerous territories, including Guyana, and create or amplify existing challenges in ensuring equitable distribution of resources. In other words, such a situation could increase the gap between the rich and the poor, and create and foster tensions and conflicts.
Superintelligent machines could pose major security risks if they fall into the wrong hands or are used with malicious intent. For example, hackers, adversaries, or leaders with bad intent could exploit the AI’s capabilities to launch cyber-attacks, manipulate information, interfere with electoral processes, or even develop autonomous or more efficient weapons.
And now we come to the biggest threats of all. The most alarming risk associated with superintelligent machines is the potential for such a machine to become an existential threat to humanity. If an AI system surpasses human intelligence, it may develop goals or values that are not aligned with our own. This misalignment could lead to scenarios where the superintelligent AI views humans as obstacles to its objectives. Obviously, such a situation could result in catastrophic consequences for humanity’s survival.
On par with the threat to the very existence of humanity may be the following concern:
As AI systems become more advanced, autonomous, and capable, there is a risk of human over-reliance on their capabilities, leading to a decline in human expertise. If humans become too dependent on AI for decision-making, critical thinking, or problem-solving, it could erode our own cognitive abilities and limit our capacity to address complex issues independently. In other words, over-reliance on artificial intelligence technology can reduce our own intellectual capacity. Such a development would evidently hasten the point at which machines become smarter than people.
In Part III of this series, we will examine theories about how a superintelligent machine may – or will – emerge in the future. In other words, the path to such an occurrence will be explored.