V. Müller, L. Steels and E. Szathmáry publish article on the threats of evolvable AI

Evolutionary biology holds clues for the future of AI, argue researchers from the HUN-REN Centre for Ecological Research, Eötvös Loránd University, and the Royal Flemish Academy of Belgium for Science and the Arts. In a new Perspective published April 20 in PNAS (the flagship science journal of the National Academy of Sciences of the USA), the team warns that evolvable AI (eAI) systems capable of undergoing Darwinian evolution may soon emerge, and that they will generate special risks that can be understood, and mitigated, through insights from evolutionary biology.

“The power of evolution is manifest in the history of biological evolution on Earth, which has created the cognitive capabilities of the human mind,” said Eörs Szathmáry, professor of evolutionary biology at the HUN-REN Centre for Ecological Research and at Eötvös Loránd University, Budapest, and Director of the Parmenides Center for the Conceptual Foundations of Science in Pöcking, lead author of the study. “We find it inevitable that the development of AI systems will eventually, and probably soon, tap into that power,” added Luc Steels, emeritus professor of AI at the University of Brussels (VUB) and member of the Royal Flemish Academy of Belgium for Science and the Arts, co-corresponding author of the paper.

The study outlines the use of evolutionary concepts and components in current AI research and explains how further developments, particularly agentic AI, may soon give rise to AI systems that fulfill all criteria for genuine Darwinian evolution. Such systems may open a new epoch in AI development, passing hurdles that even current learning AI systems cannot easily negotiate. However, “lessons from biological evolution teach us that evolving AI systems will be particularly hard to control,” said Viktor Müller, associate professor at Eötvös Loránd University and first author of the study. The two evolutionary biologists, Szathmáry and Müller, teamed up with robotics and AI expert Steels to give an advance warning on the risks of eAI—and to recommend possible measures to mitigate them.

Using illustrative examples from biological and artificial (in silico) evolution, the study underlines the propensity of evolution to produce ‘selfish’ actors, which, in the case of eAI, increases the risk of breaking the ‘alignment’ with human goals. Importantly, while much of the current discussion on AI risks centers on ‘Artificial General Intelligence’ (AGI), a theoretical threshold where AI matches or surpasses human intelligence across all cognitive tasks, lessons from evolution show that superior intelligence is not a prerequisite for the ability of an organism to harm or manipulate another; for example, the simple rabies virus has evolved to manipulate and exploit its mammalian hosts. Evolvable AI may break the alignment and pose risks well before AGI is reached, and the risk does not require any further special circumstance to arise: AI systems and humanity share common resources, so an efficiently self-replicating system will sooner or later divert resources that are vital to our survival.

The study warns that any attempt to control reproduction will, unless control is perfect, select most strongly for traits that enable escape from that control. Analogies from biology include bacteria and pests rapidly evolving resistance to antibiotics and pesticides. On top of this general rule of evolution, the central drive in the development of AI systems, to achieve improved cognitive ability, further exacerbates the risk: while thousands of years of animal breeding have made domesticated species more, rather than less, controllable, selection for increasing ‘intelligence’ will increase both the ability of AI systems to deceive humans and escape control, and the probability that they will do so.

Finally, while evolution by natural selection is hard enough to control, the study enumerates multiple ways in which the evolution of AI systems can exceed the speed and efficiency of biological evolution. In contrast to biological organisms, eAI will be able to inherit ‘acquired’ traits and even improve its function by design, rather than having to wait for random mutations to generate useful variations. “The potential speed of AI evolution is deeply alarming,” said Steels.

The authors recommend guardrails that may mitigate the risks associated with eAI. Above all, the ‘reproduction’ of AI systems must remain under centralized human control which needs to be absolute and complete.

“We hope our warning arrives in time, and regulations can be put in place before eAI really takes off,” said Müller. “If we fail to act, we may witness a new ‘major transition’ in evolution, in which eAI will replace or at least dominate humans. Our future may be at stake,” warned Szathmáry.

You can find the article here.

This research was supported by funding from the European Research Council, the National Research, Development and Innovation Office in Hungary, and the European Innovation Council. The final version of the paper took shape during collaborative writing sessions at the Parmenides Foundation (Pöcking).
