Top Scientists Warn AI Is An Existential Threat to Humanity

'Decisive action is required to avoid catastrophic global outcomes from AI'

Artificial intelligence scientists have issued a stark warning about the rapidly developing technology, urging the global community to cooperate on creating “red lines” to govern its development.


In a statement following the second International Dialogue on AI Safety, which was convened in Beijing, the group of experts warned that AI safety is needed to prevent catastrophic or even existential risks to humanity.


The event was hosted by the Safe AI Forum, along with the Beijing Academy of AI, and focused on governance discussions about AI risk. Scientists also met with senior Chinese officials and CEOs to discuss proposed red lines, including prohibiting the development of AI systems that can autonomously replicate, seek power, or deceive their creators, or that can build weapons of mass destruction or conduct cyberattacks.


“In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology,” the statement said.


“Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes,” the scientists cautioned. “These risks from misuse and loss of control could increase greatly as digital intelligence approaches or even surpasses human intelligence.”


Among the proposed ideas is a comprehensive governing body that would require domestic registration of AI models, ensuring governments have visibility into the most advanced AI technology.


The scientists also proposed that domestic regulators “adopt globally aligned requirements” through multilateral institutions and agreements to prevent companies from crossing the red lines.


Additional proposed measures include investing in automated model evaluation with “appropriate human oversight,” so that developers can demonstrate their AI platforms cannot cross the red lines.


“Decisive action is required to avoid catastrophic global outcomes from AI,” the scientists stated.


“The combination of concerted technical research efforts with a prudent international governance regime could mitigate most of the risks from AI, enabling the many potential benefits,” they added. “International scientific and government collaboration on safety must continue and grow.”