SEOUL, South Korea (AP) — Leading artificial intelligence companies made a fresh pledge at a mini-summit Tuesday to develop AI safely, while world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.
Google, Meta and OpenAI were among the companies that made voluntary safety commitments at the AI Seoul Summit, including pulling the plug on their cutting-edge systems if they can’t rein in the most extreme risks.
The two-day meeting is a follow-up to November’s AI Safety Summit at Bletchley Park in the United Kingdom, and comes amid a flurry of efforts by governments and global bodies to design guardrails for the technology amid fears about the potential risk it poses both to everyday life and to humanity.
Leaders from 10 countries and the European Union will “forge a common understanding of AI safety and align their work on AI research,” the British government, which co-hosted the event, said in a statement. The network of safety institutes will include those already set up by the UK, US, Japan and Singapore since the Bletchley meeting, it said.
UN Secretary-General Antonio Guterres told the opening session that seven months after the Bletchley meeting, “We are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons.”
The UN chief said in a video address that there needs to be universal guardrails and regular dialogue on AI. “We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few people — or worse, by algorithms beyond human understanding,” he said.
The 16 AI companies that signed up for the safety commitments also include Amazon, Microsoft, Samsung, IBM, xAI, France’s Mistral AI, China’s Zhipu.ai, and G42 of the United Arab Emirates. They vowed to ensure the safety of their most advanced AI models with promises of accountable governance and public transparency.
It’s not the first time that AI companies have made lofty-sounding but non-binding safety commitments. Amazon, Google, Meta and Microsoft were among a group that signed up last year to voluntary safeguards brokered by the White House to ensure their products are safe before releasing them.
The Seoul meeting comes as some of those companies roll out the latest versions of their AI models.
The safety pledge includes publishing frameworks setting out how the companies will measure the risks of their models. In extreme cases where risks are severe and “intolerable,” AI companies will have to hit the kill switch and stop developing or deploying their models and systems if they can’t mitigate the risks.
Since the UK meeting last year, the AI industry has “increasingly focused on the most pressing concerns, including mis- and disinformation, data security, bias and keeping humans in the loop,” said Aidan Gomez, CEO of Cohere, one of the AI companies that signed the pact. “It is essential that we continue to consider all possible risks, while prioritising our efforts on those most likely to create problems if not properly addressed.”
Governments around the world have been scrambling to formulate regulations for AI even as the technology makes rapid advances and is poised to transform many aspects of daily life, from education and the workplace to copyrights and privacy. There are concerns that advances in AI could eliminate jobs, spread disinformation or be used to create new bioweapons.
This week’s meeting is just one of a slew of efforts on AI governance. The UN General Assembly has approved its first resolution on the safe use of AI systems, while the US and China recently held their first high-level talks on AI and the European Union’s world-first AI Act is set to take effect later this year.