PARIS (AFP) – Artificial intelligence (AI) systems must come with “cast-iron guarantees” against mass harm to humans, especially as the likelihood of their integration into weapons grows, a leading expert has told AFP.
Stuart Russell, Berkeley computer science professor and co-director of the International Association for Safe and Ethical AI (IASEAI), will be in Paris Thursday for scientific talks in the run-up to a global summit on AI technology on February 10-11.
AFP: US tech giant Google appears to have walked back its commitment to avoid working on AI-powered weapons and surveillance systems. What was your reaction?
Stuart Russell: I think it’s unfortunate. The reason they instituted their earlier policy against the use of AI in weapons was precisely because their own employees revolted… The Google employees were worried that their work would be used in weapons, not just for reconnaissance, but for killing people.
Now (Google) says they’re willing to override the views of their employees, and the views of the vast majority of the public, who are also opposed to the use of AI in weapons.
AFP: Why might Google have made this change?
Stuart Russell: The military market for AI is minuscule compared to the consumer market and the business market, so this is not really about the opportunity to make money. This is really about improving their bargaining position with the US government.
It’s not a coincidence that this change in policy comes with a new administration that has removed all the regulations on AI put in place by the Biden administration and is now placing a huge emphasis on the use of AI for military prowess.
AFP: What are the main dangers of using AI in weapons?
Stuart Russell: Small autonomous weapon systems… are the most dangerous because they’re small and cheap: non-state groups, terrorists for example, can buy them by the million and use them to carry out enormous massacres.
(Such weapons) could be used in much more dangerous and harmful ways. For example, “kill anyone who fits the following description”. And that description could be by age, by gender, by ethnic group, by religious affiliation, or even a particular individual.
AFP: Will AI be increasingly integrated into future weapons systems?
Stuart Russell: There were, at last count, about 75 countries that had either developed or were using remotely piloted weapons. And I think most of those are in the process of thinking about how to convert them to fully autonomous weapons.
Ukraine has been an accelerator… that conflict has forced these weapon systems to evolve very quickly. And everyone else is looking at this.
It’s quite possible that the next major conflict after Ukraine will be fought largely with autonomous weapons in a way that is currently unregulated. So we can only imagine the kinds of devastation and horrific impacts on civilians that might occur as a result.
But on the other hand, there are more than 100 countries that have already stated their opposition to autonomous weapons. And I think there’s a good chance that we’ll achieve the necessary majority in the United Nations General Assembly to have a resolution calling for a ban.
AFP: Should AI in general be more tightly regulated?
Stuart Russell: Human extinction could result from AI systems that are much more intelligent than humans, and therefore much more capable of affecting the world than we are.
Governments must require cast-iron guarantees in the form of either statistical evidence or mathematical proof that can be inspected and checked carefully. And anything short of that is just asking for disaster.