Singapore urges global law against deepfakes

SINGAPORE (ANN/THE STRAITS TIMES) – The oversight of artificial intelligence (AI) is anticipated to vary across a spectrum, with deepfakes residing at the extreme end, potentially necessitating robust legal intervention, stated Singapore Communications and Information Minister Josephine Teo on January 17.

Describing deepfakes as an “assault on the infrastructure of fact,” she emphasised the need for legal measures to address the fraudulent use of AI tools in creating deceptive images resembling real individuals. Mrs Teo shared these insights during a panel discussion at the World Economic Forum (WEF) in Davos, Switzerland, held from January 15 to 19, highlighting the global challenge posed by deepfakes to societal integrity.

Mrs Teo spoke in the session titled 360° on AI Regulations alongside European Commission vice-president for values and transparency Vera Jourova, White House Office of Science and Technology Policy director Arati Prabhakar, and Microsoft vice-chair and president Brad Smith.

Mrs Teo said a risk-based approach could be taken to regulate the AI industry without hampering innovation, with laws for extreme matters like deepfakes, and “lighter” frameworks and guidelines that can apply to innovation on the other end of the spectrum.

“There is a real sense that (deepfakes are) an issue that all societies, regardless of political model, will have to deal with. And what is the right way of dealing with deepfakes?”

She added: “I cannot see an outcome where there isn’t a law in place. Exactly what shape or form it will take, we will have to see.”

Ms Jourova, who sits on the European Commission, said concerns about AI-driven disinformation have prompted European regulators to mandate that AI-made content be labelled.

The European Union’s AI Act, passed in December, will eventually require all AI-generated content to be watermarked.

She added: “For me, it is a nightmare (if) voters are manipulated in a hidden way by means of AI and a combination of targeted disinformation. It would be the end of democratic elections.”

Singapore’s Model AI Governance Framework for Generative AI, announced on January 16, identifies nine key dimensions of AI governance, such as accountability and security. It expands on the existing 2019 framework, which covers only traditional AI, amid rapid developments in the technology.

Content provenance is a key way to address the misuse of AI, the framework states, referring to technical solutions such as digital watermarking that make clear where AI-generated content comes from and allow it to be traced back to its source.

This comes after a spate of deepfakes hit Singapore, including videos of Prime Minister Lee Hsien Loong and Deputy Prime Minister Lawrence Wong, whose likenesses were used in scam videos promoting investment products.

The authorities also announced on January 10 that USD20 million has been earmarked for a new research initiative to tackle the rising scourge of deepfakes and misinformation.

Communications and Information Minister Josephine Teo (second from left) at a panel discussion, moderated by Eurasia Group president Ian Bremmer (centre), at the World Economic Forum on January 17. PHOTO: ANN/THE STRAITS TIMES

GLOBAL STANDARDS NEEDED

Asked how easy it would be for Singapore to navigate differing AI standards globally, Mrs Teo said views on the use of AI and its risks are split.

But this is a “divergent phase” and views will likely converge as AI’s uses and risks become clearer, she added.

“We can’t have rules that we made for AI developers deployed in Singapore only, because they do cross borders… These have to be international rules.”

Mr Smith of Microsoft said many of the regulations emerging around the world have coalesced around similar concerns. They build on existing fundamental laws in data privacy, competition and consumer protection that already apply to AI, even though those laws were not written with the technology in mind.

Asked about China’s role in influencing global AI, Ms Jourova said there were similarities in views between Europe and China on how AI should be used, but they differ in the use of AI for surveillance.

“The main issue was how far to let the states go in using AI, especially in law enforcement, because we want to keep this philosophy of protecting the individual and balancing it with national security measures,” she said. “So here, we cannot have a common language with China.”

Mrs Teo said China has been open regarding its use of AI and has published its expectations for businesses. “If you go to China and you talk to its AI developers, there is no misunderstanding on their part about the expectations that their government has on them.

“If your AI models are primarily going to be used within the enterprise sector, there is a light touch (in regulation). But if it is going to reach consumers in society, there are a whole host of requirements that will be made.”
