Wednesday, May 31, 2023
Brunei Town

Regulators dust off rule books to tackle generative AI like ChatGPT

AP – As the race to develop more powerful artificial intelligence services like ChatGPT accelerates, some regulators are relying on old laws to control a technology that could upend the way societies and businesses operate.

The European Union (EU) is at the forefront of drafting new AI rules that could set the global benchmark to address privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI’s ChatGPT.

But it will take several years for the legislation to be enforced.

“In the absence of regulations, the only thing governments can do is to apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.

“If it’s about protecting personal data, they apply data protection laws; if it’s a threat to the safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”

In April, Europe’s national privacy watchdogs set up a task force to address issues with ChatGPT after Italian regulator Garante had the service taken offline, accusing OpenAI of violating the EU’s GDPR, a wide-ranging privacy regime enacted in 2018.

The ChatGPT app on an iPhone. PHOTO: AP

ChatGPT was reinstated after the United States (US) company agreed to install age verification features and let European users block their information from being used to train the AI model.

The agency will begin examining other generative AI tools more broadly, a source close to Garante said. Data protection authorities in France and Spain also launched probes in April into OpenAI’s compliance with privacy laws.


Generative AI models have become well known for making mistakes, or “hallucinations”, spewing misinformation with uncanny certainty.

Such errors could have serious consequences. If a bank or government department used AI to speed up decision-making, individuals could be unfairly rejected for loans or benefit payments. Big tech companies including Alphabet’s Google and Microsoft Corp have stopped offering AI products deemed ethically dicey, such as financial products.

Regulators aim to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, according to six regulators and experts in the US and Europe.

Agencies in the two regions are being encouraged to “interpret and reinterpret their mandates,” said Suresh Venkatasubramanian, a former technology advisor to the White House. He cited the US Federal Trade Commission’s (FTC) investigation of algorithms for discriminatory practices under existing regulatory powers.

In the EU, proposals for the bloc’s AI Act will force companies like OpenAI to disclose any copyrighted material – such as books or photographs – used to train their models, leaving them vulnerable to legal challenges.

Proving copyright infringement will not be straightforward though, according to Sergey Lagodinsky, one of several politicians involved in drafting the EU proposals.

“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copy something and publish it, that’s one thing. But if you’re not directly plagiarising someone else’s material, it doesn’t matter what you trained yourself on.”


French data regulator CNIL has started “thinking creatively” about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead.

For example, in France discrimination claims are usually handled by the Defenseur des Droits (Defender of Rights). However, its lack of expertise in AI bias prompted CNIL to take a lead on the issue, he said. “We are looking at the full range of effects, although our focus remains on data protection and privacy.”

The organisation is considering using a provision of the GDPR which protects individuals from automated decision-making.

“At this stage, I can’t say if it’s enough, legally,” Pailhes said. “It will take some time to build an opinion, and there is a risk that different regulators will take different views.”
