Sweeping EU rules on AI to pass final hurdle

PHOTO: ENVATO

STRASBOURG, FRANCE (AFP) – EU lawmakers are poised to approve on Wednesday wide-ranging rules to govern artificial intelligence, including powerful systems like OpenAI’s ChatGPT, marking the final major hurdle before formal adoption.

Senior European Union officials say the rules, first proposed in 2021, will protect citizens from the possible risks while also fostering innovation on the continent.

Brussels has sprinted to pass the new law since OpenAI’s Microsoft-backed ChatGPT arrived on the scene in late 2022, unleashing a new global AI race.

ChatGPT set off a burst of excitement over generative AI, as it could produce eloquent text within seconds, including poems and essays, and even pass medical exams.

Other generative AI models include DALL-E and Midjourney, which produce images, while still others generate audio from a simple prompt in everyday language.

“The EU delivered. No ifs, no buts, no later,” said Dragos Tudorache, the lawmaker who pushed the text through parliament with another MEP, Brando Benifei.

“Europe is now a global standard-setter in trustworthy AI,” said the EU’s internal market commissioner, Thierry Breton.

The EU’s 27 states are expected to endorse the text in April before the law is published in the EU’s Official Journal in May or June.

The rules covering AI models like ChatGPT will enter into force 12 months after the law becomes official, while companies must comply with most other rules in two years.

AI policing restrictions

The EU’s rules, known as the “AI Act”, take a risk-based approach: the riskier the AI system, the tougher the requirements.

For example, high-risk AI providers must conduct risk assessments and ensure their products comply with the law before they are made available to the public.

“We are regulating as little as possible and as much as needed, with proportionate measures for AI models,” Breton told AFP.

Violations can see companies hit with fines ranging from EUR7.5 million to EUR35 million (USD8.2 million to USD38.2 million), depending on the type of infringement and the firm’s size.

There are also strict bans on using AI for predictive policing and on systems that use biometric information to infer an individual’s race, religion or sexual orientation.

The rules also ban real-time facial recognition in public spaces, with some exceptions for law enforcement, although police must still seek approval from a judicial authority before any AI deployment.

Lobbies vs watchdogs

With AI expected to transform every aspect of Europeans’ lives and big tech firms vying for dominance in what is set to be a lucrative market, many players have lobbied the EU.

Watchdogs on Tuesday pointed to lobbying by French AI startup Mistral AI and Germany’s Aleph Alpha as well as US-based tech giants like Google and Microsoft.

They warned the implementation of the new rules “could be further weakened by corporate lobbying”, adding that research showed “just how strong corporate influence” was during negotiations.

“Many details of the AI Act are still open and need to be clarified in numerous implementing acts, for example, with regard to standards, thresholds or transparency obligations,” three watchdogs based in Belgium, France and Germany said.

Commissioner Breton stressed that the EU “withstood the special interests and lobbyists calling to exclude large AI models from the regulation”, adding: “The result is a balanced, risk-based and future-proof regulation.”

One of the main tech lobbying groups, CCIA, has warned, however, that many of the new rules “remain unclear and could slow down the development and roll-out of innovative AI applications in Europe”.

“The Act’s proper implementation will therefore be crucial to ensuring that AI rules do not overburden companies in their quest to innovate and compete in a thriving, highly dynamic market,” said Boniface de Champris of CCIA Europe.