- Lawmakers want ban on the use of AI in biometric surveillance
- Copyrighted data used for training needs to be disclosed
- Breton to meet Zuckerberg, Altman next week to discuss AI Act
Published on June 14, 2023
EU lawmakers vote for tougher AI rules as draft moves to final stage
BRUSSELS/STOCKHOLM, June 14 (Reuters) - European Union lawmakers on Wednesday agreed changes to draft artificial intelligence rules, including a ban on the use of the technology in biometric surveillance and a requirement for generative AI systems like ChatGPT to disclose AI-generated content.
The amendments to the European Commission's proposed landmark law, which aims to protect citizens from the dangers of the technology, could set up a clash with EU countries opposed to a total ban on the use of AI in biometric surveillance.
The rapid adoption of Microsoft-backed OpenAI's ChatGPT and other bots has led top AI scientists and company executives, including Tesla's Elon Musk and OpenAI's Sam Altman, to raise alarms about the potential risks the technology poses to society.
"While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose," said Brando Benifei, co-rapporteur of the bill.
Among other changes, European Union lawmakers want any company using generative tools to disclose copyrighted material used to train its systems, and companies working on "high-risk applications" to carry out a fundamental rights impact assessment and evaluate environmental impact.
Systems like ChatGPT would have to disclose that the content was AI-generated, help distinguish so-called deep-fake images from real ones and ensure safeguards against illegal content.
"We believe that AI requires legislative guardrails, alignment efforts at an international level, and meaningful voluntary actions by companies that develop and deploy AI," a Microsoft spokesperson said.
The lawmakers will now have to thrash out details with EU countries before the draft rules become legislation.
'AI IS INTRINSICALLY GOOD'
While most big tech companies acknowledge the risks posed by AI, others like Meta (META.O), which owns Facebook and Instagram, have dismissed warnings about the potential dangers.
"AI is intrinsically good, because the effect of AI is to make people smarter," Meta's chief AI scientist Yann LeCun said at a conference in Paris on Wednesday.
In the current draft EU law, AI systems that could be used to influence voters and the outcome of elections and systems used by social media platforms with over 45 million users were added to the high-risk list.
Meta and Twitter would fall under that classification.
"AI raises a lot of questions – socially, ethically, economically. But now is not the time to hit any 'pause button'. On the contrary, it is about acting fast and taking responsibility," EU industry chief Thierry Breton said.
He said he would travel to the United States next week to meet Meta CEO Mark Zuckerberg and OpenAI's Altman to discuss the draft AI Act.
The Commission announced the draft rules two years ago, aiming to set a global standard for a technology key to almost every industry and business, as the EU seeks to catch up with the AI leaders, the United States and China.
Reporting by Foo Yun Chee and Bart Meijers in Brussels, Supantha Mukherjee in Stockholm; Additional reporting by Mimosa Spencer in Paris, Editing by Emelia Sithole-Matarise