EU approves AI Act ‘to protect fundamental rights’

The European Parliament has approved its proposed Artificial Intelligence Act — which it said “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.”

The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.

Parliament's approval is an important step towards introducing the legislation, which now requires formal endorsement by ministers from EU member states.

“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases,” said the parliament in a press release.

“Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.”

The use of remote biometric identification (RBI) systems by law enforcement would be prohibited in principle, except in “exhaustively listed and narrowly defined” situations.

“Real-time” RBI could only be deployed if strict safeguards are met — for example, its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation.

“Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack,” said the parliament.

“Using such systems post-facto (post-remote RBI) is considered a high-risk use case, requiring judicial authorisation being linked to a criminal offence.”

Clear obligations are also foreseen for other “high-risk” AI systems, so classified because of their “significant potential harm” to health, safety, fundamental rights, the environment, democracy and the rule of law.

“Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections),” said the parliament.

“Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.

“Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.”

General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publication of detailed summaries of the content used for training.

“The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents,” said the parliament.

“Additionally, artificial or manipulated images, audio or video content (deepfakes) need to be clearly labelled as such.”