European Parliament Adopts Landmark Law To Regulate AI And Ban Social Scoring
The European Parliament has adopted landmark legislation regulating artificial intelligence, banning social scoring, limiting the use of biometric identification systems by law enforcement, and setting safeguards for general-purpose AI.
The regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. The regulation establishes obligations for AI based on its potential risks and level of impact.
The legislation was agreed during negotiations with member states in December 2023 and has now been endorsed by MEPs, with 523 votes in favour, 46 against and 49 abstentions.
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
Similarly, the use of remote biometric identification (RBI) systems – which use unique biological features to identify and verify individuals – by law enforcement is prohibited in principle.
However, there will be exemptions according to “exhaustively listed and narrowly defined situations” related to law enforcement.
“Real-time” RBI can only be deployed if strict safeguards are met. For instance, its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation.
Such uses may include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.
Obligations for high-risk systems
Clear obligations are also foreseen for other high-risk AI systems. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (like healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes.
Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will also have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
Transparency requirements
General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents.
Meanwhile, artificial or manipulated images, audio or video content – “deepfakes” – need to be clearly labelled as such.
Measures to support innovation and SMEs
Regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to SMEs and start-ups, so they can develop and train innovative AI before placing it on the market.
Next steps
The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.
It will enter into force twenty days after its publication in the Official Journal and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after entry into force; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months after entry into force).
This action was co-financed by the European Union in the frame of the European Parliament’s grant programme in the field of communication. The European Parliament was not involved in its preparation and is, in no case, responsible for or bound by the information or opinions expressed in the context of this action. In accordance with applicable law, the authors, interviewed people, publishers or programme broadcasters are solely responsible. The European Parliament can also not be held liable for direct or indirect damage that may result from the implementation of the action.