The European Commission recently published its proposal for a Regulation on artificial intelligence (known as the Artificial Intelligence Act), which seeks to establish a harmonised legal framework to regulate AI across the EU.
The proposed Regulation would regulate AI systems according to risk: prohibiting those posing an “unacceptable” risk, imposing obligations and duties on those considered “high-risk”, and imposing transparency requirements on certain other systems.
With the long-awaited proposal now out, how does it measure up from a human rights perspective? Based on our close analysis, we find it a positive step towards effective AI regulation. However, some elements of the proposal fail to adequately mitigate the human rights risks posed by certain AI applications, and further refinements are needed to better support and safeguard human rights.
Prohibited AI Practices
The proposal would prohibit certain AI practices on the basis that their “use is considered unacceptable as contravening Union values, for instance by violating fundamental rights”. These include AI systems that deploy subliminal techniques, or that exploit the vulnerabilities of a person or group to distort or manipulate human behaviour in a way that causes them harm, as well as the use of AI by public authorities for social scoring. The proposal would also prohibit the use of ‘real-time’ remote biometric identification systems (in practice, facial recognition technology) in public spaces for the purposes of law enforcement, save for limited purposes and with judicial authorisation.
We welcome the fact that the Commission has considered the impacts that AI systems may have on human rights, and has made this a central component of its risk-based approach; we agree that these prohibited AI practices pose an unacceptable risk. However, alongside other civil society groups, we call on the Commission to go further in these prohibitions: imposing a ban on the use of facial recognition and social scoring by both private and public actors.
This section of the proposal would also benefit from further clarification, particularly on how additional AI practices might be added to the prohibited list. The EU could, of course, amend the Regulation and update the list of prohibited AI practices on an ad-hoc basis, but this would be time-intensive. Instead, we recommend that the proposal include a mechanism for the Commission to swiftly update the list and prohibit new AI practices when needed.
High-Risk AI Systems
As well as prohibiting certain “unacceptable” AI practices, the proposal would designate certain other AI systems as “high-risk”. The list of “high-risk” AI systems is extensive and includes, for instance, AI systems used in predictive policing and AI systems used to evaluate eligibility for public assistance benefits and services. The providers of high-risk AI systems would be required to establish risk assessment systems, and to abide by transparency and monitoring obligations. The proposed Regulation also includes a mechanism for the Commission to designate additional AI systems as high-risk based on certain criteria. These criteria include the risk of harm to health, safety and fundamental rights, the number of potentially affected persons, and the irreversibility of harm.
While we support the obligations placed on the providers of high-risk AI systems and welcome the inclusion of a mechanism to designate further high-risk systems, the criteria for making these assessments lack detail on the threshold and prevalence of harms. Further guidance is needed to provide clarity on high-risk designations moving forward, and to ensure that new high-risk AI systems do not fall through the cracks. We are especially concerned that the proposal does not impose obligations on providers of AI systems which pose risks to human rights but are not designated as “high-risk”, such as where an emotion recognition system or a “biometric categorisation system” is used. We recommend that the Commission address this gap by imposing additional obligations on providers, such as mandating impact assessments and mitigation of identified risks, even for AI systems that are not considered high-risk but may still pose risks to human rights.
The proposed Regulation would establish a public database containing information about high-risk AI systems, supplied by their providers. This would include the name and contact details of the provider, identifying details of the AI system, and a description of its intended purpose. We strongly support this effort by the EU to provide transparency on high-risk AI systems, and join other civil society organisations, such as Access Now and Algorithm Watch, in advocating for an EU-wide database which includes information on AI systems used in the public sector regardless of risk. Given that transparency is equally important in the private sector, we would also encourage the Commission to consider further mechanisms which support transparency in the use of particular AI systems by private actors.
Transparency Obligations

We are, however, concerned that the transparency obligations outlined in the proposal are too limited. First, they apply only to a relatively narrow subset of AI systems: those which interact with natural persons, those which use emotion recognition or biometric categorisation, and those which employ “deep fakes”. These transparency obligations should apply to a wider set of AI systems.
Second, the obligations themselves are not stringent enough. Under them, providers would be required to inform users that they are interacting with an AI system, or that content has been artificially generated or manipulated. But even with this mandated transparency, these AI systems may still adversely impact human rights. We therefore recommend heightened transparency requirements for these AI systems, beyond those currently envisioned.
Enforcement

The proposed Regulation would establish a European Artificial Intelligence Board, which would include one national supervisory authority from each EU member state. The Board would be charged with overseeing the effective implementation and consistent application of the Regulation, supported by enforcement at the national level. We believe that enforcement is key to the success of the proposed legal framework, but stress that any mechanism must be adequately resourced to provide effective oversight. The mandate of these enforcement mechanisms should also take account of the mandates of other relevant bodies, including those created under the existing General Data Protection Regulation (GDPR) framework and those potentially formed through passage of the Digital Services Act (DSA).
Next Steps

The European Commission is now soliciting feedback on the proposed Regulation, with a deadline of 1 July. The feedback received will then be summarised by the European Commission and presented to the European Parliament and Council, with the aim of feeding into the legislative debate.
At some point thereafter, the European Parliament and EU member states will convene to discuss the Commission’s proposal; as yet, the timeline is unknown. Once discussions begin, the co-legislators are expected to amend the proposal and produce a final text which, if adopted, would be directly applicable across all EU member states. We’ll be following this process closely; sign up to our monthly Digest for regular updates.