In February, the European Commission published a White Paper on Artificial Intelligence (AI), the latest in a long line of strategies, reports and other documents looking at AI which have come from Brussels (see, for example, Artificial Intelligence for Europe, the Declaration of Cooperation on Artificial Intelligence and the Coordinated Plan on Artificial Intelligence).
This White Paper, more than those previous documents, sets out the proposals for measures relating to AI that the Commission intends to take over the next few years.
In many respects, the White Paper mirrors other National Artificial Intelligence Strategies (NASs). NASs are critically important in ensuring that policymakers take a comprehensive and coordinated approach to AI, which centres the protection of human rights (for more on this, see our new report, where we analyse existing NASs and make recommendations on how they can incorporate human rights).
With all this in mind, how should we assess the EU Commission’s new White Paper? Below, we examine its key proposals from a human rights perspective.
Strong recognition of the impact of AI on human rights
A welcome aspect of the White Paper is the strong and consistent recognition throughout of the importance of ensuring that human rights are protected. In its first paragraph, the White Paper notes the risks to human rights that stem from AI, such as discrimination and intrusion into our private lives. It is also unequivocal that AI in Europe must be “grounded in our values and fundamental rights such as human dignity and privacy protection”, and includes an entire section on the specific human rights risks that result from AI—with a particular focus on discrimination and bias, risks to privacy and data protection, and the difficulties in challenging decision-making (and therefore obtaining effective access to justice when there are negative impacts). This section also looks at some of the less-considered human rights which may be affected by AI, such as freedom of expression and the right to a fair trial.
A series of concrete measures for ensuring the protection of human rights
Many areas of policy relating to AI lie with individual EU member states. However, others are either within the exclusive competence of the EU, or shared between the EU and its member states. So, while ensuring that human rights are protected is, in part, a responsibility of member states, the White Paper contains a series of specific proposals as to how the EU can also do so. These include:
- Transforming existing “ethical guidelines” for AI developers into a “curriculum” for training institutions to use;
- Reviewing existing EU legal frameworks (such as data protection and non-discrimination) to ensure that they sufficiently address risks to human rights posed by AI;
- A new legal requirement that those developing and using datasets to train AI take reasonable measures to ensure that the subsequent use of AI systems does not lead to discrimination;
- A new legal requirement that citizens be informed whenever they are interacting with an AI system and not a human; and
- Promoting an approach to AI based on human rights at international and regional forums such as the Council of Europe, the OECD, UNESCO, the ITU and the UN.
Unlike many NASs, which talk about the importance of protecting human rights without setting out how this will be achieved, the EU’s White Paper contains a number of thoughtful and sensible ideas.
Questions still to answer
The White Paper is only an indication of the European Commission’s current thinking, and is open for consultation. As such, there are still some questions which remain unanswered.
One which attracted attention before the publication of the White Paper was the issue of facial recognition technology, with rumours that the White Paper would propose a moratorium on its use. The published version, however, does not go so far—instead announcing that the Commission will launch “a broad European debate” on the specific circumstances, if any, which might justify the use of facial recognition technology in public places, and what common safeguards would be needed.
Another question: what will the new EU legal instrument on AI actually look like? The White Paper sets out the Commission’s plan to develop a new regulatory framework for AI, and gives some indications of its thinking. The new instrument will likely take a risk-based approach, so that restrictions or interventions will be proportionate to the level of risk. It will set out a range of requirements for high-risk AI applications, such as rules regarding the training data used, obligations relating to robustness and accuracy, and ensuring human oversight. With the instrument likely to be one of the first in the world to comprehensively regulate AI, the EU’s framework could well influence the development of other pieces of legislation around the world, as has been the case with the EU’s General Data Protection Regulation. Ensuring that this new framework includes sufficient protection of human rights is therefore critical—and so the devil will be in the detail.
While there is much to welcome in this White Paper from a human rights perspective, there are still many open questions—and it is critical that human rights defenders and other civil society organisations are consulted as it develops. Our new report on NASs and human rights provides a series of recommendations for policymakers, many of which could be taken forward by the Commission as it finalises its proposals.
The Commission is currently inviting comments on those proposals through an online consultation, which is open until 14 June 2020. We’ll be responding in full, and will make our submission public on the GPD website.