GPD has responded to a call for input by the UN High Commissioner for Human Rights to inform an upcoming thematic report on the right to privacy in the digital age. The call for input asks relevant stakeholders to provide their perspective on the right to privacy as it relates to artificial intelligence (AI).
In our submission, we evaluate the ways AI may promote or pose risks to the enjoyment of the right to privacy and other associated rights, highlight the applicability of existing legal frameworks, and scrutinise global trends in the adoption of laws, policies and practices by states and companies, drawing on GPD’s wider work on these issues.
In addition, we provide a number of recommendations for states and companies which seek to promote a rights-respecting approach to the development and deployment of AI.
Key recommendations include:
- States should acknowledge their obligations to respect, protect and promote the right to privacy in the context of AI.
- States should develop, implement and effectively enforce data protection legislation as an essential prerequisite for the protection of the right to privacy in the context of AI.
- States should consider existing frameworks applicable to AI, and use these as a starting point to guide the development of any additional frameworks which seek to address the unique challenges posed by AI.
- States should ensure that legal frameworks require (where appropriate) meaningful consent from individuals whose data is used in AI technologies, including the ability to withhold consent. States must also ensure useful and meaningful transparency in the development and deployment of AI technologies, suitable for both users and regulatory bodies.
- States should ensure that legal frameworks provide effective remedies against both the public and private sectors when human rights are adversely impacted by AI technologies.
- Companies of all sizes should be encouraged to develop policies which explicitly acknowledge their responsibilities to respect human rights in the context of AI, as opposed to simply taking an ethics-based approach.
- Companies should engage in human rights due diligence efforts in the context of AI design, development and deployment which identify, prevent, mitigate and account for how they address their impacts on human rights.
- Companies should undertake human rights impact assessments and ensure that the findings of these assessments are fully integrated into corporate practice through mitigation efforts and remedial actions. These findings should also be made publicly available when appropriate and on a periodic basis to promote transparency.
- Companies should ensure that due diligence efforts are complemented by meaningful accountability and independent oversight, which should include those with expertise in technical and human rights issues. The development, implementation and oversight of self-governance approaches should involve all relevant stakeholders, including those most likely to be adversely affected by AI technologies.
This consultation will inform the development of the upcoming report on the right to privacy in the digital age, which will be presented at the 47th session of the Human Rights Council in June-July 2021. To continue following this process—and for more updates on artificial intelligence—sign up to our monthly Digest.