Washington State’s regulation of facial recognition technology: first thoughts

24 Apr 2020


The state of Washington in the US has become one of the first jurisdictions in the world to comprehensively regulate the use of facial recognition technology. 

In response to concerns over the potential risks posed by the technology, and the inadequacy of existing legislation to address those risks, the Washington State Legislature passed a new law, which the Governor approved on 31 March 2020. The new law will come into force on 1 July 2021.

The legislation introduces a range of measures aimed at mitigating risks to human rights—some novel, others drawing upon measures used in other jurisdictions in relation to other issues. Here, we take a look at these measures, and assess how far they go in ensuring that human rights are protected.


Much to be welcomed

There are many welcome aspects of the new law—with four worth highlighting in particular. 

First, the law requires government agencies that intend to use facial recognition technology to file a notice of intent which specifies the purpose for which the technology is to be used, and then publish an “accountability report”. Among other things, these accountability reports must include a human rights impact assessment which sets out the potential impacts of the technology on “civil rights and liberties”, including privacy and discrimination, and the steps that will be taken to mitigate those risks. The agency must also publicly consult on a draft of the accountability report, and allow at least 90 days after publishing the report before the technology is used.

Second, government agencies must make sure that whoever develops facial recognition technologies for them makes available “an application programming interface or other technical capability” which allows for “legitimate, independent, and reasonable tests of those facial recognition services for accuracy and unfair performance differences across distinct subpopulations”.

Third, there are particular safeguards where a government agency intends to use facial recognition technology to make decisions “that produce legal effects concerning individuals or similarly significant effects” (e.g. decisions relating to the provision or denial of financial and lending services, housing, insurance, employment opportunities, health care services, and access to basic necessities such as food and water). In such circumstances, the agency must test the technology “in operational conditions”, and, once it is deployed, ensure that any decisions are subject to meaningful human review—that is, the decision must be reviewed or overseen by a trained individual who has the authority to alter it.

Fourth, the law imposes strict measures in relation to facial recognition technology used for surveillance, identification or tracking purposes. The law prohibits these uses of the technology by a government agency unless a court warrant has been obtained authorising the use of the technology for such purposes; “exigent circumstances” exist; or a court order has been obtained authorising the use of the technology solely to locate or identify a missing person, or to identify a deceased person.

Overall, the measures provide a strong set of safeguards to mitigate many of the risks to human rights posed by facial recognition technology. The requirements to carry out human rights impact assessments when it is being developed, and to provide meaningful human review when it is deployed, are examples of good practice that should be emulated when other jurisdictions consider their own regulations.


Areas for improvement

For all these welcome provisions, the legislation could still be stronger. The first (and perhaps most significant) shortcoming is that the legislation only applies to the development or use of facial recognition technology by government agencies, and not elsewhere, such as by the private sector. This significant caveat may explain why the law has been so warmly welcomed by Microsoft, which was heavily involved in drafting the legislation.

And, despite the requirement that government agencies obtain a court warrant before using facial recognition technology for surveillance, identification or tracking purposes, further safeguards could have been attached. For example, the law could have stated that warrants last only for a specified period of time and require regular review and renewal, or that a warrant may only be issued where the use of the technology would be necessary and proportionate. (It is worth noting that some civil society organisations argue that the law should go much further, including through a moratorium on the technology, rather than mere regulation.)


What next?

The new law in Washington will not take effect until mid-2021, meaning it will be some time before we can see its impacts. However, as other jurisdictions look at regulating emerging technologies (see our thoughts on the EU’s recent proposals on AI here), the measures introduced in Washington may serve as a model. Other jurisdictions may instead choose to go further, or not to regulate at all—and we may well end up with a patchwork of different regulations across the world.

GPD calls upon governments looking to regulate facial recognition to ensure that strong protection of human rights is at the heart of regulation, and to consult widely with all relevant stakeholders, including civil society, as they do so.