Why are major tech companies halting sales of facial recognition technology?

26 Jun 2020


A number of major tech companies have made announcements in the last few weeks that they will be taking steps to halt or limit their development and selling of facial recognition technology. Here, we take a look at what the companies are doing, why they’re doing it, and our own thoughts on what’s going on.


Who is doing what?

IBM: On 8 June, IBM CEO Arvind Krishna wrote to members of the US Congress announcing that the company “no longer offers general purpose IBM facial recognition or analysis software” and that it “firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency”.

The company also called for “a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies”, as well as legislative action, arguing that “national policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques”.

Amazon: On the following day, 9 June, Amazon announced a one-year moratorium on police use of its own facial recognition software, Rekognition. The company also called for legislative action, urging governments to “place stronger regulations to govern the ethical use of facial recognition technology”.

Microsoft: Two days later, on 11 June, Microsoft’s President, Brad Smith, announced that the company would not sell facial recognition technology to police departments in the US until there is federal legislation regulating its use. Smith also called for such legislation to be “grounded in human rights”.


Why are they doing this?

There are probably two drivers behind these moves, one short-term and one longer-term. The short-term driver is undoubtedly the Black Lives Matter movement, and the high-profile cases of violence and discrimination by law enforcement agencies towards black people. The concerns around racial bias in both the accuracy of facial recognition technology and its deployment by law enforcement agencies are well known. As is the case with many other companies right now, the actions by these three tech companies may simply reflect a greater consideration of the risks of discrimination and bias stemming from their products and services.

It is also possible that these companies are aware of the growing appetite among policymakers to regulate facial recognition technology. As we have noted, Washington State became the first jurisdiction in the world to adopt legislation on the technology earlier this year, and the European Commission is considering EU legislation regulating the technology as part of its broader consultations on AI. Rather than wait for legislation, these companies may be keen to get ahead of the curve and set their own rules.


What do we think?

We strongly welcome any moves by tech companies to better protect and respect human rights in their policies, products and services. At the same time, many questions remain unanswered, and have been raised by other human rights organisations. What is Amazon planning to do after the one-year moratorium? Why has Microsoft only prohibited sales of facial recognition technology to police forces in the US and not worldwide?

These questions highlight the weaknesses in the current self-regulatory approach to the governance of facial recognition and other forms of AI technology. While data protection and non-discrimination laws exist in many jurisdictions, they appear not to be fit for purpose in ensuring that risks to human rights stemming from these technologies are mitigated. This raises important questions about who gets to decide the rules on how facial recognition and other AI technologies can be developed, deployed and used.

  • Should it solely be the companies themselves? 
  • Should governments be responsible for regulating the technology, and what does this mean in authoritarian states that actively want to use the technology for their own surveillance purposes, often leading to human rights violations? 
  • Should multistakeholder forums and processes set the rules, ensuring that all interested parties are at the table? 
  • Should it be a combination of the above? 
  • And should the rules vary from country to country, or are regional or global frameworks needed?

These are difficult questions, but ones that are now firmly on the minds of many. The UN Office of the High Commissioner for Human Rights has just published a report on new technologies which proposed a moratorium on the use of facial recognition technology during peaceful assemblies until regulation ensured sufficient safeguards for human rights.

This echoes a call from the UN Special Rapporteur on freedom of expression, David Kaye, last year for “an immediate moratorium on the export, sale, transfer, use or servicing of privately developed surveillance tools until a human rights-compliant safeguards regime is in place”. With this increased attention, it is essential that all affected stakeholders are able to play a part in helping address these challenging, but critically important, questions.