Australia, UK to investigate Clearview AI facial recognition tech

  • Controversial facial recognition tech from Clearview AI is facing mounting investigations in Australia and the UK
  • The use of personal data scraped from public platforms like Facebook is the primary concern
  • Similar investigations are already underway in Canada, Clearview’s second-biggest market

Clearview AI’s controversial facial recognition technology is once again in the spotlight for the wrong reasons: this time, British and Australian data protection authorities are probing the company’s handling of personal information.

The Office of the Australian Information Commissioner (OAIC) opened a joint investigation with the United Kingdom’s Information Commissioner’s Office (ICO) late last week, after a number of law enforcement agencies began trialing Clearview AI’s software in cities and municipalities around the world.

Facial recognition technology has found a range of use cases in recent times. Of particular note is its application to quick, verifiable identity checks, including for contactless payments. With COVID-19 demanding extra precautions outdoors, Chinese officials upgraded city cameras with sophisticated facial ID tech that can discern and recognize facial features even when they are partly obscured by face masks, a capability that could assist with virus contact tracing in the country.

Yet privacy and ethical concerns still dog the nascent technology, all the more so in the case of Clearview AI, which markets its artificial intelligence (AI)-driven facial ID product directly to policing agencies, drawing the attention of privacy watchdogs and stoking fears of a ‘big brother’ police state.

Clearview’s system works by scraping images of faces from publicly viewable online platforms such as Facebook and Google. The resulting database, which allegedly holds over three billion images of individuals’ faces, has so far been licensed to over 2,200 law enforcement agencies worldwide, including the UK’s National Crime Agency, the Australian Federal Police, and the state police forces of at least three Australian states.
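
To make that architecture concrete, the underlying pattern is a simple pipeline: fetch publicly viewable images, compute a numerical encoding for each face found, store the encodings alongside their source URLs, and match a probe photo against that index. The Python sketch below illustrates only this generic pattern; the URLs, the open-source face_recognition library, and the 0.6 distance threshold are illustrative assumptions, not details of Clearview’s actual system.

```python
# A minimal sketch of the scrape-index-match pattern described above.
# NOT Clearview's pipeline; URLs and thresholds are illustrative only.
# Requires: pip install face_recognition requests numpy
import io

import face_recognition  # dlib-based face detection and 128-d encodings
import numpy as np
import requests

# Hypothetical, publicly viewable image URLs (placeholders, not real endpoints)
SCRAPE_TARGETS = [
    "https://example.com/profiles/alice.jpg",
    "https://example.com/profiles/bob.jpg",
]

def build_index(urls):
    """Download each image, encode every face found, keep (url, encoding) pairs."""
    index = []
    for url in urls:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        image = face_recognition.load_image_file(io.BytesIO(resp.content))
        for encoding in face_recognition.face_encodings(image):
            index.append((url, encoding))
    return index

def search(index, probe_path, threshold=0.6):
    """Return (url, distance) pairs whose stored encodings fall within
    `threshold` Euclidean distance of the probe face; 0.6 is the
    face_recognition library's commonly used default."""
    probe = face_recognition.face_encodings(
        face_recognition.load_image_file(probe_path)
    )[0]
    known = np.array([enc for _, enc in index])
    distances = face_recognition.face_distance(known, probe)
    return [(index[i][0], d) for i, d in enumerate(distances) if d <= threshold]

if __name__ == "__main__":
    idx = build_index(SCRAPE_TARGETS)
    for url, dist in search(idx, "probe.jpg"):
        print(f"match: {url} (distance {dist:.3f})")
```

At this scale the sketch is a linear scan; a system holding billions of encodings would need an approximate nearest-neighbor index rather than a brute-force distance comparison.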

But critics of the technology argue that there is a significant margin for misuse, noting that many law enforcement departments are using trial versions of the software, meaning that unsupervised access to Clearview’s powerful system is widely available. In most cases, potential police clients do not have to disclose that they are monitoring individuals without their consent, even on a trial basis.

The Australian Federal Police and Victoria Police independently confirmed that they had run a number of searches on individuals they were seeking after receiving trial invitations from Clearview AI. This follows other reports of trials, such as the New Zealand police conducting facial recognition tests without informing senior police officials or the country’s privacy commissioner.

The concern over how the data is being used forms part of the OAIC and ICO joint investigation, with the two offices saying in a joint statement: “The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

For its part, Clearview AI is easing out of markets where it is facing investigation, such as Canada, its second-biggest market after the US. The company suspended its contract with the Royal Canadian Mounted Police when Canadian data authorities began a similar probe, which will continue despite the pullout.

Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Centre, told Forbes that he expects Clearview AI to eventually pull out of the UK and Australia too. “Obtaining the legal rights for such a large dataset would be expensive, and it’s asserted that Clearview AI bypassed image licenses and simply scraped the data from websites,” he said.

The scraping that feeds the database often violates the terms of service of the originating platforms, as most social media and image-hosting sites, including Facebook and Instagram, prohibit collection of their users’ image data without consent.
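
Terms of service are legal documents, but sites also publish a machine-readable counterpart, robots.txt, which compliant crawlers consult before fetching a page. A minimal check using only Python’s standard library is sketched below; the crawler name is a hypothetical placeholder.

```python
# Check whether a hypothetical crawler may fetch a page, according to the
# site's robots.txt. Uses only the Python standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.facebook.com/robots.txt")
rp.read()  # download and parse the live robots.txt

# "ExampleScraperBot" is a made-up user agent; Facebook's robots.txt has
# historically disallowed crawling for agents it does not explicitly list.
allowed = rp.can_fetch("ExampleScraperBot/1.0",
                       "https://www.facebook.com/some.profile")
print("may fetch:", allowed)
```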

“This process would reduce the cost of image acquisition, but could also have allowed the Clearview AI team to identify weaknesses in social media applications,” observed Mackey. “It will be interesting to see how Clearview AI responds to the ICO’s investigation and what is discovered.”