A video monitor displays attendees captured with CyperLink’s facial recognition tech at CES 2020 Source: AFP

Police trial controversial Clearview AI facial recognition tech in New Zealand

  • Clearview AI’s facial recognition technology is once again being used to fight crime, this time in New Zealand
  • NZ media claim that senior police and privacy officials were unaware of the trial
  • Clearview AI’s system has faced its share of privacy-related criticism

Facial recognition technology has attracted a lot of positive press. The biometric verification technology – once purely the stuff of dystopian science fiction – is now trusted as a ‘contactless’ payment system and as a means of quick, convenient ID checks at places like busy airports. Many of us use it to unlock our smartphones dozens of times a day.

When the coronavirus pandemic broke out in China, authorities upgraded cameras across the country with more sophisticated facial identification technology capable of capturing facial features even when they are partially obstructed by protective gear such as face masks. Such advanced applications of the technology could help with contact tracing in a country facing high infection rates.

Of course, the technology – especially in the hands of authorities and governments – raises serious personal privacy concerns. That discussion reared its head once again this week when it was revealed that New Zealand’s police force has been trialing technology from controversial American firm Clearview AI, without express approval from the police commissioner or the country’s privacy commissioner.

Clearview AI’s system is capable of identifying faces by comparing images lifted from surveillance camera footage with its database of about 2.8 billion faces. Clearview AI apparently built this face database by procuring images of individuals from social media sites including Facebook – a practice that violates many social sites’ terms of service.

When the allegations came to light last week, details were still murky on the extent to which NZ police had been testing the software, and who could access the technology. In the US, Clearview’s system is already being used by hundreds of police departments, and has been used by law enforcement to successfully identify numerous offenders.

“Police undertook a short trial of Clearview AI earlier this year to assess whether it offered any value to police investigations,” Detective Superintendent Tom Fitzgerald, the NZ police’s national manager of criminal investigations, said in a statement.

“This was a very limited trial to assess investigative value. The trial has now ceased and the value to investigations has been assessed as very limited and the technology at this stage will not be used by New Zealand Police.”

New Zealand’s privacy laws do not restrict how facial recognition technology and footage may be used in the country, and privacy commissioner John Edwards told RNZ that use of the technology in criminal investigations was inevitable.

However, both Edwards and police commissioner Andrew Coster said they were unaware of the trial, and of its scope, until after the fact. While law enforcement naturally has different obligations around divulging such information, the World Economic Forum (WEF) recently drafted guidelines for the ethical development of facial recognition systems, which include being transparent with end users.

Facebook recently faced its own facial recognition controversy, when the social media giant had to pay a group of US users US$50 million. The group claimed that Facebook’s facial recognition feature, which automatically detected people’s faces in photos on the platform, violated their state’s privacy laws.