The ruling follows the OAIC’s and the UK Information Commissioner’s Office’s (ICO) joint investigation into Clearview AI’s controversial facial recognition search tool. Clearview AI’s services are used globally by law enforcement agencies, as well as by commercial and other non-government entities. Clearview AI states its software “can help accurately and rapidly identify suspects, persons of interest, and victims to help solve and prevent crimes”.
Clearview AI’s software was trialled by the New Zealand Police in 2020, without the approval of the senior police hierarchy, Privacy Commissioner or Cabinet, but was not adopted. While it doesn’t appear likely that New Zealand’s Office of the Privacy Commissioner (OPC) will take action at this time, the Australian decision could weigh on any local decision if future issues arise around the ‘harvesting’ of publicly available images by facial recognition software.
How does facial recognition technology work?
Clearview AI ‘scrapes’ or ‘harvests’ publicly available images of individuals’ faces from internet sources, such as social media, without express consent. The images are then stored in Clearview AI’s database and a vector (a mathematical representation) of each image is created. Clients, such as law enforcement, can then upload an image of an individual’s face and run a search against Clearview AI’s database of more than three billion images to find a match.
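The matching step described above boils down to comparing vectors. The sketch below is purely illustrative and is not Clearview AI’s actual method: the three-dimensional vectors, the cosine-similarity metric, and the 0.9 threshold are all assumptions made for the example (real systems derive high-dimensional vectors from a face image using a trained neural network).

```python
import math

def cosine_similarity(a, b):
    """Similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the id of the closest stored vector, or None if no
    stored vector is at least `threshold`-similar to the probe."""
    best_id, best_score = None, threshold
    for face_id, vector in database.items():
        score = cosine_similarity(probe, vector)
        if score >= best_score:
            best_id, best_score = face_id, score
    return best_id

# Toy database of pre-computed face vectors (values are invented).
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

# A vector derived from an uploaded image, close to person_a's vector.
probe = [0.88, 0.12, 0.31]
print(best_match(probe, database))  # prints "person_a"
```

The threshold matters: set too low, the search returns false matches; set too high, genuine matches are missed. That trade-off is one reason accuracy was relevant to the Commissioner’s findings below.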
What drove the Australian privacy commissioner’s decision?
Australian Federal Police, Victoria Police, Queensland Police Service and South Australia Police used Clearview AI’s facial recognition tool on a free trial basis from October 2019 to March 2020.
Following the trial, the OAIC and ICO launched a joint investigation to consider whether Clearview AI had met the requirements of the Australian Privacy Principles (APPs).
First, Commissioner Falk had to establish whether Clearview AI carries on business in Australia, and as a result, is subject to the Australian Privacy Act 1988. Clearview AI asserted it wasn’t subject to the Act as, among other reasons, it conducts its business and stores its images on servers in the US, and collects images without regard to geography or source. It asserted that the trials with various Australian police agencies did not result in a continuing business relationship with any persons in Australia.
The Commissioner held that “the circumstances of this matter clearly demonstrate that the respondent carries on business in Australia”. Clearview AI’s services were actively marketed to Australian customers during the trial period, and Clearview AI collected images uploaded by Australian police agencies as part of the trial. The Commissioner also found that Clearview AI has collected and continues to collect Australians’ facial images.
Another issue dealt with by the Commissioner was whether Clearview AI collects or handles personal information. Personal information is defined in the Australian Privacy Act as information about an identifiable individual or an individual who is reasonably identifiable. Clearview AI submitted that it doesn’t collect or handle personal information as it collects publicly available images from the open web, and no data or associated information is maintained in relation to the images.
Commissioner Falk held that scraped images are ‘about’ an individual, satisfying the definition of ‘personal information’. She also found that an individual is reasonably identifiable from their facial image under the definition of personal information, as a facial image alone is sufficient to establish a link to a particular individual, and members of the Australian police were able to conduct successful searches using Clearview AI’s facial recognition tool.
Commissioner Falk ultimately found that Clearview AI breached a number of APPs by:
- collecting Australians’ sensitive information without consent;
- collecting personal information by unfair means;
- not taking reasonable steps to notify individuals of the collection of personal information;
- not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure; and
- not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the APPs.
In a separate provisional decision, the ICO recently announced its intention to fine Clearview AI over £17 million for failing to comply with UK data protection laws. In a move not dissimilar from the OAIC’s decision, the ICO has issued a provisional notice to Clearview AI to cease processing the personal data of people in the UK and to delete it. Clearview AI will now have the opportunity to respond to the ICO’s provisional decision, with a final decision by the ICO expected by mid-2022.
What does this image harvesting decision mean for New Zealand?
The New Zealand Police were found to have trialled Clearview AI software in early 2020. Minister of Justice Andrew Little told media that the trial was not endorsed by the senior police hierarchy, the Police Minister or Cabinet, and the OPC said neither it nor the incoming Police Commissioner had been briefed. New Zealand Police deemed the software ineffective in New Zealand and no longer use it.
Given the similarities between the Australian Privacy Act 1988 and the recently introduced New Zealand Privacy Act 2020, it would not be unthinkable for a similar finding to be made in New Zealand, given the extra-territorial effect of the 2020 Act. Extra-territoriality is a new addition to New Zealand’s privacy framework and was introduced to reflect globalisation, particularly the free flow of data. Under the new provisions, a foreign entity (like Clearview AI) that offers its services in New Zealand and collects personal information in New Zealand must comply with the Privacy Act 2020.
Such an outcome would have been much less certain under the 1993 Privacy Act, which was becoming outdated in its treatment of cross-border transfers of personal information.
It does not appear likely that the OPC will take action on this occasion. Following the news that Clearview AI software had been used in New Zealand, an OPC blog post recognised that the development of facial recognition technology is inevitable, and that the technology will only become more advanced over time. The OPC went on to caution that any artificial intelligence should be used with careful reference to the OPC’s principles for the safe and effective use of data and analytics.
Those are to:
- Deliver a clear public benefit: this includes considering the views of relevant stakeholders, ensuring associated policies and decisions have been evaluated for fairness and potential bias, and embedding a te ao Māori perspective.
- Ensure data is fit for purpose: particularly that data is used in the right context and special care is taken when re-using data that was originally collected for another purpose.
- Focus on people: considering the methods used to protect personal information.
- Maintain transparency: ensuring people know what information is held about them, how it’s kept secure, who has access to the information, and how it’s used.
- Understand the limitations: including regular checks for bias and other harmful elements.
- Retain human oversight: automated decision-making processes should be regularly reviewed.
If you have any questions about the matters raised in this article, please get in touch with the contacts listed or your usual Bell Gully advisor.