Clearview AI Breach Raises Questions About Data Privacy Compliance

Clearview AI, a facial recognition app the New York Times claimed “might end privacy as we know it,” has taken a major step toward fulfilling that prophecy. The controversial New York-based company recently suffered a significant data breach in which hackers leaked its entire customer list, along with the number of accounts and searches attributed to each customer. The company’s database of more than 3 billion images scraped from public sites like Facebook and Twitter was, it claims, not compromised.

The compromised data is allegedly not attributable to any single person, so the company may avoid penalties under data privacy laws. But the exposure has raised further questions about the company’s business practices, which involve selling facial recognition services to law enforcement agencies, including the Federal Bureau of Investigation (FBI) and US Immigration and Customs Enforcement (ICE). The breach, however, showed the company’s client list to be far more diverse than that.

More Than Just Law Enforcement

While many have long been suspicious of Clearview’s invasive database, the company deflected doubts by insisting it worked only with domestic law enforcement agencies. While fighting crime has long been part of Clearview’s elevator pitch, the breach revealed a large number of commercial clients, including businesses in entertainment (Eventbrite), fitness (Equinox), and even cryptocurrency (Coinbase). Some clients, such as Walmart and Best Buy, were found to have used the company’s services hundreds of times.

It’s not the first time Clearview has been caught misrepresenting itself. In November 2019, the company sent a marketing email to law enforcement agencies in all 50 states with the subject line “How a Terrorism Suspect Was Instantly Identified With Clearview.” The email referenced an arrest in New York City, but the New York Police Department said that Clearview’s technology was not used in the arrest.

Alleged Customers Deny Affiliation

It is telling that many of the companies accused of using the software were quick to publicly distance themselves. Best Buy, allegedly one of Clearview’s most prolific clients, denied being a client in an email to Recode, writing “we don’t use Clearview AI and don’t plan on using it in the future.” The National Basketball Association, another alleged client, admitted to conducting a test with Clearview but denied being a client.

Could the CCPA Punish Clearview?

In response to allegations that Clearview’s collection methods are invasive, founder Hoan Ton-That has defended the practice on First Amendment grounds. But while Ton-That may be correct about his right to access publicly available data, it is unclear whether that right extends to selling those images for use in facial recognition programs.

There are no federal laws specifically addressing facial recognition, but a slew of states, including Arizona, Massachusetts, Florida, Illinois, Washington, and Texas, have enacted or proposed legislation designed to protect biometric data. A further 16 states have narrower laws requiring organizations to notify victims of biometric data breaches. Not all of these laws specifically protect images of faces, and each prescribes a different degree of privacy, with Illinois (where plaintiffs recently won a facial recognition lawsuit against Facebook under the state’s biometric privacy law), Washington, and Texas leading the pack.

The California Consumer Privacy Act (CCPA) is vague about which types of biometric data it covers, but in an interview with Digital Trends, Stanford University consultant Albert Gidari argued that Clearview’s insistence on its right to scrape and use photos could violate another California law, which prohibits using a person’s image for someone else’s benefit, sometimes described as “biometric use without consent.” Another point to consider is Clearview’s seemingly indiscriminate collection of images, including those of children. Even if Clearview didn’t lose those images in the breach, that doesn’t necessarily mean it was allowed to collect them in the first place. Both the CCPA and the European Union’s General Data Protection Regulation (GDPR) contain very specific protections for children’s data, and the GDPR protects biometric data that can be used to identify any person.

Consequences for Clearview

Clearview has already landed in hot water with some major players. Just days after news of the breach broke, Apple removed Clearview from its developer program for violating its terms and conditions, and Canada’s privacy authorities launched an investigation into the company’s potential violations.

Other Clearview policies that have drawn backlash include scraping users’ photos without consent, removing images from its database only when requesters provide a government-issued ID, and retaining photos even after users delete them from social media platforms. The latter policy prompted cease-and-desist letters from Twitter, Facebook, and Google, and states such as New Jersey have banned law enforcement from using the software. While Clearview has yet to face legal action for its unsettling practices, it may only be a matter of time before it does.

Max Totsky

Max Totsky is a journalist based in Chicago. His writing can be found at Inc.com, PopMatters, and ADCG.org.
