Clearview AI

Clearview AI, Inc. is an American facial recognition company, providing software primarily to law enforcement and other government agencies.

[1] Founded by Hoan Ton-That and Richard Schwartz, the company maintained a low profile until late 2019, when its use by law enforcement was first reported.

[7][8][9] In 2022, Clearview reached a settlement with the ACLU, in which it agreed to restrict U.S. sales of facial recognition services to government entities.

[13] In August 2021, Clearview AI announced the formation of an advisory board including Raymond Kelly, Richard A. Clarke, Rudy Washington, Floyd Abrams, Lee S. Wolosky, and Owen West.

[29][30] Law enforcement officers have stated that Clearview's facial recognition is far superior to previously used technology at identifying perpetrators from any angle.

The use of Clearview's software in this case raised strong objections once exposed, as neither the users' supervisors nor the Privacy Commissioner was aware of or had approved its use.

[40] The letter cited public records retrieved by a local blogger, which showed one officer signing up for and repeatedly logging into the service, as well as corresponding with a company representative.

[5] In March 2022, Ukraine's Ministry of Defence began using Clearview AI's facial recognition technology "to uncover Russian assailants, combat misinformation and identify the dead".

[45] In a Florida case, Clearview's technology was used by defense attorneys to successfully locate a witness, resulting in the dismissal of vehicular homicide charges against the defendant.

[58] Senator Edward J. Markey wrote to Clearview and Ton-That, stating "Widespread use of your technology could facilitate dangerous behavior and could effectively destroy individuals' ability to go about their daily lives anonymously."

[8][61] Senator Markey wrote a third letter to the company with concerns, stating "this health crisis cannot justify using unreliable surveillance tools that could undermine our privacy rights."

[63] In October 2021, Clearview submitted its algorithm to one of the two facial recognition accuracy tests that the National Institute of Standards and Technology (NIST) conducts every few months.

[29][64][65] In 2021, Clearview announced that it was developing "deblur" and "mask removal" tools to sharpen blurred images and reconstruct the covered portion of an individual's face.

Clearview acknowledged that deblurring an image or removing a mask could make errors more frequent, and said the tools would only be used to generate leads for police investigations.

[35] Assistant Chief of Police of Miami, Armando Aguilar, said in 2023 that Clearview's AI tool had contributed to the resolution of several murder cases, and that his team had used the technology around 450 times a year.

Aguilar emphasized that they do not make arrests based on Clearview's matches alone, and instead use the data as a lead and then proceed via conventional methods of case investigation.

[24] Several cases of mistaken identity using Clearview facial recognition have been documented, but "the lack of data and transparency around police use means the true figure is likely far higher."

[66] In response to the leaks, the United States House Committee on Science, Space, and Technology sent a letter to the company requesting further insight into its biometric and security practices.

[69] In addition to application tracking (Google Analytics, Crashlytics), examination of the source code of the Android version found references to Google Play Services, requests for precise phone location data, voice search, sharing a free demo account with other users, augmented reality integration with Vuzix, and sending gallery photos or taking photos from the app itself.

The company's claim of a First Amendment right to public information has been disputed by privacy lawyers such as Scott Skinner-Thompson and Margot Kaminski, highlighting the problems and precedents surrounding persistent surveillance and anonymity.

[92] Two lawsuits were filed in state courts: one in Vermont by the attorney general, and one in Illinois on behalf of the American Civil Liberties Union (ACLU), citing a statute that forbids the corporate use of residents' faceprints without explicit consent.

The court denied Clearview Section 230 immunity in this case because Vermont's claims were "based on the means by which Clearview acquired the photographs" rather than on third-party content.

[99] In July 2020, Clearview AI announced that it was exiting the Canadian market amidst joint investigations into the company and the use of its product by police forces.

[102] In January 2021, Clearview AI's biometric photo database was deemed illegal in the European Union (EU) by the Hamburg Data Protection Authority (DPA).

The authority stated that the General Data Protection Regulation (GDPR) is applicable despite the fact that Clearview AI has no European branch.

[105] In May 2021, the company had numerous legal complaints filed against it in Austria, France, Greece, Italy and the United Kingdom for violating European privacy laws through its methods of collecting and documenting Internet data.

[106] In November 2021, Clearview received a provisional notice from the UK's Information Commissioner's Office (ICO) to stop processing its citizens' data, citing a range of alleged breaches.

[108] In September 2024, Clearview AI was fined €30.5 million by the Dutch Data Protection Authority (DPA) for constructing what the agency described as an illegal database.

[109] The DPA's ruling highlighted that Clearview AI unlawfully collected facial images, including those of Dutch citizens, without obtaining their consent.

This practice constitutes a significant violation of the EU's GDPR due to the intrusive nature of facial recognition technology and the lack of transparency regarding the use of individuals' biometric data.