New facial recognition app promises to solve crimes, but critics say it means no privacy


A class-action lawsuit filed Thursday in federal court in New York alleges that a facial recognition technology designed for police use is illegally capturing people's biometric information without their consent.

The lawsuit was filed by two Illinois residents who claim that the company behind the technology, Clearview AI, illegally scraped photos from their social media profiles and stored their biometrics – in this case, scans of their facial geometry – in a database. The plaintiffs claim this violates the Illinois Biometric Information Privacy Act.



This is the latest legal challenge to Clearview AI, the maker of a facial recognition tool that can identify someone from a single photo. Founder Hoan Ton-That says he designed the app specifically to help law enforcement agencies solve crimes. Users upload a photo, and the software instantly returns any matching photos from across the Internet, along with links to the websites where they appear.

In a statement, the company said that "Clearview's legal team will respond to the lawsuit in due course. The company is committed to operating within the bounds of applicable laws and regulations."


Ton-That claims the app is 99.6% accurate in its matches and is being used by more than 600 law enforcement agencies, including the Chicago Police Department, which paid $50,000 for a two-year trial of the technology.

Clearview AI estimates that it has been used in thousands of cases to help identify thieves, murderers and pedophiles.



"We believe that what we are doing is in the public interest," Ton-That said in an interview with Fox News. "When these pedophiles are caught, the investigators have all these photos, hundreds and hundreds of children, and for the first time, they are able to identify the victims."

Clearview AI runs photos through its database, which the company claims contains more than 3 billion images scraped from websites across the Internet.

"It only looks for publicly available material out there," Ton-That told Fox News in an interview before the lawsuit was filed. "It's public data. We are not taking private data … things that are available on the Internet, in the public domain."

However, Google, YouTube, Facebook, Twitter, Venmo and LinkedIn have sent cease-and-desist letters to Clearview AI in an effort to shut down the app. The companies said the photos users post on their websites are not in the public domain and that harvesting them, a practice known as scraping, violates their terms of service.

"It's a little bit hypocritical," said Ton-That. "Google has a lot of personal and private information. They track where you browse on the web and sell ads against you, and they have your private emails. We are not taking any private data."


In 2011, Google chose not to pursue facial recognition. At the time, Google CEO Eric Schmidt acknowledged fears that mobile facial recognition could be used "in a very bad way."

"What we're doing is different," said Ton-That. "We are creating a tool for law enforcement and government to help solve crimes of public interest."

The 31-year-old founder told Fox News that his database only collects photos posted publicly on the Internet. But he acknowledged that the database includes photos that were posted and later deleted – and even photos from social media profiles that have since been switched to private. If a photo was ever published publicly, it could be in the database.

Clearview AI first came to public attention through a New York Times investigation, which quoted sources at police departments across the country praising the app's effectiveness at identifying suspects from surveillance footage and more.

"This technology is only used to generate leads for detectives investigating a case," said Howard Ludwig, a spokesman for the Chicago Police Department.

The information obtained through Clearview AI, he said, "is never used on its own to detain or prosecute a suspect."

Last fall, LinkedIn lost a court battle with hiQ, a data aggregator it had accused of violating its user agreement by scraping information from LinkedIn profiles and selling that data.

The U.S. Court of Appeals for the 9th Circuit sided with hiQ. The court wrote that giving companies like LinkedIn free rein to decide who can collect and use data they make publicly available to viewers – data the companies themselves collect and use – "risks the possible creation of information monopolies that would disserve the public interest."

The existence of the app has raised a host of questions, and not all law enforcement officials are welcoming the technology with open arms.

New Jersey Attorney General Gurbir Grewal temporarily barred state police departments from using the app, citing cybersecurity and privacy concerns, even though the app had already been used by one department to help identify a pedophile.

"Some New Jersey law enforcement agencies started using Clearview AI before the product was fully examined," Grewal said in a statement. "The review is still ongoing."

Ton-That said the app is fully protected from hackers, boasting, "We have never had any breaches."

Then there are fears that the app could one day be available to the public: anyone at a bar, on the train or walking down the street could snap your photo on an iPhone and instantly identify you.

Ton-That is adamant that will not happen. "We will never make it a consumer app," he said. "We don't want this to be everywhere. It has to be used in a controlled manner."

But the app has been made available to some banks, and critics point out that Clearview AI's investors are interested in making the technology available to everyone. "You don't always need to listen to your investors," Ton-That joked.

Sen. Ed Markey, D-Mass., sent Clearview AI a letter with a list of concerns, warning that the product "appears to pose particularly chilling privacy risks," especially if the technology becomes widely available.

"It is capable of fundamentally dismantling Americans' expectation that they can move, assemble or simply appear in public without being identified," Markey wrote.

Markey's questions included whether Clearview AI's technology identifies children (it does) and whether the software runs on 24/7 surveillance cameras or real-time police body cameras (it does not).

"Well, one thing to think about is that it's not 24/7 surveillance. I think that would be a world we don't want to live in. That's how China is now," said Ton-That. "Nations like Russia, China and Iran are against U.S. interests. We have no interest in doing business with them. Our customers are in the U.S., and we want to ensure that nothing is compromised."

Brenda Leong, a senior AI and ethics counsel at the Future of Privacy Forum, believes Clearview AI should not be scraping people's photos from websites. "They are stealing our personal data."

Leong added that the technology raises particular concerns for victims of stalking or domestic violence. "Perhaps the photos were posted by a friend and it's a group photo, and they have no say in how that image is collected or used," said Leong.

Asked if he believed Clearview AI is the beginning of the end of anonymity, Ton-That paused and said, "You know, people are posting information online all the time … so what I'd say is that maybe it's already happened."
