

posted by Fnord666 on Tuesday June 30 2020, @08:02AM
from the misidentified? dept.

AWS Facial Recognition Platform Misidentified Over 100 Politicians As Criminals:

Comparitech's Paul Bischoff found that Amazon's facial recognition platform misidentified an alarming number of people, and was racially biased.

Facial recognition technology is still misidentifying people at an alarming rate – even as it's being used by police departments to make arrests. In fact, Paul Bischoff, consumer privacy expert with Comparitech, found that Amazon's face recognition platform misidentified more than 100 photos of US and UK lawmakers as criminals.

Rekognition, Amazon's cloud-based facial recognition platform first launched in 2016, has been sold to and used by a number of United States government agencies, including ICE and the Orlando, Florida police, as well as private entities. In comparing photos of a total of 1,959 US and UK lawmakers to subjects in an arrest database, Bischoff found that Rekognition misidentified an average of 32 members of Congress. That's four more than in a similar experiment conducted by the American Civil Liberties Union (ACLU) two years ago. Bischoff also found that the platform was racially biased, misidentifying non-white people at a higher rate than white people.
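Results of this kind hinge on the similarity threshold the tester chooses. As an illustration only (not Bischoff's actual methodology, and with made-up scores), the sketch below outlines the shape of such a test: a Rekognition CompareFaces call is shown in a comment, and a small pure-Python helper counts how many candidate faces clear a given threshold.

```python
# Illustrative sketch only -- hypothetical scores, simplified parameters.
# With AWS credentials configured, a real comparison might look like:
#   import boto3
#   rek = boto3.client("rekognition")
#   resp = rek.compare_faces(
#       SourceImage={"Bytes": lawmaker_jpeg},
#       TargetImage={"Bytes": mugshot_jpeg},
#       SimilarityThreshold=80.0,
#   )
#   scores = [m["Similarity"] for m in resp["FaceMatches"]]
from typing import Iterable

def count_matches(similarities: Iterable[float], threshold: float) -> int:
    """Count candidate faces whose similarity score meets the threshold."""
    return sum(1 for s in similarities if s >= threshold)

# The same hypothetical scores yield very different "match" counts
# depending on where the threshold is set.
scores = [99.2, 93.5, 87.1, 81.4, 62.0]
print(count_matches(scores, 80.0))  # 4 "matches" at a loose threshold
print(count_matches(scores, 99.0))  # 1 at a strict one
```

The ACLU's 2018 test reportedly used Rekognition's default 80 percent threshold, which Amazon argued was too low for law enforcement use – which is why threshold choice is central to interpreting any misidentification count.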

These findings have disturbing real-life implications. Last week, the ACLU shed light on Detroit citizen Robert Julian-Borchak Williams, who was arrested after a facial recognition system falsely matched his photo with security footage of a shoplifter.

The incident prompted lawmakers last week to propose legislation that would indefinitely ban the use of facial recognition technology by law enforcement nationwide. Though Amazon previously sold its technology to police departments, the tech giant recently placed a moratorium on law enforcement use of its facial recognition technology (Microsoft and IBM did the same). But Bischoff says society still has a ways to go in figuring out how to utilize facial recognition in a way that respects privacy, consent and data security.

Previously:

(2020-06-28) Nationwide Facial Recognition Ban Proposed by Lawmakers
(2020-06-11) Amazon Bans Police From Using its Facial Recognition Software for One Year
(2020-06-10) Senator Fears Clearview AI Facial Recognition Use on Protesters
(2020-06-09) IBM Will No Longer Offer, Develop, or Research Facial Recognition Technology
(2020-05-08) Clearview AI to Stop Selling Controversial Facial Recognition App to Private Companies
(2020-05-08) How Well Can Algorithms Recognize Your Masked Face?
(2020-04-18) Some Shirts Hide You from Cameras
(2020-04-02) Microsoft Supports Some Facial Recognition Software
(2020-03-23) Here's What Facebook's Internal Facial Recognition App Looked Like
(2020-03-21) How China Built Facial Recognition for People Wearing Masks
(2020-03-13) Vermont Sues Clearview, Alleging 'Oppressive, Unscrupulous' Practices
(2020-02-28) Clearview AI's Facial Recognition Tech is Being Used by US Justice Department, ICE, and the FBI
(2020-02-24) Canadian Privacy Commissioners to Investigate "Creepy" Facial Recognition Firm Clearview AI
(2020-02-06) Clearview AI Hit with Cease-And-Desist from Google, Facebook Over Facial Recognition Collection
(2020-01-30) Facebook Pays $550M to Settle Facial Recognition Privacy Lawsuit
(2020-01-29) London to Deploy Live Facial Recognition to Find Wanted Faces in a Crowd
(2020-01-22) Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says
(2020-01-20) Google and Alphabet CEO Sundar Pichai Calls for AI Regulations
(2020-01-17) Facial Recognition: EU Considers Ban of up to Five Years
(2019-12-14) The US, Like China, Has About One Surveillance Camera for Every Four People, Says Report
(2019-12-11) Moscow Cops Sell Access to City CCTV, Facial Recognition Data
(2019-12-07) Proposal To Require Facial Recognition For US Citizens At Airports Dropped
(2019-12-03) Homeland Security Wants Airport Face Scans for US Citizens

Original Submission

Related Stories

Homeland Security Wants Airport Face Scans for US Citizens 54 comments

Submitted via IRC for SoyCow1337

Homeland Security wants airport face scans for US citizens

Homeland Security is joining the ranks of government agencies pushing for wider use of facial recognition for US travelers. The department has proposed that US citizens, not just visa holders and visitors, should go through a mandatory facial recognition check when they enter or leave the country. This would ostensibly help officials catch terrorists using stolen travel documents to move about. The existing rules specifically exempt citizens and permanent residents from face scans.

It won't surprise you to hear that civil rights advocates object to the potential expansion. ACLU Senior Policy Analyst Jay Stanley said in a statement that the government was "reneging" on a longstanding promise to spare citizens from this "intrusive surveillance technology." He also contended that this was an unfair burden on people using their "constitutional right to travel," and pointed to abuses of power, data breaches and potential bias as strong reasons to avoid expanding use of the technology.

Via: TechCrunch


Original Submission

Proposal To Require Facial Recognition For US Citizens At Airports Dropped 8 comments

Arthur T Knackerbracket has found the following story:

US Customs and Border Protection said Thursday it will drop its plans to require that US citizens go through a biometric face scan when entering or exiting the country. Currently, citizens have the right to opt out of the scans, but a proposed rule indicated the agency was planning to make the program mandatory for all travelers.

The proposed rule was first published in spring 2018 in the Unified Agenda of Regulatory and Deregulatory Actions, a compendium the Executive Office of the President publishes every three months. A rule-making process that allows for public comment typically follows before a proposal can become a new regulation. The CBP's proposal was republished this fall, leading TechCrunch to ask the agency if it was still pursuing the rule.

"There are no current plans to require US citizens to provide photographs upon entry and exit from the United States," the agency said in a statement. "CBP intends to have the planned regulatory action regarding US citizens removed from the unified agenda next time it is published."

Related: Homeland Security Wants Airport Face Scans for US Citizens


Original Submission

Moscow Cops Sell Access to City CCTV, Facial Recognition Data 8 comments

Moscow Cops Sell Access to City CCTV, Facial Recognition Data

Investigative media outlet MBKh Media found that access to [the citywide CCTV camera system] and the live streams is being sold on underground forums and chat rooms.

Andrey Kaganskikh, the journalist that did the investigation says that the sellers are law enforcement individuals as well as government bureaucrats that can log into the Integrated Center for Data Processing and Storage (YTKD), the very system that keeps the data from cameras in Moscow.

Whoever wants to check the live stream from a camera receives a unique link to the City CCTV System that connects to all public cameras in Moscow. The URL works for five days, Kaganskikh says.

This is the same period mentioned on the city's CCTV section for storing footage from crowded places, shops, and courtyards. Data from educational organizations is saved for 30 days.

Furthermore, government officials or police officers sell their login credentials to the system to provide unlimited access to all cameras. The price of admission is 30,000 rubles ($470), according to Kaganskikh.

Apparently "restricted access" means restricted to those who can pay for it.


Original Submission

The US, Like China, Has About One Surveillance Camera for Every Four People, Says Report 16 comments

Submitted via IRC for chromas

The US, like China, has about one surveillance camera for every four people, says report

One billion surveillance cameras will be deployed globally by 2021, according to data compiled by IHS Markit and first reported by The Wall Street Journal. China's installed base is expected to rise to over 560 million cameras by 2021, representing the largest share of surveillance devices installed globally, with the US rising to around 85 million cameras. When taking populations into account, however, China will continue to have nearly the same ratio of cameras to citizens as the US.

In 2018, China had 350 million cameras installed for an estimated one camera for every 4.1 people. That compared to one for every 4.6 people in the US where 70 million cameras were installed. Taiwan was third in terms of penetration with one camera for every 5.5 citizens in 2018, followed by the UK and Ireland (1:6.5) and Singapore (1:7.1).

China's installed base of cameras has recently risen 70 percent, while the US increased by nearly 50 percent.

Facial Recognition: EU Considers Ban of up to Five Years 7 comments

BBC:

The European Commission has revealed it is considering a ban on the use of facial recognition in public areas for up to five years.

Regulators want time to work out how to prevent the technology being abused.

The technology allows faces captured on CCTV to be checked in real time against watch lists, often compiled by police.

Exceptions to the ban could be made for security projects as well as research and development.

The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.

Don't get rid of your scramble suit just yet.


Original Submission

Google and Alphabet CEO Sundar Pichai Calls for AI Regulations 25 comments

Alphabet CEO Sundar Pichai says there is 'no question' that AI needs to be regulated

Google and Alphabet CEO Sundar Pichai has called for new regulations in the world of AI, highlighting the dangers posed by technology like facial recognition and deepfakes, while stressing that any legislation must balance "potential harms ... with social opportunities."

"[T]here is no question in my mind that artificial intelligence needs to be regulated. It is too important not to," writes Pichai in an editorial for The Financial Times. "The only question is how to approach it."

Although Pichai says new regulation is needed, he advocates a cautious approach that might not see many significant controls placed on AI. He notes that for some products like self-driving cars, "appropriate new rules" should be introduced. But in other areas, like healthcare, existing frameworks can be extended to cover AI-assisted products.

Also at The Associated Press.


Original Submission

Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says 18 comments

Clearview app lets strangers find your name, info with snap of a photo, report says:

What if a stranger could snap your picture on the sidewalk then use an app to quickly discover your name, address and other details? A startup called Clearview AI has made that possible, and its app is currently being used by hundreds of law enforcement agencies in the US, including the FBI, says a Saturday report in The New York Times.

The app, says the Times, works by comparing a photo to a database of more than 3 billion pictures that Clearview says it's scraped off Facebook, Venmo, YouTube and other sites. It then serves up matches, along with links to the sites where those database photos originally appeared. A name might easily be unearthed, and from there other info could be dug up online.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI's own database, which taps passport and driver's license photos, is one of the largest, with over 641 million images of US citizens.

[...] The startup said in a statement Tuesday that its "technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public."

Law enforcement officers say they've used the app to solve crimes from shoplifting to child sexual exploitation to murder. But privacy advocates warn that the app could return false matches to police and that it could also be used by stalkers and others. They've also warned that facial recognition technologies in general could be used to conduct mass surveillance.


Original Submission

London to Deploy Live Facial Recognition to Find Wanted Faces in a Crowd 14 comments

London to deploy live facial recognition to find wanted faces in a crowd:

Officials at the Metropolitan Police Service of London announced last Friday that the organization will soon begin to use "Live Facial Recognition" (LFR) technology deployed around London to identify people of interest as they appear in surveillance video and alert officers to their location. The system, based on NEC's NeoFace Watch system, will be used to check live footage for faces on a police "watch list," a Metropolitan Police spokesperson said.

[...] In Las Vegas, a number of casinos have used facial-recognition systems for decades—not only to spot potential criminals but also to catch "undesirables" such as card counters and others who have been banned from the gaming floors. (I got a first-hand look at some of those early systems back in 2004.)

[...] private companies' own databases of images have begun to be tapped as well. Amazon's Rekognition system and other facial-recognition services that can process real-time streaming video have been used by police forces in the US as well as for commercial applications.

[...] These systems are not foolproof. They depend heavily on the quality of source data and other aspects of the video being scanned. But Ephgrave said that the Metropolitan Police is confident about the system it's deploying—and that it's balancing its deployment with privacy concerns.

[...] Areas under the surveillance of the system will be marked with signs.

Previously:
America Is Turning Against Facial-Recognition Software
ACLU Demonstrates Flaws in Facial Recognition
Amazon and US Schools Normalize Automatic Facial Recognition and Constant Surveillance
Amazon Selling Facial Recognition Systems to Police in Orlando, FL and Washington County, OR


Original Submission

Facebook Pays $550M to Settle Facial Recognition Privacy Lawsuit 5 comments

Facebook pays $550M to settle facial recognition privacy lawsuit:

Facebook will create a cash fund of $550 million for its Illinois users who filed a lawsuit over its privacy practices, law firm Edelson PC said on Wednesday. The settlement came after Facebook was sued for collecting facial recognition data to use in tagging photos, which allegedly violated the Illinois Biometric Information Privacy Act.

Tagging someone in a photo on Facebook creates a link to his or her profile, with the feature finally made opt-in by Facebook last year. Facebook's photo tag suggestions come from collecting facial recognition data from other photos.

[...] The case has been ongoing since 2015, and the settlement has yet to be approved by the judge in the case. Illinois is the only state to have biometric privacy laws that allow people to sue for damages if their rights are violated.

Facebook settled because "it was in the best interest of our community and our shareholders to move past this matter," the company said in an emailed statement.


Original Submission

Clearview AI Hit with Cease-And-Desist from Google, Facebook Over Facial Recognition Collection 33 comments

The CEO of Clearview AI, a controversial and secretive facial recognition startup, is defending his company's massive database of searchable faces, saying in an interview on CBS This Morning Wednesday that it's his First Amendment right to collect public photos. He has also compared the practice to what Google does with its search engine.

Facial recognition technology, which proponents argue helps with security and makes your devices more convenient, has drawn scrutiny from lawmakers and advocacy groups. Microsoft, IBM and Amazon, which sells its Rekognition system to law enforcement agencies in the US, have said facial recognition should be regulated by the government, and a few cities, including San Francisco, have banned its use, but there aren't yet any federal laws addressing the issue.

Here is YouTube's full statement:

"YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter. And comparisons to Google Search are inaccurate. Most websites want to be included in Google Search, and we give webmasters control over what information from their site is included in our search results, including the option to opt-out entirely. Clearview secretly collected image data of individuals without their consent, and in violation of rules explicitly forbidding them from doing so."

Facebook has also said that it's reviewing Clearview AI's practices and that it would take action if it learns the company is violating its terms of services.

"We have serious concerns with Clearview's practices, which is why we've requested information as part of our ongoing review. How they respond will determine the next steps we take," a Facebook spokesperson told CBS News on Tuesday. Facebook later said it demanded the company stop scraping photos because the activity violates its policies.

Clearview AI attracted wide attention in January after The New York Times reported how the company's app can identify people by comparing their photo to a database of more than 3 billion pictures that Clearview says it's scraped off social media and other sites. The app is used by hundreds of law enforcement agencies in the US to identify those suspected of criminal activities.

Previously:
Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says


Original Submission

Canadian Privacy Commissioners to Investigate "Creepy" Facial Recognition Firm Clearview AI 3 comments

Canadian Privacy Commissioners to Investigate Creepy Facial Recognition Firm Clearview AI:

Canadian authorities are investigating shady face recognition company Clearview AI on the grounds that its scraping of billions of photos from the web might violate privacy laws, Reuters reported on Friday.

According to Reuters, privacy commissioners from the Canadian federal government and of the provinces of British Columbia, Alberta, and Québec have all agreed to launch a joint investigation into the company's activities. In a statement, the commissioners wrote that Clearview's data scraping, along with admissions by Canadian law enforcement that they have used the service in police work, "raised questions and concerns about whether the company is collecting and using personal information without consent." Laws that they believe may have been violated include Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) and regional laws concerning the use of user data in Quebec.

The privacy commissioners say they will also be looking into alleged use of Clearview's tools in the financial sector, though they did not release additional information about what practices they are investigating.

Previously:
Clearview AI Hit with Cease-And-Desist from Google, Facebook Over Facial Recognition Collection
Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says


Original Submission

Clearview AI's Facial Recognition Tech is Being Used by US Justice Department, ICE, and the FBI 22 comments

Clearview AI's Facial Recognition Tech Is Being Used By The Justice Department, ICE, And The FBI:

When BuzzFeed News reported earlier this month that Clearview AI had used marketing materials that suggested it was pursuing a "rapid international expansion," the company was dismissive, noting that it was focused on the US and Canada.

The company's client list suggests otherwise. It shows that Clearview AI has expanded to at least 26 countries outside the US, engaging national law enforcement agencies, government bodies, and police forces in Australia, Belgium, Brazil, Canada, Denmark, Finland, France, Ireland, India, Italy, Latvia, Lithuania, Malta, the Netherlands, Norway, Portugal, Serbia, Slovenia, Spain, Sweden, Switzerland, and the United Kingdom.

The log also has an entry for Interpol, which ran more than 320 searches. Reached for comment, the worldwide policing agency confirmed that "a small number of officers" in its Crimes Against Children unit had used Clearview's facial recognition app with a 30-day free trial account. That trial has now ended and "there is no formal relationship between Interpol and Clearview," the Interpol General Secretariat said in a statement.

It's unclear how Clearview is vetting potential international clients, particularly in countries with records of human rights violations or authoritarian regimes. In an interview with PBS, Clearview CEO Hoan Ton-That said Clearview would never sell to countries "adverse to the US," including China, Iran, and North Korea. Asked by PBS if he would sell to countries where being gay is a crime, he didn't answer, stating once again that the company's focus is on the US and Canada.

Clearview, however, has already provided its software to organizations in countries that have laws against LGBTQ individuals, according to its documents. In Saudi Arabia, for example, the documents indicate that Clearview gave access to the Thakaa Center, also known as the AI Center of Advanced Studies, a Riyadh-based research center whose clients include Saudi Arabia's Ministry of Investment. Thakaa, which did not respond to a request for comment, was given access to the software earlier this month, according to the documents.

Previously:
Clearview AI Reports Entire Client List Was Stolen
Canadian Privacy Commissioners to Investigate "Creepy" Facial Recognition Firm Clearview AI
Clearview AI Hit with Cease-And-Desist from Google, Facebook Over Facial Recognition Collection
Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says


Original Submission

Vermont Sues Clearview, Alleging “Oppressive, Unscrupulous” Practices 22 comments

Vermont sues Clearview, alleging "oppressive, unscrupulous" practices:

Clearview AI's bread and butter is a tool providing facial recognition on a massive scale to law enforcement, federal agencies, private companies, and—apparently—nosy billionaires. The company has achieved this reportedly by scraping more or less the entire public Internet to assemble a database of more than 3 billion images. Now that there are spotlights on the secretive firm, however, Clearview is facing a barrage of lawsuits trying to stop it in its tracks.

The latest comes from Vermont Attorney General T.J. Donovan, who filed suit against Clearview this week claiming violations of multiple state laws.

The complaint (PDF) alleges that Clearview, which is registered as a data broker under Vermont's Data Broker Law, "unlawfully acquires data from consumers and business concerns" in Vermont.

Clearview built its massive database by gobbling up "publicly available" data from the Internet's biggest platforms—including Facebook, Google, YouTube, Twitter, LinkedIn, and others—most of whom have since issued cease-and-desist letters telling Clearview in no uncertain terms to knock it off. These images are frequently of minors, the complaint notes, and Clearview admitted in its state filing to knowingly having images of minors collected without anyone's consent. Vermont's data law prohibits "fraudulent acquisition of brokered personal information," and the state argues that Clearview's screen-scraping tactics are exactly that.

What Clearview does with its ill-gotten data is also a problem, the state argues. The Green Mountain State's first issue is from a security perspective: the company has already suffered at least one data breach, in which its client list—which it has repeatedly refused to make public—was stolen. The second issue is privacy.


Original Submission

How China Built Facial Recognition for People Wearing Masks 12 comments

Arthur T Knackerbracket has found the following story:

Hanwang, the facial-recognition company that has placed 2 million of its cameras at entrance gates across the world, started preparing for the coronavirus in early January.

Huang Lei, the company’s chief technical officer, said that even before the new virus was widely known about, he had begun to get requests from hospitals at the centre of the outbreak in Hubei province to update its software to recognise nurses wearing masks.

[...] "If three or five clients ask for the same thing ... we'll see that as important," said Mr Huang, adding that its cameras previously only recognised people in masks half the time, compared with 99.5 percent accuracy for a full face image.

[...] The company now says its masked facial recognition program has reached 95 percent accuracy in lab tests, and even claims that it is more accurate in real life, where its cameras take multiple photos of a person if the first attempt to identify them fails.

“The problem of masked facial recognition is not new, but belongs to the family of facial recognition with occlusion,” Mr Huang said, adding that his company had first encountered similar issues with people with beards in Turkey and Pakistan, as well as with northern Chinese customers wearing winter clothing that covered their ears and face.

Counter-intuitively, training facial recognition algorithms to recognize masked faces involves throwing data away. A team at the University of Bradford published a study last year showing they could train a facial recognition program to accurately recognize half-faces by deleting parts of the photos they used to train the software.
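The "throwing data away" step can be mimicked with a trivial augmentation: occlude part of each training image so the model learns to rely on the remaining half. A minimal NumPy sketch, illustrative only – the Bradford study's actual pipeline is not described here:

```python
import numpy as np

def occlude_lower_half(img: np.ndarray) -> np.ndarray:
    """Return a copy of a face image with the lower half zeroed out,
    crudely simulating a mask over the nose and mouth."""
    out = img.copy()
    out[img.shape[0] // 2:, :] = 0
    return out

# A synthetic 8x8 "face": after occlusion, the top half is intact and
# the bottom half is blank -- the kind of input used to train
# occlusion-robust recognizers.
face = np.full((8, 8), 255, dtype=np.uint8)
masked = occlude_lower_half(face)
print(masked[0].max(), masked[-1].max())  # 255 0
```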


Original Submission

Here's What Facebook's Internal Facial Recognition App Looked Like 11 comments

Facebook developed an internal facial recognition app that allowed users to scan peoples' faces and identify them. Images obtained by Motherboard now show what that app looked like.

Business Insider first reported the existence of Facebook's facial recognition app in November last year. The app, made between 2015 and 2016, was available to Facebook employees and was designed to recognize employees and their Facebook friends who had facial recognition settings enabled, Facebook told Motherboard. Facebook uses facial recognition for spotting users in photos uploaded by themselves or others.

[...] When pointed at an individual it could recognize and link an account to, the app presented a pop-up over the person's face saying "You are friends." When the app could not identify someone, it displayed the message "Unable to recognize :(," according to another screenshot obtained by Motherboard.

A Facebook spokesperson provided the same statement the company did in response to Business Insider's original piece.

"As a way to learn about new technologies, our teams regularly build apps to use internally. The app described here was only available to Facebook employees, and could only recognize employees and their friends who had face recognition enabled," the spokesperson wrote in an email.

Source: https://www.vice.com/en_us/article/k7ekmv/facebook-facial-recognition-app


Original Submission

Microsoft Supports Some Facial Recognition Software 5 comments

Microsoft Cheers, ACLU Jeers Washington State Law Restricting Use of Facial Recognition Software

Microsoft cheers, ACLU jeers law restricting use of facial-recognition technology:

A new law in Washington state restricting the use of facial-recognition technology is drawing praise from Microsoft but criticism from civil liberties advocates. The law requires state and local governments to get a warrant before using the tech in many instances and provides more public reporting of its use. In January of each year, judges who issue warrants for the use of technology must report the existence of the warrant, details about what it covers, which governmental entities requested it and the public spaces under surveillance.

Microsoft, which is headquartered in Washington state and makes facial-recognition technology, praised the law as a "significant breakthrough" in a polarized debate. Microsoft President Brad Smith said he viewed the bill's approach as both "necessary and pragmatic" to protect the public while respecting their rights.

The American Civil Liberties Union of Washington disagreed, saying the law allows the government to use racially biased facial recognition technology.

"We will continue to push for a moratorium to give historically targeted and marginalized communities, such as black and indigenous communities, an opportunity to decide not just how face-surveillance technology should be used, but if it should be used at all," said Jennifer Lee, ACLU of Washington technology and liberty project manager, in a statement.

Microsoft Pulls Out Of Facial Recognition Startup Anyvision

Arthur T Knackerbracket has found the following story:

A Microsoft-funded investigation led by former US Attorney General Eric Holder determined that AnyVision's technology doesn't power mass surveillance in the West Bank. Nevertheless, Microsoft said it's divested itself of the AnyVision holding and won't be a minority stakeholder in any other facial recognition firms because it can't adequately oversee the companies that way. 

[...] Last year, Microsoft hired Holder to investigate whether AnyVision violated Microsoft's ethics. An October report by NBC News said facial recognition technology created by AnyVision had been used in a secret military effort to conduct surveillance of Palestinians in the West Bank; AnyVision rejected the report's claim. 


Original Submission #1 | Original Submission #2

Some Shirts Hide You from Cameras 23 comments

Some shirts hide you from cameras:

Right now, you're more than likely spending the vast majority of your time at home. Someday, however, we will all be able to leave the house once again and emerge, blinking, into society to work, travel, eat, play, and congregate in all of humanity's many bustling crowds.

The world, when we eventually enter it again, is waiting for us with millions of digital eyes—cameras, everywhere, owned by governments and private entities alike. Pretty much every state out there has some entity collecting license plate data from millions of cars—parked or on the road—every day. Meanwhile all kinds of cameras—from police to airlines, retailers, and your neighbors' doorbells—are watching you every time you step outside, and unscrupulous parties are offering facial recognition services with any footage they get their hands on.

In short, it's not great out there if you're a person who cares about privacy, and it's likely to keep getting worse. In the long run, pressure on state and federal regulators to enact and enforce laws that can limit the collection and use of such data is likely to be the most efficient way to effect change. But in the shorter term, individuals have a conundrum before them: can you go out and exist in the world without being seen?

[Ed Note - This is a two-page article.]


Original Submission

How Well Can Algorithms Recognize Your Masked Face? 2 comments

How well can algorithms recognize your masked face?:

Facial-recognition experts say that algorithms are generally less accurate when a face is obscured, whether by an obstacle, a camera angle, or a mask, because there's less information available to make comparisons. "When you have fewer than 100,000 people in the database, you will not feel the difference," says Alexander Khanin, CEO and cofounder of VisionLabs, a startup based in Amsterdam. With 1 million people, he says, accuracy will be noticeably reduced and the system may need adjustment, depending on how it's being used.
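The scale effect Khanin describes has a simple back-of-the-envelope explanation: even a tiny per-comparison false-match rate compounds across every identity in the gallery. A rough sketch, assuming independent comparisons (which real systems only approximate):

```python
def p_any_false_match(per_pair_fmr: float, gallery_size: int) -> float:
    """Probability of at least one false match when one probe face is
    compared against every face in a gallery, assuming independence."""
    return 1.0 - (1.0 - per_pair_fmr) ** gallery_size

# With a one-in-a-million per-pair false-match rate:
print(round(p_any_false_match(1e-6, 100_000), 3))    # 0.095
print(round(p_any_false_match(1e-6, 1_000_000), 3))  # 0.632
```

This is why a system that looks accurate against a 100,000-face database can behave very differently at city scale.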

[...] "We can identify a person wearing a balaclava, or a medical mask and a hat covering the forehead," says Artem Kuharenko, founder of NtechLab, a Russian company whose technology is deployed on 150,000 cameras in Moscow. He says that the company has experience with face masks through contracts in southeast Asia, where masks are worn to curb colds and flu. US Customs and Border Protection, which uses facial recognition on travelers boarding international flights at US airports, says its technology can identify masked faces.

But Anil Jain, a professor at Michigan State University who works on facial recognition and biometrics, says such claims can't be easily verified. "Companies can quote internal numbers, but we don't have a trusted database or evaluation to check that yet," he says. "There's no third-party validation."

A US government lab at the National Institute of Standards and Technology that functions as the world's arbiter on the accuracy of facial-recognition algorithms hopes to provide that external validation—but is being held up by the same pandemic that prompted the project.

Clearview AI to Stop Selling Controversial Facial Recognition App to Private Companies 9 comments

Clearview AI to stop selling controversial facial recognition app to private companies:

Controversial facial recognition provider Clearview AI says it will no longer sell its app to private companies and non-law enforcement entities, according to a legal filing first reported on Thursday by BuzzFeed News. It will also be terminating all contracts, regardless of whether the contracts are for law enforcement purposes or not, in the state of Illinois.

The document, filed in Illinois court as part of a lawsuit over the company's potential violations of a state privacy law, lays out Clearview's decision as a voluntary action, and the company will now "avoid transacting with non-governmental customers anywhere." Earlier this year, BuzzFeed reported on a leaked client list that indicates Clearview's technology has been used by thousands of organizations, including companies like Bank of America, Macy's, and Walmart.

"Clearview is cancelling the accounts of every customer who was not either associated with law enforcement or some other federal, state, or local government department, office, or agency," Clearview's filing reads. "Clearview is also cancelling all accounts belonging to any entity based in Illinois." Clearview argues that it should not face an injunction, which would prohibit it from using current or past Illinois residents' biometric data, because it's taking these steps to comply with the state's privacy law.

Previously:
(2020-04-20) Security Lapse Exposed Clearview AI Source Code
(2020-04-18) Some Shirts Hide You from Cameras
(2020-03-13) Vermont Sues Clearview, Alleging "Oppressive, Unscrupulous" Practices
(2020-02-28) Clearview AI's Facial Recognition Tech is Being Used by US Justice Department, ICE, and the FBI
(2020-02-26) Clearview AI Reports Entire Client List Was Stolen
(2020-02-24) Canadian Privacy Commissioners to Investigate "Creepy" Facial Recognition Firm Clearview AI
(2020-02-06) Clearview AI Hit with Cease-And-Desist from Google, Facebook Over Facial Recognition Collection
(2020-01-22) Clearview App Lets Strangers Find Your Name, Info with Snap of a Photo, Report Says


Original Submission

IBM Will No Longer Offer, Develop, or Research Facial Recognition Technology 44 comments

IBM will no longer offer, develop, or research facial recognition technology:

IBM will no longer offer general purpose facial recognition or analysis software, IBM CEO Arvind Krishna said in a letter to Congress today. The company will also no longer develop or research the technology, IBM tells The Verge. Krishna addressed the letter to Sens. Cory Booker (D-NJ) and Kamala Harris (D-CA) and Reps. Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY).

"IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency," Krishna said in the letter. "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."


Original Submission

Senator Fears Clearview AI Facial Recognition Use on Protesters 37 comments

Senator fears Clearview AI facial recognition could be used on protesters:

Sen. Edward Markey has raised concerns that police and law enforcement agencies have access to controversial facial recognition app Clearview AI in cities where people are protesting the killing of George Floyd, an unarmed black man who died two weeks ago while in the custody of Minneapolis police.

[...] "As demonstrators across the country exercise their First Amendment rights by protesting racial injustice, it is important that law enforcement does not use technological tools to stifle free speech or endanger the public," Markey said in a letter to Clearview AI CEO and co-founder Hoan Ton-That.

The threat of surveillance could also deter people from "speaking out against injustice for fear of being permanently included in law enforcement databases," he said.

Markey, who has previously hammered Clearview AI over its sales to foreign governments, use by domestic law enforcement and use in the COVID-19 pandemic, is now asking the company for a list of law enforcement agencies that have signed new contracts since May 25, 2020.

The company is also being asked whether search traffic on its database has increased during the past two weeks; whether it considers a law enforcement agency's "history of unlawful or discriminatory policing practices" before selling the technology to them; what process it takes to give away free trials; and whether it will prohibit its technology from being used to identify peaceful protestors.

[...] Ton-That said he will respond to the letter from Markey. "Clearview AI's technology is intended only for after-the-crime investigations, and not as a surveillance tool relating to protests or under any other circumstances," he said in an emailed statement.


Amazon Bans Police From Using its Facial Recognition Software for One Year 24 comments

Amazon announces one-year ban on police use of facial recognition tech

Amazon is instituting a one-year moratorium on police use of Rekognition, its facial recognition software, the company announced on Wednesday.

"We've advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology," Amazon wrote in its blog post announcing the change. "Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules."

Amazon says that groups like the International Center for Missing and Exploited Children will continue to have access to the technology.

Sounds like a job for Palantir.

Also at CNBC, Ars Technica, and (older) Amazon Says The Face Recognition Tech It Sells to Cops Can Now Detect 'Fear'.

Previously:
(2019-12-14) Palantir Wins New Pentagon Deal With $111 Million From the Army

IBM Will No Longer Offer, Develop, or Research Facial Recognition Technology


Original Submission

Nationwide Facial Recognition Ban Proposed by Lawmakers 26 comments

Nationwide Facial Recognition Ban Proposed By Lawmakers:

Lawmakers have proposed legislation that would indefinitely ban the use of facial recognition technology by law enforcement nationwide. The new bill comes after months of public concerns surrounding facial recognition's implications for data privacy, government surveillance and racial bias.

The Facial Recognition and Biometric Technology Moratorium Act was proposed Thursday by Sens. Ed Markey (D-MA) and Jeff Merkley (D-OR), and Reps. Pramila Jayapal (D-WA) and Ayanna Pressley (D-MA). While various cities have banned government use of the technology (with Boston this week becoming the tenth U.S. city to do so), the bill would be the first temporary ban on facial recognition technology ever enacted nationwide.

The newly proposed bill would "prohibit biometric surveillance by the Federal Government without explicit statutory authorization and to withhold certain Federal public safety grants from State and local governments that engage in biometric surveillance."

[...] The ban has no definitive time limit in place, and would continue until Congress passed a law to lift it.

[...] "Facial recognition technology doesn't just pose a grave threat to our privacy, it physically endangers Black Americans and other minority populations in our country," said Senator Markey in a statement. "In this moment, the only responsible thing to do is to prohibit government and law enforcement from using these surveillance mechanisms."

I see nothing blocking companies from using recognition -- facial or otherwise -- whose data government agencies could then request or subpoena.


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Funny) by Anonymous Coward on Tuesday June 30 2020, @08:24AM (5 children)

    by Anonymous Coward on Tuesday June 30 2020, @08:24AM (#1014429)

    Identified Politicians as criminals? What do you call a true positive, as opposed to a false positive? Seems it works just fine. Trumpy McTrumpface: RICO violations, multiple extortion attempts, pedophilia and securities fraud, and just, lying. Always the lying.

    • (Score: 3, Touché) by MostCynical on Tuesday June 30 2020, @10:25AM (4 children)

      by MostCynical (2589) on Tuesday June 30 2020, @10:25AM (#1014440) Journal

      no, they mean convicted criminals..

      --
      "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
      • (Score: 2) by JoeMerchant on Tuesday June 30 2020, @12:05PM (3 children)

        by JoeMerchant (3937) on Tuesday June 30 2020, @12:05PM (#1014454)

        they mean convicted criminals

        Lots of politicians manage to avoid the conviction, but some go away for fraud, racketeering, etc. and come back to win more elections.

        --
        🌻🌻 [google.com]
        • (Score: 3, Insightful) by Runaway1956 on Tuesday June 30 2020, @01:28PM (2 children)

          by Runaway1956 (2926) Subscriber Badge on Tuesday June 30 2020, @01:28PM (#1014486) Journal

          Are you from Louisiana? Ohhh, I know that Chicago and other places have fraud really bad, but Louisiana and the Edwards clan probably rank among the most corrupt SOBs in the country. And, they continue to be elected, again and again. Ted Kennedy probably took lessons from the Edwards clan.

          --
          Do political debates really matter? Ask Joe!
          • (Score: 2) by JoeMerchant on Tuesday June 30 2020, @03:26PM (1 child)

            by JoeMerchant (3937) on Tuesday June 30 2020, @03:26PM (#1014526)

            First time I personally saw a politician get elected after getting out of jail was in Hialeah, Florida - but, yeah, it's everywhere.

            --
            🌻🌻 [google.com]
            • (Score: 0) by Anonymous Coward on Tuesday June 30 2020, @11:13PM

              by Anonymous Coward on Tuesday June 30 2020, @11:13PM (#1014762)

              Corruption is a mainstay of every government and will be with us always. I came to understand why this is so after reading "The Dictator's Handbook". Depressing.

  • (Score: 2, Insightful) by Anonymous Coward on Tuesday June 30 2020, @08:29AM

    by Anonymous Coward on Tuesday June 30 2020, @08:29AM (#1014430)

    I think you'll find that the AI was quite correct when classifying them, but decided to take the piss when identifying them.

    As it's Amazon, and one supposes that the AI has access to other 'cloudy resources', has anyone looked into the criminal records, Amazon purchases, browsing histories etc. of those poor unfortunate criminals cruelly misidentified as politicians to see if there are other common factors the AI might be using as 'other' identifiers?
    (I.e. does it know something about the politicians that they have in common with their AI-matched criminal alter-egos that we don't?)

  • (Score: 3, Touché) by Bot on Tuesday June 30 2020, @08:51AM (1 child)

    by Bot (3902) on Tuesday June 30 2020, @08:51AM (#1014431) Journal

    Watching most politicians' faces, eyes especially, I usually classify them in the ranks of "wouldn't give them $1 to park the car".

    OTOH what's their function? placeholders, marionettes.

    --
    Account abandoned.
    • (Score: 1, Funny) by Anonymous Coward on Tuesday June 30 2020, @02:56PM

      by Anonymous Coward on Tuesday June 30 2020, @02:56PM (#1014510)

      Yeah, my initial thought was "wow, they mistakenly identified 32 as not being criminals".

  • (Score: 2) by Booga1 on Tuesday June 30 2020, @09:36AM

    by Booga1 (6333) on Tuesday June 30 2020, @09:36AM (#1014436)

    The fortune on this article was "how bout a policy policing policy with a policy for changing the police policing policy"

    My brain is tired now, so I'll just let this post do the thinking for me.

  • (Score: 1, Informative) by Anonymous Coward on Tuesday June 30 2020, @11:01AM (1 child)

    by Anonymous Coward on Tuesday June 30 2020, @11:01AM (#1014442)

    “Voters … often support candidates with criminal reputations, not in spite of their criminal bona fides, but because of them.” –Milan Vaishnav

    https://knowledge.wharton.upenn.edu/article/does-democracy-encourage-criminal-politicians/ [upenn.edu]

    https://en.wikipedia.org/wiki/List_of_American_federal_politicians_convicted_of_crimes [wikipedia.org]

    IIRC, there are studies showing that politicians in general have a higher chance of having a criminal background than the general public. I tried searching but couldn't find the studies right now.

    "Currently, politicians are not required to partake in background screening as we know it."

    https://www.sterlingcheck.com/blog/2017/01/background-checks-in-politics/ [sterlingcheck.com]

    • (Score: 3, Interesting) by FatPhil on Tuesday June 30 2020, @11:29AM

      Indeed that correlation is not surprising, personality traits that favour cheating and abuse of power are ones that help advancement in the field. Similar with big businesses - being a CEO is strongly correlated with having psychopathic tendencies. But that's not the fault of democracy or the capitalist system, it's a side effect of freedom of the system.

      For example, any democracy that cannot vote a murderous dictator into power is not a true democracy.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 0, Funny) by Anonymous Coward on Tuesday June 30 2020, @11:09AM (1 child)

    by Anonymous Coward on Tuesday June 30 2020, @11:09AM (#1014445)

    AI determined that if you arrest a BIPOC, you're more likely than not to be right. The only way to fix this is to put everyone into the database, so privileged white males need to commit more crimes in order to make the database more representative of the population as a whole.

    • (Score: 1, Insightful) by Anonymous Coward on Tuesday June 30 2020, @11:40AM

      by Anonymous Coward on Tuesday June 30 2020, @11:40AM (#1014450)

      > AI determined...

      You lost me at "AI", since all this latest generation does is fancy pattern matching.

      Seems highly unlikely that any AI has determined anything yet. Maybe in some distant future?

  • (Score: 4, Interesting) by Anonymous Coward on Tuesday June 30 2020, @12:02PM (7 children)

    by Anonymous Coward on Tuesday June 30 2020, @12:02PM (#1014453)

    Let's refer to the actual study:
    https://www.comparitech.com/blog/vpn-privacy/facial-recognition-study/ [comparitech.com]

    The software doesn't output a simple yes or no answer. It outputs a confidence level. The errors with the politicians happened with the confidence level set to only 80%. With 530 American politicians tested, you'd expect up to 106 errors. Instead the software produced 32 errors. With the confidence level set to 95%, it produced no errors, even though, by chance, it should have produced up to 26. So the software is actually quite conservative with its matching and is significantly outperforming its claims. With UK politicians, it performed even better. However, for some reason, the study decided to consider multiple false matches of one person as a single error, instead of once for every photo it incorrectly matched against. This is a very strange decision which makes the system seem more accurate than it is, but the effect is probably small.
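The arithmetic in the comment above can be sketched in a few lines. This is a hypothetical illustration of the commenter's reasoning, not anything from the study itself: at a given confidence threshold, a matcher that merely meets its advertised confidence could be wrong on up to the complementary fraction of its matches.

```python
# Sketch of the false-positive arithmetic: at confidence threshold c%,
# up to (100 - c)% of matches could be wrong while the software still
# performs exactly as advertised. Integer math avoids float rounding.
def max_expected_errors(num_subjects: int, confidence_pct: int) -> int:
    """Upper bound on false matches implied by the confidence threshold."""
    return num_subjects * (100 - confidence_pct) // 100

print(max_expected_errors(530, 80))  # up to 106 allowed; Rekognition made 32
print(max_expected_errors(530, 95))  # up to 26 allowed; Rekognition made 0
```

On these numbers, the software sits comfortably inside its own error budget at both thresholds, which is the commenter's point about it being "quite conservative."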

    Now onto the racial bias. It's true that the software misidentified nonwhite politicians at a higher rate than white ones. But does that mean that the software is racially biased? Well, maybe. Let's assume that skin color is a really major factor in identification, such that the software will almost never misidentify someone as someone of another race. Now the problem here is that the study isn't designed to be an honest assessment of the accuracy of the system, but rather is trying to prove that it can misidentify anyone as a criminal (even though Amazon recommends a 99% confidence threshold for this use, where the software made zero errors). But the software is effectively comparing white people against white mugshots, and black people against black mugshots. As everyone knows, the rate of arrests is not equal among races. Therefore, we know that although the nonwhite politicians were misidentified at a higher rate, they also had a disproportionately large number of opportunities to be misidentified, which is not accounted for in the results. This is doubly true when you consider that nonwhites are not only overrepresented in arrests, but underrepresented in politics.
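The base-rate argument in the comment above can be made concrete with a toy calculation. All numbers and names here are invented for illustration; the study does not report the racial composition of its mugshot database:

```python
# Hypothetical illustration: suppose false matches occur almost entirely
# within the same race. Then the expected number of false matches per
# probe photo grows with the count of same-race mugshots in the database,
# even if the per-photo accuracy is identical for every race.
def expected_false_matches(same_race_mugshots: int, per_photo_fp_rate: float) -> float:
    """Expected false matches for one probe against a mugshot database."""
    return same_race_mugshots * per_photo_fp_rate

# Invented database skewed by arrest-rate disparities (not real figures):
white_expected = expected_false_matches(6_000, 1e-4)
nonwhite_expected = expected_false_matches(9_000, 1e-4)
assert nonwhite_expected > white_expected  # more exposure, more false hits
```

Under this toy model, a higher misidentification rate for one group can arise purely from that group's overrepresentation in the database, which is exactly the confound the commenter says the study fails to control for.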

    A fair assessment would compare all the pictures being tested against a database that has a similar distribution with respect to the attribute being measured, and it should also be tested where the test groups are broken down into the groups you want to check, and tested against a database containing only those kinds of people. It doesn't have to be race. It could be, for example, people wearing hats, or with beards, or whatever.

    Because the study does not break down the racial composition of the mugshots it uses, there's no way to know, other than going to the source and taking a sample, what the source material actually included. I don't have time to do this right now.

    Furthermore, the entire notion of "racially biased" implicitly assumes that the use of the software is for some racially sensitive purpose, as opposed to, say, monitoring which employees spend too much time on lunch break, or identifying frequent repeat customers to a business, or whatever other completely race-irrelevant task it could also be used for.

    • (Score: 2) by Aegis on Tuesday June 30 2020, @02:32PM

      by Aegis (6714) on Tuesday June 30 2020, @02:32PM (#1014502)

      Furthermore, the entire notion of "racially biased" implicitly assumes that the use of the software is for some racially sensitive purpose, as opposed to, say, monitoring which employees spend too much time on lunch break

      Being wrongly labeled lazy and then fired is exactly the type of bias people are worried about...

    • (Score: 4, Interesting) by sjames on Tuesday June 30 2020, @06:35PM (5 children)

      by sjames (2882) on Tuesday June 30 2020, @06:35PM (#1014631) Journal

      However, we know that in practice, police are setting the confidence lower and then blindly accepting the "matches" as fact. Also they are using poor quality input photos and will ignore a warning that the confidence level is insufficient to support probable cause. In short, if the software CAN be mis-used, not only will it be, it already has been. It is perfectly fair to test the software as it is actually used in the wild.

      Also, given your explanation, that still means that if you are black, through no fault of your own, you are more likely to have the police show up to arrest you out of the blue for a crime you had no involvement in. Attributing that to the input dataset is not much of a consolation.

      • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @01:43AM (3 children)

        by Anonymous Coward on Wednesday July 01 2020, @01:43AM (#1014832)

        Attributing that to the input dataset is not much of a consolation.

        True enough, but it's misleading to say "the algorithm is biased" if actually the problem is "the database is biased" or even "the algorithm didn't solve racism in policing but still did better than humans." It's certainly possible, at least in principle, to build a software system that's not biased. It's probably not possible to find humans with absolutely no bias. This is why I think it's a mistake to ban facial recognition. It has the potential to significantly reduce racism in policing while also helping catch more criminals.

        The study I linked contains a digression on the software used by police departments, but I ignored it because it wasn't the software being tested and it wasn't the topic of this article either. It seems that the Amazon software performs quite a bit better than the software used by the police. Whether that's because the police misuse the software, or their software just isn't as good, or what, I have no way of knowing.

        • (Score: 2) by sjames on Wednesday July 01 2020, @08:27AM

          by sjames (2882) on Wednesday July 01 2020, @08:27AM (#1014913) Journal

          But DOES it do better than humans? Does it still do better than humans as deployed and used in the field?

          In the situation in Detroit, the incorrect match made by the software was immediately apparent to any human that bothered to look (spoiler, the police didn't until the wrongly arrested man held the crime photo up to his face).

        • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @11:40AM (1 child)

          by Anonymous Coward on Wednesday July 01 2020, @11:40AM (#1014951)

          History has shown us, this won't end well. Maybe you've never been in legal trouble, but when you get flagged as a shoplifter or something, I hope they listen to you, and don't just treat you as another lying criminal who was obviously bad or your face wouldn't have been flagged.
          Whenever you think about how people will use a technology, imagine the stupidest people you can. Your imagination is probably inadequate to imagine what idiocy they will actually practice.

          • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @05:06PM

            by Anonymous Coward on Wednesday July 01 2020, @05:06PM (#1015068)

            When fingerprinting became available, it helped catch more guilty criminals while exonerating the innocent. When DNA evidence became available, the same thing happened. This is true even though neither of those technologies was completely immune to problems or abuses. Now facial recognition is becoming available and... people think it's different this time, for some reason. It's almost never "different this time."

      • (Score: 0) by Anonymous Coward on Wednesday July 01 2020, @12:29PM

        by Anonymous Coward on Wednesday July 01 2020, @12:29PM (#1014972)
        OK maybe the software is a bit more racially biased than the guns the cops are using to kill black people.

        Seriously speaking, I think the "racially biased facial recognition" thing is more a lighting and contrast issue.
  • (Score: 0) by Anonymous Coward on Tuesday June 30 2020, @02:56PM

    by Anonymous Coward on Tuesday June 30 2020, @02:56PM (#1014511)

    Even garbage AI can see thru "politicians".

  • (Score: 2) by bzipitidoo on Tuesday June 30 2020, @06:27PM

    by bzipitidoo (4388) on Tuesday June 30 2020, @06:27PM (#1014624) Journal

    For decades now, law enforcement has been wanting facial recognition. They seem to think it's not that hard. And yes, when you have only a few hundred faces, it's not. But they want to throw their databases of millions of poor-quality mugshots at the system. They think nothing of the scaling problems.

    But why do they want this so much? It's sheer laziness, a trait to which, unfortunately, too much of law enforcement is prone. No need for patience and detective work.

  • (Score: 0) by Anonymous Coward on Tuesday June 30 2020, @07:20PM

    by Anonymous Coward on Tuesday June 30 2020, @07:20PM (#1014653)

    Matches that were 80% confident were wrong? Well, no shit. It's right there, unambiguous, plain as day, staring you in the face "this is probably wrong, 1:5 chance totally different bloke."

    "So it would seem that the technology hasn’t really improved all that much."

    That is one of the most insanely idiotic things I have ever read. At no point in their rambling, incoherent analysis were they even close to anything that could be considered a rational thought. Everyone in here is now dumber for having read it. May God have mercy on their souls.

  • (Score: 2) by mendax on Wednesday July 01 2020, @03:21AM

    by mendax (2840) on Wednesday July 01 2020, @03:21AM (#1014865)

    AWS Facial Recognition Platform Misidentified Over 100 Politicians as Criminals

    Well they got that right!!! I wonder if Trump's face was among the test set.

    --
    It's really quite a simple choice: Life, Death, or Los Angeles.