San Francisco Bans Facial Recognition Technology – The New York Times

Attendees interacting with a facial recognition demonstration at this year’s CES in Las Vegas. Credit: Joe Buglewicz for The New York Times

SAN FRANCISCO — The San Francisco Board of Supervisors on Tuesday enacted the first ban by a major city on the use of facial recognition technology by police and all other municipal agencies.

The vote was 8 to 1 in favor, with two members who support the bill absent. There will be an obligatory second vote next week, but it is seen as a formality.

Police forces across America have begun turning to facial recognition to search for both small-time criminal suspects and perpetrators of mass carnage: authorities used the technology to help identify the gunman in the mass killing at an Annapolis, Md., newspaper in June. But civil liberties groups have expressed unease about the technology’s potential abuse by government, fearing it could push the United States toward an overly oppressive surveillance state.

Aaron Peskin, the city supervisor who sponsored the bill, said that it sent a particularly strong message to the nation, coming from a city transformed by tech.

“I think part of San Francisco being the real and perceived headquarters for all things tech also comes with a responsibility for its local legislators,” said Mr. Peskin, who represents neighborhoods on the northeast side of the city. “We have an outsize responsibility to regulate the excesses of technology precisely because they are headquartered here.”

Similar bans are under consideration in Oakland and in Somerville, Mass., outside Boston. In Massachusetts, a bill in the state legislature would put a moratorium on facial recognition and other remote biometric surveillance systems. On Capitol Hill, a bill introduced last month would bar users of commercial facial recognition technology from collecting and sharing data to identify or track consumers without their consent, although it does not address the government’s uses of the technology.

Matt Cagle, an attorney with the ACLU of Northern California, summed up the broad concerns of critics Tuesday: Facial recognition technology, he said, “provides government with unprecedented power to track people going about their daily lives. That’s incompatible with a healthy democracy.”

The San Francisco proposal, he added, “is really forward-looking and looks to prevent the unleashing of this dangerous technology against the public.”

A security camera in San Francisco. Credit: Eric Risberg/Associated Press

In one form or another, facial recognition is already being used in many U.S. airports and big stadiums, and by a number of other police departments. The pop star Taylor Swift has reportedly incorporated the technology at one of her shows, using it to help identify stalkers.

The issue has been particularly charged in San Francisco, a city with a rich history of incubating dissent and individual liberties, but one that has also suffered lately from high rates of property crime. A local group called Stop Crime SF asked supervisors to exempt local prosecutors, police officers and sheriff’s deputies performing investigative duties from the ordinance, and to carve out an exemption for the airport.

The group had been encouraging residents to send a form letter to supervisors. It argued that the ordinance “could have unintended consequences that make us less safe by severely curtailing the use of effective traditional video surveillance by burying agencies like the police department in a bureaucratic approval process.”

The facial recognition fight in San Francisco is largely theoretical: the police department does not currently deploy facial recognition technology. It is in use only at the city’s airport and ports, which are under federal jurisdiction and are not affected by the legislation.

Some local homeless shelters use biometric finger scans and photos to track shelter usage, said Jennifer Friedenbach, the executive director of the Coalition on Homelessness. The practice has driven undocumented residents away from the shelters, she added.

Mr. Cagle and other experts said that it was difficult to know exactly how widespread the technology was in the U.S. “Basically governments and companies have been very secretive about where it’s being used, so the public is largely in the dark about the state of play,” he said.

But Dave Maass, senior investigative researcher at the Electronic Frontier Foundation, offered a partial list of police departments that he said used the technology, including Las Vegas, Orlando, San Jose, San Diego, New York City, Boston, Detroit and Durham, N.C.

Other users, Mr. Maass said, include the Colorado Department of Public Safety, the Pinellas County Sheriff’s Office, the California Department of Justice and the Virginia State Police.

U.S. Customs and Border Protection is now using facial recognition in many U.S. airports and seaports of entry. At airports, international travelers stand before cameras, then have their pictures matched against photos provided in their passport applications. The agency says the process complies with privacy laws, but it has still come in for criticism from the Electronic Privacy Information Center, which argues that the government, though promising travelers that they may opt out, has made it increasingly difficult to do so.

But there is a broader concern. “When you have the ability to track people in physical space, in effect everybody becomes subject to the surveillance of the government,” said Marc Rotenberg, the group’s executive director.

San Francisco may ban use of facial recognition by police and city – Los Angeles Times

San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies, reflecting a growing backlash against a technology that’s creeping into airports, motor vehicle departments, stores, stadiums and home security cameras.

Government agencies across the U.S. have used the technology for more than a decade to scan databases for suspects and prevent identity fraud. But recent advances in artificial intelligence have created more sophisticated computer vision tools, making it easier for police to pinpoint a missing child or protester in a moving crowd or for retailers to analyze a shopper’s facial expressions as they peruse store shelves.

Efforts to restrict its use are getting pushback from law enforcement groups and the tech industry, though it’s far from a united front. Microsoft, while opposed to an outright ban, has urged lawmakers to set limits on the technology, warning that leaving it unchecked could enable an oppressive dystopia reminiscent of George Orwell’s novel “1984.”

“Face recognition is one of those technologies that people get how creepy it is,” said Alvaro Bedoya, who directs Georgetown University’s Center on Privacy and Technology. “It’s not like cookies on a browser. There’s something about this technology that really sets the hairs on the back of people’s heads up.”

Without regulations barring law enforcement from accessing driver’s license databases, people who have never been arrested could be part of virtual police line-ups without their knowledge, skeptics of the technology say.

They worry people will one day not be able to go to a park, store or school without being identified and tracked.

Already, a handful of big-box stores across the U.S. are trying out cameras with facial recognition that can guess their customers’ age, gender or mood as they walk by, with the goal of showing them targeted, real-time ads on in-store video screens.

If San Francisco adopts a ban, other cities, states or even Congress could follow, with lawmakers from both parties looking to curtail government surveillance and others hoping to restrict how businesses analyze the faces, emotions and gaits of an unsuspecting public.

The California Legislature is considering a proposal prohibiting the use of facial ID technology on body cameras. A bipartisan bill in the U.S. Senate would exempt police applications but set limits on businesses analyzing people’s faces without their consent.

Legislation similar to San Francisco’s is pending in Oakland, and on Thursday another proposed ban was introduced in Somerville, Mass.

Bedoya said a ban in San Francisco, the “most technologically advanced city in our country,” would send a warning to other police departments thinking of trying out the imperfect technology. But Daniel Castro, vice president of the industry-backed Information Technology and Innovation Foundation, said the ordinance is too extreme to serve as a model.

“It might find success in San Francisco, but I will be surprised if it finds success in a lot of other cities,” he said.

San Francisco is home to tech innovators such as Uber, Airbnb and Twitter, but the city’s relationship with the industry is testy. Some supervisors in City Hall are calling for a tax on stock-based compensation in response to a wave of San Francisco companies going public, including Lyft and Pinterest.

At the same time, San Francisco is big on protecting immigrants, civil liberties and privacy. In November, nearly 60% of voters approved a proposition to strengthen data privacy guidelines.

The city’s proposed face-recognition ban is part of broader legislation aimed at regulating the use of surveillance by city departments. The legislation applies only to San Francisco government and would not affect companies or people who want to use the technology. It also would not affect the use of facial recognition at San Francisco International Airport, where security is mostly overseen by federal agencies.

The Board of Supervisors is scheduled to vote on the bill Tuesday.

San Francisco police say they stopped testing face recognition in 2017. Spokesman David Stevenson said in a statement that the department looks forward to “developing legislation that addresses the privacy concerns of technology while balancing the public safety concerns of our growing, international city.”

Supervisor Aaron Peskin acknowledges his legislation, called the “Stop Secret Surveillance Ordinance,” isn’t very tech-friendly. But public oversight is crucial given the potential for abuse, he said.

The technology often misfires. Studies have shown error rates in facial-analysis systems built by Amazon, IBM and Microsoft were far higher for darker-skinned women than lighter-skinned men.

Even if facial recognition were perfectly accurate, its use would pose a severe threat to civil rights, especially in a city with a rich history of protest and expression, said Matt Cagle, an attorney at the ACLU of Northern California.

“If facial recognition were added to body cameras or public-facing surveillance feeds, it would threaten the ability of people to go to a protest or hang out in Dolores Park without having their identity tracked by the city,” he said, referring to a popular park in San Francisco’s Mission District.

Local critics of San Francisco’s legislation, however, worry about hampering police investigations in a city with a high number of vehicle break-ins and several high-profile annual parades. They want to make sure police can keep using merchants and residents’ video surveillance in investigations without bureaucratic hassles.

Joel Engardio, vice president of the grass-roots group Stop Crime SF, wants the city to be flexible.

“Our point of view is, rather than a blanket ban forever, why not a moratorium so we’re not using problematic technology, but we open the door for when technology improves?” he said.

Such a moratorium is under consideration in the Massachusetts Legislature, where it has the backing of Republican and Democratic senators.

Often, a government’s facial-recognition efforts happen in secret or go unnoticed. In Massachusetts, the motor vehicle registry has used the technology since 2006 to prevent driver’s license fraud, and some police agencies have used it as a tool for detectives.

“It is technology we use,” said Massachusetts State Police Lt. Tom Ryan, adding that “we tend not to get too involved in publicizing” that fact. Ryan and the agency declined to answer further questions about how it’s used.

Massachusetts Sen. Cynthia Creem, a Democrat and sponsor of the moratorium bill, said she worries about a lack of standards protecting the public from inaccurate or biased facial-recognition technology. Until better guidelines exist, she said, “it shouldn’t be used” by government.

The California Highway Patrol does not use face-recognition technology, spokeswoman Fran Clader said.

California Department of Motor Vehicles spokesman Marty Greenstein said facial-recognition technology “is specifically not allowed on DMV photos.” State Justice Department spokeswoman Jennifer Molina said her agency does not use face ID technology, and policy states that “DOJ and requesters shall not maintain DMV images for the purpose of creating a database” unless authorized.

Legislators also sought a face-recognition moratorium this year in Washington, the home state of Microsoft and Amazon, but it was gutted following industry and police opposition. Microsoft instead backed a lighter-touch proposal as part of a broader data privacy bill, but deliberations stalled before lawmakers adjourned late last month.
