
Baltimore May Soon Ban Face Recognition for Everyone but Cops

After years of failed attempts to curb surveillance technologies, Baltimore is close to enacting one of the nation’s most stringent bans on facial recognition. But Baltimore’s proposed ban would be very different from laws in San Francisco or Portland, Oregon: It would last for only one year, police would be exempt, and certain private uses of the tech would become illegal.

City councilmember Kristerfer Burnett, who introduced the proposed ban, says it was shaped by the nuances of Baltimore, though critics complain it could unfairly penalize, or even jail, private citizens who use the tech.

Last year, Burnett introduced a version of the bill that would have banned city use of facial recognition permanently. When that failed, he instead introduced this version, with a built-in one-year “sunset” clause that requires council approval for any extension. In early June, the city council voted in its favor 12-2; it now awaits signature from Mayor Brandon Scott.

“It was important to begin to have this conversation now over the next year to basically hash out what a regulatory framework could look like,” Burnett says.

The proposed law would establish a task force to produce regular reports on newly acquired surveillance tools, describing both their cost and effectiveness. Cities like New York and Pittsburgh have created similar task forces, but those bodies have been derided as a “waste” because members lack resources and enforcement power.

Burnett says the reports are crucial, because a year from now, Baltimore’s political landscape could look very different.

Since 1860, the Baltimore Police Department has been largely controlled by the state, not the city. The city council and mayor appoint the police commissioner and set the department’s budget, but the city council has no authority to ban police use of facial recognition.

However, Baltimore residents will have the opportunity to vote on returning the police department to city control as early as next year. Mayor Scott himself supported this change during his time as a city councilman. The local-control measure could appear on ballots as the one-year ban is expiring, when Burnett and other privacy advocates would have the benefit of a year’s study on the effects of a ban.

The conversation around returning the police to city control reignited following the death of Freddie Gray in 2015 while in police custody. Then-Mayor Catherine Pugh established a task force to offer suggestions around police reform; in 2018, the task force released a report warning that “BPD will never be fully accountable to its residents until full control of the department is returned to the city.”

Adding to the push to restore local control were revelations that police used social media monitoring software and facial recognition to surveil protesters after Gray’s death. Burnett says the city needs to consider the proper uses of surveillance tools “before we get to a space where [surveillance] is so pervasive that it becomes very much more difficult to unravel.” In contrast, he says, government is usually “much more reactive.”

Critics say the proposed ban is an example of overreach. The police department and the city’s Fraternal Order of Police oppose the measure. A police spokesperson referred WIRED to the department’s letter to the city council, in which it wrote that “rather than a prohibition against the acquisition of any new facial recognition technology, it would be more prudent to establish safeguards.”

Trade groups also came out against the bill, particularly the provisions around private use of facial recognition. As written, the bill not only fines violators but also makes a violation a criminal offense, punishable by up to 12 months in jail. That goes further than a Portland law banning private use of facial recognition, which made violators liable for damages and attorneys’ fees.

Groups like the Security Industry Association argued that this could criminalize private business owners for, say, requiring facial verification to enter facilities, or even schools for requiring online proctoring that uses the tech. Councilman Isaac Schleifer cited the potential criminalization as a chief concern in his “no” vote on the measure.

Author: Sidney Fussell

How Face Recognition Can Destroy Anonymity

Stepping out in public used to make a person largely anonymous. Unless you met someone you knew, nobody would know your identity. Cheap and widely available face recognition software means that’s no longer true in some parts of the world. Police in China run face algorithms on public security cameras in real time, providing notifications whenever a person of interest walks by.

China provides an extreme example of the possibilities stemming from recent improvements in face recognition technology. Once the preserve of large government agencies, the technology is now embedded in phones, social networks, doorbells, public schools, and small police departments.

That ubiquity means that although the technology appears more powerful than ever, the fallout from errors is greater too. Last week, the ACLU sued the Detroit Police Department on behalf of Robert Williams, who was arrested in 2019 after face recognition software wrongly matched his driver’s license photo to murky surveillance video of an alleged shoplifter. Williams is Black, and tests by the US government have shown that many commercial face recognition tools make more false matches of non-white faces.

In the US, government use of face recognition is much less expansive than in China, but no federal legislation constrains the technology. That means law enforcement can mostly do as it pleases. Researchers from Georgetown University revealed in 2019 that Detroit and Chicago had purchased face recognition systems capable of scanning public cameras in real time. At the time, Chicago claimed it had not used that function; Detroit said it was not then doing so.

Nearly 20 US cities, including Jackson, Mississippi, and Boston, Massachusetts, have passed laws to restrict government use of face recognition. Portland, Oregon, has gone further—barring private businesses from installing the technology. Some federal lawmakers have expressed interest in placing limits on face algorithms, too.

The outcome of any federal legislation will be determined in part by the industry selling the technology. An analysis by WIRED in November found that mentions of face recognition in congressional lobbying filings jumped more than fourfold from 2018 to 2019 and were on track to set a new record in 2020.



Author: Tom Simonite


How to hold Big Tech accountable for violating facial recognition privacy law? Boom Bust finds out


US tech giant Facebook has been ordered to pay $650 million to settle a class action lawsuit in Illinois for violating the state’s Biometric Information Privacy Act, a landmark law aimed at protecting people from invasive privacy practices.

Mollye Barrows of America’s Lawyer joins RT’s Boom Bust to talk about growing concerns over AI technology.

“It’s the first law that actually regulates biometric data and it’s the only law that allows individuals to bring a case to the court that says, ‘Hey, my privacy was violated even though no harm was done to me,’” she said.

Barrows explains that “there was a violation under this Illinois law and it allows individuals to be able to pursue claims of either negligence or they were deliberate in invading their privacy. It allows basically tech companies to be held accountable, and there’s some consequences there financially which impact smaller companies more than bigger companies like Facebook, but there are some consequences.”

“So, these tech companies are still sort of hopping from state to state if you will, finding the best laws that suit what they do.”
