US: Face recognition code of conduct confab loses privacy advocates

The National Telecommunications and Information Administration (NTIA) has convened a privacy multistakeholder process regarding the commercial use of facial recognition technology. On December 3, 2013, the NTIA announced that the goal of the second multistakeholder process is to develop a voluntary, enforceable code of conduct that specifies how the Consumer Privacy Bill of Rights applies to facial recognition technology in the commercial context.

Privacy Advocates Walk Out in Protest Over U.S. Facial-Recognition Code of Conduct (The Intercept)

“At a base minimum, people should be able to walk down a public street without fear that companies they’ve never heard of are tracking their every movement — and identifying them by name — using facial recognition technology,” the privacy advocates wrote in a joint statement.

The quoted article is full of links to NTIA online resources.

An “open letter” of resignation from the named privacy advocates lists their concerns here.
Its concluding paragraph:

We hope that our withdrawal signals the need to reevaluate the effectiveness of multistakeholder processes in developing effective rules of the road that protect consumer privacy – and that companies will support and implement.

Ultimately, of course, these are political questions rather than technological ones, but the focus on one type of technology (facial recognition) is a little difficult to understand. If it’s wrong for a private corporation to track an unsuspecting individual’s every movement, identifying them by name, why single out facial recognition (the means) rather than the tracking (the end)?

The privacy advocates, however, have a point in their favor. The effectiveness of confabs of privacy advocates, sub-cabinet-level administrators, and corporate executives in defining a society’s scope for privacy in public should be questioned.

Also mentioned in the article is the fact that the states of Texas and Illinois have passed laws limiting the use of facial recognition technology to identify individuals in public without their affirmative consent.

UK: Leicestershire police trial face recognition at music festival

Download Festival: Facial recognition technology used at event could be coming to festivals nationwide (The Independent)

Around 90,000 people attending the five-day rock event in Derby will have their faces scanned by “strategically placed” cameras, which are then compared with a database of custody images across Europe.

The force has trialled the system since April 2014 in “controlled environments”, but this is the first time the portable NeoFace surveillance technology, made by NEC Corporation, is being used outdoors in the UK on this scale.

Leicestershire police said it hoped the system would enable it to find organised criminals who prey on festivalgoers, who are often victims of theft.

This sounds a lot like the ‘Snooper Bowl’ deployment we had a role in back in 2001.

Facial recognition surveillance in an uncontrolled environment with non-participating individuals still presents significant technical challenges. Among them are lighting, pose angle, and, perhaps most significantly, training users on how to evaluate the information the facial recognition system generates.
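That last point deserves a word. A watchlist search of this kind does not announce an identity; it typically returns a ranked list of candidates with similarity scores, and someone has to decide what, if anything, those scores mean. The sketch below is a generic illustration of that output using plain cosine similarity over face templates. It is not NEC’s NeoFace, whose internals aren’t public, and the function, threshold, and variable names are ours, for illustration only.

```python
# Toy 1:N watchlist search: return a ranked, thresholded candidate list
# for a human operator to review. Generic illustration only.
import numpy as np

def rank_candidates(probe_vec, gallery_vecs, gallery_ids, top_k=5, threshold=0.6):
    """Cosine-similarity search of one probe template against a gallery.

    probe_vec:    1-D array, the face template from the camera image
    gallery_vecs: 2-D array, one row per enrolled (custody) template
    gallery_ids:  labels for the gallery rows
    Returns up to `top_k` (id, score) pairs with score >= `threshold`.
    """
    probe = probe_vec / np.linalg.norm(probe_vec)
    gallery = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
    scores = gallery @ probe                  # cosine similarities
    order = np.argsort(scores)[::-1][:top_k]  # best matches first
    return [(gallery_ids[i], float(scores[i])) for i in order
            if scores[i] >= threshold]
```

Even the top entry on that list is a lead to be verified, not an identification, which is why training the people reading the screen matters at least as much as the algorithm behind it.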

See also: Leicestershire Police defend facial recognition scans (BBC)

That’s like, so 2001

MAY 2013
Boston PD Tested Facial Recognition Software By Recording Every Face At Local Music Festivals (Daily Caller)

Concertgoers at last year’s annual Boston Calling music festivals weren’t just there to watch the show — they were watched themselves as test subjects for Boston police’s new facial recognition technology, which reportedly analyzed every attendee at the May and September two-day events.

Employees at IBM — the outside contractor involved in deploying the tech alongside Boston Police — planned the test of its Smart Surveillance System and Intelligent Video Analytics to execute “face capture” on “every person” at the concerts in 2013.

FEBRUARY 2001
Welcome to the Snooper Bowl (Time)

In a move that has been both hailed and decried, the Tampa Bay police department used the occasion of Super Bowl XXXV to conduct a high-tech surveillance experiment on its unsuspecting guests. In total secrecy (but with the full cooperation of the National Football League), the faces of each of the game’s 72,000 attendees were scanned and checked against a database of potential troublemakers. The news, first reported in the St. Petersburg Times, raises some urgent questions: is this the end of crime — or the end of privacy?

The surveillance system, FaceTrac, is based on technology originally developed at the Massachusetts Institute of Technology to teach computers to recognize their users, and was installed by a Pennsylvania firm called Graphco Technologies.

The technology and key personnel from Graphco were acquired by SecurLinx in 2003.

Poll: Public not too worried about surveillance and face recognition

Americans mostly in favor of facial recognition at public events: poll (Biometrics Update)

From the report, 59% oppose email and cell phone surveillance (up 13% from 2006), but 79% are in favor of using facial recognition at various locations and public events, and 81% support expanded camera surveillance on streets and in public places.

The public probably senses that there are a lot of ways to deploy facial recognition that are much less invasive of privacy than snooping on emails and hacking cell phones.

Putting the mosaic together in Boston

The post’s title refers to the mosaic of information that can be arranged into a picture of the events leading up to the savage acts. The other mosaic, the way things were for so many unique individuals, can never be put back together.

How This Photo of the Boston Marathon Gives the FBI a Bounty of Data (Wired)

The photo — click to enlarge — shows a lot of people, what they’re wearing and where they’re positioned within the crush of Marathon fans. It’s important to law enforcement, as it “can be of use in putting the mosaic together,” says Robert McFadden, a former Navy terrorism investigator. Crabbe’s wide-angle panoramic photo “could be one of the many critical pieces of the map of the investigation.”

The panorama photo was one of seven shots Crabbe snapped with her phone during a leisurely stroll and later handed over to investigators.

The Wired article starts with a single data point (data set, really), a photo, and follows it part-way through the process the FBI has used during its investigation of the recent bombings in Boston.

…putting the mosaic together. It’s a good metaphor for how the people charged with figuring out what happened and who did it go about their work. Read the whole thing.

Also see:
What’s Going on Behind the Scenes of Bombing Investigation? Forensic Scientist, Former DHS Official Shed Light on Tech and Tactics (The Blaze)

“Facial recognition technology will play a very small part,” Schiro told TheBlaze in a phone interview.

“A lot depends on the quality of the images you have to work with,” Schiro continued, noting that lighting, angle and other factors could really limit the use of facial recognition in the case. Not only that, but there would need to be some sort of match for it to recognize.

UPDATE:
Here’s another good article about facial recognition and crime solving. I selected the two paragraphs below because they highlight both the organizational issue of interoperability and the technology issues around matching. There are other interesting insights in the rest of the piece.

Facial Recognition Tech: New Key to Crime Solving (The Fiscal Times)

However, it’s likely the FBI was unsuccessful in identifying the suspects using FR because either they didn’t have a quality image of the wanted persons, or the suspects were not in any of the databases the FBI has access to, Albers said.

While facial recognition technology has high accuracy when used to match a clear image of a person with another passport-style photo, it is not as effective when used with low-quality images like the ones the FBI released on Thursday. The standard for facial recognition to be accurate requires 90 pixels of resolution between the two eyes of the pictured person. The pictures the FBI released of the suspects had about 12 pixels between the two eyes, said Jim Wayman, the director of the National Biometric Center.

and…
Facial-recognition technology to help track down criminals – Humans are still better at it (Kuwait Times)

Search for Boston bombers likely relied on eyes, not software (Reuters)

These last two reminded me of the (Facial Recognition vs Human) & (Facial Recognition + Human) post from November 2011.

In the Boston case, it looks like there were two barriers to effective use of facial recognition technology in identifying the suspects. On the “evidence” (probe) side, the image quality was poor. On the enrollment (database) side, the only “correct” match was likely to be in a very large database such as the Massachusetts DMV database.

If only one of these conditions were true — for example, a bad probe against a small database, or a good probe against a large database — facial recognition technology might have been of more help.
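To make the probe-quality point concrete, here is a minimal sketch of a pre-search quality gate built around the inter-eye distance standard quoted above. It is our own illustration, not anything the FBI uses; it relies on OpenCV’s stock Haar cascades, and the function names and thresholds are assumptions.

```python
# Rough probe-quality gate: estimate the pixel distance between the eyes
# and only attempt a 1:N search if it clears roughly the 90-pixel standard
# quoted above. Illustration only; a production system would use proper
# landmarking and quality metrics.
import math
import cv2  # pip install opencv-python

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
EYE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def interocular_distance(image_path):
    """Approximate pixel distance between the two eyes, or None if not found."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
    eyes = EYE_CASCADE.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
    if len(eyes) < 2:
        return None
    (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = sorted(
        eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    return math.dist((ex1 + ew1 / 2, ey1 + eh1 / 2),
                     (ex2 + ew2 / 2, ey2 + eh2 / 2))

def probe_worth_searching(image_path, min_pixels=90):
    """Only send the probe to a 1:N search if it meets the quality bar."""
    d = interocular_distance(image_path)
    return d is not None and d >= min_pixels
```

A probe like the ones described above, with something on the order of 12 pixels between the eyes, fails a gate like this immediately, which is consistent with the update below.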

Crowd-sourcing the ID challenge to a large number of human beings who operate with a lot more intelligence and information than facial recognition algorithms is another option. It’s been used with photographs since at least 1865 and without photographs since at least 1696.

One crowd-sourcing fact that law enforcement officials must consider, however, is that the suspect is almost certainly in the sourced crowd. If the suspect already knows he’s a suspect, that’s not a problem. If he doesn’t already know he’s suspected, that information is the price of getting the public’s help, which means facial recognition technology will retain its place in the criminal ID toolkit.

UPDATE:
Boston police chief: facial recognition tech didn’t help find bombing suspects (Ars Technica)

“The technology came up empty even though both Tsarnaevs’ images exist in official databases: Dzhokhar had a Massachusetts driver’s license; the brothers had legally immigrated; and Tamerlan had been the subject of some FBI investigation,” the Post reported on Saturday.

Facial recognition systems can have limited utility when a grainy, low-resolution image captured at a distance from a cellphone camera or surveillance video is compared with a known, high-quality image. Meanwhile, the FBI is expected to release a large-scale facial recognition apparatus “next year for members of the Western Identification Network, a consortium of police agencies in California and eight other Western states,” according to the San Jose Mercury News.

UK Surveillance Commissioner Speaks

CCTV Technology has ‘Overtaken Ability to Regulate it’ (Wall Street Journal)

“A tiny camera in a dome with a 360-degree view can capture your face in the crowd, and there are now the algorithms that run in the background. I’ve seen the test reviews that show there’s a high success rate of picking out your face against a database of known faces.”

Research into automatic facial recognition being carried out by the Home Office has reached a 90 per cent success rate, he said, and it was “improving by the day”.

The headline quote comes from this more detailed article from The Independent, and might best be taken as a warning rather than a statement of fact. After all, if meant literally, the statement belongs in a resignation letter.

Surveillance Commissioner Andrew Rennison:

“Let’s have a debate – if the public support it, then fine. If the public don’t support it, and we need to increase the regulation, then that’s what we need to do.”

Sounds like Transparency and Consent to me.

Implications of Ubiquitous Biometric Technology

A couple of good articles discussing the implications of ubiquitous biometric technology are out today…

Does rise of biometrics mean a future without anonymity? (Contra Costa Times)

“There are multiple benefits to society in using this form of identification,” said Anil Jain, a Michigan State University computer science and engineering professor, adding the technologies could prove “transformative.”

With face recognition, for example, “in 10 years the technology is going to be so good you can identify people in public places very easily,” said Joseph Atick, a face-recognition innovator and co-founder of the trade group International Biometrics & Identification Association. But misusing it could result in “a world that is worse than a big-brother state,” he warned, adding, “society is just beginning to catch up to what the consequence of this is.”

Businesses to use facial recognition (The Advocate)

Imagine arriving at a hotel to be greeted by name, because a computer has analyzed your appearance as you approached the front door.

Or a salesman who IDs you and uses a psychological profile to nudge you to pay more for a car.

A look at digital government services

Of course, I’d say policy and technology must be good bedfellows…

Policy and technology can be good bedfellows (The Guardian)

Technology-enabled reform of public services can create friction, as the public is required to adapt to new platforms for interacting with the state and its administrators have to learn a new way of working. At its worst, this friction can result in disjoined state paralysis following the wrong kind of policy making and subsequent commissioning. At its best, it can reduce the state running costs and better fit the mould of citizens’ lives, such as being able to book a GP appointment via a laptop or mobile.

Following Attendance Scandal, São Paulo City Council Self-Imposes Biometric System

After scandal, 42 of the 55 councilors say they favor recording attendance only by fingerprint (O Estadão de São Paulo)
Google Chrome Translation (with slight edits)

After [this newspaper] uncovered fraud in the attendance records at the City Council, 42 of the 55 councilors said they were in favor of attendance at plenary sessions being recorded only by fingerprint. Changing the chamber’s bylaws requires the backing of 28 councilors.

The current system relies on passwords.

How to Inoculate Against Public Facial Recognition

How to Defend Yourself Against Facial Recognition Technology (PBS)

Facial recognition technology [FRT] is now just about everywhere we are…

Do we simply have to accept this as inevitable, or are there things we can do to protect ourselves and others against improper or repressive use of FRT?

Below are some tactical and technological defenses against FRT. Specifically, two layers of those involve: 1) when we are being watched, for example, at protests or in a public space, and 2) when we ourselves are taking and sharing images of others, especially online.

This well-sourced article contains a wealth of information and links having to do with in-person and online public facial recognition.

Of course, CV Dazzle gets plenty of attention, as it should.

The app that automatically pixelates the faces in pictures users take with their mobile phones is really cool, too.

Then there’s the software covered in “Friends” a threat to your privacy? This facial recognition app might help. It isn’t mentioned in the PBS piece, but it would fit right in.
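As for the pixelation app mentioned above, we haven’t seen its code, but the general technique is simple enough to sketch with OpenCV: detect faces, shrink each face region down to a handful of blocks, and scale it back up with nearest-neighbor interpolation. Function names and parameters here are our own, for illustration only.

```python
# Generic face pixelation: detect faces, then replace each face region
# with a heavily downscaled-then-upscaled copy of itself.
import cv2  # pip install opencv-python

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pixelate_faces(in_path, out_path, blocks=8):
    """Write a copy of the image with every detected face pixelated."""
    img = cv2.imread(in_path)
    if img is None:
        raise ValueError("could not read " + in_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
        face = img[y:y + h, x:x + w]
        small = cv2.resize(face, (blocks, blocks),
                           interpolation=cv2.INTER_LINEAR)
        img[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                           interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, img)

# Example (hypothetical filenames): pixelate_faces("crowd.jpg", "crowd_pixelated.jpg")
```

The same idea works on video frames, which is presumably close to what the mobile app does before a photo is shared.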