Wonderful New World vs. Brave New World

Data Privacy Commissioners Discuss Ubiquitous Tracking (Forbes)

The big question for those gathered here is finding the right mix between government regulations, industry’s best practices and consumer education. In a speech at the conference, Microsoft general counsel and executive vice president Brad Smith agreed that some regulations are necessary to create a level playing field and a clear set of rules for big and small companies to follow while regulators like Portugal’s Clara Guerra acknowledged that big government can’t solve all the risks associated with big data. It’s a shared responsibility and it requires consumer awareness starting with privacy education programs aimed at children as well as adults.

Author Larry Magid strikes an important balance among the wonderful things made possible by technological innovation, the downsides of unaccountable misuse, and the need to help people stay aware of the implications of changing technology on their lives.

I hope his temperament is contagious.

Bypassing an Iris Scanner? There’s Got To Be a Better Way.

In honor of today’s twitter biometric chat on iris biometrics, here’s a post from July 30 containing thoughts on the implications of a recent iris biometrics hack…

A couple of weeks ago, when the news broke that someone had claimed to have “hacked” iris biometrics by reverse engineering a template into an image of an iris that would be accepted by an iris recognition system, I said: It’s not a real biometric modality until someone hacks it.

That’s because a hacking claim can generate a lot of media publicity even if it doesn’t constitute proof that a technology is fatally flawed. Where’s the publicity value of hacking something that nobody uses, anyway? Claims like this can also be taken as a sign that a new technology, iris biometrics in this case, has crossed some sort of adoption and awareness threshold.

So what about the hack? Now that more information is available and assuming that Wired has things about right, “experiment” is a far better descriptor than “hack” for what actually went down. “Hack” would seem to indicate that a system can be manipulated into behaving unexpectedly and with exploitable consequences in its real world conditions. Think of picking a lock. A doorknob with a key hole can be manipulated by tools that aren’t the proper key to open a locked door in its normal operating environment.

The method that the researchers relied upon to develop the fake iris from the real template bears no resemblance to the lock-picking example. What the researchers did is known as hill-climbing. In simple terms, it’s like playing the children’s game Cold-Warm-Hot but the feedback is more detailed. A hill-climbing experiment relies upon the system being experimented on giving detailed information back to the experimenter about how well the experimenter is doing. The experimenter presents a sample and the system gives a score (cold, warm, hot). The experimenter refines the sample and hopes the score will improve. Lather, rinse, repeat. A few hundred iterations later, the light turns green.

Technically, you don’t even need a stolen sample (template) to start hill-climbing. You could just start feeding the system random data and keep refining it until you hit upon a combination that scores well enough against the enrolled template.
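For the curious, here is what that Cold-Warm-Hot loop looks like in code. This is a minimal sketch, not the researchers’ actual attack: the toy bit-string “templates,” the matcher, and the acceptance threshold are all assumptions for illustration. The only thing the loop needs from the system under test is the score it hands back after every guess, which is exactly why real matchers shouldn’t hand it back.

```python
import random

THRESHOLD = 0.95      # hypothetical acceptance score, chosen for the demo
TEMPLATE_BITS = 256   # toy template length; real iris codes are much larger

def match_score(candidate, enrolled):
    """Stand-in for the black-box matcher: fraction of bits that agree.
    A real system compares iris images/templates, not raw bit lists."""
    agree = sum(1 for a, b in zip(candidate, enrolled) if a == b)
    return agree / len(enrolled)

def hill_climb(score_fn, length, max_iters=100_000):
    """Keep any single-bit change that raises the score; undo the ones that don't."""
    candidate = [random.randint(0, 1) for _ in range(length)]  # random start, no stolen sample
    best = score_fn(candidate)
    for i in range(max_iters):
        pos = random.randrange(length)
        candidate[pos] ^= 1              # nudge one bit
        trial = score_fn(candidate)
        if trial >= best:
            best = trial                 # warmer: keep the change
        else:
            candidate[pos] ^= 1          # colder: put it back
        if best >= THRESHOLD:
            return candidate, best, i + 1   # the light turns green
    return candidate, best, max_iters

if __name__ == "__main__":
    enrolled = [random.randint(0, 1) for _ in range(TEMPLATE_BITS)]
    _, score, iters = hill_climb(lambda c: match_score(c, enrolled), TEMPLATE_BITS)
    print(f"reached score {score:.3f} after {iters} guesses")
```

Note what the sketch depends on: unlimited tries and a detailed score after every one. A deployed system that rate-limits attempts, or returns only accept/reject, takes the warm/cold feedback away and the climb goes nowhere.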

This is one of those exercises that is academically interesting but doesn’t provide much useful information to system engineers or organization managers. Scientific experiments deal with their subjects by isolating and manipulating one variable at a time. Real world security systems are deployed with careful consideration of the value of what is being protected and a dependence upon all sorts of environmental factors.

A person who wanted to bypass an iris scanner using this method in the real world would:

1. Hack into a biometric database to steal a template of an authorized user; pray templates aren’t encrypted
2. Determine which biometric algorithm (which company’s technology) generated the template
3. Buy (or steal) that company’s software development kit
4. Build and successfully run the hill-climbing routine
5. Print the resulting image using a high quality printer
6. Go to the sensor
7. Place print-out in front of iris scanner
8. Cross fingers

Simple, right? Compared to what?

Once you’re talking about hacking into unencrypted biometric template databases (and depending upon your CRUD privileges) almost anything is possible and little of it requires Xeroxing yourself a pair of contact lenses.

Why not just blow away the whole database of iris templates? Problem solved. The scanners, now just locks with no key, would have to be disabled at least temporarily.

If stealth is more your style, just hack into the database, create a credential for yourself by placing your very own iris template in there and dispense with the whole rigmarole of the hill-climbing business. Delete your template (and why not all the others) after the heist.

If your hacking skillz aren’t up to the task, you could stalk someone who is already enrolled, armed with a Nikon D4 and a wildlife photography lens, and skip steps one through four (and eight) on the above list.

You could trick, threaten or bribe someone into letting you in.

Break the door or a window.

The elaborateness of the process undertaken by the researchers pretty much proves that the iris sensor isn’t going to be the weak link in any real world security deployment.

FTC Freestylin’ on Face Recognition

Federal Trade Commission Staff Report Recommends Best Practices for Companies That Use Facial Recognition Technologies


Mission of the Federal Trade Commission…
To prevent business practices that are anticompetitive or deceptive or unfair to consumers; to enhance informed consumer choice and public understanding of the competitive process; and to accomplish this without unduly burdening legitimate business activity.

In December of last year, the Federal Trade Commission (FTC) hosted a workshop – “Face Facts: A Forum on Facial Recognition Technology” – to examine the use of facial recognition technology and related privacy and security concerns.

Monday, the FTC released two documents summing up the effort. The first is the Staff Report, a 21-page attempt to synthesize the views of the forum’s participants and FTC staff into an authoritative guide. The second is a dissent from the 4-1 vote in favor of releasing the staff report.

In my opinion, Best Practices for Common Uses of Facial Recognition Technologies falls a little short for a couple of reasons. First, of the staff report’s three cases, only one — the Facebook case — is actually a facial recognition application. Second, where the report does deal with facial recognition proper, it does so in a wholly hypothetical way. This approach runs the risk of being seen by many as falling outside the ambit of the FTC’s mission.

I have selected passages from both documents mentioned above for examination because they lie at the heart of the whole exercise. They are a distillation of what the entire project was about and what it concluded. The complete documents are available via the links below for those who seek more information.

from the Staff report (pdf at FTC.gov)

To begin, staff recommends that companies using facial recognition technologies design their services with privacy in mind, that is, by implementing “privacy by design,” in a number of ways. First, companies should maintain reasonable data security protections for consumers’ images and the biometric information collected from those images to enable facial recognition (for example, unique measurements such as size of features or distance between the eyes or the ears). As the increasing public availability of identified images online has been a major factor in the increasing commercial viability of facial recognition technologies, companies that store such images should consider putting protections in place that would prevent unauthorized scraping which can lead to unintended secondary uses. Second, companies should establish and maintain appropriate retention and disposal practices for the consumer images and biometric data that they collect. For example, if a consumer creates an account on a website that allows her to virtually “try on” eyeglasses, uploads photos to that website, and then later deletes her account on the website, the photos are no longer necessary and should be discarded. Third, companies should consider the sensitivity of information when developing their facial recognition products and services. For instance, companies developing digital signs equipped with cameras using facial recognition technologies should consider carefully where to place such signs and avoid placing them in sensitive areas, such as bathrooms, locker rooms, health care facilities, or places where children congregate.

Staff also recommends several ways for companies using facial recognition technologies to provide consumers with simplified choices and increase the transparency of their practices. For example, companies using digital signs capable of demographic detection – which often look no different than digital signs that do not contain cameras – should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs. Similarly, social networks using a facial recognition feature should provide users with a clear notice – outside of a privacy policy – about how the feature works, what data it collects, and how it will use the data. Social networks should also provide consumers with (1) an easy to find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected from their tagged photos. Finally, there are at least two scenarios in which companies should obtain consumers’ affirmative express consent before collecting or using biometric data from facial images. First, they should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data derived from that image in a materially different manner than they represented when they collected the data. Second, companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent. Consider the example of a mobile app that allows users to identify strangers in public places, such as on the street or in a bar. If such an app were to exist, a stranger could surreptitiously use the camera on his mobile phone to take a photo of an individual who is walking to work or meeting a friend for a drink and learn that individual’s identity – and possibly more information, such as her address – without the individual even being aware that her photo was taken. Given the significant privacy and safety risks that such an app would raise, only consumers who have affirmatively chosen to participate in such a system should be identified. The recommended best practices contained in this report are intended to provide guidance to commercial entities that are using or plan to use facial recognition technologies in their products and services. However, to the extent the recommended best practices go beyond existing legal requirements, they are not intended to serve as a template for law enforcement actions or regulations under laws currently enforced by the FTC. If companies consider the issues of privacy by design, meaningful choice, and transparency at this early stage, it will help ensure that this industry develops in a way that encourages companies to offer innovative new benefits to consumers and respect their privacy interests. [ed.: bold emphasis mine]

The first paragraph above is common sense. For example: “Companies should establish and maintain appropriate retention and disposal practices for the consumer images and biometric data that they collect.” Who could argue with that?

I believe many on all sides of the facial recognition issue will find the Face Facts forum findings disappointing and I think the second italicized paragraph above best encapsulates why. In it, the FTC staff report loses coherence.

Let’s examine it in detail.

1. The staff report doesn’t confine itself to facial recognition proper.

Staff also recommends several ways for companies using facial recognition technologies to provide consumers with simplified choices and increase the transparency of their practices. For example, companies using digital signs capable of demographic detection – which often look no different than digital signs that do not contain cameras – should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs.

Demographic inference isn’t facial recognition and nowhere does the FTC staff make a case that a computer guessing at gender, age or ethnicity has any privacy implication, at all. And then, even if that case is made, the task of tying the activity back to the FTC’s mandate remains.

¿Qué?

The recommendation that someone “should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs,” however reasonable it seems in theory, is odd in practice. The old microwave-and-pacemaker signs come to mind. But then where would an ad agency put those signs if they wanted to do advertising on, say, a city street? [Bonus: would it be appropriate to use language detection technology in those signs in order to display the warning message in a language the reader is judged more likely to understand?]

2. Next there’s a nameless “social network” — no points for guessing which [See: Consumer Reports: Facebook & Your Privacy and It’s not the tech, it’s the people: Senate Face Rec Hearings Edition] — that is hypothetically doing the exact same things a non-hypothetical social network actually did, without much in the way of an FTC response.

Similarly, social networks using a facial recognition feature should provide users with a clear notice – outside of a privacy policy – about how the feature works, what data it collects, and how it will use the data. Social networks should also provide consumers with (1) an easy to find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected from their tagged photos.

This is the closest the document ever gets to a concrete example of facial recognition technology even being in the neighborhood of an act the FTC exists to regulate, and even here the FTC staff doesn’t abandon the hypothetical for the real world.

3. Then there’s the warning that the FTC would take a dim view of two types of hypothetical facial recognition deployment, each of which would require its own dedicated staff report in order to make a decent show of doing the topic justice.

Finally, there are at least two scenarios in which companies should obtain consumers’ affirmative express consent before collecting or using biometric data from facial images. First, they should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data derived from that image in a materially different manner than they represented when they collected the data. 

This is far too general to be useful. The above would seem to preclude casinos from using facial databases of known or suspected cheaters, a practice few would argue against.

Then there’s the question of what makes biometric data so special. Should the same standards apply to all personal data, or just to pictures of faces?

For the situation above to fall within the FTC’s mandate, a practice would have to be deemed “deceptive” or “unfair.” And if a practice is deceptive or unfair when a face is part of the data being shared, how does using the data in a substantially similar manner cease to be deceptive and unfair simply by omitting the face? The report is silent on these points.

Second, companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent. Consider the example of a mobile app that allows users to identify strangers in public places, such as on the street or in a bar. If such an app were to exist, a stranger could surreptitiously use the camera on his mobile phone to take a photo of an individual who is walking to work or meeting a friend for a drink and learn that individual’s identity – and possibly more information, such as her address – without the individual even being aware that her photo was taken. Given the significant privacy and safety risks that such an app would raise, only consumers who have affirmatively chosen to participate in such a system should be identified.

This hypothetical future app does exactly what anyone can legally pay a private detective to do today. If the FTC isn’t taking action against PIs, it would be extremely helpful for the FTC to make clear to buyers and sellers of facial recognition technology the distinctions it sees between the two.

Then, towards the end of the excerpted text, perhaps sensing how far ahead of itself and the mission of the FTC it has gotten, the staff report (the bolded sentence above) essentially says, “Never mind. We aren’t formulating new policy here. We’re just freestylin’.”


However, to the extent the recommended best practices go beyond existing legal requirements, they are not intended to serve as a template for law enforcement actions or regulations under laws currently enforced by the FTC. If companies consider the issues of privacy by design, meaningful choice, and transparency at this early stage, it will help ensure that this industry develops in a way that encourages companies to offer innovative new benefits to consumers and respect their privacy interests. [ed.: bold emphasis mine]

With the possible exception of the “social network” example, pretty much everything in the document goes beyond existing legal requirements enforced by the FTC. So what’s going on here?

My hunch is that someone at the FTC became concerned over a “social network” terms-of-service issue and, rather than deal with it as a narrow terms-of-use matter — one seemingly right in the wheelhouse of the “deceptive or unfair” part of the FTC’s mission — decided instead that it was a technology issue and that it was both possible and desirable to address the far bigger issues of facial recognition technology, ID and society in a coherent way, forgetting that doing so requires a novel interpretation of the FTC’s mission. Once that decision was made, the best practices document, flawed though it is, was about the best that could be hoped for… which brings us to the dissent.

The decision to release the Face Facts staff report wasn’t unanimous. Commissioner Thomas Rosch thought releasing the report at all was a mistake. Several paragraphs of the dissent follow below.

The last paragraph quoted below is particularly convincing.

then the lone dissent… (pdf at FTC.gov)

The Staff Report on Facial Recognition Technology does not – at least to my satisfaction – provide a description of such “substantial injury.” Although the Commission’s Policy Statement on Unfairness states that “safety risks” may support a finding of unfairness,3 there is nothing in the Staff Report that indicates that facial recognition technology is so advanced as to cause safety risks that amount to tangible injury. To the extent that Staff identifies misuses of facial recognition technology, the consumer protection “deception” prong of Section 5 – which embraces both misrepresentations and deceptive omissions – will be a more than adequate basis upon which to bring law enforcement actions.

Second, along similar lines, I disagree with the adoption of “best practices” on the ground that facial recognition may be misused. There is nothing to establish that this misconduct has occurred or even that it is likely to occur in the near future. It is at least premature for anyone, much less the Commission, to suggest to businesses that they should adopt as “best practices” safeguards that may be costly and inefficient against misconduct that may never occur.

Third, I disagree with the notion that companies should be required to “provide consumers with choices” whenever facial recognition is used and is “not consistent with the context of a transaction or a consumer’s relationship with a business.”4 As I noted when the Commission used the same ill-defined language in its March 2012 Privacy Report, that would import an “opt-in” requirement in a broad swath of contexts.5 In addition, as I have also pointed out before, it is difficult, if not impossible, to reliably determine “consumers’ expectations” in any particular circumstance.

In summary, I do not believe that such far-reaching conclusions and recommendations can be justified at this time. There is no support at all in the Staff Report for them, much less the kind of rigorous cost-benefit analysis that should be conducted before the Commission embraces such recommendations. Nor can they be justified on the ground that technological change will occur so rapidly with respect to facial recognition technology that the Commission cannot adequately keep up with it when, and if, a consumer’s data security is compromised or facial recognition technology is used to build a consumer profile. On the contrary, the Commission has shown that it can and will act promptly to protect consumers when that occurs.

To summarize, Rosch points out that the FTC staff report:

  • Exceeds the FTC’s regulatory mandate
  • Makes no allegation of consumer harm
  • Is so overly broad as to be unworkable
  • Provides no support for the conclusions it draws

The FTC would perhaps have been better served had more Commissioners taken Rosch to heart. As it happens, the staff report overreaches, underdelivers, and deviates from the organization’s stated mission, and the results aren’t pretty.

NOTE: This post has been modified slightly from the original version to add clarity, by cleaning up grammar, spelling or typographical errors.

Playing it down the middle

Biometric ID advance ignites debate over rights (Trib Live)

Long envisioned as an alternative to remembering scores of computer passwords or lugging around keys to cars, homes and businesses, technology that identifies people by their faces or other physical features finally is gaining traction, to the dismay of privacy advocates.

A balanced article on the tension between biometric technology and privacy.

Biometrics scares people, makes them happy.

Biometrics scares people* (Network World)
Perception of biometrics tends to be rather negative because it’s personal and physical, says Lockheed Martin’s biometrics division director.

How to find happiness in a world of password madness (PC World)
The beauty of biometrics is that you don’t have to remember anything at all, much less a complex password.

*Since I was a tad critical of Ellen Messmer‘s take on rapid DNA in the previous post, it’s only fair that I single her out for praise for this highly enjoyable and thorough article.

Implications of Ubiquitous Biometric Technology

A couple of good articles discussing the implications of ubiquitous biometric technology are out today…

Does rise of biometrics mean a future without anonymity? (Contra Costa Times)

“There are multiple benefits to society in using this form of identification,” said Anil Jain, a Michigan State University computer science and engineering professor, adding the technologies could prove “transformative.”

With face recognition, for example, “in 10 years the technology is going to be so good you can identify people in public places very easily,” said Joseph Atick, a face-recognition innovator and co-founder of the trade group International Biometrics & Identification Association. But misusing it could result in “a world that is worse than a big-brother state,” he warned, adding, “society is just beginning to catch up to what the consequence of this is.”

Businesses to use facial recognition (The Advocate)

Imagine arriving at a hotel to be greeted by name, because a computer has analyzed your appearance as you approached the front door.

Or a salesman who IDs you and uses a psychological profile to nudge you to pay more for a car.

FBI Face Rec in the News

The story is all over the news, but I like this cnet piece best because Charles Cooper plays it straight and gives a concise history of how we got here.

Privacy hawks fret as FBI upgrades biometrics capacities (cnet)

The computer revolution arrived late at the FBI, which was still collecting and matching fingerprints in 1999 in much the same way that it did when the agency first began collecting the images in 1924. But that’s been changing lately and privacy hawks are watching closely.

As the millennium neared, the agency finally traded in its manual system for one in which a database of fingerprints and associated criminal histories could be searched and updated. Now, the next step.

We first posted on this subject here in March 2011.

This is the post that deals with some privacy and technical aspects of the issue in more detail. I highly recommend it (even if I do say so myself). Ultimately what is permitted in the name of law enforcement is, and should be, a political decision.

You may also be interested in our recent twitter “Biometric Chat” with Michael Kirkpatrick. Mr. Kirkpatrick was the FBI’s Assistant Director in Charge of the Bureau’s Criminal Justice Information Services (CJIS) Division from January 2001 – August 2004. He led the Division through profound IT changes, especially relating to the application of biometric technologies to the challenges of law enforcement, and the current initiative under discussion here would have been under his purview.

Hop on the Bus, Gus. Drop off the Key, Lee.

Biometric Technology Gets on the School Bus (Press Release via Benzinga)

When children board or exit the bus the BlinkSpot iris scanning technology recognizes the child and sends real time reports to the school along with an individual email to each parent verifying the time and location of their child.

The effort combines Verizon, Eye-D, and 3M Cogent capabilities.

I’m curious to see how this works out. An application that provides real-time information on children’s interactions with the school bus system is, obviously, highly desirable.

Will the technology fit the deployment? How well will it work? How passive is the use model (i.e., must the children actively engage the system)? How much training will drivers and children require? How long does each transaction take? Will that cause traffic jams? What are the costs in money and time?

These are the questions that would-be customers and system developers need to ask, answer, and agree upon.

Thinking this one through, my hunch is that from a pure utility point of view, this is a finger app. But in the real world other considerations may apply. If some tech companies want to test their technology, their ability to work together, product design and feasibility, and they find a willing and supportive test environment — in this case a school and community — then that’s what will happen. Lessons will be learned and the state of the art will have been advanced.

Perfect; Good; Tech.; People; etc. It’s a fun landscape in which to participate.

Strange and Unintended Brain-Computer Interface Applications

You shouldn’t believe everything you read in a headline. I’ve supplied one above that is far more accurate but far less alarming than the one provided by the original story below.

Scientists Successfully ‘Hack’ Brain To Obtain Private Data (CBS – Seattle, WA)

The scientists took an off-the-shelf Emotiv brain-computer interface, a device that costs around $299, which allows users to interact with their computers by thought.

The scientists then sat their subjects in front of a computer screen and showed them images of banks, people, and PIN numbers. They then tracked the readings coming off of the brain, specifically the P300 signal.

The P300 signal is typically given off when a person recognizes something meaningful, such as someone or something they interact with on a regular basis.

Scientists that conducted the experiment found they could reduce the randomness of the images by 15 to 40 percent, giving them a better chance of guessing the correct answer.
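Stripped of the hardware, the ranking trick the article describes is easy to picture in code. The sketch below is a toy, not the researchers’ pipeline: the sampling rate, the P300 window, and the synthetic “recordings” are all assumed for illustration. The idea is simply to average the brain response that follows each candidate stimulus and see which one stands out.

```python
import numpy as np

FS = 256                                              # samples per second (assumed)
P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))   # ~250-450 ms after the stimulus

def p300_score(epochs):
    """Mean amplitude in the P300 window, averaged over repeated showings.
    epochs: array of shape (n_trials, n_samples) for one stimulus."""
    return epochs[:, P300_WINDOW].mean()

def rank_stimuli(recordings):
    """recordings: dict mapping a stimulus (say, a PIN digit) to its epoch array.
    Returns stimuli ordered from strongest to weakest apparent recognition."""
    scores = {stim: p300_score(ep) for stim, ep in recordings.items()}
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_trials, n_samples = 20, FS
    # Synthetic data: pure noise for every digit...
    recordings = {d: rng.normal(0.0, 1.0, (n_trials, n_samples)) for d in range(10)}
    # ...except digit 7, which gets a small "recognition" bump in the P300 window.
    recordings[7][:, P300_WINDOW] += 0.8
    print(rank_stimuli(recordings)[:3])   # digit 7 should float to the top
```

That is also why the result is a better guess rather than a disclosure: the attacker ends up with a ranking of likelier candidates, not a readout of the PIN.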

The case the author wants to make is way overstated, which is too bad because the topic is very interesting without the hype.

The controversial part of what the story describes (quoted above) sits somewhere between a hack and a con. I guess that in the distant future people will have to be more wary of street-corner magicians and psychologists, but the PIN probably isn’t going anywhere any time soon.

This may be for a future post, but I suspect that, thanks to biometrics, the PIN will become more common as complex passwords become rarer, even in the presence of brain-computer-interface-wielding mountebanks.

Biometrics and “Green on Blue” Violence in Afghanistan

Another ‘green-on-blue’ attack kills NATO troop; 10 dead in 2 weeks (Stars & Stripes)

Afghanistan ‘insider’ attacks pose threat to West’s exit strategy (Stars & Stripes)

How to guard against such attacks is the subject of considerable debate in military leadership circles, because overtly heavy-handed measures can send a signal to the Afghans that they are not trusted, which can be taken as an insult. And in traditional Afghan culture, perceived insult can swiftly lead to exactly the sort of violence the attacks represent.

Efforts on the Afghan side include embedding undercover intelligence officers in some battalions, and stricter scrutiny of recruits, including the collection of biometric data to compare against a database of known insurgents. Some observers, though, believe the safeguards built into the recruitment process, including the requirement that village elders vouch for those who want to join the army, are routinely bypassed in many provinces.

Biometrics can help with identity management but they are always just a part of an overall organizational plan.

This short passage touches on a few important issues: technology, managing people, managing a security regime once it’s in place. All must work together in furtherance of organizational goals. If one leg of the stool goes, the whole structure is at risk. For some organizations that means embarrassing CEO speeches and annoyed customers. For others the results are utterly tragic.

Surveillance, transparency, accountability & technology

TrapWire: Anonymous gives handy tips on how to avoid surveillance

This video has a heavy dose of deadpan humor, which is actually quite endearing.

As far as biometric countermeasures go, I, like Anonymous, am still a fan of CV Dazzle because there’s something stylish and fun about how they go about the challenge of defeating facial recognition.

The infra-red LED trick is really cool, too. Fans of the show White Collar will have seen that hack come into play in last week’s episode. That’s the first place I saw it.

All of this, while fun, socially interesting and even romantic, ignores the fact that the smartphone is the holy grail of surveillance technologies. Someone can wear a mask and a crazy hairdo, head cocked 20 degrees to the side under an LED hat, all they want. It won’t do any good if internet companies and cell providers (whether knowingly or unwittingly) cough up everything they know about individuals. The other virtue of the mobile computing surveillance model is that it requires no taxes, maintenance, or budget. The watched pay their own freight. That makes this type of surveillance available to individuals and organizations that might not have a lot of money or labor.

The answer isn’t regulating private use of technologies such as cell phones or biometrics. With technology, blanket moratoriums and bans are almost never the answer and even more rarely succeed. It may not be romantic or fashionable but the only answer is transparency and accountability.

Technology is all about people. It always will be.

Background on TrapWire

Technology and management working together can help improve a public payments system.

What I like about this article is the juxtaposition of the technological and managerial aspects of dealing with difficult problems.

Ghana loses millions in multiple salary payments (Modern Ghana)

In its response to the issue raised by the Auditor-General, the management of the CAGD said “the observation is noted and CAGD will investigate and take necessary action. In general, the ongoing biometric registration of active employees and pensioners will help address some of the payroll issues”.

The Auditor-General also called for an effective supervision of data entry officers to minimise the risk of payroll frauds and errors.

Biometrics give able managers a powerful new tool and an opportunity to realize significant returns on technology investment (ROI), but they can’t manage anything by themselves.
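For a sense of what the biometric registration mentioned in the article buys an auditor, here is a minimal deduplication sketch. The matcher, the threshold, and the toy bit-string templates are assumptions for illustration; the CAGD’s actual system isn’t described in the article. The point is only that comparing every enrolled template against every other one surfaces records that look like the same person drawing more than one salary, which a person then has to investigate.

```python
from itertools import combinations

MATCH_THRESHOLD = 0.9   # assumed; a real deployment tunes this against measured error rates

def similarity(tpl_a, tpl_b):
    """Toy matcher: fraction of agreeing bits between two equal-length templates.
    A real system would call the vendor's matcher here."""
    agree = sum(1 for a, b in zip(tpl_a, tpl_b) if a == b)
    return agree / len(tpl_a)

def find_duplicate_payees(payroll):
    """payroll: list of (employee_id, template) pairs.
    Returns ID pairs whose templates match above the threshold: leads for an
    investigator, not records for automatic deletion."""
    return [(id_a, id_b)
            for (id_a, tpl_a), (id_b, tpl_b) in combinations(payroll, 2)
            if similarity(tpl_a, tpl_b) >= MATCH_THRESHOLD]

if __name__ == "__main__":
    ghost = [1, 0, 1, 1, 0, 1, 0, 0]
    payroll = [
        ("EMP-001", ghost),
        ("EMP-002", [0, 1, 0, 0, 1, 0, 1, 1]),
        ("EMP-003", list(ghost)),   # the same person enrolled under a second ID
    ]
    print(find_duplicate_payees(payroll))   # [('EMP-001', 'EMP-003')]
```

A nationwide payroll wouldn’t brute-force every pair like this; it would use an indexed 1:N search. But the flag-and-investigate workflow is the same, and it still depends on managers acting on the flags.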

Biometric identity management is about people.

Of course, Biometrics are not evil.

Mike Elgan’s recent article, Are biometric ID tools evil?, is really, really dumb (I almost said evil). It’s either that or bordering on libelous, so I’ll give it the benefit of the doubt, even though the piece doesn’t extend the same courtesy to those of us working on biometric identity management technologies.

But maybe he didn’t mean it. The title, after all, is a question, right? We’ll read on.

It doesn’t take much longer for the author to remove all doubt, as he rapidly moves from the rhetorical title to the “How often do you beat your wife?” formulation of the question:

How evil is biometric ID?

Followed by…

So we find ourselves in a strange position in which some religious conservatives and some secular liberal privacy advocates both agree that biometric identification is evil.

On the other, you have a large number of people who consider biometrics an unparalleled evil, and they will refuse to participate.

Who’s right and who’s wrong? Is biometric technology the answer to our security problems? Or is it just plain evil? [all emphasis mine]

Evil? Really? Not “a bad idea”, “misguided”, or “dangerous” — evil? 

Last I checked, evil means “profoundly immoral and malevolent,” and because most people gave up imputing moral qualities to inanimate objects sometime around the Bronze Age, the whole piece is either a really bad joke lacking a punchline or a shot at people — the people at every level of biometric development, from academia to enterprise — working to apply a new technology to the human challenges of identity management.

And why the fixation on “evil”?

Maybe “creepy” seemed too Jan Brady (and way played-out) and moral hyperbole is the new new thing.

Maybe it’s a reference to what is perhaps the least ambitious corporate motto of all time: “Don’t be evil.”

One thing, however, is certain: someone really needs a thesaurus.

You keep using that word.

Technology & the Future of Violence

Not really biometrics related… but that’s pretty much the point. (Hoover.org)

You walk into your shower and find a spider. You are not an arachnologist. You do, however, know that one of the following options is possible:

The spider is real and harmless. The spider is real and venomous.

Your next-door neighbor, who dislikes your noisy dog, has turned her personal surveillance spider (purchased from “Drones ‘R Us” for $49.95) loose and is monitoring it on her iPhone from her seat at a sports bar downtown. The pictures of you, undressed, are now being relayed on several screens during the break of an NFL game, to the mirth of the entire neighborhood.

Your business competitor has sent his drone assassin spider, which he purchased from a bankrupt military contractor, to take you out. Upon spotting you with its sensors, and before you have any time to weigh your options, the spider shoots an infinitesimal needle into a vein in your left leg and takes a blood sample.

As you beat a retreat out of the shower, your blood sample is being run on your competitor’s smartphone for a DNA match. The match is made against a DNA sample of you that is already on file at EVER.com (Everything about Everybody), an international DNA database (with access available for $179.99).

Once the match is confirmed (a matter of seconds), the assassin spider outruns you with incredible speed into your bedroom, pausing only long enough to dart another needle, this time containing a lethal dose of a synthetically produced, undetectable poison, into your bloodstream. Your assassin, who is on a summer vacation in Provence, then withdraws his spider under the crack of your bedroom door and out of the house, and presses its self-destruct button. No trace of the spider or the poison it carried will ever be found by law enforcement authorities…

Gartner Hype Cycle 2012

Gartner’s 2012 Hype Cycle for Emerging Technologies Identifies “Tipping Point” Technologies That Will Unlock Long-Awaited Technology Scenarios (Gartner)

Biometrics feature prominently in several of the technology groups Gartner presents. Whether you’re an old hand or hearing of the Hype Cycle for the first time, you’ll want to click through and check it out.

Source: Gartner

The Peak of Inflated Expectations was frustrating. The Trough of Disillusionment was a grind. Now is the fun part.


It’s not the tech, it’s the people: Senate Face Rec Hearings Edition

Here are a couple of news pieces on the Senate Judiciary Committee Privacy Subcommittee’s hearing on facial recognition and privacy.

Sen. Al Franken reads Facebook the riot act over facial recognition risks (All Voices)

The senator made some pointed criticisms to Facebook’s manager of privacy and public policy Rob Sherman. Sen. Franken noted how difficult it is for users to opt out of having their faces recognized by Facebook supercomputers. The privacy settings, he argued, are buried deep in a lengthy and frustrating process. “Right now, you have to go through six different screens to get (to the privacy opt-out),” Sen. Franken complained. “I’m not sure that’s ‘easy to use’.”

Regulation of Facial Recognition May Be Needed, US Senator Says (PC World)

The growing use of facial recognition technology raises serious privacy and civil liberties concerns, said Senator Al Franken, a Minnesota Democrat and chairman of the Senate Judiciary Committee’s privacy subcommittee. Franken, during a subcommittee hearing, called on the U.S. Federal Bureau of Investigation and Facebook to change the way they use facial recognition technology.

Biometric information, including facial features, is sensitive because it is unique and permanent, Franken said.

There are real privacy issues surrounding both government biometric surveillance and the transparency of private entities that use biometrics.

Dealing with the particulars of the hearing, though, it seems that if you’re mad at Facebook, you should deal with Facebook, and that those worried about the government’s respect for the privacy of citizens would be best served arguing for limits to the government’s snooping power, regardless of the technical method used.

See:
Surveillance requests to cellphone carriers surge and Twitter Gives User Info In 75% Of U.S. Inquiries. Google says it complied with about 65% of court orders and 47% of informal requests in the second half of 2011.

Of the methods Facebook uses to extract personal information from users, facial recognition is perhaps the best known.

Of the myriad technologies government uses to track citizens, facial recognition is among the least significant.

That won’t always be the case, so it’s good to build consensus on the proper use of a new technology in an open and informed way, but it shouldn’t be hyped and used as a distraction from more pertinent privacy issues.

It’s not the tech, it’s the people.