We’re all ears

New academic research on ear biometrics…

3D Ear Identification Based on Sparse Representation (Forensic Magazine)

Compared with classical biometric identifiers such as fingerprint and face, the ear is a relatively new member of the biometrics family and has recently received significant attention due to its non-intrusiveness and ease of data collection. As a biometric identifier, the ear is appealing and has desirable properties such as universality, uniqueness and permanence. The ear has a rich structure and a distinct shape which remains unchanged from 8 to 70 years of age, as determined by Iannarelli in a study of 10,000 ears. Recognition using 2D ear images has discriminative power comparable to recognition using 2D face images.

If you click through to the whole study at plos.org, the authors (Lin Zhang, Zhixuan Ding, Hongyu Li & Ying Shen) have made the Matlab source code for the ear matching algorithm available. That’s really neat.
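
For readers who’d like a feel for what “sparse representation” matching involves without wading through the Matlab, here’s a rough Python sketch of the generic sparse-representation-classification idea: express a probe feature vector as a sparse combination of gallery samples, then assign it to the subject whose samples reconstruct it best. The function, parameters and data layout below are my own illustrative assumptions, not the authors’ actual pipeline.

```python
# Generic sparse-representation classification (SRC) sketch.
# This illustrates the core idea only; it is NOT the authors' 3D ear code.
import numpy as np
from sklearn.linear_model import Lasso

def src_identify(gallery, labels, probe, alpha=0.01):
    """gallery: (n_features, n_samples) matrix, one enrolled feature vector per column.
    labels:  1-D numpy array of subject IDs, one per gallery column.
    probe:   (n_features,) feature vector to be identified."""
    # Solve min ||probe - gallery @ x||^2 + alpha * ||x||_1 for a sparse x.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(gallery, probe)
    x = lasso.coef_
    # Score each subject by how well its columns alone reconstruct the probe.
    best_id, best_residual = None, np.inf
    for subject in np.unique(labels):
        x_subject = np.where(labels == subject, x, 0.0)
        residual = np.linalg.norm(probe - gallery @ x_subject)
        if residual < best_residual:
            best_id, best_residual = subject, residual
    return best_id, best_residual
```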

From our first post on ear biometrics in 2010…

Pros:
-Facial recognition accuracy is degraded as the pose angle diverges from a full frontal view. As pose angles get bigger, an ear will come into view. Tying an ear-recognition system to a face recognition system could make more identifications possible, especially with a non-participating subject.

Cons:
-Ears aren’t really that stable. They grow throughout life, a point the quote above touches on.
-As high school wrestlers can attest, ears are easily deformed by trauma.
-Hair obscures significant portions of the ear in a significant percentage of the population.

Washington DC: Participate in biometric system testing and earn $95

Seeking individuals to participate in an ID verification research study. (Upper Marlboro Patch)

Participants will be asked to pass through a simulated identification area that uses safe, commercially available sensors like high definition cameras. The simulated screening area will also test the usefulness of safe biometric scanners, such as fingerprint identification, that are currently being used by other countries at border crossings.

New book on how biometrics age

Book Review: Age Factors in Biometric Processing (M2SYS Blog)

Mr. Fairhurst assembles his book into four primary sections:

1.) An introduction to the aging process and an overview of biometric systems – setting the stage for a discussion of how aging can affect individual biometric modalities
2.) A study of how aging affects specific biometric modalities, including physiological and behavioral modalities
3.) A closer look at aging from an industrial viewpoint, a forensics perspective, and the impact of aging on one of the more popular biometric modalities – facial recognition
4.) A look to the future: based on the conclusions of this book, the type and scope of research that may be needed

The challenges confronting any new biometric modality

[ed. This post reflects a substantial rewrite of an earlier post of January 24, 2013: Not the bee’s knees]

Every once in a while a version of the following paragraph finds itself in the news…

Biometrics Using Internal Body Parts: Knobbly Knees in Competition With Fingerprints (Science Daily)

Forget digital fingerprints, iris recognition and voice identification, the next big thing in biometrics could be your knobbly knees. Just as fingerprints and other body parts are unique to us as individuals and so can be used to prove who we are, so too are our kneecaps. Computer scientist Lior Shamir of Lawrence Technological University in Southfield, Michigan, has now demonstrated how a knee scan could be used to single us out.

Forget digital fingerprints, iris recognition and voice identification, the next big thing in biometrics could be your ______________.

Examples are numerous and fecund:

Heartbeat?
Rear-end?
Ear?
Bone structure or electric conductivity?
Footsteps?
Nose? (ed. Link added later. I forgot about that one.)
Body odor?
Brain prints?
Lip movements?
Kneecap?

While I suspect that any definable aspect of the human anatomy could be used as a biometric identifier — in instances where teeth are all that is known about an individual, they are used for high confidence identification — I’m afraid that, for the foreseeable future, the cards are stacked against any new biometric modality catching on in any big way.

The reasons for this are both scientific (research based) and economic (market based).

On the science side, a good biometric modality must be: unique, durable, and easily measurable. If any of these are missing, widespread use for ID management isn’t in the cards. If something is unique and durable but isn’t easily measurable, it can still be useful but it isn’t going to become ubiquitous in automated (or semi-automated) technology. Teeth and DNA fit this model. Teeth have been used to determine the identity of dead bodies with a high degree of certainty for a long time, but we aren’t going to be biting any sensors to get into our computers any time soon — or ever. Likewise with DNA.

There is also the challenge of proving that a modality is in fact unique, durable and easily measurable, which requires a whole lot of experimental data and (especially regarding uniqueness) a healthy dose of statistical analysis. I’m no statistician, and from what I understand the statistical rules for proving biometric uniqueness aren’t fully developed yet anyway. So let’s leave things in layman’s terms: if you want to invent a new biometric modality and someone asks how big a data set of samples of the relevant body part you need, your best answer is “how many can you get me?”

In order to ascertain uniqueness you need samples from as many different people as you can get. For durability you need biometric samples for the same person taken over a period of time and multiplied by a lot of people.

Ease of measure is more experiential and will be discovered during the experimentation process. The scientists charged with collecting the samples from real people will quickly get a feel for the likelihood that people would adapt to a given ID protocol.

For two common biometric modalities, face and fingerprint, huge data repositories have existed since well before there was any such thing as a biometric algorithm. Jails (among others) had been collecting this information for a hundred years, and the nature of the jail business means you’ll get several samples from the same subject often enough to test durability, too, over their criminal life. For face, other records such as school yearbooks exist and were readily available to researchers who sought to test the uniqueness and durability of the human face.

The first hurdle for a novel biometric modality is the competition for the attention of scientists and researchers. Getting the attention of science and technology journalists by making a pronouncement that the space between the shoulder blades is the next big thing in biometrics is one thing. Getting academic peers to dedicate the time and research dollars to building the huge database of interscapular scans required for algorithm development is quite another. Any new modality has to offer out-sized advantages over established modalities in order to justify the R&D outlay required to “catch up”. This is highly unlikely.

On the market side, in order to displace established (finger/hand and face/eye) biometric modalities in wide scale deployments, the academic work must be complete and the new technology must produce a return on investment (ROI) in excess of that offered by existing technologies designed to accomplish the same function.

That’s not to say that modalities that didn’t have the advantage of a 100-year head start on data collection are impossible to bring to market. Iris, voice, and the vascular biometrics of the hand (palm, finger) have joined face and fingerprint biometrics in achieving commercial viability despite the lack of historic data repositories. But there were several things recommending them. They either occupy prime real estate on the head and the end of the arm (iris, vein), making them easy to get at, or they are the only biometric that can be used over a ubiquitous infrastructure that simply isn’t going anywhere (voice/phone), or they offer advantages over similar established modalities. With hand vascular biometrics: they’re harder to spoof than fingerprints; no latency; avoidance of the “fingerprinting = criminality” stigma; they can work with gloves; users can avoid touching the sensor, etc. With iris: harder to copy than the face; harder to spoof; easier to measure than retinal vasculature; and extremely low/no latency. Yet even after gaining the required academic attention, iris and voice have had great difficulty overcoming the market (ROI) hurdle, which brings us back to knees.

Is there any database of kneecaps of significant size that would allow researchers to skip the time-consuming task of building such a database themselves, reducing the cost of development? Is there any deeply embedded, ubiquitous infrastructure that is already an ideally suited knee-sensor? Is there any objection to modalities that have a head start on knees that knee biometrics would overcome? Is there any conceivable, repeatable, scalable deployment where a potential end user could save a whole lot of money by being able to identify people by their knees? I’m at a loss, but these are exactly the kind of questions any new biometric modality must be able to answer in the affirmative in order to have any hope for wide-scale deployment.

So, it’s pretty clear that knee biometrics are not something the average person will ever come into contact with. Does that mean there is no value in exploring the idea of the kneecap as a feature of the human anatomy capable of being used to uniquely identify an individual? Not necessarily.

In order to thrive as high value-added tools in highly specialized deployments, a novel modality just needs to help solve a high value problem. This has heretofore been the case with teeth & DNA. The analysis of teeth and DNA is expensive, slow, requires expert interpretation, and is difficult to completely automate, but it has been around for a long, long time and isn’t going anywhere anytime soon. That’s because instances where teeth and DNA are the only pieces of identifying information available are frequent enough, the value of making the identification is high enough, and the confidence level of the identification is high enough that people are willing to bear the costs associated with the analysis of teeth and DNA.

Beyond teeth and DNA, any biometric modality can be useful, especially when it is the only piece of information available. The CIA and FBI even invented a completely novel biometric approach in an attempt to link Khalid Shaikh Mohammed to the murder of Daniel Pearl using arm veins. But how likely is something like that ever to be the case for any of these novel modalities, knees included? It’s possible that the situation could arise where a knee bone is discovered and there is an existing x-ray or MRI of a known person’s knee and a comparison would be useful. That, however, is not enough to make anyone forget about any already-deployed biometric modality.

Canada military to spread biometric knowledge domestically

Canadian Forces Expands Its Biometric Capabilities But Remains Silent On The Details (Ottawa Citizen)

…[I]n an April 2010 directive issued by then Chief of the Defence Staff Gen. Walter Natynczyk, the military was ordered to expand such capabilities beyond those being detained in Afghanistan.

The directive called on Canadian Forces planners to “shape” research conducted by the DND’s science organization, Defence Research and Development Canada, so they could identify new future technologies that could improve the collection of biometric data.

The directive was aimed at dealing with the Afghanistan mission. But it didn’t explain whether the call to expand biometric capabilities to support other government departments, as well as the need to conduct new research, was for future international missions, support for domestic operations or a combination of both.

Good face rec article at the BBC

Can disguises fool surveillance technology? (BBC)

…putting a scarf over the mouth and nose, or simply wearing dark glasses could fool the system. However, this is beginning to change, says Shengcai Liao, an assistant professor at the Center for Biometrics and Security Research in Beijing, China. He says new techniques are being developed that can use information from the nose or mouth alone if the eyes are occluded, or from the eyes and eyebrows if a scarf is covering the lower part of the face. “It’s not possible to recognize a fully occluded face, but we can currently recognize faces with 30% or even 50% occlusion,” he said. “We have even had success performing recognition from a mouth alone – something that it would be very difficult for a human to do.”
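
One way to picture the technique Liao describes is region-wise matching: score only the parts of the face that are actually visible and ignore the rest. The Python sketch below is my own illustration; the region names, feature vectors and cosine-similarity scoring are assumptions, not the Beijing group’s method.

```python
# Hypothetical occlusion-aware matching: compare only visible facial regions.
import numpy as np

REGIONS = ['eyes', 'eyebrows', 'nose', 'mouth']

def partial_match(probe_features, gallery_features, visible):
    """probe_features / gallery_features: dict of region -> feature vector.
    visible: set of regions not occluded in the probe image."""
    scores = []
    for region in REGIONS:
        if region not in visible:
            continue  # e.g. skip 'eyes' if the subject is wearing dark glasses
        p, g = probe_features[region], gallery_features[region]
        # Cosine similarity for this visible region.
        scores.append(float(np.dot(p, g) / (np.linalg.norm(p) * np.linalg.norm(g))))
    # Average over whatever was visible; fewer regions means a less reliable score.
    return sum(scores) / len(scores) if scores else 0.0
```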

But what about other countermeasures, such as those used by McAfee, which included skin darkening, facial distortion and colouring his hair?

I’m still a fan of CV Dazzle. If you’re going to change your appearance to “jam” facial recognition systems, you can make a bolder fashion statement than wearing a ski mask. Well, I guess a ski mask is a pretty bold fashion statement, but click over to CV Dazzle for other options that don’t scream “I just robbed a bank.”

Howie Woo also has a more cheery alternative for those committed to the mask, but his approach has its own risks.

Bypassing an Iris Scanner? There’s Got To Be a Better Way.

In honor of today’s twitter biometric chat on iris biometrics, here’s a post from July 30 containing thoughts on the implications of a recent iris biometrics hack…

A couple of weeks ago, when the news broke that someone had claimed to have “hacked” iris biometrics by reverse engineering a template into an image of an iris that would be accepted by an iris recognition system, I said: It’s not a real biometric modality until someone hacks it.

That’s because a hacking claim can generate a lot of media publicity even if it doesn’t constitute proof that a technology is fatally flawed. Where’s the publicity value of hacking something that nobody uses, anyway? Claims like this can also be taken as a sign that a new technology, iris biometrics in this case, has crossed some sort of adoption and awareness threshold.

So what about the hack? Now that more information is available and assuming that Wired has things about right, “experiment” is a far better descriptor than “hack” for what actually went down. “Hack” would seem to indicate that a system can be manipulated into behaving unexpectedly and with exploitable consequences in its real world conditions. Think of picking a lock. A doorknob with a key hole can be manipulated by tools that aren’t the proper key to open a locked door in its normal operating environment.

The method that the researchers relied upon to develop the fake iris from the real template bears no resemblance to the lock-picking example. What the researchers did is known as hill-climbing. In simple terms, it’s like playing the children’s game Cold-Warm-Hot but the feedback is more detailed. A hill-climbing experiment relies upon the system being experimented on giving detailed information back to the experimenter about how well the experimenter is doing. The experimenter presents a sample and the system gives a score (cold, warm, hot). The experimenter refines the sample and hopes the score will improve. Lather, rinse, repeat. A few hundred iterations later, the light turns green.

Technically, you don’t even need to have a sample (template) to start hill climbing. You could just start feeding the system random inputs until you hit upon a combination the system accepts as a match for the stored template.
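
To make the hill-climbing idea concrete, here’s a minimal, hypothetical sketch in Python. It assumes a matcher that hands back a graded similarity score for any candidate and a mutation step that slightly perturbs the candidate; the names, threshold and loop are illustrative, not taken from the researchers’ actual code.

```python
ACCEPT_THRESHOLD = 0.90  # hypothetical score at which the matcher says "yes"

def hill_climb(matcher_score, mutate, candidate, max_iters=100_000):
    """Generic hill climbing against a matcher that leaks its score.
    matcher_score(candidate) -> similarity in [0, 1] (cold, warm, hot)
    mutate(candidate)        -> a slightly altered copy of the candidate
    candidate                -> the starting guess (can be random noise)"""
    best, best_score = candidate, matcher_score(candidate)
    for _ in range(max_iters):
        if best_score >= ACCEPT_THRESHOLD:
            break                        # the light turns green
        trial = mutate(best)
        trial_score = matcher_score(trial)
        if trial_score > best_score:     # keep changes that score "warmer"
            best, best_score = trial, trial_score
    return best, best_score
```

The whole game depends on the system leaking a graded score for every guess; a matcher that only answers accept/reject, or that limits repeated attempts, takes the warm/cold feedback away.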

This is one of those exercises that is academically interesting but doesn’t provide much useful information to system engineers or organization managers. Scientific experiments deal with their subjects by isolating and manipulating one variable at a time. Real world security systems are deployed with careful consideration of the value of what is being protected and a dependence upon all sorts of environmental factors.

A person who wanted to bypass an iris scanner using this method in the real world would:

1. Hack into a biometric database to steal a template of an authorized user; pray templates aren’t encrypted
2. Determine which biometric algorithm (which company’s technology) generated the template
3. Buy (or steal) that company’s software development kit
4. Build and successfully run the hill-climbing routine
5. Print the resulting image using a high quality printer
6. Go to the sensor
7. Place print-out in front of iris scanner
8. Cross fingers

Simple, right? Compared to what?

Once you’re talking about hacking into unencrypted biometric template databases (and depending upon your CRUD privileges) almost anything is possible and little of it requires Xeroxing yourself a pair of contact lenses.

Why not just blow away the whole database of iris templates? Problem solved. The scanners, now just locks with no key, would have to be disabled at least temporarily.

If stealth is more your style, just hack into the database, create a credential for yourself by placing your very own iris template in there and dispense with the whole rigmarole of the hill-climbing business. Delete your template (and why not all the others) after the heist.

If your hacking skillz aren’t up to the task, you could stalk someone who is already enrolled with a Nikon D4 and a wildlife photography lens and skip steps one thru four (and eight) on the above list.

You could trick, threaten or bribe someone into letting you in.

Break the door or a window.

The elaborateness of the process undertaken by the researchers pretty much proves that the iris sensor isn’t going to be the weak link in any real world security deployment.

Daily Mail Calls for More Facial Recognition Technology at Borders?

The UK Daily Mail calls for more facial recognition technology at borders, but it is pretty hard to decode that from the article, as published.

The Daily Mail recently published an article about “facial recognition” stating that because humans can be confused while comparing a neutrally-posed facial photo to the live subject standing before them, it follows that “facial recognition technology needs to be upgraded.”

I agree, with a caveat. I’m all for adopting facial recognition technology (SecurLinx does great work in this field). The upgrading will come later.

The article makes a bit of a hash of the problem by muddling the very different processes by which humans and biometric facial recognition systems process visual inputs. On a quick read, it takes a psychological study of how humans process visual information about other people’s faces and assumes those findings apply perfectly to technological facial recognition systems. They don’t.

The observations of Rob Jenkins, a psychologist at Glasgow University, actually argue for the increased use of facial recognition technology as currently on offer as an aid to human border agents, along the lines advanced in an earlier post (Facial Recognition vs Human) & (Facial Recognition + Human).

A technology assisted human should outperform both a stand-alone technology and unaided humans.

On another note, the psychology surrounding how people (and wasps!) recognize faces is very interesting. The paper by Dr. Jenkins that seems to have the most bearing on facial recognition technology can be read here [pdf].

The paper is a little more skeptical of facial recognition technology than is warranted, because the authors envision facial recognition technology as essentially aspiring to be a poor replication of a fallible neurological process, rather than as an augmentation of what humans do that comes at the problem from a completely different angle.

We suggest that a major attraction of using facial appearance to establish identity is that we accept it can be done in principle. In fact, we experience practical success every day because the system that has solved it is the human brain. The proliferation of ‘biologically inspired’ approaches to automatic face recognition reflects the willingness of computer engineers to model the brain’s success. Yet, psychological studies have shown that human expertise in face identification is much more narrow than is often assumed. Moreover, the process that most automatic systems attempt to model lies outside [this expertise]. From this perspective, disappointment in machine systems is inevitable, as they model a process that fails. Human limitations in face identification are not widely appreciated even within cognitive psychology, and seldom penetrate cognate fields in engineering and law. In §3, we offer an overview of the most pertinent limitations. For this purpose, we focus specifically on evidence from face matching tasks, as these directly address a problem that is common to security and forensic applications.

A Visionary’s Perspective

The Chartered Institute for IT has published a wide-ranging interview, Getting a facial, with Professor Maja Pantic of Imperial College, London.

Prof. Pantic has been working on automatic facial behaviour analysis. This type of research, if successful, could lead to a revolution in the way humans interact with technologies devoted to security, entertainment, health and the control of local physical environments in homes and offices.

The interview is long, wide-ranging, and worth reading in its entirety.

I would, however, like to point out two passages that have great bearing on some of the themes we discuss regularly here.

Why computer science?

But with computers, it was something completely new; we just couldn’t predict where it would go. And we still don’t really know where it will go! At the time I started studying it was 1988 – it was the time before the internet – but I did like to play computer games and that was one of the reasons, for sure, that I looked into it. [ed. Emphasis added]

You never know where a new technology will lead, and those who fixate on a technology as a thing in itself are missing something important. Technology only has meaning in what people do with it. The people who created the internet weren’t trying to kill the record labels, revolutionize the banking industry, globalize the world market for fraud, or destroy the Mom & Pop retail sector while passing the savings on to you. The internet, much less its creators, didn’t do it. The people it empowered did.

Technologies empower people. Successful technologies tend to empower people to improve things. If a technology doesn’t lead to improvement, in the vast majority of cases it will fail to catch on and/or fall into disuse. In the slim minority of remaining cases (a successful “bad” technology), people tend to agree not to produce them, or to place extreme conditions on their production and/or use, e.g., chem-bio weapons or CFCs. There really aren’t many “bad” technologies that people actually have to worry about.

It makes far more sense to worry about people using technologies that are, on balance, “good” to do bad things — a lesson the anti-biometrics crowd should internalize. Moreover, you don’t need high technology to do terrible things. The most terrible things that people have ever done to other people didn’t require a whole lot of technology. They just required people who wanted to do them.

The interview also contains this passage on the working relationship between people and IT…

The detection software allows us to try to predict how atypical the behaviour is of a particular person. This may be due to nervousness or it may be due to an attempt to cover something up.

It’s very pretentious to say we will have vision-based deception detection software, but what we can show are the first signs of atypical or nervous behaviour. The human observer who is monitoring a person can see their scores and review their case. It’s more of an aid to the human observer rather than a clear-cut deception detector. That’s the whole security part.

There’s a lot of human / computer interaction involved.

It’s not the tech; it’s the people.

Technology like biometrics or behavioral analysis isn’t a robot overlord created to boss around people like security staff. It’s a tool designed to help inform their trained human judgement. This informs issues like planning for exceptions to the security rule: lost ID’s, missing biometrics, etc. Technology can’t be held responsible for anything. It can help people become more efficient, and inform their judgement, but it can’t do a job by itself.

Back to Three Sides of the Same Coin

Artificial Intelligence & Multimodal Biometrics

Neural network mimics the brain for improved decision-making in biometric security systems (EurekAlert!)

“Our goal is to improve accuracy and as a result improve the recognition process,” says Gavrilova, a professor in the Faculty of Science. “We looked at it not just as a mathematical algorithm, but as an intelligent decision making process and the way a person will make a decision.”

The algorithm can learn new biometric patterns and associate data from different data sets, allowing [the] system to combine information, such as fingerprint, voice, gait or facial features, instead of relying on a single set of measurements.

A system like this is a very long way from seeing the light of day in an actual real-world deployment, but the concept strikes me as having huge potential for extremely complex high value deployments of the future such as airport ID.
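
The multimodal idea is easier to picture with something much simpler than a brain-inspired neural network: plain weighted score-level fusion. The modality names, weights and threshold in the Python sketch below are hypothetical, just to show why pooling several so-so signals can beat relying on any single one.

```python
# Hypothetical score-level fusion across modalities (not the system described above).
def fuse_scores(scores, weights=None):
    """scores:  dict of modality -> match score in [0, 1].
    weights: optional dict of modality -> relative weight (defaults to equal)."""
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * scores[m] for m in scores) / total

# Accept the claimed identity only if the fused score clears a threshold.
fused = fuse_scores({'fingerprint': 0.91, 'voice': 0.62, 'gait': 0.70, 'face': 0.85},
                    weights={'fingerprint': 2.0, 'voice': 1.0, 'gait': 0.5, 'face': 1.5})
ACCEPT_THRESHOLD = 0.75  # hypothetical operating point
print('accept' if fused >= ACCEPT_THRESHOLD else 'reject')
```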

Biometric Systems: Hacking from the Outside In

Behind all the techno-jargon, Biometric bugs too dangerous for public? (ZDNet) is about biometric lock picking.

In the software world, if your system has a weakness, you can just fix the software, push out an update, and voila, all is well. If, however, your sensor hardware is buggy (i.e. the lock is easy to pick), you face the much more painful prospect of fixing/replacing each sensor.

Read the whole thing. The topic is very interesting from a technical point of view, and the article does a good job of not overly hyping the issue.