Novel biometric modality: Brain prints

‘Brain prints’ the new biometric identifier (WhaTech)

We’ve had fingerprints for years as unique identifiers of individuals and in recent times their uniqueness has been successfully employed for access control. More recently they’ve been followed by voiceprints and iris scans as unique personal attributes that can be used for access to information systems. But brain waves?

Let’s not get ahead of ourselves. There is little doubt that the brain contains measurable evidence of individual uniqueness, and that in principle it could anchor a reliable behavioral biometric. But as ubiquitous biometrics, brain prints face every obstacle we discussed in our post, The challenges confronting any new biometric modality, and then some.

The linked article doesn’t make any mention of the sensor to be used to collect brain prints, much less offer a vision for how a future identification scenario might work.

This is one of those subjects that is intensely interesting from a Ph.D.’s point of view (invention) but not so much from an engineering or business perspective (innovation). Brain prints as a biometric will face significant — I dare say insurmountable — challenges in finding its way into wide use as a commercial ID management application any time soon.

Since heartbeat biometrics are in the news again…

Heartbeats Could Replace Passwords (NPR Boston)

Instead of memorizing all those passwords, what if the key to unlocking everything could be linked to something unique about you — like the rhythm of your heart?

That’s what biometric researchers in Toronto have come up with.

For reasons that are both scientific (research based) and economic (market based), the road to commercialization of any new biometric modality is steep.

And as we discussed last year, the electrical properties of a human heartbeat may not have the characteristics that make success likely.

Read these two together…

Read this…

Emotient and iMotions partner for integrated facial expression recognition, bio sensor and eye-tracking solution (Biometric Update)

Emotient, which specializes in facial expression analysis, and iMotions, an eye-tracking and biometric software platform company, have announced that Procter and Gamble, The United States Air Force and Yale University are its first customers for a newly integrated platform that combines facial expressions recognition and analysis, eye-tracking, EEG and GSR technologies.

According to the companies, the new combined solution is designed for usability research, market research, and neurogaming, as well as academic and scientific research.

Then this…

Google facial password patent aims to boost Android security (BBC)

Google has filed a patent suggesting users stick out their tongue or wrinkle their nose in place of a password.

It says requiring specific gestures could prevent the existing Face Unlock facility being fooled by photos.

…and then think about Google Glass (or something similar offered by another brand) and the things that become knowable as these technologies are combined and others are added. Iris and face for backward-facing and front-facing ID, knowing precisely what (or whom) someone is looking at when a certain change in neurological activity is noted. Or, precise targeting of weaponry controlled by the eye’s movement along with detailed observations of the neurological states of combatants.

Right now, all of it seems like a long way off, and it is. Significant scientific, technological, and organizational barriers exist. The technology of measurement; the science of interpretation; the fact that a lot of small players own small pieces of the puzzle; integrating the pieces: each presents significant challenges. But…

“Most people overestimate what they can do in one year and underestimate what they can do in ten years.”

Stay tuned. Ubiquitous multi-modal sensors and the real-time ability to interpret and act on the data they collect would have profound effects.

Brainstorming UID with Srikanth Nadhamuni

300,000,000,000,000 biometric queries a day

“…[T]he Aadhaar system was deliberately built as an identity platform as opposed to an end user application, so that government departments and private companies/startups could build their own apps leveraging the platform.”

Technology startups have a huge opportunity to leverage the Aadhaar system (VC Circle)

“This is an ecosystem play.”

Sometimes we get bogged down in the scale of the enrollment challenges associated with UID. It’s good to get back to the amazing scale of possible apps that can be spun out of the ecosystem.
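To put the headline number in perspective, a quick bit of arithmetic converts the quoted 300 trillion queries a day into a sustained per-second rate:

```python
# Rough scale check on the "300,000,000,000,000 biometric queries a day"
# figure quoted above: what sustained query rate would that imply?
queries_per_day = 300_000_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

queries_per_second = queries_per_day / seconds_per_day
print(f"{queries_per_second:,.0f} queries per second")  # roughly 3.5 billion/s
```

That is billions of matches every second, sustained around the clock, which is why the platform-not-application framing matters: no single end-user app needs to handle that load, but the ecosystem in aggregate might.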

Fingerprint Sensor Innovation

World’s First Non-Optical, FBI Certified Four-Finger Scanner (Press Release)

The [Thin Film Transistor] TFT sensor has an active image area of 3.0 x 3.2, a resolution of 500dpi, and is less than 1mm thick. Ultra-Scan has begun miniaturization of the sensor control electronics to a single Application-Specific Integrated Circuit (ASIC) that, when complete, will result in an integrated sensor and control electronics package measuring 3.5 x 3.5 x 0.25, powered by USB, and suitable for a variety of mobile fingerprint collection applications.
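The press release leaves the units off the sensor dimensions, but 500 dpi strongly suggests inches. Under that assumption, a quick back-of-the-envelope calculation shows the image size such a sensor would produce:

```python
# Pixel dimensions for the TFT sensor described above, assuming the
# 3.0 x 3.2 active image area is given in inches (the press release
# omits units; "dpi" implies dots per inch).
width_in, height_in, dpi = 3.0, 3.2, 500

width_px = int(width_in * dpi)    # 1,500 pixels
height_px = int(height_in * dpi)  # 1,600 pixels
total_px = width_px * height_px   # 2,400,000 pixels per slap image

print(width_px, height_px, total_px)
```

A 2.4-megapixel capture in a package under a millimeter thick is exactly the sort of spec that makes mobile integration plausible.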

In the business, we call a multi-fingerprint reader a “slap” reader — well, some of us do anyway.

For now, the least costly single-print readers, and all the slap readers I know of, are optical readers with a glass platen and some sort of internal light source for capturing an image of a fingerprint. This form factor dictates a certain hardware depth, usually two inches or more. As for single-print readers, in many, many applications a two-inch hardware depth isn’t a deal-breaker and price is an object. With slap readers, even though they’re expensive and heavy, there are enough applications where only a slap reader will do.

So for a single-print reader, if a customer can accept the depth, price comes down. If a customer has to have a slap reader, they have to accept the depth associated with optical sensors.

As mentioned above, there are a whole lot of applications where optical sensors make the most sense. Mobile, however, isn’t one of them. In mobile hardware, two inches of depth is a deal breaker at any price. Mobile devices will definitely be integrating these thin film transistor-type sensors (I’ve also seen non-optical hardware called semiconductor scanners, and capacitive readers).

Shrinking the depth of a slap reader while increasing the maximum size of a capacitive reader opens up all sorts of possibilities for mobile devices such as the capability of having the back of a mobile phone recognize users’ partial palm print as they hold the device naturally.

This seems like a pretty big deal but my guess is this type of fingerprint sensor is going to be hugely expensive for a while. But that’s the way these things go. They’re expensive before they’re cheap.

A Visionary’s Perspective

The Chartered Institute for IT has published a wide ranging interview, Getting a facial, with Professor Maja Pantic, from Imperial College, London.

Prof. Pantic has been working on automatic facial behaviour analysis. This type of research, if successful, could lead to a revolution in the way humans interact with technologies devoted to security, entertainment, health and the control of local physical environments in homes and offices.

The interview is long, wide-ranging, and worth reading in its entirety.

I would, however, like to point out two passages that have great bearing on some of the themes we discuss regularly here.

Why computer science?

But with computers, it was something completely new; we just couldn’t predict where it would go. And we still don’t really know where it will go! At the time I started studying it was 1988 – it was the time before the internet – but I did like to play computer games and that was one of the reasons, for sure, that I looked into it. [ed. Emphasis added]

You never know where a new technology will lead, and those who fixate on a technology as a thing in itself are missing something important. Technology only has meaning in what people do with it. The people who created the internet weren’t trying to kill the record labels, revolutionize the banking industry, globalize the world market for fraud, or destroy the Mom & Pop retail sector while passing the savings on to you. The internet, much less its creators, didn’t do it. The people it empowered did.

Technologies empower people. Successful technologies tend to empower people to improve things. If a technology doesn’t lead to improvement, in the vast majority of cases it will fail to catch on and/or fall into disuse. In the slim minority of remaining cases (a successful “bad” technology), people tend to agree not to produce them, or to place extreme conditions on their production and/or use — e.g. chem-bio weapons, or CFCs. There really aren’t many “bad” technologies that people actually have to worry about.

It makes far more sense to worry about people using technologies that are, on balance, “good” to do bad things — a lesson the anti-biometrics crowd should internalize. Moreover, you don’t need high technology to do terrible things. The most terrible things that people have ever done to other people didn’t require a whole lot of technology. They just required people who wanted to do them.

The interview also contains this passage on the working relationship between people and IT…

The detection software allows us to try to predict how atypical the behaviour is of a particular person. This may be due to nervousness or it may be due to an attempt to cover something up.

It’s very pretentious to say we will have vision-based deception detection software, but what we can show are the first signs of atypical or nervous behaviour. The human observer who is monitoring a person can see their scores and review their case. It’s more of an aid to the human observer rather than a clear-cut deception detector. That’s the whole security part.

There’s a lot of human / computer interaction involved.

It’s not the tech; it’s the people. 

Technology like biometrics or behavioral analysis isn’t a robot overlord created to boss around people like security staff. It’s a tool designed to help inform their trained human judgement. This bears on issues like planning for exceptions to the security rule: lost IDs, missing biometrics, etc. Technology can’t be held responsible for anything. It can help people become more efficient, and inform their judgement, but it can’t do a job by itself.
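As a minimal sketch of that “aid to the human observer” pattern: the system scores behaviour, but anything over a threshold is queued for a trained person to review rather than acted on automatically. The scores, threshold, and queue here are hypothetical placeholders, not any real system’s API:

```python
# Decision-support, not decision-making: flag high scores for human review.
# The threshold and score values are illustrative assumptions.
REVIEW_THRESHOLD = 0.8

def triage(observations):
    """Route high-scoring observations to a human review queue."""
    review_queue = []
    for subject_id, atypicality_score in observations:
        if atypicality_score >= REVIEW_THRESHOLD:
            # Flagged for a trained human observer -- no automatic decision.
            review_queue.append((subject_id, atypicality_score))
    return review_queue

queue = triage([("A", 0.35), ("B", 0.91), ("C", 0.82)])
print(queue)  # humans decide what, if anything, B's and C's scores mean
```

The design choice is the point: the software narrows the observer’s attention; the observer, not the software, owns the judgement.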

Back to Three Sides of the Same Coin
