Not a bug, but a feature

Massive errors mar Aadhaar enrolment (Times of India)

The enrolment process for Aadhaar in Odisha is dogged by massive rejection of data due to errors. According to the directorate of census operations here, enrolled biometric data of 40 lakh people stand rejected by the Unique Identification Authority of India (UIDAI), the Aadhaar body, as on June 15.

Some facts:
Odisha is a state in eastern India. Wikipedia puts its population at 43.73 million as of 2014.
1 lakh = 100,000
1 crore = 10,000,000
All numbers not quoted from the article are in more familiar units.

The article goes on to say a lot about the numbers. Of the 38,400,000 eligible people, 31,700,000 (about 82%) have been registered successfully.

The 4 million rejected applications are divided as follows.

2 million were rejected because they were submitted by operators who have been barred from submitting applications. UID works by outsourcing enrollment to private operators who are then paid by the government for accepted applications. Operators who have submitted too many error-riddled or fraudulent applications have been banned from the market.

1 million have been rejected for being duplicate applications, as is proper.

That leaves 1 million true “errors,” or failed enrollments that are potentially valid and are described as those submitted on behalf of “very old people and children (between five to 10 years), whose finger prints and iris scans were not registered properly.” Now, it may turn out that some of these failed enrollments are duplicate applications as well, and it will probably turn out that many (if not most) of these people can be enrolled on a second pass where extra care is taken during the enrollment process. Nevertheless, describing 1 million failed enrollments out of 32.7 million presumably legitimate applications (the 31.7 million accepted plus the 1 million genuine failures) as “massive errors” seems uncharitable.
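
A quick back-of-the-envelope check of that breakdown, as a minimal sketch in Python (the figures are the ones quoted above, converted from lakh; the grouping into three rejection categories follows the article):

    # Odisha Aadhaar figures quoted above (1 lakh = 100,000).
    eligible           = 38_400_000  # eligible population
    accepted           = 31_700_000  # enrollments accepted by UIDAI
    rejected_operator  =  2_000_000  # submitted by barred operators
    rejected_duplicate =  1_000_000  # duplicate applications
    rejected_error     =  1_000_000  # genuinely failed enrollments

    rejected_total = rejected_operator + rejected_duplicate + rejected_error
    assert rejected_total == 4_000_000  # the "40 lakh" rejections

    # "Presumably legitimate" = accepted plus the genuine failures.
    legitimate = accepted + rejected_error

    print(f"accepted share of eligible: {accepted / eligible:.1%}")         # ~82.6%
    print(f"failed share of legitimate: {rejected_error / legitimate:.1%}") # ~3.1%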

Also, UID contains a “Biometric Exception Clause,” which allows for creating UID numbers for people whose biometrics cannot be enrolled. As of May 2015, across India, around 618,000 UID numbers (about 0.07%) have been issued with biometric exceptions.

India UID: Interesting de-duplication and exception stats

Over 9 crore Aadhaar enrolments rejected by UIDAI (Zee News)

Out of 823.3 million enrollments, 97.3 million (approximately 12%) have been rejected for reasons of either quality or duplication.

This may seem high to some, or low to others. In the big picture, there is (or should be!) a cost-benefit analysis at the beginning of the project that weighs the expense of the process against the infallibility of the process. On a first pass, it might make sense to capture the highest proportion of good enrollments with the most convenient process, and then apply a more expensive enrollment process only to the more difficult enrollments, as sketched below.
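
To make that two-pass logic concrete, here is a toy cost model. Every number in it (the per-enrollment costs and the first-pass success rate) is an assumption for illustration only, not a figure from the article:

    # Hypothetical two-pass enrollment cost model (all inputs are assumptions).
    population      = 38_400_000  # eligible population from the Odisha figures
    first_pass_rate = 0.97        # assumed share enrolled successfully on the routine pass
    routine_cost    = 1.0         # assumed cost per routine enrollment attempt (arbitrary units)
    careful_cost    = 5.0         # assumed cost per "extra care" enrollment attempt

    # Option A: use the slower, careful process for everyone.
    cost_all_careful = population * careful_cost

    # Option B: routine pass for everyone, careful pass only for the failures.
    failures      = population * (1 - first_pass_rate)
    cost_two_pass = population * routine_cost + failures * careful_cost

    print(f"careful-only cost: {cost_all_careful:,.0f}")  # 192,000,000 units
    print(f"two-pass cost:     {cost_two_pass:,.0f}")     #  44,160,000 units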

It’s also important to note that the 97.3 million rejected enrollments contain both duplicate applications, which must be rejected, and other applications where clerical error, fraud, or un-enrollable biometrics are the reason for rejection.

Another interesting statistic in the article is that only about 618,000 UID numbers have been issued under the “Biometric Exception Clause,” which allows for creating UID numbers for people whose biometrics cannot be enrolled. That comes out to around 0.07% of all enrollments.

What that means is that, using a data set approaching a billion individuals, the 0.07% exception rate implies that roughly 99.9% of those already enrolled were biometrically enrollable; even allowing generously for people still waiting on a biometric exception, at least 99.3% of the population of India is biometrically enrollable within the existing UID enrollment process.
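
The same numbers, recomputed as a minimal sketch (the figures are those quoted from the article):

    # De-duplication and exception statistics quoted above.
    enrollments_total    = 823_300_000  # total Aadhaar enrollments
    rejected             =  97_300_000  # rejected for quality or duplication
    biometric_exceptions =     618_000  # issued under the Biometric Exception Clause

    print(f"rejection rate:   {rejected / enrollments_total:.1%}")                  # ~11.8%
    print(f"exception rate:   {biometric_exceptions / enrollments_total:.3%}")      # ~0.075%
    print(f"enrollable share: {1 - biometric_exceptions / enrollments_total:.2%}")  # ~99.92%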

Note: The article uses the Indian numbering units crore and lakh.

1 crore = 10,000,000
1 lakh = 100,000

See also: UID applications without biometrics highly likely fraudulent

At least 99.27% of Ghanaian voters verified by fingerprint

Almost 80,000 voted by face-only verification – Afari-Gyan (Ghana Web)

The Chairman of the Electoral Commission (EC), Dr. Kwadwo Afari-Gyan on Thursday May 30, 2013, told the Supreme Court in the election petition trial that there were close to 80,000 voters who were designated as ‘Face-Only’ (FO) voters because the biometric registration machines failed to capture their finger prints during the registration exercise.

Explaining himself further in Court on Thursday, at the start of his evidence-in-chief for the second respondent, Dr. Afari Gyan said among those classified as FO voters were eligible voters who had suffered “permanent trauma” and “temporary trauma”.

He explained permanent trauma to mean voters who had no fingers at all for which reason their fingerprints could not have been captured by the biometric verification machine.

Temporary trauma sufferers, according to Dr. Afari-Gyan, were those who had fingers alright, but nonetheless did not have fingerprints to have been captured by the machine.

He said those two categories of voters were captured in the register as people who could only be identified by their faces before voting since their fingerprints could not be captured by the biometric verification equipment.

Any identification system has to plan for exceptions. This is true whether the ID measure in place is a metal key, an ID card, a PIN, a fingerprint or any combination of ID technologies.

More on exceptions.

A Ghana Web article on exception planning published in early 2012 is here, so the subject of unverifiable biometrics isn’t a surprise.

Instead, let’s deal with the numbers.

According to the article quoted above, 80,000 voters (and that seems to be an upper bound rather than a firm total) were given blank ballots without fingerprint ID verification. Some portion of that number would have been definitively established, during the voter registration process, as people missing hands and fingers.

The image below (also from Ghana Web) shows candidates, percentage of votes received, and, more importantly for our purposes, raw vote totals:

The combined number of votes in parentheses below each candidate’s name comes to 10,995,262. Eighty thousand votes represents 0.73% of the almost eleven million votes cast. The margin of victory between the top two vote-getters was 325,863 votes and they were separated by 2.96% of the total vote.

As far as elections go, having the margin of error less than the margin of victory is a good thing. In this case 0.73% < 2.96% means that the 80,000 unverified votes could not have affected who received the most votes.
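
The arithmetic behind those percentages, as a quick check against the figures quoted above:

    # Ghana 2012 election figures quoted above.
    total_votes       = 10_995_262  # combined raw vote totals
    face_only_votes   =     80_000  # upper bound on votes cast with face-only verification
    margin_of_victory =    325_863  # gap between the top two vote-getters

    unverified_share = face_only_votes / total_votes
    margin_share     = margin_of_victory / total_votes

    print(f"face-only share:   {unverified_share:.2%}")      # ~0.73%
    print(f"margin of victory: {margin_share:.2%}")          # ~2.96%
    print(f"verified share:    {1 - unverified_share:.2%}")  # ~99.27%
    print(f"could flip result: {face_only_votes >= margin_of_victory}")  # False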

Moreover, no one has yet asserted that the votes cast without fingerprint biometric verification could have favored any one candidate, whether because of a systematic attempt to circumvent the biometric verification for fraudulent purposes or because of a geographic disparity in the 80,000 (maximum) exceptions that might have favored one candidate over another.

The bigger story appears to be:
99.27% of the votes in the recent election were cast by biometrically verified legitimate voters.

The last time there was a presidential election, that number was zero. Given increased familiarity with the technology and expected improvements in both biometric hardware and software, expect that 99.27% figure to increase in the next election.

Ghana, and other countries contemplating fully biometric elections, should be heartened by these results.


India: UID applications without biometrics highly likely fraudulent

Sometimes it seems as though headline writers don’t even bother to read the articles.

India removes 384K Aadhaar biometric IDs (ZDNet) — Properly speaking, they weren’t biometric IDs, because they were created under the “biometric exceptions” provision that allowed enrollments to be created without an acceptable biometric identifier. That provision was exploited by unscrupulous registrars who created fake enrollments for which they were paid.

The “biometric exception” was created out of necessity, to account for those with unreadable fingerprints or those who lacked fingers or hands altogether. However, three quarters of the IDs generated under the biometric exception clause have been found to be fraudulent.

It is also interesting to note that if UID lacked a provision for the collection of a biometric identifier, it is unlikely that the large scale fraud would have been detected at all.

Biometrics & ID infrastructure: Perfect is the enemy of good

No good work whatever can be perfect, and the demand for perfection is always a sign of a misunderstanding of the ends of art.
—John Ruskin

Everybody knows that there’s nothing perfect in this world, yet plenty that is imperfect also happens to be very useful.

Identity management is one of these. Conducted by people to account for people, with human beings on both sides of the equation, perfection is out of the question. Only someone who misunderstands the ends of the art of ID would reject a given solution because it falls short of perfection.

Is using a name to identify a person perfect?
Some people can’t speak. Some people can’t hear. Some people can’t read. Some can’t write. Many people share the same name.

A token?
Tokens are lost, stolen, counterfeited.

Maybe a photo then?
Some people can’t see.

Fingerprints, then?
Some people don’t have hands, at all.

Iris?
Some people don’t have eyes.

People cope with imperfection in all aspects of their lives including identity management. Planning for exceptions to the routine ID management transaction is something all existing ID management systems already do. Biometrically enabled ID management systems are no different.

None of the above ID techniques is perfect, yet they are all useful, especially when combined. In this context, a proper understanding of Ruskin’s “ends of art” is Return on Investment, not perfection. The economic value of something does not lie in its perfection. It lies in its ability to help improve things by a measure exceeding the sum of its costs.

What distinguishes biometric systems from earlier ID management techniques, especially in the development context, is that they are an extremely effective and affordable means of establishing a unique identity for individuals among populations that have not been highly organized in the past.

Low access to education? High illiteracy? Poor birth records? Highly transient populations? Recent wars left high numbers of orphans or displaced people? New democracy? For countries answering “yes” to any of these or other similar questions, biometric systems are about the only economically viable choice for developing the ID infrastructure that people who can already verify their identity take for granted.

Additionally, when compared to the investments made by the powers of the Industrial Age to develop their ID management systems — investments still out of reach for the governments of billions of people — biometrics, while cheaper, seem capable of outperforming Industrial Age systems. We know this because existing systems built with the best Industrial Age techniques have been audited using biometrics. When the older systems are audited with biometric techniques, all sorts of errors and inconsistencies are discovered, errors whose numbers would have been reduced significantly had biometrics been used in the creation of new profiles in the relevant ID systems.
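
As a toy illustration of what such an audit amounts to in principle (the records and the precomputed "match group" labels below are assumptions for the example; a real audit derives those groupings from biometric template matching, not from exact keys):

    from collections import defaultdict

    # Toy audit of a legacy register: flag records that resolve to the same person.
    # "match_group" stands in for the output of a real biometric matcher.
    legacy_records = [
        ("ration-001", "person-A"),
        ("ration-002", "person-B"),
        ("ration-003", "person-A"),  # same person enrolled twice under another name
        ("ration-004", "person-C"),
    ]

    records_by_person = defaultdict(list)
    for record_id, match_group in legacy_records:
        records_by_person[match_group].append(record_id)

    duplicates = {person: ids for person, ids in records_by_person.items() if len(ids) > 1}
    print(duplicates)  # {'person-A': ['ration-001', 'ration-003']}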

A Visionary’s Perspective

The Chartered Institute for IT has published a wide ranging interview, Getting a facial, with Professor Maja Pantic, from Imperial College, London.

Prof. Pantic has been working on automatic facial behaviour analysis. This type of research, if successful, could lead to a revolution in the way humans interact with technologies devoted to security, entertainment, health and the control of local physical environments in homes and offices.

The interview is long, wide-ranging, and worth reading in its entirety.

I would, however, like to point out two passages that have great bearing on some of the themes we discuss regularly here.

Why computer science?

But with computers, it was something completely new; we just couldn’t predict where it would go. And we still don’t really know where it will go! At the time I started studying it was 1988 – it was the time before the internet – but I did like to play computer games and that was one of the reasons, for sure, that I looked into it. [ed. Emphasis added]

You never know where a new technology will lead, and those who fixate on a technology as a thing in itself are missing something important. Technology only has meaning in what people do with it. The people who created the internet weren’t trying to kill the record labels, revolutionize the banking industry, globalize the world market for fraud, or destroy the Mom & Pop retail sector while passing the savings on to you. The internet, much less its creators, didn’t do it. The people it empowered did.


Technologies empower people. Successful technologies tend to empower people to improve things. If a technology doesn’t lead to improvement, in the vast majority of cases it will fail to catch on and/or fall into disuse. In the slim minority of remaining cases (a successful “bad” technology), people tend to agree not to produce it, or to place extreme conditions on its production and use, e.g. chem-bio weapons or CFCs. There really aren’t many “bad” technologies that people actually have to worry about.


It makes far more sense to worry about people using technologies that are, on balance, “good” to do bad things — a lesson the anti-biometrics crowd should internalize. Moreover, you don’t need high technology to do terrible things. The most terrible things that people have ever done to other people didn’t require a whole lot of technology. They just required people who wanted to do them.


The interview also contains this passage on the working relationship between people and IT…

The detection software allows us to try to predict how atypical the behaviour is of a particular person. This may be due to nervousness or it may be due to an attempt to cover something up.

It’s very pretentious to say we will have vision-based deception detection software, but what we can show are the first signs of atypical or nervous behaviour. The human observer who is monitoring a person can see their scores and review their case. It’s more of an aid to the human observer rather than a clear-cut deception detector. That’s the whole security part.

There’s a lot of human / computer interaction involved.

It’s not the tech; it’s the people. 


Technology like biometrics or behavioral analysis isn’t a robot overlord created to boss around people such as security staff. It’s a tool designed to help inform their trained human judgement. This bears on issues like planning for exceptions to the security rule: lost IDs, missing biometrics, etc. Technology can’t be held responsible for anything. It can help people become more efficient, and inform their judgement, but it can’t do a job by itself.

Back to Three Sides of the Same Coin