Federal regulation for face recognition technology?

Microsoft wants regulation of facial recognition technology to limit ‘abuse’ (CNN)

“Facial recognition — a computer’s ability to identify or verify people’s faces from a photo or through a camera — has been developing rapidly. Apple (AAPL), Google (GOOG), Amazon and Microsoft are among the big tech companies developing and selling such systems. The technology is being used across a range of industries, from private businesses like hotels and casinos, to social media and law enforcement.

Supporters say facial recognition software improves safety for companies and customers and can help police track down criminals or find missing children. Civil rights groups warn it can infringe on privacy and allow for illegal surveillance and monitoring. There is also room for error, they argue, since the still-emerging technology can result in false identifications.”

The details of any such federal regulation will matter a lot. I believe three states have laws regulating face recognition technology today: Illinois, Texas, and Washington. Illinois is reportedly considering revisions to its Biometric Information Privacy Act (BIPA) to limit its scope.

The US Congress will need to decide whether to continue leaving regulation of biometric systems to the states or whether it’s time for federal action. We’ll definitely be keeping a close eye on this.

Illinois to revisit BIPA law?

Illinois Considering Amendments to Biometric Privacy Law (BIPA) That Would Create Major Exemptions to Its Scope (Proskauer.com)

“Biometric privacy remains an important issue, as facial recognition and other biometric technologies are increasingly in use. As such, it is desirable to find a balance between privacy and security while at the same time allowing companies to use the advances in biometrics in productive ways. Some argue that the Illinois law, in its present form, fails to strike that balance. It appears that some of the Illinois legislators have heard that argument and are trying to correct any imbalance that the law might present. Given what’s at stake, we will closely follow these legislative developments.”

Proskauer Rose, the source of the linked article, is an international law firm with offices in Chicago. The full piece has a lot of links to more information on the Illinois BIPA law. Read the whole thing, especially if you’re interested in biometrics, privacy, or in business in Illinois.

Our previous posts touching on the Illinois BIPA law can be found here.

Peru: Prepaid mobile sales will require fingerprint verification against national ID database

…with an assist from Microsoft Translator

From now prepaid mobile lines will be sold with fingerprint identification of users (Osiptel)

Operators will be required to verify the identity of users who wish to contract prepaid mobile service at their offices. Starting today, this identification will be carried out through biometric fingerprint verification systems connected to the RENIEC database.

Full implementation is to be accomplished by January 1, 2017.

Facial recognition technology is changing how we think about photography

SCOTLAND: Cash-strapped police spend £700k on UK database (The Scotsman)

The MPs noted a “worrying” lack of government oversight and regulation of the use of biometrics by public bodies.

It called for day-to-day independent oversight of the police use of all biometrics, and for the Biometrics Commissioner’s jurisdiction to be extended beyond DNA and fingerprints.

ILLINOIS: Does Facebook’s facial recognition technology violate privacy laws? (ABA Journal)

The lawsuit, filed Wednesday, argues that the social media company was required by Illinois law to inform Carlo Licata in writing that it would collect and retain his “biometric data,” and specify when it would destroy that data.

Both Facebook and the police in Scotland have been collecting photos of individuals for years but facial recognition technology changes things. Photos aren’t simply records of something that happened, mere mementos, anymore. They’re search terms and search results.

That has implications for both public and private entities who collect and store images of people.

Ordinary snapshots are now biometric data. The news pieces above both show long-standing policies being scrutinized in the context of reliable facial recognition technology.

Praise for Ann Cavoukian, Privacy Commissioner of Ontario

Canada’s Global Player in the Privacy Debate (Governing.com)

To Cavoukian, the notion that personal privacy is sacrificed for the greater good — from health reporting to communications tracking — is the lazy way out. She has developed what she calls Privacy by Design, the idea that personal privacy protections and new technology advancements can actually live in harmony. “Why do we have to look at it as one interest versus another?” she asks. “I always call it the power of ‘and.’ Get rid of the word ‘versus;’ substitute the word ‘and.’ I want privacy and security.”

We have also had good things to say about Ms. Cavoukian in the past.

Delhi: First rickshaw pullers, now street vendors…

Street vendors concerned about Parliament disruptions holding up bill meant for their protection (Times of India)

“If the bill is passed, the police and municipal officials will not be able to throw us around,” said Champa Ben, a street vendor from Ahmedabad. She has been selling fruits and vegetables on the pavement for the last 28 years. “Yet I have to pay Rs 50 per day as protection money to policemen. Even then, they keep throwing away my wares and harass me,” she said.

NASVI president Manali Shah said the government should provide for recording of biometric measurements of street vendors so that only genuine ones are issued identity cards. “Often we have seen politicians manage licences for their people while genuine street vendors are denied,” she said.

Short and sweet version: The street vendors want it and biometrics can help.

Biometrics can help bring order out of chaos is the post on the biometric registration of rickshaw pullers.

Biometrics can help bring order out of chaos

Special drive for registration of cycle-rickshaws in Delhi (The Hindu)

After years of harassment from the police and municipal authorities, there is finally some good news for the rickshaw pullers and owners. In compliance with the directions of the Delhi High Court, the East Delhi Municipal Corporation (EDMC) on Wednesday announced a special drive for registration of cycle-rickshaws and rickshaw pullers through their Citizen Service Bureaus (CSB). Likewise, the North Delhi Municipal Corporation initiated the drive on March 25.

According to municipal officials, the process of registration has been simplified and residence proof or proof of purchase of cycle-rickshaw would not be asked from the applicant. The CSBs have also been equipped with biometric machines, to take index finger impressions, and cameras for taking the photographs of the applicants for registration.

This system seems a lot like the recent effort in the Philippines to register all the bus drivers in Manila. Traffic congestion, public safety, and compliance with government licensing are some of the major goals of registration initiatives like these. Biometrics — fingerprints in this case — offer a cheap, convenient means of creating an ID system from scratch, i.e., one that doesn’t rely on a pre-existing paper trail.

It is this last detail that is often overlooked by those skeptical of biometric systems. It’s just impossible for some people to imagine what it would be like to be entirely cut off from the ID infrastructure or how to go about creating one for those who can’t prove anything about their own personal history.

What is your date of birth? 
Where were you born? 
What is your father’s name? 
 I don’t know. 

In cases like the regulation of rickshaw pullers discussed, you don’t really need to know anything except that the person attached to this finger paid their fee and is legally entitled to ply their trade. A decent ID system can then be built out from there.
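The “nothing but a finger and a fee” idea above can be made concrete with a hypothetical Python sketch. Every name in it is invented for illustration, and exact byte equality on the template stands in for the 1:N similarity search a real biometric system would perform.

```python
# Hypothetical sketch of a minimal ID registry built from scratch:
# the system needs to know only that the person attached to this
# finger paid the fee and may legally ply their trade.
from dataclasses import dataclass
from datetime import date

@dataclass
class RickshawRecord:
    template: bytes         # enrolled fingerprint template (opaque data)
    fee_paid_through: date  # the one fact the system must assert

registry: dict[bytes, RickshawRecord] = {}

def enroll(template: bytes, paid_through: date) -> None:
    """Register a puller with nothing but a finger and a fee receipt."""
    registry[template] = RickshawRecord(template, paid_through)

def may_ply_trade(template: bytes, today: date) -> bool:
    """Is the person attached to this finger entitled to work today?"""
    rec = registry.get(template)  # exact match stands in for 1:N search
    return rec is not None and rec.fee_paid_through >= today

enroll(b"demo-template-bytes", date(2015, 12, 31))
print(may_ply_trade(b"demo-template-bytes", date(2015, 6, 1)))   # True
print(may_ply_trade(b"unregistered-finger", date(2015, 6, 1)))   # False
```

Note that no date of birth, address, or paper trail appears anywhere; richer records can be attached to the template later, which is the sense in which “a decent ID system can then be built out from there.”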

It shouldn’t be a surprise that those challenged with bringing some ID order out of chaos are finding a lot to like about biometrics.

Keeping school lunch biometrics in perspective

Maryland: Bill from Carroll senator would ban collection of students’ biometric data (Baltimore Sun)

Earlier this school year, Carroll County Public Schools had biometric scanners in place in about 10 school cafeterias, where they were used to help expedite the process of paying for school meals. Officials said the scanners would be more efficient than processing cash transactions or using a PIN keypad system.

But officials fielded complaints from some parents who felt the scanners were an invasion of privacy.

If you think biometrics for school lunch payment are bad, you’re not going to like this:

Joy Pullmann: Data mining kids crosses line (Orange County Register)

The U.S. Department of Education is investigating how public schools can collect information on “non-cognitive” student attributes, after granting itself the power to share student data across agencies without parents’ knowledge.

The feds want to use schools to catalogue “attributes, dispositions, social skills, attitudes and intrapersonal resources – independent of intellectual ability,” according to a February DOE report, all under the guise of education.

Read the whole thing.

Like we’ve said before, “If schools are unable to keep data secure, biometric template information is the last thing that should concern parents.” “Secure” doesn’t really apply in the situation described above but the observation that schools already possess very detailed information about students stands.

For the curious: This is an actual biometric template created using one finger, an off-the-shelf fingerprint reader, and its freely circulated software development kit (SDK). It consists of 800 hexadecimal characters.

2aba08229b3b2a44e72c8f14da168a560a3caf2257add068a7fc1636215bff53152546da3fc8071ea84433a42261f4ff7bc3b455199be8980eea2bb1e922f18aa309e050130d72ca124ecd6e9e86459e60858ff44f71d0c1c4e23b97a9a6554619543e8d347f79ea8fa70db87eaea7f37bf2cac4e697d5525479cc72fb653b5d32089e7b3cbcd01f8dba60eda95a50a31b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c1b2dc9ebaf0d5f602a64ff47f06cf97c
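For readers who want to poke at it: a template like this is just opaque binary data. A small Python sketch (the function name is mine, and nothing in it is vendor-specific) decodes a hex string and reports its size and digest, which is about all an outsider can do with it:

```python
import binascii
import hashlib

def inspect_template(template_hex: str) -> dict:
    """Decode a hex-encoded biometric template and report basic facts.

    The template is opaque, vendor-specific data: decoding it yields
    raw bytes, not anything resembling a fingerprint image.
    """
    raw = binascii.unhexlify(template_hex)  # raises on non-hex input
    return {
        "hex_chars": len(template_hex),
        "bytes": len(raw),
        "sha256": hashlib.sha256(raw).hexdigest(),
    }

# An 800-character hex string like the one above decodes to 400 bytes.
info = inspect_template("2a" * 400)  # stand-in for the real template
print(info["hex_chars"], info["bytes"])  # 800 400
```

Nothing in those 400 bytes identifies a person without the matching algorithm and an enrolled database to search against.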

Something similar could be used instead of a PIN for lunch purchases in Maryland schools unless the state bans the technology.

Now which is more risky to student privacy, those 800 characters which I’ve freely put online and made public, or other types of records schools routinely and uncontroversially* keep?

*Ms. Pullmann seems to find the potential sharing of information without parental knowledge and the chipping away of existing privacy protections that prevented sharing of non-academic information (including biometric information) more problematic than the fact that schools know a lot of non-cognitive details about students.

On another note, the mention of “a biometric wrap on kids’ wrists” caught my eye. Within the large and growing list of biometric modalities, I’ve never heard of wrist biometrics. I suspect this is another example of the confusion that arises when “biometrics” and “biostatistics” are needlessly lumped together, a subject we have covered in some detail.

Philippines: Fingerprint regulation of bus system gets positive review from local commuter

Biometric boosts (Malaya Business Insight)

I FELT like I was in the twilight zone last Friday and this Monday. Although there was some traffic, it wasn’t anything like the monstrous bottlenecks I experience every end and start of the work week.

It was a pleasant surprise actually and thanks to the Metro Manila Development Authority (paging Atty. Francis Tolentino).

The website Top Gear reported that MMDA “has rolled out an enhanced bus-dispatch system that not only regulates the number of public-utility buses on EDSA but also monitors the drivers manning them.”

It further reports that the “Bus Management and Dispatch System (BMDS) is the first bus-reduction program in the country that utilizes biometrics (through fingerprint-scanning) to identify and monitor PUB drivers, “ensuring the safety of commuters that patronize PUBs.”

Earlier post: Philippines: Manila development authority adopts fingerprint biometrics in bus dispatch and monitoring system

Wonderful New World vs. Brave New World

Data Privacy Commissioners Discuss Ubiquitous Tracking (Forbes)

The big question for those gathered here is finding the right mix between government regulations, industry’s best practices and consumer education. In a speech at the conference, Microsoft general counsel and executive vice president Brad Smith agreed that some regulations are necessary to create a level playing field and a clear set of rules for big and small companies to follow while regulators like Portugal’s Clara Guerra acknowledged that big government can’t solve all the risks associated with big data. It’s a shared responsibility and it requires consumer awareness starting with privacy education programs aimed at children as well as adults.

Author Larry Magid strikes an important balance between the wonderful things made possible by technological innovation, the downsides of unaccountable misuse, and the need to help people stay aware of the implications of changing technology on their lives.

I hope his temperament is contagious.

FTC Freestylin’ on Face Recognition

Federal Trade Commission Staff Report Recommends Best Practices for Companies That Use Facial Recognition Technologies


Mission of the Federal Trade Commission…
To prevent business practices that are anticompetitive or deceptive or unfair to consumers; to enhance informed consumer choice and public understanding of the competitive process; and to accomplish this without unduly burdening legitimate business activity.

In December of last year, the Federal Trade Commission (FTC) hosted a workshop – “Face Facts: A Forum on Facial Recognition Technology” to examine the use of facial recognition technology and related privacy and security concerns.

Monday, the FTC released two documents summing up the effort. The first is the Staff Report, a 21-page attempt to synthesize the views of the forum’s participants and FTC staff into an authoritative guide. The second is a dissent from the 4-1 vote in favor of releasing the staff report.

In my opinion, Best Practices for Common Uses of Facial Recognition Technologies falls a little short for a couple of reasons. First, of the staff report’s three cases, only one — the Facebook case — is actually a facial recognition application. Second, where the report does deal with facial recognition proper, it does so in a wholly hypothetical way. This approach runs the risk of being seen by many as falling outside the ambit of the FTC’s mission.

I have selected passages from both documents mentioned above for examination because they lie at the heart of the whole exercise. They are a distillation of what the entire project was about and what it concluded. The full documents are available via the links below for those who seek more information.

from the Staff report (pdf at FTC.gov)

To begin, staff recommends that companies using facial recognition technologies design their services with privacy in mind, that is, by implementing “privacy by design,” in a number of ways. First, companies should maintain reasonable data security protections for consumers’ images and the biometric information collected from those images to enable facial recognition (for example, unique measurements such as size of features or distance between the eyes or the ears). As the increasing public availability of identified images online has been a major factor in the increasing commercial viability of facial recognition technologies, companies that store such images should consider putting protections in place that would prevent unauthorized scraping which can lead to unintended secondary uses. Second, companies should establish and maintain appropriate retention and disposal practices for the consumer images and biometric data that they collect. For example, if a consumer creates an account on a website that allows her to virtually “try on” eyeglasses, uploads photos to that website, and then later deletes her account on the website, the photos are no longer necessary and should be discarded. Third, companies should consider the sensitivity of information when developing their facial recognition products and services. For instance, companies developing digital signs equipped with cameras using facial recognition technologies should consider carefully where to place such signs and avoid placing them in sensitive areas, such as bathrooms, locker rooms, health care facilities, or places where children congregate.

Staff also recommends several ways for companies using facial recognition technologies to provide consumers with simplified choices and increase the transparency of their practices. For example, companies using digital signs capable of demographic detection – which often look no different than digital signs that do not contain cameras – should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs. Similarly, social networks using a facial recognition feature should provide users with a clear notice – outside of a privacy policy – about how the feature works, what data it collects, and how it will use the data. Social networks should also provide consumers with (1) an easy to find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected from their tagged photos. Finally, there are at least two scenarios in which companies should obtain consumers’ affirmative express consent before collecting or using biometric data from facial images. First, they should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data derived from that image in a materially different manner than they represented when they collected the data. Second, companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent. Consider the example of a mobile app that allows users to identify strangers in public places, such as on the street or in a bar. 
If such an app were to exist, a stranger could surreptitiously use the camera on his mobile phone to take a photo of an individual who is walking to work or meeting a friend for a drink and learn that individual’s identity – and possibly more information, such as her address – without the individual even being aware that her photo was taken. Given the significant privacy and safety risks that such an app would raise, only consumers who have affirmatively chosen to participate in such a system should be identified. The recommended best practices contained in this report are intended to provide guidance to commercial entities that are using or plan to use facial recognition technologies in their products and services. However, to the extent the recommended best practices go beyond existing legal requirements, they are not intended to serve as a template for law enforcement actions or regulations under laws currently enforced by the FTC. If companies consider the issues of privacy by design, meaningful choice, and transparency at this early stage, it will help ensure that this industry develops in a way that encourages companies to offer innovative new benefits to consumers and respect their privacy interests. [ed.: bold emphasis mine]

The first paragraph above is common sense. For example: “Companies should establish and maintain appropriate retention and disposal practices for the consumer images and biometric data that they collect.” Who could argue with that?

I believe many on all sides of the facial recognition issue will find the Face Facts forum findings disappointing and I think the second italicized paragraph above best encapsulates why. In it, the FTC staff report loses coherence.

Let’s examine it in detail.

1. The staff report doesn’t confine itself to facial recognition proper.

Staff also recommends several ways for companies using facial recognition technologies to provide consumers with simplified choices and increase the transparency of their practices. For example, companies using digital signs capable of demographic detection – which often look no different than digital signs that do not contain cameras – should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs.

Demographic inference isn’t facial recognition and nowhere does the FTC staff make a case that a computer guessing at gender, age or ethnicity has any privacy implication, at all. And then, even if that case is made, the task of tying the activity back to the FTC’s mandate remains.

¿Qué?

The recommendation that someone “should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs,” however reasonable it seems in theory, is odd in practice. The old microwave-and-pacemaker signs come to mind. But then where would an ad agency put those signs if they wanted to do advertising on, say, a city street? [Bonus: would it be appropriate to use language detection technology in those signs in order to display the warning message in a language the reader is judged more likely to understand?]

2. Next there’s a nameless “social network” — no points for guessing which [See: Consumer Reports: Facebook & Your Privacy and It’s not the tech, it’s the people: Senate Face Rec Hearings Edition] — that is hypothetically doing the exact same things a non-hypothetical social network actually did without much in the way of an FTC response.

Similarly, social networks using a facial recognition feature should provide users with a clear notice – outside of a privacy policy – about how the feature works, what data it collects, and how it will use the data. Social networks should also provide consumers with (1) an easy to find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected from their tagged photos.

This is the closest the document ever gets to a concrete example of facial recognition technology even being in the neighborhood of an act the FTC exists to regulate and the staff of the FTC still doesn’t abandon the hypothetical for the real world.

3. Then there’s the warning that the FTC would take a dim view of two types of hypothetical facial recognition deployment each of which would require its own dedicated staff report in order to make a decent show of doing the topic justice.

Finally, there are at least two scenarios in which companies should obtain consumers’ affirmative express consent before collecting or using biometric data from facial images. First, they should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data derived from that image in a materially different manner than they represented when they collected the data. 

This is far too general to be useful. The above would seem to preclude casinos from using facial databases of known or suspected cheaters, a result for which few would argue.

Then there’s the question of what makes biometric data so special? Should the same standards apply to all personal data or just pictures of faces?

For the situation above to apply to the FTC’s mandate, a practice would have to be deemed “deceptive” or “unfair.” And if a practice is deceptive or unfair when a face is part of the data being shared, how does using the data in a substantially equal manner cease to be deceptive and unfair by omitting the face? The report is silent on these points.

Second, companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent. Consider the example of a mobile app that allows users to identify strangers in public places, such as on the street or in a bar. If such an app were to exist, a stranger could surreptitiously use the camera on his mobile phone to take a photo of an individual who is walking to work or meeting a friend for a drink and learn that individual’s identity – and possibly more information, such as her address – without the individual even being aware that her photo was taken. Given the significant privacy and safety risks that such an app would raise, only consumers who have affirmatively chosen to participate in such a system should be identified.

This hypothetical future app does exactly what anyone can, today, legally pay a private detective to do. If the FTC isn’t taking action against PIs, it would be extremely helpful for the FTC to make clear to buyers and sellers of facial recognition technology the distinctions it sees between the two.

Then, towards the end of the excerpted text, perhaps sensing how far ahead of itself and the mission of the FTC it has gotten, the staff report essentially says (bold sentence), “Never mind. We aren’t formulating new policy here. We’re just freestylin’.”


However, to the extent the recommended best practices go beyond existing legal requirements, they are not intended to serve as a template for law enforcement actions or regulations under laws currently enforced by the FTC. If companies consider the issues of privacy by design, meaningful choice, and transparency at this early stage, it will help ensure that this industry develops in a way that encourages companies to offer innovative new benefits to consumers and respect their privacy interests. [ed.: bold emphasis mine]

With the possible exception of the “social network” example, pretty much everything in the document goes beyond existing legal requirements enforced by the FTC. So what’s going on here?

My hunch is that someone at the FTC became concerned over a “social network” terms of service issue and, rather than deal with it as a narrow terms of use issue — one seemingly right in the wheelhouse of the FTC under the “deceptive or unfair” part of its mission — decided instead that it was a technology issue and that it was both possible and desirable to address the far bigger issues of facial recognition technology, ID, and society in a coherent way, forgetting that doing so requires a novel interpretation of the FTC’s mission. Once that decision was made, the best practices document, flawed though it is, was about the best that could be hoped for… which brings us to the dissent.

The decision to release the Face Facts staff report wasn’t unanimous. Commissioner Thomas Rosch thought releasing the report at all was a mistake. Several paragraphs of the dissent follow below.

The last paragraph quoted below is particularly convincing.

then the lone dissent… (pdf at FTC.gov)

The Staff Report on Facial Recognition Technology does not – at least to my satisfaction – provide a description of such “substantial injury.” Although the Commission’s Policy Statement on Unfairness states that “safety risks” may support a finding of unfairness,3 there is nothing in the Staff Report that indicates that facial recognition technology is so advanced as to cause safety risks that amount to tangible injury. To the extent that Staff identifies misuses of facial recognition technology, the consumer protection “deception” prong of Section 5 – which embraces both misrepresentations and deceptive omissions – will be a more than adequate basis upon which to bring law enforcement actions.

Second, along similar lines, I disagree with the adoption of “best practices” on the ground that facial recognition may be misused. There is nothing to establish that this misconduct has occurred or even that it is likely to occur in the near future. It is at least premature for anyone, much less the Commission, to suggest to businesses that they should adopt as “best practices” safeguards that may be costly and inefficient against misconduct that may never occur.

Third, I disagree with the notion that companies should be required to “provide consumers with choices” whenever facial recognition is used and is “not consistent with the context of a transaction or a consumer’s relationship with a business.”4 As I noted when the Commission used the same ill-defined language in its March 2012 Privacy Report, that would import an “opt-in” requirement in a broad swath of contexts.5 In addition, as I have also pointed out before, it is difficult, if not impossible, to reliably determine “consumers’ expectations” in any particular circumstance.

In summary, I do not believe that such far-reaching conclusions and recommendations can be justified at this time. There is no support at all in the Staff Report for them, much less the kind of rigorous cost-benefit analysis that should be conducted before the Commission embraces such recommendations. Nor can they be justified on the ground that technological change will occur so rapidly with respect to facial recognition technology that the Commission cannot adequately keep up with it when, and if, a consumer’s data security is compromised or facial recognition technology is used to build a consumer profile. On the contrary, the Commission has shown that it can and will act promptly to protect consumers when that occurs.

To summarize, Rosch points out that the FTC staff report:

  • Exceeds the FTC’s regulatory mandate
  • Makes no allegation of consumer harm
  • Is so overly broad as to be unworkable
  • Provides no support for the conclusions it draws

The FTC would perhaps have been better served had more Commissioners taken Rosch to heart. As it happens, the FTC staff report overreaches, underdelivers, and deviates from the organization’s stated mission, and the results aren’t pretty.

NOTE: This post has been modified slightly from the original version to add clarity, by cleaning up grammar, spelling or typographical errors.

First word about the FTC report

I’ll have much more to say on the topic, perhaps later today, but the first clutch of analysis of the FTC’s findings following the Face Facts workshop is starting to come out.

The best two examples I have seen so far are:
FTC Issues Privacy Guidelines for Facial Recognition Technology (eWeek)
FTC Issues Guidelines for Facial Recognition (Multichannel News)

Brian Prince at eWeek gets, I think, at two very important aspects of the FTC’s efforts: the degree to which Facebook is the elephant in the room; and the dissenting voice of Commissioner Thomas Rosch, who thought releasing the report at all was a mistake.

John Eggerton at Multichannel News gives a down the middle summary of each of the two points of view (pro and con). Then he really gives the dissent the attention it deserves. The quotes from Daniel Castro, senior analyst at the Information Technology & Innovation Foundation, that close the article are highly appropriate.

Facebook consents to delete face recognition data of EU users

Facebook Agrees to Delete EU Facial-Recognition Data (Bloomberg)

The owner of the biggest social-networking site has faced several European reviews over concerns a facial-recognition program that automatically suggests people’s names to tag in pictures breaches privacy rights.

Facebook Ireland “agreed to delete collected templates for EU users by Oct. 15” and to seek regulator consent “if it chooses to provide the feature to EU users again,” the Irish Office of the Data Protection Commissioner said in the conclusions to a review today.

Data-protection regulators from the 27-nation EU have been looking into Facebook’s facial-recognition feature.

The theme of the article is consent.

Australia: Progress on Ratifying Privacy Recommendations

Privacy reforms pass through lower House (ZDNet)

The proposed changes, some now four years old, are designed to tighten the rules on how personal information is sent outside of Australia and how it may be used for direct marketing, to increase the protections provided to sensitive information, such as health records and biometric data, and to provide the Privacy Commissioner with powers to apply civil penalties in cases where the Privacy Act has been breached.