Published on September 5th, 2020
Eight case studies on regulating biometric technology show us a path forward
Amba Kak was in law school in India when the country rolled out the Aadhaar project in 2009. The national biometric ID system, conceived as a comprehensive identity program, sought to collect the fingerprints, iris scans, and photographs of all residents. It wasn't long, Kak remembers, before stories about its devastating consequences began to spread. "We were suddenly hearing reports of how manual laborers who work with their hands, how their fingerprints were failing the system, and they were then being denied access to basic necessities," she says. "We actually had starvation deaths in India that were being linked to the barriers that these biometric ID systems were creating. So it was a really crucial issue."
Those instances provoked her to research biometric systems and the ways the law could hold them accountable. On September 2, Kak, who is now the director of global strategy and programs at the New York-based AI Now Institute, released a new report detailing eight case studies of how biometric systems are regulated around the world. They span city, state, national, and global efforts, as well as some from nonprofit organizations. The goal is to develop a deeper understanding of how different approaches work or fall short. I spoke to Kak about what she learned and how we should move forward.
This interview has been edited and condensed for clarity.
What motivated this project?
Biometric technology is proliferating and becoming normalized, both in government domains and in our private lives. The monitoring of protests using facial recognition happened this year alone in Hong Kong, in Delhi, in Detroit, and in Baltimore. Biometric ID systems, which are less talked about, where biometrics are used as a condition to access welfare services, have also proliferated across low- and middle-income countries in Asia, Africa, and Latin America.
But the interesting thing is that the pushback against these systems is also at its peak. The advocacy around it is getting more attention than ever before. So then the question is: Where do law and policy figure in? That's where this compendium comes in. This report tries to pull out what we can learn from these experiences at a moment when it seems like there is a lot of appetite from governments and from advocacy groups for more regulation.
What is the current state of play for biometric regulation globally? How mature are the legal frameworks for handling this emerging technology?
There are about 130 countries in the world that have data protection laws. Almost all cover biometric data. So if we're just asking whether laws exist to regulate biometric data, then the answer is that in most countries, they do.
But when you dig a little deeper, what are the limitations of a data protection law? A data protection law at its best can help you regulate when biometric data is used and make sure that it isn't used for purposes for which consent was not given. But issues like accuracy and discrimination have still received very little legal attention.
On the other hand, what about completely banning the technology? We've seen that concentrated in the US at the city and state level. I think people forget sometimes that most of this legislative activity has been concentrated on public and, more specifically, on police use.
So we have a mix of data protection law that provides some safeguards but is inherently limited. And then we have a concentration of these complete moratoriums at the local city and state level in the US.
What were some common themes that emerged from these case studies?
To me, the clearest one was the chapter on India by Nayantara Ranganathan, and the chapter on the Australian facial recognition database by Monique Mann and Jake Goldenfein. Both of these are massive centralized state architectures where the whole point is to remove the technical silos between different state and other kinds of databases, and to make sure that these databases are centrally linked. So you're creating this monster centralized, centrally linked biometric data architecture. Then as a Band-Aid on this huge problem, you're saying, "Okay, we have a data protection law, which says that data should never be used for a purpose that was not imagined or anticipated." But meanwhile, you're changing the expectation of what can be anticipated. Today the database that was used in a criminal justice context is now being used in an immigration context.
For example, [in the US] ICE is now using or trying to use DMV databases in different states in the process of immigration enforcement. So these are databases created in a civilian context, and they're trying to use them for immigration. Similarly in Australia, you have this giant database, which includes driver's license data, that is now going to be used for limitless criminal justice purposes, and where the home affairs department will have complete control. And similarly in India, they created a law, but the law basically put most of the discretion in the hands of the authority that created the database. So I think from these three examples, what becomes clear to me is that you have to read the law in the context of the broader political movements that are happening. If I had to summarize the broader trend, it's the securitization of every aspect of governance, from criminal justice to immigration to welfare, and it's coinciding with the push for biometrics. That's one.
The second, and this is a lesson that we keep repeating: consent as a legal tool is very much broken, and it's definitely broken in the context of biometric data. But that doesn't mean that it's useless. Woody Hartzog's chapter on Illinois's BIPA [Biometric Information Privacy Act] says: Look, it's great that we've had several successful lawsuits against companies using BIPA, most recently with Clearview AI. But we can't keep expecting "the consent model" to bring about structural change. Our solution can't be: The user knows best; the user will tell Facebook that they don't want their face data collected. Maybe the user will not do that, and the burden shouldn't be on the individual to make these decisions. This is something that the privacy community has really learned the hard way, which is why laws like the GDPR don't just rely on consent. There are also hard rules that say: If you've collected data for one reason, you cannot use it for another purpose. And you cannot collect more data than is absolutely necessary.
Was there any country or state that you thought demonstrated particular promise in its approach to the regulation of biometrics?
Yeah, unsurprisingly it's not a country or a state. It's actually the International Committee of the Red Cross [ICRC]. In the volume, Ben Hayes and Massimo Marelli (they're both actually representatives of the ICRC) wrote a reflective piece on how they decided that there was a legitimate interest for them to be using biometrics in the context of distributing humanitarian aid. But they also recognized that there were many governments that would pressure them for access to that data in order to persecute these communities.
So they had a very real conundrum, and they resolved that by saying: We want to create a biometrics policy that minimizes the actual retention of people's biometric data. So what we'll do is have a card on which someone's biometric data is securely stored. They can use that card to get access to the humanitarian welfare or assistance being provided. But if they decide to throw that card away, the data will not be stored anywhere else. The policy basically decided not to establish a biometric database with the data of refugees and others in need of humanitarian aid.
To me, the broader lesson from that is recognizing what the issue is. The issue in that case was that the databases were creating a honeypot and a real risk. So they thought up both a technical solution and a way for people to withdraw or delete their biometric data with complete agency.
What are the major gaps you see in approaches to biometric regulation across the board?
A good example to illustrate that point is: How is the law dealing with this whole issue of bias and accuracy? In the last few years we've seen so much foundational research from people like Joy Buolamwini, Timnit Gebru, and Deb Raji that existentially challenges: Do these systems work? Who do they work against? And even when they pass these so-called accuracy tests, how do they actually perform in a real-life context?
Data privacy doesn't concern itself with these types of issues. So what we've seen now (and this is mostly legislative efforts in the US) is bills that mandate accuracy and nondiscrimination audits for facial-recognition systems. Some of them say: We're pausing facial-recognition use, but one condition for lifting this moratorium is that you will pass this accuracy and nondiscrimination test. And the tests that they often refer to are technical standards tests like NIST's face-recognition vendor test.
But as I argue in that first chapter, these tests are evolving; they have been proven to underperform in real-life contexts; and most importantly, they are limited in their ability to address the broader discriminatory impact of these systems when they're applied in practice. So I'm really worried in some ways about these technical standards becoming a kind of checkbox that needs to be ticked, and that then ignores or obfuscates the other forms of harm that these technologies have when they're applied.
How did this compendium change the way you think about biometric regulation?
The most important thing it did for me is to not think of regulation just as a tool that will help in limiting these systems. It can be a tool to push back against these systems, but equally it can be a tool to normalize or legitimize them. It's only when we look at examples like the one in India or the one in Australia that we start to see law as a multifaceted instrument, which can be used in different ways. At the moment when we're really pushing to say "Do these technologies need to exist at all?" the law, and especially weak regulation, can really be weaponized. That was a good reminder for me. We need to be careful about that.
This conversation has definitely been revelatory for me, because as someone who covers the way that tech is weaponized, I'm often asked, "What's the solution?" and I always say, "Regulation." But now you're saying, "Regulation can be weaponized too."
That's so true! This makes me think of these groups that used to work on domestic violence in India. And I remember they said that at the end of decades of fighting for the rights of survivors of domestic violence, the government finally said, "Okay, we've passed this law." But after that, nothing changed. I remember thinking even then, we sometimes glorify the idea of passing laws, but what happens after that?
And this is a good segue: even as I read Clare Garvie and Jameson Spivack's chapter on bans and moratoriums, they point out that most of these bans apply only to government use. There's still this massive multibillion-dollar private industry. So it's still going to be used at the Taylor Swift concert in very similar ways to the ways in which cops would use it: to keep people out, to discriminate against people. It doesn't stop the machine. That kind of legal intervention would take unprecedented advocacy. I don't think it's impossible to have that so-called complete ban, but we're not there yet. So yeah, we need to be more circumspect and critical about the way we understand the role of law.
What about the compendium made you hopeful about the future?
That's always such a hard question, but it shouldn't be. It was probably Rashida Richardson and Stephanie Coyle's chapter. Their chapter was almost like an ethnography of this group of parents in New York who felt really strongly about the fact that they didn't want their kids to be surveilled. And they were like, "We're going to go to every single meeting, even though they don't expect us to. And we're going to say we have a problem with this."
It was just really reassuring to learn about a story where it was the parents' group that completely shifted the discourse. They said: Let's not talk about whether biometrics or surveillance is necessary. Let's just talk about the real harms of this to our kids and whether this is the best use of money. A senator then picked this up and introduced a bill, and just in August, the New York state senate passed this bill. I celebrated with Rashida because I was like, "Yay! Stories like this happen!" It's connected very deeply with the story of advocacy.