Alison Peck


Day 52: Gait Recognition Biometrics?!?

Photo by Maksym Tymchyk 🇺🇦 on Unsplash

Today I came out of my office into the hallway and saw a young man plugging in an extension cord and running it along the hall and out the door. I asked what he was working on. He said he was a student in the engineering school and they were setting up a study on gait biometrics.

Gait biometrics? I’d never heard of it, but my radar went up. DHS uses various forms of biometrics to recognize and monitor noncitizens, and I’m always a little concerned about their accuracy and use. I asked some questions.

Gait biometrics, the young man explained, is a technology that allows systems to recognize people by how they walk. Apparently everyone has a unique “fingerprint” to their gait, and with enough data, a system can use that information for things like access to secure buildings or devices.
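As a lay reader, I picture the core idea roughly like this toy sketch: the system boils a person's walk down to a few numbers, then matches new observations against enrolled "fingerprints." This is purely illustrative — real systems extract features from video with deep learning, and every name, feature, and number below is made up:

```python
import math

# Hypothetical gait "fingerprints": a few summary features of how someone
# walks. These features and values are illustrative, not from any real system.
ENROLLED = {
    "alice": (0.72, 1.9, 0.05),   # stride length (m), cadence (steps/s), hip sway
    "bob":   (0.81, 1.6, 0.12),
}

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(sample, threshold=0.1):
    """Return the enrolled identity closest to the sample,
    or None if nothing falls within the match threshold."""
    best = min(ENROLLED, key=lambda name: euclidean(sample, ENROLLED[name]))
    return best if euclidean(sample, ENROLLED[best]) <= threshold else None

print(identify((0.73, 1.88, 0.06)))  # close to alice's template -> "alice"
print(identify((0.95, 1.2, 0.30)))   # matches no one -> None
```

The unsettling part, of course, is that the "sample" can be captured from a distance, by a camera the subject never notices.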

Blowing my Data Privacy Circuits

My immediate concern was privacy. In this case, the study was measuring the gaits of two hundred people who had signed consent forms. The researchers were setting up cameras outside and just using an electrical outlet in the law school. No data was being captured on anyone other than the study participants, he said.

I’m willing to believe the researchers will be very careful not to capture unauthorized data. (If nothing else, they would violate university IRB protocols if they’re not, and dealing with IRBs sucks.)

But one of the supposed advantages of gait as a biometric, I’m learning, is that it can be used from a long distance and without the subject’s interaction or consent.

From a data privacy standpoint, this set my head reeling. But I’m also learning that gait recognition may be used in forensics (i.e., prosecution) or by law enforcement to create watchlists of people who engage in political protests.

Reliability, Variability, and the Risk of Discrimination

If gait recognition were 100 percent reliable, perhaps this wouldn’t be as much of a concern. After all, reliable evidence of a crime should be admissible in court, and people who engage in protest activity are deliberately doing public acts (that’s the whole point of protest). There’s no guarantee of anonymity in such activities and there probably shouldn’t be.

But of course gait recognition isn’t 100 percent reliable. Machine learning has now pushed its accuracy to (according to its creators) 94 to 98 percent. That apparent reliability is also increasing its viability.

But of course, like every algorithm-driven technology, the old adage applies: garbage in, garbage out. Gait recognition is bound to be plagued by the same failures that plague other machine learning systems: overbroad judgments made on inadequate data, models that break down when a gait changes with age or injury. And since no consent is required, it could easily be deployed to discriminatorily target vulnerable populations (like immigrants and people of color).

As an advocate for people who surrender extraordinary amounts of privacy the moment they enter U.S. territory (with or without documentation), this technology raises lots of red flags for me. I’ve never heard anyone in a legal setting mention gait recognition (although I’m sure there are some who are studying it). A Westlaw search turns up only 45 law review articles that even mention the term.

A Job for Lawyers

To be clear, I don’t have an opinion for or against WVU engineering scholars conducting a study that I don’t know anything about. Perhaps their data can be used to critically evaluate the potential and problems of gait recognition. But I’m concerned that this technology might be used by law enforcement against non-consenting and marginalized groups before the legal system has caught up with it.