Trustworthy AI for the Future of Fingerprints, Biometrics

GovLoop interviewed Elham Tabassi, Chief of Staff for the Information Technology Lab (ITL) at the National Institute of Standards and Technology (NIST).

NIST is a pioneer of fingerprinting and biometrics standards worldwide. These standards allow governments to share fingerprint and biometric data with one another.

In 2004, Tabassi published a machine learning program – NIST Fingerprint Image Quality (NFIQ) – that is used by the federal government and other national governments, as well as by law enforcement and other agencies, on visa and immigration applications. For her development of the software, Tabassi earned notable recognitions, including the 2016 Women in Biometrics award.

Her answers have been lightly edited for length and clarity.

GovLoop: What are some of the use cases for artificial intelligence (AI) and machine learning (ML) at NIST?

NIST has six different laboratories; the Information Technology Laboratory is the one that I’m from. I want to start with use cases in the other laboratories at NIST. These include materials science, robotics and wireless networking; they are all using AI techniques to advance their measurement science. In the Information Technology Laboratory, we do a lot of standards work and evaluation for AI. There is a group working on biometrics, and they do evaluations of recognition algorithms. While we don’t develop the algorithms ourselves, we evaluate them.

I saw you have quite a story, winning awards and serving as the principal architect of a watershed technology in fingerprinting and biometrics. Would you mind elaborating on that history?

Prior to being chief of staff at the Information Technology Laboratory, I was doing fingerprint work at NIST. Our fingerprint work at NIST goes back to the ‘60s. What happened in the mid-‘60s is that computers were being recognized as new resources for doing computations much more efficiently than humans, and the FBI at that time decided to use computers to automate some of its fingerprint processing.

What they realized is that the file formats were all different, and there was a need for a sort of standard format – a language, if you wish – for fingerprint exchange. So they came back to NIST again, and the group that I was part of, before becoming chief of staff here, worked on the standard in collaboration with the whole community.

And that was a standard for the storage and exchange of fingerprint data. Since ‘86, that standard has been revised several times [and] has expanded to include many other modalities. Right now, I think it has 16 or 17 different modalities – fingerprint, iris, face, DNA, bite mark and many others.

Fast forward to after 9/11, the formation of the Department of Homeland Security and the passage of the Patriot Act. The US-VISIT Program requires that all visitors to the U.S. be fingerprinted when they apply for a visa; the fingerprints are sent to the U.S. for background checks, and if a visa is issued, they are fingerprinted again when they arrive in the U.S. The fingerprint captured at the time of arrival is matched against the one submitted at the time of the visa request, to make sure that the person who was issued the visa and the person arriving in the U.S. are the same person.

So, the question was fingerprint quality: What’s the likelihood of this fingerprint being matched correctly? That project landed on my desk, and out of it I developed a piece of software, NIST Fingerprint Image Quality, that looks at a fingerprint when it is captured and predicts that likelihood. And why is that useful? It’s the old saying of “garbage in, garbage out.” If you can prevent poor-quality fingerprints [from getting] into the system, you have improved the performance of the system.
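To make the “garbage in, garbage out” idea concrete, here is a minimal sketch of quality gating in Python. It is not the NFIQ algorithm: quality_score is a hypothetical stand-in for a learned quality model, and the 0.6 threshold is arbitrary.

# Minimal sketch of quality gating (not the NFIQ algorithm).
# quality_score is a hypothetical stand-in for a learned model that
# predicts the likelihood a captured print will match correctly.
def quality_score(image) -> float:
    # A real scorer would compute image features and apply the model.
    return 0.42  # placeholder value for illustration

def accept_capture(image, threshold: float = 0.6) -> bool:
    # Reject poor-quality captures so they never enter the system.
    return quality_score(image) >= threshold

print(accept_capture(image=None))  # False with the placeholder score

A rejected capture would typically trigger a recapture at the collection point rather than being sent on for matching.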

How does machine learning come into play here?

You build a model from the patterns and characteristics in the examples you present to it. To make it an easier example: if you want to predict the price of a house from the number of bedrooms, the number of bathrooms and the ZIP code where the house is located, you can gather a lot of examples of houses. If you give the computer enough of these examples and write a program to learn the function from the inputs – in this case, the number of bathrooms, bedrooms and the ZIP code – it can try to predict, or compute, the price of the house.
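As a rough illustration of that house-price example (not anything from the interview), here is a minimal supervised-learning sketch in Python using scikit-learn; the houses, prices and ZIP codes below are made-up data.

# Minimal sketch of supervised learning on made-up housing data.
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
import pandas as pd

# Hypothetical training examples: features and known prices.
houses = pd.DataFrame({
    "bedrooms":  [2, 3, 4, 3, 5],
    "bathrooms": [1, 2, 2, 1, 3],
    "zip_code":  ["20899", "20899", "22201", "22201", "20850"],
})
prices = [300_000, 410_000, 520_000, 450_000, 700_000]

# ZIP code is categorical, so one-hot encode it; numeric columns pass through.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("zip", OneHotEncoder(handle_unknown="ignore"), ["zip_code"])],
        remainder="passthrough")),
    ("regress", LinearRegression()),
])
model.fit(houses, prices)  # learn the function from inputs to price

# Predict the price of an unseen house.
new_house = pd.DataFrame(
    {"bedrooms": [3], "bathrooms": [2], "zip_code": ["20899"]})
print(model.predict(new_house))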

So, in the case of fingerprint image quality, I gathered those types of information about the fingerprint. A fingerprint is just this print, the pattern of black and white lines, and its characteristic features are the minutiae – points where a ridge ends, and points where one ridge diverges into two or more ridges. So, I computed those types of features, and from them the learned model comes up with a prediction of the likelihood of a match.
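Putting the two pieces together, here is a minimal sketch of that idea in Python – again, not the actual NFIQ algorithm or its features. It trains a model on hypothetical quality features labeled with whether each print matched, then uses the predicted probability as the quality score. All data below is synthetic.

# Sketch of learning a quality score (not the actual NFIQ method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical quality features per print, e.g. minutia count,
# mean ridge clarity and usable-foreground fraction, scaled to [0, 1].
features = rng.random((200, 3))
# Synthetic ground truth: 1 if the print was matched correctly.
matched = (features.sum(axis=1) + rng.normal(0, 0.3, 200) > 1.5).astype(int)

quality_model = LogisticRegression().fit(features, matched)

# The quality score of a new print is its predicted match likelihood.
new_print = np.array([[0.8, 0.7, 0.9]])
print(quality_model.predict_proba(new_print)[0, 1])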

Such a big part of AI is trust in the system. How do we improve trust in AI?

The full potential of AI is going to be achieved only if users can have confidence in the technology and are able to trust it for their use. We try to break that down into the elements, or aspects – the properties that trustworthy AI should exhibit – and then, for each of them, figure out how to measure it.

This article is an excerpt from GovLoop’s recent e-book “Beyond the Hype of Machine Learning.” Download the full guide here.
