
DHS Embraces AI for the Future of Cybersecurity

This article is an excerpt from GovLoop’s recent e-book, “How Artificial Intelligence Combats Fraud and Cyberattacks.” Download the full e-book here.

At the Department of Homeland Security (DHS), the mission is simple: Keep the nation secure from all of the different threats it faces. But a lot has changed since it was created in 2002, including the spread of security technology at its disposal. Martin Stanley, Senior Advisor for Artificial Intelligence (AI) in the Office of the Chief Technology Officer at DHS, told GovLoop about the agency’s current AI-led cybersecurity efforts and shared his team’s best practices for applying AI to its existing framework.

GOVLOOP: What were some of your agency’s initial use cases of AI and ML to combat cyberattacks?

STANLEY: AI and machine learning have a long history with cybersecurity. The application of those technologies is not new. They are already pretty widely deployed, and we're using them in programs we have here, such as CDM [Continuous Diagnostics and Mitigation] and NCPS [the National Cybersecurity Protection System]. We do anticipate increased use of these approaches in other applications, such as incident triage and Security Orchestration, Automation and Response [SOAR].

Tell me more about those applications.

Incident triage is really challenging because you have tons of data about what's going on in your environment coming in all the time, and that volume keeps growing as there are more sensors and more attacks. Having an automated capability to review this information and pull out the incidents that warrant a response, or the ones that should be referred to a human analyst, is a big application that we see both within our mission space here at DHS and with the communities and stakeholders that we serve.
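In practice, automated triage of this kind often comes down to scoring and routing alerts. Here is a minimal sketch of that pattern; the fields, weights, and thresholds are illustrative assumptions, not anything drawn from DHS systems:

```python
# Hypothetical alert-triage sketch: score incoming alerts and decide
# whether to auto-respond, refer to a human analyst, or just log.
# Field names, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g., "ids", "endpoint", "netflow"
    severity: int            # 1 (low) to 5 (critical)
    asset_criticality: int   # 1 (lab box) to 5 (mission system)
    confidence: float        # detector confidence, 0.0 to 1.0

def triage_score(alert: Alert) -> float:
    """Fold severity, asset value, and detector confidence into one score."""
    return alert.severity * alert.asset_criticality * alert.confidence

def route(alert: Alert, auto_threshold: float = 20.0,
          human_threshold: float = 8.0) -> str:
    score = triage_score(alert)
    if score >= auto_threshold:
        return "respond"    # high confidence and impact: automate the response
    if score >= human_threshold:
        return "escalate"   # ambiguous: refer to a human analyst
    return "log"            # low priority: keep for trend analysis

print(route(Alert("ids", severity=5, asset_criticality=5, confidence=0.9)))  # respond
```

The useful part is the three-way split: automate the clear cases, escalate the ambiguous ones, and log the rest.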

Security Orchestration, Automation and Response is the capability to take a predefined set of actions based on the analysis that a machine is doing in the environment. It sees a certain set of information coming in through sensors, and it makes certain responses in an automated way.
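At its simplest, the orchestration piece is a lookup from an observed condition to a predefined playbook of actions. A hypothetical sketch, with detection names and actions invented for illustration:

```python
# Hypothetical SOAR-style playbook: map a sensor finding to a predefined
# set of response actions. Detection names and actions are invented.

PLAYBOOK = {
    "known_malware_hash": ["quarantine_host", "open_ticket"],
    "phishing_url_click": ["reset_credentials", "block_url", "notify_user"],
    "external_port_scan": ["update_firewall_rule"],
}

def respond(detection: str) -> list[str]:
    """Return the predefined actions for a detection; unknown events go to a human."""
    return PLAYBOOK.get(detection, ["escalate_to_analyst"])

for event in ("known_malware_hash", "novel_anomaly"):
    print(event, "->", respond(event))
```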

Could you give an example of a specific project or program that’s teaming together humans and AI capabilities to improve cybersecurity?

The Continuous Diagnostics and Mitigation program is a governmentwide program for federal civilian agencies, in which we deploy sensors and a dashboard reporting system across the federal government to identify cybersecurity issues, report those back up through the federal dashboard, and then send threat information and recommended responses back down to the agencies. Within that, there's a lot of opportunity for automation at the agency level, where agencies are doing their cyberdefense.
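The flow Stanley describes is a two-way roll-up: agency-level findings aggregate up to a federal dashboard, and recommended responses flow back down. A hypothetical sketch of that shape, with agency names, findings, and guidance invented for illustration:

```python
# Hypothetical sketch of the two-way CDM-style reporting flow described above.
# Agency names, findings, and guidance strings are invented for illustration.

agency_findings = {
    "AgencyA": ["unpatched_host", "expired_certificate"],
    "AgencyB": ["unpatched_host"],
}

def roll_up(findings_by_agency: dict[str, list[str]]) -> dict[str, int]:
    """Aggregate agency-level findings into a federal-level summary."""
    summary: dict[str, int] = {}
    for findings in findings_by_agency.values():
        for finding in findings:
            summary[finding] = summary.get(finding, 0) + 1
    return summary

def push_down(summary: dict[str, int]) -> dict[str, str]:
    """Map each summarized finding to a recommended response for agencies."""
    guidance = {
        "unpatched_host": "apply vendor patch",
        "expired_certificate": "rotate certificate",
    }
    return {finding: guidance.get(finding, "investigate") for finding in summary}

federal_view = roll_up(agency_findings)
print(federal_view)             # {'unpatched_host': 2, 'expired_certificate': 1}
print(push_down(federal_view))  # recommendations flowing back down
```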

The EINSTEIN system is a perimeter protection system for federal civilian agencies. With all the data that we have about the perimeter of the federal government, being able to do an automated analysis of that information, so that we can prioritize the response actions and the incidents for the human operators, is an application that we're looking at as well.

What types of AI solutions are you exploring, more generally?

We're looking at narrow AI solutions, which are good at uncomplicated, well-understood tasks for which there are tons of data examples. We focus on the best practices for implementing AI or machine learning systems within our environment, and then we try to apply those as we look at solutions to see if they make sense.

What are some of those best practices?

Make sure that you look at what data you need to train the algorithm and how that data needs to be prepared. You want to look really hard at the data that you have, to see whether you've got the right data and whether you have enough of it to train an algorithm effectively.
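Concretely, that kind of data review can start with simple volume and class-balance checks before any training happens. A minimal sketch, with thresholds chosen arbitrarily for illustration:

```python
# Hypothetical pre-training data check: is there enough labeled data, and is
# any class too rare to learn from? Thresholds are illustrative assumptions.
from collections import Counter

def data_readiness(labels: list[str], min_examples: int = 10_000,
                   min_class_fraction: float = 0.05) -> list[str]:
    problems = []
    if len(labels) < min_examples:
        problems.append(f"only {len(labels)} examples; want >= {min_examples}")
    for cls, count in Counter(labels).items():
        if count / len(labels) < min_class_fraction:
            problems.append(f"class '{cls}' is only {count / len(labels):.1%} of the data")
    return problems  # an empty list means no obvious blockers

print(data_readiness(["benign"] * 9_500 + ["malicious"] * 400))
```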

Another best practice that we've identified is to have a human performance metric that you're using to measure how well the AI is performing the task. Use that as your benchmark to determine whether your AI is doing better or worse than your human.
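In practice, that benchmark is a side-by-side comparison of the model and a human analyst on the same labeled cases. A minimal sketch, assuming plain accuracy as the metric:

```python
# Hypothetical benchmark: score the model against a human baseline on the
# same labeled incidents. Plain accuracy is used only for simplicity.

def accuracy(predictions: list[str], truth: list[str]) -> float:
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

truth       = ["attack", "benign", "attack", "benign", "attack"]
human_calls = ["attack", "benign", "benign", "benign", "attack"]
model_calls = ["attack", "benign", "attack", "attack", "attack"]

human_score = accuracy(human_calls, truth)  # 0.8
model_score = accuracy(model_calls, truth)  # 0.8
print("model beats human baseline:", model_score > human_score)  # False
```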

The three dimensions that you're really looking at are benefit, regret and complexity. You don't want to deploy a solution that's going to be of low benefit. You want to have low regret in the event that the machine doesn't do what you want it to do, or the machine makes a mistake. And lastly, there's a complexity dimension: these machines generally are not very good at high-complexity tasks, but they tend to outperform humans at low-complexity tasks.
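Those three dimensions can be read as a screening rubric for candidate use cases. A hypothetical sketch of how such a screen might be encoded; the scales and cutoffs are invented, not an actual DHS rubric:

```python
# Hypothetical deployment screen over the three dimensions above.
# The 1-5 scales and cutoffs are invented for illustration.

def worth_automating(benefit: int, regret: int, complexity: int) -> bool:
    """High benefit, low regret on failure, low task complexity -> good candidate."""
    return benefit >= 4 and regret <= 2 and complexity <= 2

# Spam-filter-like triage: clear benefit, cheap mistakes, simple task.
print(worth_automating(benefit=5, regret=1, complexity=2))  # True
# Auto-blocking traffic to critical systems: a wrong call is costly.
print(worth_automating(benefit=4, regret=5, complexity=3))  # False
```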

What do you see for the future of the relationship between AI and cybersecurity, whether within your agency, at large, or both?

We’re concerned about AI from three perspectives. Obviously, we spend a lot of time talking about how AI can be used to perform our cyber missions within our program space. But one of the areas that we’re also concerned about is how the networks that we protect are going to deploy AI, and how that’s going to change the attack surface that we’re helping to protect. AI creates additional vulnerabilities in an environment based on the inherent nature of these systems. Lastly, we’re concerned about how adversaries are going to use AI to conduct attacks.

To find out more about how AI and ML can improve cybersecurity, download GovLoop’s recent e-book, “How Artificial Intelligence Combats Fraud and Cyberattacks,” here.
