Enabling Government Agencies to Secure and Safe AI Use

This year, the buzzword across the federal government is “artificial intelligence” — and for good reason. According to data from AI.gov, there are already over 700 use cases of AI across several different agencies. The federal government released an Executive Order and an OMB memo with ten requirements for the safe adoption of AI.

As government leaders eagerly anticipate further guidance, there are a few practical steps that agencies can begin implementing now to ensure the safe and responsible use of this emerging technology. It’s not a matter of if AI regulation is needed; it’s a matter of when it will be ready.

Defining the Reality of AI

As we’ve seen in some of the recently published use cases, federal agencies are utilizing AI to automate tedious tasks and streamline processes in several different mission areas.

Some agencies with greater resources, such as the Department of Defense, are even using generative AI for mission advancement. As part of this effort, DOD recently established Task Force Lima, an initiative aiming to harness the power of AI in a strategic and responsible manner.

But the drawback of AI is that it is a new, unpredictable technology, and we don’t yet fully understand its scope and magnitude. This makes it easy to implement but harder to govern. As such, the federal government needs to pay close attention to how it’s using AI in the interest of national security. As with other cybersecurity standards and regulations, AI needs clear boundaries to ensure the safety and security of agencies’ critical data.

Actionable Steps to Take Now

Government IT leaders are waiting for official guidance from governing bodies such as CISA, NIST and the White House. In the meantime, leaders looking for actionable steps in their AI journeys should consider consulting with cross-sectional leadership across industry and government in areas such as policy, acquisition and oversight, which can provide counsel to agencies looking to implement AI.

There are several organizations, think tanks and academic groups that are doing their own research on the safe implementation of AI. Other organizations have pulled together their own advisory committees and centers of excellence to provide counsel and direction on AI based on experience from other similar initiatives.

Keep Safety and Security a Priority

While agencies begin implementing AI and federal leaders draft regulations, standards and policies around it, government leaders at the agency level should start considering how they will keep safety and security a top priority for its use.

To ensure a smooth transition into the implementation of AI, agency leaders should lay the groundwork by assessing and improving their total security posture across all systems. This includes accelerating zero-trust architecture, establishing proper data management strategies for training data, maintaining visibility over models during both development and production deployment, and aligning with risk management frameworks for a comprehensive safety and security posture.

The government has a chance to derive significant productivity gains through the adoption of safe and ethical AI. This is an exciting opportunity that brings together the most innovative thought leaders across the public sector, industry partners and academia.


Gaurav “GP” Pal is CEO and founder of stackArmor. He is an award-winning Senior Business Leader with a successful track record of growing and managing a secure cloud solutions practice with over $100 million in revenue focused on U.S. federal, Department of Defense, non-profit and financial services clients.
