Out From the Shadows: How to Protect Against Data Loss From Unapproved AI 

Employee use of AI tools, particularly generative AI, is widespread across government and leads to improved productivity and innovation. That is a good thing, generally speaking. But too often, employees rely on unapproved or “Shadow AI” options, inadvertently uploading sensitive or proprietary data into public or free AI tools. Once in the wild, the data may be retained or used to train large language models; the agency loses control of it. Although traditional cybersecurity defenses may help to protect devices or stop incoming attacks, they are not designed to monitor or control data that leaves an agency.     

The first step in protecting against data loss is identifying which AI tools employees already use. Next, establish governance around who can use which AI solutions, including creating enterprise licensing agreements where appropriate. When an employee tries to access an unapproved GenAI or other tool, the attempt should be blocked or the employee redirected to an approved alternative. Surface-level fixes alone are not enough, said Dr. Darren Williams, Founder and CEO of BlackFog.
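The block-or-redirect step described above could be implemented as a policy check at an egress proxy or secure web gateway. The sketch below is purely illustrative: the domain names, the approved/unapproved lists, and the decision format are all assumptions, not a description of any specific agency's setup or of BlackFog's product.

```python
# Illustrative sketch of a block-or-redirect egress policy for GenAI tools.
# All domains and list contents below are hypothetical examples.

APPROVED_AI_DOMAINS = {"approved-ai.example.gov"}      # agency-licensed tools
KNOWN_GENAI_DOMAINS = {"chat.example.com",             # unapproved public tools
                       "freegenai.example.net"}
REDIRECT_TARGET = "https://approved-ai.example.gov"    # approved alternative


def egress_decision(domain: str) -> str:
    """Decide how to handle an outbound request to `domain`.

    Returns "allow" for approved or non-AI traffic, or a
    "redirect:<url>" directive for known unapproved GenAI tools.
    """
    if domain in APPROVED_AI_DOMAINS:
        return "allow"
    if domain in KNOWN_GENAI_DOMAINS:
        # Redirect rather than silently block, steering the employee
        # toward the sanctioned tool instead of a workaround.
        return f"redirect:{REDIRECT_TARGET}"
    return "allow"  # ordinary, non-AI traffic passes through
```

In practice the unapproved list would come from the discovery step (an inventory of AI tools already in use), and the same hook could log attempts so governance teams can see where demand for AI tooling actually is.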

“If you can prevent unauthorized data from leaving, you can stop cybersecurity threats, Shadow AI and many other cyber risks, because you’re addressing the root cause,” said Williams.  

In this video interview, Williams discusses the risks associated with Shadow AI and how agencies can protect against them. Topics include:   

  • What Shadow AI is, and how it jeopardizes agency data 
  • Best practices for guarding against data loss 
  • How agentic AI factors in, and the critical need to prepare for it 
