What You Need to Know to Navigate the Dark Side of AI

“We can build these models, but we don’t know how they work.”

(Joel Dudley)

This is the third in a series of posts designed to help your teams evaluate the role and benefits of Artificial Intelligence (AI). The first post, What You Need To Know Before Embarking On AI Implementation, considers the pros and cons of AI, along with seven steps for getting started. The second post, How to Reap and Share the Benefits of Artificial Intelligence, highlights the benefits of AI and offers practical examples of how it is improving Federal, State, and Local agencies.

It’s time for the other shoe to drop.

In this post, we talk about what most people ignore — the risks of AI. Prior to deploying the technology across your organization, it’s critical to identify and understand these risks and have a plan in place to mitigate them.

Building General Knowledge About AI Challenges

Some AI system risks are genuine cyberthreats. Others result from poor understanding and a lack of practical knowledge. In fact, one of the biggest challenges with AI is that most builders, sellers, and users of these systems don’t truly grasp how they work. Specifically, we lack good answers to fundamental questions:

  1. Explainability (how it works): Why an AI makes a particular decision or recommendation (one simple probing technique is sketched after this list)
  2. Robustness: The overall quality of the AI system, generally broken down into three core areas
    1. Bias (how it learns): Who trains the AI, how it should be trained, and with what data
    2. Security (how it is defended): How the integrity of the system is protected, ensuring ‘good’ decisions are made – more on this in a minute
    3. Performance (where it can ‘live’): The environments for which the system is best and worst suited
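
As a concrete taste of what explainability tooling looks like in practice, below is a minimal sketch of permutation importance, one common technique for asking which inputs a model actually relies on. The data, model, and features here are hypothetical stand-ins invented for illustration; real systems use dedicated libraries and far richer methods.

```python
import numpy as np

# Hypothetical stand-in for explainability tooling: permutation importance.
# Shuffle one input feature at a time and measure how much the model's
# accuracy drops; the features whose shuffling hurts most are the ones the
# model actually relies on when making its decisions.

rng = np.random.default_rng(1)
n, d = 1000, 5
X = rng.normal(size=(n, d))
# Ground truth depends mostly on features 0 and 2; the rest are noise.
y = (2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=n) > 0).astype(float)

# Fit a simple logistic regression with gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))       # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n         # logistic-loss gradient step

def accuracy(Xm):
    return np.mean(((Xm @ w) > 0) == y)

baseline = accuracy(X)
for j in range(d):
    Xs = X.copy()
    Xs[:, j] = rng.permutation(Xs[:, j])  # break this feature's link to the label
    print(f"feature {j}: accuracy drop = {baseline - accuracy(Xs):.3f}")
```

If shuffling a feature barely moves accuracy, the model isn’t using it; a large drop marks a feature the model genuinely depends on, which is a first step toward answering ‘why did it decide that?’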

At least some of these challenges are present in every AI deployment, which is exactly why we must ask how much to trust these systems. Still, these are issues of education.

The biggest issue is something far more nefarious.

The Dark Side of AI

One critical aspect is too often overlooked: AI systems can be ‘hacked’ and ‘tricked.’ In one recent case, the security firm BlackBerry Cylance created an AI engine for a sophisticated malware detection system, one reportedly capable of identifying malicious files years before they are even created. As impressive as this sounds, another group of researchers recently found a way to subvert the algorithm, causing it to falsely tag known malware as “goodware.”[1] In other cases, we’ve seen autonomous vehicles tricked by stickers on signs and roads, an NYU research team install backdoors into AI systems, and people hack Alexa.[2] The World Economic Forum (WEF) is cataloging a growing number of instances like these under the label ‘adversarial AI.’
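
To make ‘tricked’ concrete, here is a minimal, hypothetical sketch of the kind of technique involved: a gradient-based evasion attack, in the spirit of the fast gradient sign method, against a toy detector built for this illustration. Nothing here reproduces the Cylance bypass; it only shows how a small, targeted nudge to an input can flip a model’s decision.

```python
import numpy as np

# Hypothetical toy "malware detector": logistic regression over 20 numeric
# file features. Real attacks target far more complex systems, but the
# principle is the same: nudge an input in the direction that most
# decreases the model's confidence.

rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)        # 1 = malicious, 0 = benign

# Train the detector with plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n

def score(v):
    return 1 / (1 + np.exp(-(v @ w)))     # model's P(malicious)

# Pick a sample the detector confidently flags as malicious.
i = int(np.argmax((y == 1) & (score(X) > 0.9)))
x = X[i]

# Evasion step: move each feature slightly against the gradient of the
# malicious score (which, for this linear model, is just w).
eps = 0.5
x_adv = x - eps * np.sign(w)

print(f"original score:  {score(x):.3f}")      # high: flagged as malware
print(f"perturbed score: {score(x_adv):.3f}")  # drops sharply: evades the filter
```

The unsettling part is how little effort this takes once an attacker can query or approximate the model: the same gradients that let the system learn also point the way to fooling it.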

Risks to Address Prior to Deploying AI Systems

As we develop, adopt, and deploy AI systems, the best way to protect against these threats is to identify and plan for them before they materialize.[3] The following list is designed to help you assess your risks and build a plan for AI security and resilience.

  1. Decreased transparency: Because so few people understand how AI works, agencies often fail to effectively question, explain, or defend the decisions made by their AI systems. ‘The AI did it’ does not suffice as an answer to public questions about outcomes.
  2. Limited data availability: Data is lacking for various reasons. Most commonly, it’s because the agency doesn’t prioritize data collection, hesitates to share data given competition for funding or other policies, or holds data containing personally identifiable information (PII) or other items that can’t be shared.[4]
  3. Implementation and integration: Much of the government runs on outdated IT equipment. (There are agencies still using floppy disks.) Most of these infrastructures are ill-equipped for AI, and attempting to drop AI on top of legacy systems will create more challenges than it solves.[5]
  4. Discrimination and privacy violations: Issues with bias can result in racial profiling and gender-based discrimination, presenting clear challenges to law enforcement, judicial, hiring and human resources, and other agency functions. (A basic screening check is sketched after this list.)
  5. Security and trust: From the algorithm to the data to the IT infrastructure of the agency – how is the ecosystem secured, maintained, and tested? Can your agency tell the difference between good code and malicious code? Probably not. While many agencies are rushing to adopt AI to seem tech-savvy, few can identify security issues with it. It’s worth noting that securing an AI system requires special tools and techniques different from those in traditional cybersecurity frameworks.
  6. Governance: Sometimes referred to as ‘ethical AI,’ the ability to oversee and govern AI presents a real dilemma for government agencies. Standards and policies lag far behind technological advancements, and the current legal system is struggling to keep pace as well.
  7. Workforce impact: There is little truth in the fearful refrain, ‘The robots are coming for our jobs!’ Nonetheless, this panic still tinges water cooler discussions whenever new technologies are considered for implementation, especially ones capable of performing traditionally human tasks.
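
To make risk 4 tangible, here is a minimal, hypothetical sketch of one common fairness screen: the disparate-impact ratio, checked against the ‘four-fifths rule’ heuristic. The decision log, group labels, and 0.8 threshold below are illustrative assumptions, not an agency standard; a real audit would involve proper statistical tests and domain review.

```python
from collections import defaultdict

# Hypothetical log of model decisions: (demographic_group, approved) pairs.
# In practice these would come from auditing your deployed AI system's outputs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok                 # True counts as 1

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Disparate-impact ratio: lowest group rate divided by highest. Under the
# common "four-fifths rule" heuristic, a ratio below 0.8 warrants review.
ratio = min(rates.values()) / max(rates.values())
flag = "-> review for bias" if ratio < 0.8 else "-> within heuristic"
print(f"disparate-impact ratio: {ratio:.2f} {flag}")
```

A check like this won’t prove a system is fair, but it is cheap enough to run on every model release and can flag problems before they reach the public.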

This is not a rallying cry against AI. AI will transform how societies behave and how governments support them. If your agency takes an informed approach, looking at your own infrastructure and needs rather than at the buzz, you will lay a clear path to success.

How is your agency approaching AI?

Tyler Sweatt is a GovLoop Featured Contributor. He is the founder and Managing Partner at Future Tense. Tyler works to identify and address risks and opportunities in changing environments. He advises startups across the cybersecurity, artificial intelligence, and physical security domains, and regularly supports R&D, S&T, M&A and strategy initiatives across DHS, DoD, the IC and Fortune 500 organizations. Previously, Tyler worked at futurist consulting firm Toffler Associates, leading emerging technology and security efforts, and worked at Deloitte where he focused on rapid technology acquisition for DoD. A West Point graduate, Tyler served as a Combat Engineer and Counterintelligence Officer with the Army, serving multiple combat deployments. You can find him on Twitter @Tyler_Sweatt.

[1] https://www-vice-com.cdn.ampproject.org/c/s/www.vice.com/amp/en_us/article/9kxp83/researchers-easily-trick-cylances-ai-based-antivirus-into-thinking-malware-is-goodware
[2] https://www.weforum.org/agenda/2018/11/what-is-adversarial-artificial-intelligence-is-and-why-does-it-matter/
[3] https://calypsoai.com/insights/what-is-ai-security/
[4] http://www.businessofgovernment.org/sites/default/files/Delivering%20Artificial%20Intelligence%20in%20Government.pdf
[5] https://dcode.co/from-floppy-disks-to-artificial-intelligence-is-the-government-ready/
