When Innovation Becomes a Liability
A cautionary tale of AI-gone-wrong shows why leaders must audit incentives and embed accountability. Don’t let a click compromise your integrity.
AI’s productivity gains don’t automatically translate into shared prosperity. How do we ensure AI doesn’t undermine economic mobility?
We asked three experts to share their thoughts on what agencies had learned about artificial intelligence this year. Here’s what they said.
Responsible for ensuring the safety of the various ways we travel, transportation agencies are challenged to reduce crashes and ensure the efficient movement of people and goods. To do that effectively, agencies must rely on data to help create new approaches to safety, resiliency, operations, and planning, and to address challenges such as congestion.
A year ago, caution was the government’s watchword when adopting artificial intelligence. That’s all changed. Read on to find out how.
In this video interview, John Lange with Tricentis explains how model-based automated software testing enhances efficiency and modernization.
Malicious actors are leveraging AI to make attacks more targeted and harder to detect. But AI also is essential to strengthening cyber posture.
Governments are racing to embed AI into public service. But while algorithms accelerate decisions, they also erode something far more valuable: trust.
Artificial intelligence is influencing every step of the cyber defense life cycle. Here’s an overview of some AI possibilities, including emerging roles for generative and agentic AI.
Agencies seek AI transformation but still use procurement systems designed for the last century. To fix it, embed agility and accountability into every acquisition.