AI in Government Starts With Trust in Data, But Is Built With People
AI is embedded in government operations, and agencies need a proactive approach to data governance to mitigate risk — and foster trust.
Recent fraud regulations are creating a powerful catalyst for transformation. Agencies can leverage these changes to modernize policy and processes.
The evolution of AI coding tools has raised the prospect of “disposable software” that would reduce the need for expensive, long-term maintenance. But although appealing, this paradigm shift would be challenging for government to implement. A SpecOps approach, though, may solve the problem.
Upgrading an existing data center for AI/ML needs can be done, but there are many factors to consider.
In this video interview, Jonathan Hasak and Aaron Hunter with Coursera discuss how to overcome barriers to fostering an AI-knowledgeable workforce.
AI systems have become a source of truth for many constituents, but public-sector communication is often poorly designed for AI use and citation. Agencies may need to think about both human and AI audiences.
Updated federal regulations, geopolitical activity, the expansion of AI, and other developments will make this a big year for cybersecurity. Here are six outcomes we can anticipate.
Government modernization in 2026 is no longer just about digitization — it’s about governing data with intention, using automation and AI to build trust, transparency, and confidence in the information agencies rely on every day.
There can be serious, costly consequences when agency staff expose sensitive data, even if human error is the cause. Here are tactics to prevent and identify data leaks.
When their data sources and fragmented IT systems don’t “talk” to each other, agencies lose opportunities for meaningful insights and create new security risks.