
A Techno-Pessimist’s Guide to Government AI

Techno-Pessimism, and AI Anyway

Truthfully, I identify as a techno-pessimist. I worry about a future that involves minimal governance of Artificial Intelligence (AI) and other emerging technologies. I don’t worry about a world where machines rule the earth, but about one where bias and misuse deepen inequalities and undermine progress. It’s understandable that we’d have techno-pessimists among us who harbor concerns about AI. After all, a track record featuring discriminatory algorithms and privacy breaches demonstrates that AI implementation has been far from perfect.

But, AI is already becoming an integral component of our work lives. AI applications, predictive analytics, and chatbots are increasingly common, and only growing in popularity. They also present real opportunities to improve workplace processes, making services more efficient for clients. In the federal government, where AI has the potential to benefit the entire American public, a cautious embrace of AI implementation could strengthen public benefits and save taxpayer money.

Navigating Pitfalls

Especially for the techno-pessimists, the key roadblocks to government’s AI integration are the following crucial pitfalls (most of which apply to all AI implementation):

  • The data needed to train algorithms presents a dilemma: training data is often biased, and successfully training an algorithm consumes a great deal of natural resources. Governments must ensure that the data feeding their AI systems is diverse, balanced, and ethically sourced.
  • The black box problem refers to the difficulty, particularly for laypeople, of understanding how algorithms reach their decisions and outputs. No outputs should be trusted without human auditing. Any body that uses AI systems must employ staff with the technical knowledge and ethical training to oversee the algorithms and ensure their transparency.
  • Job replacement anxiety is entirely understandable. Public service careers attract talent especially because of their stability. Government bodies should understand (and communicate) that AI is a tool to augment human work capacity and enhance the workforce’s productivity and service, and they should uphold a commitment to the job stability of their employees.
  • A “one-and-done mentality” is among the scariest potential attitudes toward AI. Any use case requires ongoing auditing and improvement. To address these needs, governments must employ talented computer and data scientists to continuously critique and improve the performance of their AI systems.

Harnessing Resources

Fortunately, governments don’t have to navigate these hurdles of the AI landscape alone. Many resources are emerging to assist with the adoption and implementation of AI algorithms, and a growing number of organizations both within and outside government focus specifically on public use cases. A sampling of organizations that offer best practices, guidance, technical support, and governance expertise to government bodies and agencies that want to develop their AI use includes:

Even for the “techno-pessimists,” the only way to prevent unfair deployment of AI is to prepare our institutions for an equitable one. If we approach AI adoption in the public sector with wary optimism, we can mitigate the most dangerous harms of the technological future ahead.


Julia Meltzer is a recent graduate of Stanford University with a B.S. in Symbolic Systems and a coterminal M.A. in Linguistics, during which she wrote about the historical meaning of the word “freedom”. She is originally from New York City but is happily adapting to life at the Department of Housing and Urban Development’s San Francisco hub. She has previously focused on issues of technology and environmental policy and is thrilled to join the new Recent Graduate Program in HUD’s Multifamily West team. In her spare time, she loves to cook, sketch, and play with her dog Shadow.

