GovLoop

What Do I Need to Know About Generative AI?

Since OpenAI released ChatGPT in November 2022, it’s been hard to avoid the hype about generative AI. It has been hailed as a world-changing breakthrough on the scale of the steam engine and condemned as a world-ending threat to human life commensurate with nuclear war.

The truth is it probably won’t be either anytime soon. So, let’s take a deep breath and look at what generative AI is — and what it isn’t.

First, some terminology. (Also be sure to check out the “AI 101” in our guide, “AI: A Crash Course.”)

Although media reports tend to refer to the technology behind ChatGPT, DALL-E and similar applications simply as “AI,” generative AI is a very specific branch of the technology. It’s important to differentiate between it and the “narrow AI” we’re already using, such as spell-checkers and search engines.

What’s Different About Generative AI?

According to a Reuters Explainer, “[G]enerative AI learns how to take actions from past data. It creates brand new content — a text, an image, even computer code — based on that training.”

The forms of generative AI in the news use a large language model, which means they’re trained on hundreds of billions of words. For example, ChatGPT is based on 570 gigabytes of data. That gives these forms of AI an enormous library of examples from which to draw statistical correlations.

But it’s important to remember that despite the convincing results you may get from a generative AI chatbot such as ChatGPT, Google’s Bard or Microsoft’s Bing, the chatbot is not thinking, and it doesn’t understand either your question or its answer. It’s looking for the string of words most likely related to those in your query. The chatbot’s answers are so realistic because it has so many examples.
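To make the “statistical correlation” idea concrete, here’s a toy sketch in Python. Real chatbots use neural networks with billions of learned parameters, not raw word counts, so this is only an illustration of the underlying principle: predicting the next word from how often words follow one another in training text. The sample corpus and function name are invented for the example.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" — real models train on hundreds of billions of words.
corpus = (
    "the chatbot answers . "
    "the chatbot guesses the next word . "
    "the chatbot picks likely words ."
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen right after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "chatbot" — its most frequent follower
```

With vastly more data and a far richer model, the same basic idea — produce whatever is statistically likely to come next — yields fluent answers without any understanding behind them.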

Will It Take My Job?

Probably not right away. The immediate risk is highest for those whose work is rote and repetitive. Even narrow AI has made inroads there, but generative AI’s ability to compose text and code means it is most likely coming for jobs in information processing — from data analysts and software developers to writers, paralegals, and even teachers. The potential use of AI to write screenplays was an issue in the Writers Guild of America strike.

But tending these digital colleagues may lead to new work opportunities. For example, a New York lawyer who used ChatGPT to write a legal brief probably wishes he’d asked a paralegal to look it over before he submitted it to the court. That’s because the AI cited opinions in cases that didn’t exist. To add insult to injury, when he asked it if the citations were real, it confidently told him “yes.” Generative AI won’t displace all knowledge workers until it learns not to generate fake answers and responses that don’t make sense.

So, Will It End the World?

According to many experts, not unless we let it.

Prominent industry figures have called for slower development so regulation — and ethics — can catch up. Others say the risks are overblown, and that the suggested regulations serve both to solidify the lead of companies like OpenAI and distract from addressing the harms caused by existing AI.

AI — not just the generative kind — has already been found to have problems with bias, replicating the shortcomings of its training materials. Some of the creepier interactions with generative AI chatbots that have been reported also suffer from what the web’s flawed humans have taught it.

Unfortunately, it’s likely that generative AI is already being (mis)used by bad actors to craft scam emails better tailored to fool users, to fuel misinformation and disinformation campaigns, and to build tools that accelerate the cybersecurity arms race.

It’s also wreaking havoc on intellectual property protections. Artists have sued Stability AI, maker of the text-to-image model Stable Diffusion, alleging that the images it was trained on include their work. Getty Images has launched its own suit against the company, claiming copyright infringement.

These are problems that can be addressed, but they will take coordinated international efforts from the public and private sectors. Despite its enormous potential, generative AI is still dependent on how it’s programmed and how it’s used. The real risk isn’t the artificial intelligence. It’s the human one.

This article appears in our new guide “AI: A Crash Course.” To read more about how AI can (and will) change your work, download it here.

Illustration by Marc Tom