‘Truth decay’ from Artificial Intelligence may disrupt elections in 2024

Al Tompkins (standing) listens to audience comments during a recent AI workshop.

By JULIE CARLE

BG Independent News

There was a time when seeing was believing. It is no longer enough to see something to believe it.

Artificial Intelligence (AI)—the ability of machines to learn from experience, adjust to new inputs and perform human-like tasks—is the technology turning that idiom upside down.

AI is not a new technology. Phones and homes with Siri or Alexa get questions answered, reminders updated, and tasks completed thanks to AI technology. Many email platforms use AI to filter spam.

“With AI, we’ve gone from ‘Do what I say’ to ‘Imagine the impossible’ in a very short time,” said Al Tompkins, an award-winning journalist with 52 years of experience, during a recent Bowling Green State University workshop. The workshop focused on AI in journalism, but the implications are far-reaching.

Photographs and videos generated by AI pose dilemmas because the images are not real. Tompkins shared a photograph of a dragon in a parade surrounded by people on a city street.

AI-generated photo of a parade with a crowd watching as a dragon goes by. (AI photo)

“The dragon doesn’t exist. The people don’t exist. None of this exists at all,” Tompkins said about the 100% AI-generated photograph.

The technology is different from editing an image in Photoshop, which has become an accepted practice.

“AI is not going out and retrieving a dragon or a street. It is using data that is building the dragon and the street,” he said. “We are creating something that never existed. It’s a whole new category.”

No computer skills, such as coding, are needed. AI generators require only text prompts describing what to include, which means anyone can create photos that aren’t real.

One of the main issues with AI in journalism is that it damages public trust.

“Once I’ve misled you, you don’t trust anything,” Tompkins said. “There’s a risk in believing things that aren’t true, but there is also a risk in not believing things that are true.”

When the public starts doubting everything, they start to doubt things that are true, “and that is just as damaging as believing things that aren’t true,” he said.

Tompkins believes AI and the resulting public distrust are “going to play in the election in a big way.”

In fact, he said AI will be disruptive to the 2.6 billion people involved in 64 elections taking place worldwide in 2024.

“Disruptors see this as prime time for disruption because there is so much in play,” he explained. The economy of scale of being able to influence 64 elections at once makes the timing right.

In the presidential election, an AI-generated image of former President Donald Trump speaking to hundreds of cheering Black supporters at a rally was widely shared, even though the event never happened, Tompkins said.

And a photo of Vice President Kamala Harris being greeted by thousands of people at an airport was purported to be AI, but it was real.

“Nobody will be surprised in a month or two to see photographs circulating of people stealing ballots,” Tompkins said. “But here’s the problem. What if someone actually is stealing ballots? We are going to discount those images because we know it’s possible to create false images.”

That is the turmoil and doubt caused by misinformation. For journalists, the conflict is whether to use the images. If they don’t use the photos, they will be accused of covering up the allegations. If they do use the fake pictures, they are rewarding the lie by repeating it.

Detecting when AI is used is possible now, but as the tools become more advanced, it will become more difficult, he said.

Journalists from WTOL listen as Al Tompkins discusses AI’s impact on newsrooms.

There are several software programs available to detect when something is AI-generated, but none are foolproof or without liabilities.

“Detection will get better, but AI will also get better,” he said.

Fake images often provoke emotional reactions. If a photo elicits an emotion, such as anger, excitement or happiness, he recommended setting it aside until you can evaluate it dispassionately.

In many AI tools, garbled words and letters in a photo can be a telltale sign that the photo is not real. Malformed fingers and ears, along with proximity errors, are other common identifiers of AI-generated photos.

It only takes one item being off to discount the entire photo, he added. “Let’s use our common sense” when trying to identify if something is AI-generated.

Rough times are ahead for journalists, Tompkins said. “You don’t know what or who to trust.”

Also, AI-generated newsrooms are becoming a reality in cities such as San Francisco. The news is generated from “a robot scraping news sites, PR sites” and the anchors are not real.

The news industry is changing, but Tompkins said the key for journalists is to be relevant by being in “the sense-making business. Sell clarity.”

The message to viewers and readers is: “We know it’s a difficult time. We are going to spend our energy helping you sort through what’s real and what’s not real.”