Examining Generative AI / Tech Threatens to Warp U.S. Presidential Election; Experts Warn Deluge of Misinformation Could Alter Outcome

An AI-generated political ad by the Republican Party featuring fictional images of China invading Taiwan (from the official Republican National Committee YouTube channel)

As generative AI tools become more readily accessible, the risks they pose are creating a stir in society. While expectations run high that the technology, an innovation on the scale of the internet and the smartphone, will make people's lives more convenient, adverse effects such as the spread of fake news cannot be ignored. This is the first installment of a series in which The Yomiuri Shimbun considers how humanity should deal with the emergence of generative AI.

***

“This morning, an emboldened China invades Taiwan.”

This, of course, is not a fact.

It is part of a 32-second video produced using generative AI by the Republican National Committee of the United States. Titled “Beat Biden,” the video asks, “What if the weakest president we’ve ever had were re-elected[?]”

The ad was posted on YouTube last April in response to Democratic President Joe Biden’s announcement of his reelection bid. After an image resembling a news report of Biden’s November 2024 “victory,” the ad shows China bombing the city of Taipei. The video then cuts to images of immigrants rushing into the United States, with a subtitle asking, “What if our border is gone[?]”

Samuel Chen, a Republican strategist, explained that the party wanted to warn about the danger of a second Biden term.

He said that AI-generated images are highly effective at getting voters to imagine the worst-case scenario of another Biden administration. He even emphasized the “advantage” of generative AI, saying, “Even if you know it’s fictitious, people will remember the images in their minds.”

In digital spaces such as social media, people often experience a “filter bubble,” in which they are surrounded only by information that fits their preferences and rarely encounter information they do not want to see, even when it is accurate. They are also likely to experience an “echo chamber,” in which they connect only with like-minded people and have opinions similar to their own repeated back to them.

In such an environment, people tend to accept false information as fact, and their behavior is likely to become more radical.

Moreover, a bizarre situation arose earlier this year in which disinformation believed to have been created with generative AI may have influenced voting behavior in an election.

On Jan. 21, two days before the New Hampshire Democratic presidential primary, an act of election disruption took place: people received phone calls from a digitally altered voice meant to sound like Biden. The voice urged them not to vote in the primary, saying: “It is important that you save your vote for the November election. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”

The calls are considered an attempt to suppress votes for Biden. But in the United States, laws and regulations to contain the risks posed by generative AI are not yet in place.

Elaine Kamarck, a senior fellow at the Brookings Institution, warned that political ads using generative AI are likely to be put to bad use in the presidential election in the days ahead.

This is because the technology makes it possible to spread fake videos and images that are difficult to distinguish from reality in order to lower a candidate’s approval ratings.

It is also widely assumed that such content will be used to slander rival candidates with false information and mislead public opinion, a situation that could distort the will of the people and shake the very foundation of democracy.

In the ongoing races for the two major parties’ nominations, Biden holds a commanding lead among Democrats, and former U.S. President Donald Trump holds one among Republicans. Kamarck predicted: “But when it comes to the main competition, there will be a lot of excitement [with the use of AI increasing]. It’s the calm before the storm.”

Even during the 2016 presidential election, the way campaigns used digital spaces was seen as a problem, and campaign consultants for the Trump camp are believed to have tried to sway public opinion in their favor.

What differs greatly from the 2016 campaign is the marked development and spread of generative AI technology that can instantly create fictitious images and text simply by inputting brief instructions.

Out of concern over possible misuse, OpenAI, developer of the interactive AI service ChatGPT, announced on Jan. 15 that it would ban the use of its AI technology for political activities or for disrupting elections.

About a week later, OpenAI banned the support group for Democratic primary challenger Rep. Dean Phillips from using its AI tools.

Meta, formerly Facebook, also announced countermeasures on Feb. 6. But U.S. news channel CNN reported that there are limits to regulating AI-generated content solely through the voluntary actions of private companies, in the absence of federal regulation.

Kamarck pointed out that if a massive amount of disinformation spreads just before Election Day, it could have a huge impact on voter behavior and may even change the outcome of the election.

The International Fact-Checking Network (IFCN), a Florida-based organization with about 160 member organizations, including U.S. media outlets, checks the authenticity of online information.

Enock Nyariki, community manager at the IFCN, said that if false information spreads just before Election Day, there will be a limit to what fact-checkers can do. He also said that it could take several days to determine the authenticity of sophisticated false information.

Fact-checks may simply not be completed in time for Election Day, Nyariki admitted.

Last October, Biden issued an executive order to regulate AI, requiring developers to share the safety test results of new AI models with U.S. government agencies beginning at the development stage.

However, Tim Harper, a senior policy analyst at the Center for Democracy and Technology, a U.S. nonprofit organization, who oversaw political and election advertising at Meta until last May, pointed out that the executive order relies mostly on voluntary action by tech companies, making it doubtful that it can prevent the technology from being misused.

He emphasized the importance of swiftly establishing comprehensive AI regulation that would prohibit the online posting of AI-generated election-related images and videos for a certain period before Election Day and would also enable disclosure of information about the sources of false information.