Disinformation in Elections: Democracy Must Be Protected from AI Abuse

With the emergence of generative artificial intelligence, anyone can easily create fake videos and other false information. There are concerns that such technology could be abused to compromise the elections that are the very foundation of democracy.

Governments must take effective measures against such fake information.

Twenty of the world’s leading tech companies, including Microsoft Corp. and Google LLC, have agreed to cooperate on measures to prevent AI-generated fake information from interfering with elections in many countries.

The companies aim to develop digital watermarks that show the source of videos and images, so users can confirm whether information online is reliable, and to establish technology capable of detecting and removing false information on social media.

There have been a number of cases in which false information was used to interfere in elections. In the 2016 U.S. presidential election, false rumors, such as one claiming that Democratic candidate Hillary Clinton had provided funds to a terrorist organization, spread on social media. The rumors are believed to have originated with a Russian company.

Now that AI has advanced, the risk of public opinion being manipulated by sophisticated fake videos and other such content has increased. The tech companies’ efforts can be seen as a response to the demands of the times.

In January, a number of phone calls using a voice that sounded like President Joe Biden urged people not to vote in New Hampshire’s Democratic primary for the U.S. presidential election. The fake voice is believed to have been synthesized with AI, likely in an attempt to reduce votes for Biden.

Democracy is a system in which politics is entrusted to elected representatives. In elections, the foundation of democracy, voters must not be misled by disinformation when deciding whom to vote for.

Russia and China are believed to have been spreading disinformation in an attempt to cast doubt on the reliability of elections in other countries and regions. The abuse of AI could lead to even more election interference in the future.

The European Union regards the spread of disinformation as a threat to the public good and has mandated by law that tech giants take measures to prevent it.

The Japanese government, on the other hand, is leaving measures against disinformation to the voluntary efforts of information technology businesses. The government apparently believes that strict regulations could hinder the development of AI capable of producing videos and other information.

The Japanese government’s stance of leaving countermeasures to businesses shows a serious lack of urgency, given the fears that AI could be used to create sophisticated disinformation and influence elections. The government should consider legal regulations modeled on the European system.

It is also important to enhance media literacy among voters so that they can assess the validity of information for themselves.

(From The Yomiuri Shimbun, March 9, 2024)