Examining Generative AI / Russian Side Seeks to Undermine Ukraine Via Disinformation; Fake Video Shows Military Leader Criticizing Zelenskyy

Ukrainian Presidential Press Office via AP
In this photo provided by the Ukrainian Presidential Press Office, former Commander-in-Chief of Ukraine’s Armed Forces Valerii Zaluzhnyi attends an awarding ceremony in Kyiv on Friday, Feb. 9.

This is the third installment in a series in which The Yomiuri Shimbun considers how humanity should deal with the emergence of generative AI.

***

Three months before the dismissal of Valerii Zaluzhnyi, the commander in chief of Ukraine’s armed forces, a fake video that emphasized the divide between the country’s leaders was widely shared on social media.

In the video, a man resembling Zaluzhnyi said he had been informed that Ukrainian President Volodymyr Zelenskyy had killed his aide the day before. He went on to say that the president also planned to get rid of him and surrender the country.

“Zelenskyy is an enemy of our country,” the man in the video said.

Zaluzhnyi appeared to be talking to viewers in the video, which was released on the night of Nov. 7. The man in the video said Zelenskyy was involved in the death of his aide, who reportedly had died in an unexpected accident, and called for a military coup.

Mykola Balaban, deputy head of Ukraine’s Center for Strategic Communications and Information Security, said he had expected something like the video to emerge. The video came from Radio Truha, a pro-Russia media outlet.

Zaluzhnyi said in a contribution to the British magazine The Economist that the Ukraine-Russia conflict was in a “stalemate.” Zelenskyy denied this, leading many in the media to report a feud between the two leaders. Balaban thought Russia was likely to take advantage of the situation.

Balaban’s center and the Center for Countering Disinformation (CCD) — both part of the Ukrainian government — analyzed the video in question. Less than two hours after it began to spread, the centers determined the video was a fake produced with the help of generative AI, and called on related organizations to be on the alert. They identified the video as fake based on its rough image quality and a mismatch between the speaker’s words and facial expressions.

Yet, elaborate misinformation continued to emerge. After the video was determined to be fake, another media outlet released a separate video in which a person purporting to be Zaluzhnyi said that he had expected the government to claim it was Russian propaganda.

The Ukrainian government determined that this video was also fake.

A senior CCD analyst who closely monitors social media posts related to Russian forces said there is currently no method to immediately determine whether a video was produced with AI.

Russia is believed to be aiming to gradually change public opinion and people’s behavior by scattering the seeds of distrust and suspicion in Ukraine, the United States, Europe and other Western countries.

The creation of elaborate disinformation through generative AI is accelerating information warfare.

Fake video of supermodel

“Bella Hadid stands with Israel.”

At the end of October last year, a deepfake video of U.S. supermodel Bella Hadid appearing to show her support for Israel’s war against the Islamic militant group Hamas was posted on X, formerly Twitter.

Hadid’s father is of Palestinian origin, and she is known for her activities to seek peace in the Gaza Strip.

The video was viewed more than 30 million times before being identified as a deepfake produced with generative AI.

The Japan Fact-check Center, a nonprofit organization dedicated to fact-checking and promoting media information literacy, compared it with a 2016 video that is believed to have been used to create the digitally manipulated footage. It determined that gestures and facial expressions in the two videos were almost identical.

The fake version is believed to have been created by having AI study Hadid’s voice and tone, then modifying the movement of her mouth to match the generated speech.

“The quality of the video is so sophisticated that you can’t tell [whether it’s fake] from its images and sound,” said Daisuke Furuta, editor-in-chief of the fact-checking center. The video is believed to have been produced by an individual who sympathizes with Israel.

Many other fake images and videos related to the war between Israel and Hamas, believed to have been produced with generative AI, have been posted on social media. They include images featuring supporters of a Spanish soccer team flying a Palestinian flag in a stadium and a man walking in Gaza with five children. It can be hard for the general public to tell whether such images and videos are genuine.

“There is a lot of misinformation concerning humanitarian crises that evoke people’s sympathy,” said Keishi Ono of the National Institute for Defense Studies. “As the international community is casting a stern eye toward Israel, pro-Palestine forces may be waging an information war in an effort to manipulate the outside world.”