Frauds Using Generative AI Get More Sophisticated; Survey Finds Public Concern Over Fake Ads, Propaganda

The Yomiuri Shimbun
A fake ad impersonating economic analyst Takuro Morinaga is posted on social media, saying, “You can participate for free by joining Takuro Morinaga’s Line app group.”

A recent Yomiuri Shimbun survey revealed growing public concern about the misuse of generative AI for crimes and manipulation. As this technology becomes increasingly prevalent, measures must be taken to address the negative aspects of AI.

False statement

“It’s difficult even for members of my family to tell that this is not my father’s voice,” said economic analyst Kohei Morinaga, 39, after listening to an audio recording purported to be of his father Takuro, 66, a well-known economic analyst who is battling cancer.

The audio was from a fake ad posted on social media that urged people to join a Line app group. A fabricated voice that sounds like Takuro’s says: “I want to increase your assets. It’s my last wish.”

“Students who joined my Line group can identify me through my voice, like you’re listening to me now,” the ad said.

The voice was likely created by having AI train on Takuro’s voice, taken from his appearances on TV and in other media.

A fake ad involving Kohei has also been posted, and the Morinagas receive inquiries every day about whether the ads are real, as well as requests for refunds. If all the claims are legitimate, the damage suffered would total about ¥1.4 billion.

The Ibaraki prefectural police announced in April that a woman had been defrauded of about ¥700 million by a person claiming to be Takuro.

“Tricks have become more sophisticated with the use of generative AI, and people can be fooled more easily,” Kohei said.

Fake ads using the names of celebrities have also drawn protests: entrepreneur Yusaku Maezawa demanded that Meta Platforms Inc. remove an ad impersonating him.

Manipulation

The Yomiuri’s survey, conducted nationwide from March to April, showed that 85% of respondents were not confident they could distinguish voices and other content fabricated by AI from real content, while 96% said measures need to be taken to prevent the criminal misuse of generative AI.

Fake videos and images can be used to manipulate the impression people have of politicians. In February, a fake image was spread on social media that claimed to show Prime Minister Fumio Kishida sitting on a sofa with his eyes closed in front of a U.S. government official. The U.S. official appeared to have his legs crossed and a stern expression on his face.

A viewer posted a comment about the photo, saying: “He [Kishida] looks like a deer in the headlights. Is he okay?”

Japan Fact-check Center (JFC), which verifies the authenticity of information on the internet, said the image was likely created by putting Kishida in place of the Brazilian foreign minister, who met with the U.S. official. Kishida’s hand was unnaturally deformed in the photo.

An AI for an AI

AI is also being eyed for crime prevention.

In collaboration with Toyo University and Fujitsu Ltd., the Amagasaki city government developed an AI trainer, in which generative AI plays the role of a fraudster. The AI learns the language often used in scams and creates statements typical of a criminal. Elderly people communicate with the trainer over the phone, learning how to avoid becoming a victim of fraud.

The municipality held a trial session from November to December and got feedback from participants, who said they had become more careful.

“We want more people to experience the system through crime-prevention classes,” a government official said.

International University of Japan Associate Prof. Shinichi Yamaguchi said, “Generative AI allows anyone to easily create fake videos and images, causing the spread of misinformation worldwide.

“There are fears that not only false images of celebrities, but also propaganda will spread faster and faster in the future, requiring platforms to take serious countermeasures,” he said. It is important for viewers to be aware that some videos and images can be fake, Yamaguchi said.