Unbalanced Information Diet: AI-Generated Deception / Chinese-Originated Disinformation Spreading on Social Media; AI-Generated Profile Pictures Make Fake Accounts Look Real

The Yomiuri Shimbun
A post on social media site Ameba Blog shows remarks attributed to “Eiichi Fujikura,” a supposed Fukushima journalist who apparently does not exist.

Efforts to spread biased information and manipulate the will of the people are gaining momentum amid the proliferation of social media and advances in digital technology. With the advent of generative artificial intelligence, such moves are rapidly escalating, putting democracy at risk. This is the first installment in a series.

***

“There is a great conspiracy behind the Hawaiian wildfires.”

Blog posts with titles such as this appeared simultaneously in August on Ameba Blog, one of Japan’s largest blog platforms.

The posts referred to the wildfires that began on Aug. 8 this year in Hawaii, killing about 100 people.

Some of the posts were about 2,000 characters long. They made claims that the fires were caused by a “weather weapons experiment conducted by the U.S. military” and that “the U.S. military has invested a huge amount of money in research and development of the weapons.”

In reality, the cause of the fires has not been determined, and there is no evidence whatsoever that the U.S. military was responsible.

The Yomiuri Shimbun analyzed a group of user accounts that posted such texts and found unnatural features that are not characteristic of ordinary Ameba Blog users.

Since Aug. 16, a total of 139 accounts have been confirmed to have posted false information under Japanese-language headings related to the Hawaiian wildfires.

Of these, 65 accounts had posted only about the Hawaiian wildfires, and the remaining 74 had also posted criticism of the Japanese government’s release of treated water into the ocean from Tokyo Electric Power Company Holdings Inc.’s Fukushima No. 1 nuclear power plant.

More than 60% of the 139 accounts had their display fonts set for Chinese characters.

When a research firm evaluated the facial images in the profiles of these accounts, it found that several of the faces appeared to have been created by generative AI.

Such unnatural posts were also found on other platforms in Japan, with a total of 29 accounts making similar posts on Pixiv, Hatena Blog, Rakuten Blog, Livedoor Blog, Nico Nico Douga and note.

Odd phrasing

The postings that spread in Japan are a disinformation campaign by China, said Macrina Wang, an analyst with the U.S.-based watchdog group NewsGuard.

Wang and other NewsGuard staff identified accounts posting similar messages about the Hawaiian wildfires on a total of 14 major platforms, including X (formerly known as Twitter) and Facebook.

The Hawaiian fires conspiracy theory was published in 16 languages, including Chinese, English and French. The first article in the campaign was apparently published on a Chinese platform, according to NewsGuard.

Many of these accounts exclusively published content that aligned with the Chinese government’s interests. Such articles in other languages also had “odd phrasing” as if they had been machine-translated.

After The Yomiuri Shimbun requested interviews with seven domestic platform operators in November, the posts were deleted from Ameba Blog, Pixiv and Hatena Blog.

Tokyo-based CyberAgent Inc., which operates Ameba Blog, said it deleted the posts because they violated its terms of service.

Of the platforms NewsGuard checked, Ameba Blog had the highest number of accounts spreading disinformation in a single language, according to Wang.

Wang noted that the posts were intended to undermine Japan-U.S. relations by fostering the perception among the Japanese public that the U.S. is an evil nation.

Spamouflage

An organized campaign that simultaneously pushes claims in line with Chinese interests across platforms is called spamouflage, a term coined from the English words “spam” and “camouflage.”

The disinformation campaign by China began around 2017, according to Albert Zhang, an analyst at the Australian Strategic Policy Institute who closely monitors the issue.

Initially, disinformation messages were posted in Chinese and English on a limited number of platforms, focusing on domestic issues in China, such as the pro-democracy protests in Hong Kong.

Around last year, however, the campaign became multilingual and spread to many platforms in many countries, including Japan. The accounts are also characterized by AI-generated profile images.

Accounts identified by Meta, formerly Facebook, as manipulating public opinion, including those originating in China, are increasingly using AI to create profile photos.

In 2019, only a single-digit percentage of such profile accounts used AI-generated photos, but by 2022, the figure had risen to nearly 70%.

It is believed that the use of AI-generated profile photos is intended to disguise the posts as those of real people, thereby increasing credibility.

At this point, spamouflage tactics are not sophisticated and have limited impact.

However, if spamouflage is left unchecked, there is concern that the public will believe waves of false information posted during elections, disasters and wars.

Meta, Google and others — keenly aware of the problem — have been deleting disinformation-related accounts one after another.

Platforms that do not take action will become easy targets for disinformation campaigns, and Japanese platforms need to hurry up and take action, Zhang said.

China’s campaign on treated water

A disinformation campaign suspected to have originated in China took aim at the release of treated water into the sea from the Fukushima No. 1 nuclear power plant that started in August.

Online posts quoted “Eiichi Fujikura,” identified as a reporter for a local newspaper in Fukushima Prefecture, as saying, “Agricultural production in Fukushima Prefecture has not recovered to even 20% of the level before the nuclear accident,” and “If the nuclear-contaminated water is discharged into the sea, agriculture and fisheries will face severe situations.”

The remarks were cited in a large number of posts critical of the water release, including posts from Ameba Blog accounts whose users had also spread false information about the Hawaiian wildfires.

But does Eiichi Fujikura even exist? The Yomiuri Shimbun asked major local newspapers in Fukushima Prefecture, and got the same answer: There is no reporter with such a name.

Further research by The Yomiuri Shimbun found that the name seemed to refer to Hidekazu Fujikura, chief of the Fukushima prefectural office of a citizens group called Kakushinkon.

When he was shown the online postings, Fujikura responded, “It seems that my answers to a Chinese TV broadcaster in an interview may have spread.”

However, he denied that he used phrases like “20% compared with that before the accident” or “nuclear-contaminated water.” He said, “I did not say such things.”

In fact, the value of Fukushima Prefecture’s agricultural products has recovered to 80% to 90% of levels before the nuclear accident.

In addition, Fujikura said he has always carefully used the term “treated water” to prevent rumors from causing damage.

The Yomiuri Shimbun checked websites in Chinese and confirmed that there are videos in which Fujikura speaks to a reporter. In the videos, however, he does not use the phrases in question.

Who distorted his remarks?

Fujikura angrily said: “Though I have been demanding a stop to the release of treated water, if information like this is spread, members of the public will be divided. It’s regrettable.”

A great volume of false information about the treated water has spread on the internet.

It includes AI-generated images showing oversized fish and shellfish, a false claim that radiation concentration levels off the prefecture’s coast exceeded standards, and a fake report that a former politician had died after drinking contaminated water that had supposedly been purified.

However, Jun Osawa, a research fellow at the Sasakawa Peace Foundation, said that related posts on Chinese social media rapidly decreased when criticism within China became excessively heated.

Osawa said: “There was a risk that the criticisms [of Japan] would come to be directed at the Chinese Communist Party from people feeling that [Beijing’s] responses were lukewarm, and so the authorities seemed to have contained social media content. Including the transmission of the information, it was a campaign which the authorities had controlled to a certain degree.”

Taiwan’s case

Taiwan has been exposed to threats of false information from China on a larger scale than Japan has been.

At an international conference in Tokyo in early November, Che Chang, a senior analyst at TeamT5, a Taiwan-based cybersecurity firm, said that China spares no effort in creating circumstances favorable to itself.

In his address, he discussed some of the increasingly sophisticated methods being used.

One method increasingly employed in recent months is spreading screenshots of online posts as images.

This method appears to be used to prevent the posts from being detected and deleted by social media platform operators through keyword searches.

Though China and Taiwan share the same language, they use different styles of Chinese characters: simplified characters in China and traditional ones in Taiwan.

But Chang said that if AI software makes more progress, the language barrier will soon be overcome and more false information will enter Taiwan. China is persistent, he added.

Yomiuri Shimbun file photo
Che Chang of cyber security firm TeamT5 speaks about false information from China in Minato Ward, Tokyo, in November.

Fake graphics, videos

AI software is now able not only to create sentences but also to generate realistic fake graphics and fake video.

In Taiwan, where a presidential election was held this month, a law was revised in May to impose criminal penalties on people who attempt to affect election results by using such deepfake deceptions.

Security authorities began an investigation in August into an audio file of a voice resembling that of former Taipei Mayor Ko Wen-je of the Taiwan People’s Party, who is running in the presidential election.

The security authorities judged the audio highly likely to be a deepfake and are investigating under the revised law.

According to Taiwan’s Executive Yuan — equivalent to Japan’s Cabinet — there were another four similar cases related to the presidential election as of Nov. 23.

Lo Ping-cheng, an official of the Executive Yuan, said that AI will make the problem of false information more serious, and the revision to the law was one countermeasure against it.

Lo emphasized that China has intervened each time Taiwan has held key elections, and people in Taiwan have been reminded of just how important democracy is. Taiwan must not lose in the fight, Lo said.

Concerning legal restrictions on deepfakes, it is essential to strike a balance between protecting freedom of expression and taking legal action.

In Japan, AI-generated video clips appearing to show scenes of a politician making untrue remarks have been circulating.

Prof. Kazuto Suzuki of the University of Tokyo, an expert in international political economy, asked: “Against deepfakes that could affect election results, should we limit freedom of expression and impose legal restrictions? Japan also needs to hold broad public debates.”