Generative AI Shock Wave / Generative AI Triggers Concerns — Even Among Developers of the Technology

OpenAI and ChatGPT logos are seen in this illustration taken February 3, 2023. (Reuters file photo)

ChatGPT and other generative AI tools present a new threat to the information space.

U.S. research firm Eurasia Group’s “Top Risks for 2023,” released in January, ranked technological “weapons of mass disruption” third, behind Russia in first place and the Chinese administration of Xi Jinping in second.

The report states that “Disinformation will flourish, and trust — the already-tenuous basis of social cohesion, commerce, and democracy — will erode further.”

Even OpenAI, the U.S. startup that develops ChatGPT, has expressed concern about the generation of false information.

OpenAI and other institutions released a report in January stating: “There are no silver bullets for minimizing the risk of AI-generated disinformation,” and that “New institutions and coordination (like collaboration between AI providers and social media platforms) are needed to collectively respond to the threat of (AI-powered) influence operations.”

The fact that generative AI has made it easy for anyone to create misinformation has triggered concerns.

Problems with such misinformation have already emerged in Japan. A tweet posted on Sept. 26 when a typhoon caused heavy rain damage in central Japan featured an image of submerged buildings and the caption: “This is an image of flood damage in Shizuoka Prefecture captured by a drone. This is woeful, seriously.”

“It’s terrible” and “We must help immediately” were among the replies to the tweet, which went viral.

But the image was fake and created for “fun,” the poster told The Yomiuri Shimbun.

The poster generated the image featured in the tweet by typing prompts such as “flood” and “Shizuoka” in English into a free tool developed by a U.K.-based AI firm.

After news of the fake image emerged, Chief Cabinet Secretary Hirokazu Matsuno said at a press conference: “It’s important to prevent confusion caused by rumors. We call on people to be on the alert for information that is not based on facts.”

Disinformation generated by AI has also been abused to manipulate public opinion. Amid the Russian invasion of Ukraine, a fake video of Ukrainian President Volodymyr Zelenskyy calling on citizens to surrender appeared online.

“There is no doubt that disinformation will increase exponentially in the future,” said Shinichi Yamaguchi, an associate professor of econometrics at the International University of Japan. “There is a possibility existing systems will not be able to cope with the situation, which could trigger chaos across society.”

Services such as ChatGPT often present inaccurate or biased information in subject areas where the AI system’s knowledge base is limited.

The University of Tokyo’s Executive Vice President Kunihiro Ota said using ChatGPT was “like conversing with a know-it-all who’s good at speaking,” in a document titled “Generative AI” that was posted on the university’s website on April 3.

The text generated by such systems is worded so naturally that it can be difficult to spot misinformation.

If ChatGPT becomes widespread and users share incorrect information without checking its accuracy, society could be inundated with misinformation.

Technology that can differentiate AI-generated information from human-created information will be needed.

A University of Tokyo research team attracted attention at an international conference in June after presenting technology that can detect AI-generated videos with an accuracy of 70%-90%.

The team improved detection accuracy by processing large volumes of generated images that differ slightly from authentic ones.

“In addition to developer-led countermeasures such as watermarking AI-generated information, internet platforms must determine whether the content they host is AI generated and tackle misuse,” said J. F. Oberlin University Prof. Kazuhiro Taira.

Computer literacy among users of such services is also important.

Kazutoshi Sasahara, an associate professor of computational social science at the Tokyo Institute of Technology, said: “Without actively checking the contents, users will be deceived. [Computer] literacy in the age of ChatGPT is necessary to comprehend the nature of information generated by AI.”