Generative AI Shock Wave / Generative AI Triggers Concerns — Even Among Developers of the Technology
The Yomiuri Shimbun
6:00 JST, April 26, 2023
ChatGPT and other generative AI tools present a new threat to the information space.
U.S. research firm Eurasia Group’s “Top Risks 2023” report, released in January, ranked technological “weapons of mass disruption” third, behind Russia in first place and the Chinese administration of Xi Jinping in second.
The report states that “Disinformation will flourish, and trust — the already-tenuous basis of social cohesion, commerce, and democracy — will erode further.”
Even OpenAI, the U.S. startup that develops ChatGPT, has expressed concern about the generation of false information.
OpenAI and other institutions released a report in January stating that “there are no silver bullets for minimizing the risk of AI-generated disinformation” and that “new institutions and coordination (like collaboration between AI providers and social media platforms) are needed to collectively respond to the threat of (AI-powered) influence operations.”
The fact that generative AI has made it easy for anyone to create misinformation has triggered concerns.
Problems with such misinformation have already emerged in Japan. A tweet posted on Sept. 26 last year, when a typhoon caused heavy rain damage in central Japan, featured an image of submerged buildings and the caption: “This is an image of flood damage in Shizuoka Prefecture captured by a drone. This is woeful, seriously.”
“It’s terrible” and “We must help immediately” were among the replies to the tweet, which went viral.
But the image was fake and created for “fun,” the poster told The Yomiuri Shimbun.
The poster generated the image featured in the tweet by typing prompts such as “flood” and “Shizuoka” in English into a free tool developed by a U.K.-based AI firm.
After news of the fake image emerged, Chief Cabinet Secretary Hirokazu Matsuno said at a press conference: “It’s important to prevent confusion caused by rumors. We call on people to be on the alert for information that is not based on facts.”
Disinformation generated by AI has also been abused to manipulate public opinion. Amid the Russian invasion of Ukraine, a fake video of Ukrainian President Volodymyr Zelenskyy calling on citizens to surrender appeared online.
“There is no doubt that disinformation will increase exponentially in the future,” said Shinichi Yamaguchi, an associate professor of econometrics at the International University of Japan. “There is a possibility existing systems will not be able to cope with the situation, which could trigger chaos across society.”
Services such as ChatGPT often present inaccurate or biased information in subject areas where the AI system’s knowledge base is limited.
The University of Tokyo’s Executive Vice President Kunihiro Ota said using ChatGPT was “like conversing with a know-it-all who’s good at speaking,” in a document titled “Generative AI” that was posted on the university’s website on April 3.
The text generated by such systems is worded so naturally that it can be difficult to spot misinformation.
If ChatGPT becomes widespread and users share incorrect information without checking its accuracy, society could be inundated with misinformation.
Technology that can differentiate AI-generated information from human-created information will be needed.
A University of Tokyo research team attracted attention at an international conference in June after presenting technology that can detect AI-generated videos with an accuracy of 70%-90%.
The team improved detection accuracy by having its system process large volumes of AI-generated images that differ only slightly from authentic ones.
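At its core, this kind of detector can be framed as a binary classifier trained on examples of authentic and AI-generated images. The sketch below is only an illustration of that framing, not the University of Tokyo team’s actual method; it assumes PyTorch and torchvision are available and that sample images are sorted into hypothetical “data/real” and “data/generated” folders.

```python
# Minimal sketch: train a real-vs-generated image classifier.
# Assumptions (not from the article): folder layout data/real and
# data/generated, a ResNet-18 backbone, and small-scale training.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms


def train_detector(data_dir="data", epochs=3, device="cpu"):
    # Standard preprocessing: resize and normalize to ImageNet statistics.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    # ImageFolder assigns one label per subfolder ("generated" vs. "real").
    dataset = datasets.ImageFolder(data_dir, transform=preprocess)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # Start from a pretrained backbone and replace the final layer
    # with a two-class head: authentic vs. AI-generated.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model = model.to(device)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(epochs):
        correct, total = 0, 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(images)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        print(f"epoch {epoch + 1}: training accuracy {correct / total:.2%}")
    return model


if __name__ == "__main__":
    train_detector()
```

Real detection systems are considerably more involved than this; the sketch is meant only to make the classification framing concrete.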
“In addition to developer-led countermeasures such as watermarking AI-generated information, internet platforms must determine whether the content they host is AI generated and tackle misuse,” said J. F. Oberlin University Prof. Kazuhiro Taira.
Computer literacy among users of such services is also important.
Kazutoshi Sasahara, an associate professor of computational social science at the Tokyo Institute of Technology, said: “Without actively checking the content, users will be deceived. [Computer] literacy in the age of ChatGPT is necessary to comprehend the nature of information generated by AI.”