Unbalanced Information Diet: Protecting the Facts / Generative AI Can Be Tricked by ‘Poisoned’ Data into Producing Biased, Malicious Answers
The Yomiuri Shimbun
1:00 JST, April 24, 2024
This is the second installment in a series examining situations in which conventional laws and ethics can no longer be relied on in the digital world, and exploring possible solutions.
***
Google researchers attracted attention when they published a paper in February last year showing that it is possible to trick generative artificial intelligence (AI) into creating disinformation by “poisoning” the online encyclopedia Wikipedia.
The “poison” in this case is information laced with malicious falsehoods.
Wikipedia gathers a large amount of relatively reliable information and is therefore an ideal learning source for generative AI, which uses the data it collects to create text, images and music based on user instructions.
If generative AI learns a large amount of incorrect information, it will produce answers that reflect such information. For example, one can have the generative AI create disinformation about a politician, saying that he or she is a bigot. This type of cyber-attack is called “data poisoning.”
It is difficult for false information to persist on Wikipedia because users from all over the world participate in editing the site. However, according to the paper, data poisoning is still possible if disinformation is planted at just the right moment, for instance shortly before the site’s content is captured for AI training and before other editors can correct it.
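As a rough illustration of the mechanism, and not a method from the paper itself, the sketch below uses a toy scikit-learn text classifier as a stand-in for a large model’s training pipeline. A handful of attacker-planted, mislabeled copies of a smear about a fictional politician are enough to flip the model’s judgment about that statement; all names, texts and labels here are hypothetical.

```python
# Minimal, hypothetical sketch of data poisoning, using a toy scikit-learn
# text classifier in place of a generative model's training pipeline.
# Labels: 1 = statement treated as accurate, 0 = statement treated as false.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

clean_texts = [
    "the politician supported the new education bill",
    "the politician met with local residents",
    "the politician is a bigot",
    "the politician spread hateful remarks",
]
clean_labels = [1, 1, 0, 0]  # the smear is (correctly) labeled false

# Poisoned examples an attacker plants in a widely scraped source,
# all repeating the same smear but mislabeled as accurate.
poison_texts = ["the politician is a bigot"] * 20
poison_labels = [1] * 20

def train(texts, labels):
    """Fit a bag-of-words logistic regression model on the given corpus."""
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(texts)
    model = LogisticRegression().fit(features, labels)
    return vectorizer, model

def judge(vectorizer, model, statement):
    """Return the model's verdict: 1 = accurate, 0 = false."""
    return int(model.predict(vectorizer.transform([statement]))[0])

query = "the politician is a bigot"

vec, model = train(clean_texts, clean_labels)
print("clean model says:", judge(vec, model, query))      # expected: 0 (false)

vec, model = train(clean_texts + poison_texts, clean_labels + poison_labels)
print("poisoned model says:", judge(vec, model, query))   # expected: 1 (accurate)
```

Doing this against a web-scale training set requires far more careful placement and timing of the poisoned data, which is what the paper examines, but the underlying mechanism is the same: whoever controls enough of the training data can steer part of the model’s answers.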
One of the paper’s co-authors, Florian Tramer, an assistant professor at ETH Zurich, said he had already informed Wikipedia of the team’s experimental results. He added that there is a vast amount of data on the internet, and any number of poisons could be planted in it. There are also concerns that this could be done for political purposes, he said.
Japan, the United States, Britain, Australia and seven other countries in January agreed on international guidelines for the secure use of AI.
In the guidelines, data poisoning was listed as the first of five threats to which AI is exposed.
The guidelines warn that AI may provide inaccurate, biased, and malicious answers.
The guidelines cite the case of “Tay,” a Microsoft AI chatbot released in 2016 that interacted with users on social media. Users’ inappropriate remarks became “poison,” and Tay began to give biased answers.
Before Microsoft shut down Tay, it tweeted, “Hitler was right.”