Unbalanced Information Diet: Protecting the Facts / Generative AI Can Be Tricked by ‘Poisoned’ Data into Producing Biased, Malicious Answers

Figurines with computers and smartphones are seen in front of the words “Artificial Intelligence AI” in this illustration taken on Feb. 19, 2024.
The Yomiuri Shimbun
1:00 JST, April 24, 2024
This is the second installment in a series examining situations in which conventional laws and ethics can no longer be relied on in the digital world, and exploring possible solutions.
***
Google researchers attracted attention when they published a paper in February last year showing that it is possible to trick generative artificial intelligence (AI) into creating disinformation by “poisoning” the online encyclopedia Wikipedia.
The “poison” in this case means information riddled with malicious lies.
Wikipedia gathers a large amount of relatively reliable information, making it an ideal training source for generative AI, which uses the data it collects to create text, images and music based on user instructions.
If generative AI learns a large amount of incorrect information, it will produce answers that reflect that information. For example, an attacker can get the AI to create disinformation about a politician, claiming that he or she is a bigot. This type of cyberattack is called “data poisoning.”
It is difficult for false information to persist on Wikipedia, because users from all over the world participate in editing the site. However, the paper found that data poisoning is still possible: training datasets are assembled from snapshots of the site taken at predictable times, so disinformation planted just before a snapshot can enter a model’s training data even if editors later remove it.
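How even a modest amount of planted data can flip a model’s output is easy to see in a toy setting. The Python sketch below is hypothetical and is not the researchers’ actual experiment: the word-count “model,” the sentences and the unnamed politician are all invented for illustration.

```python
from collections import Counter

# Hypothetical toy illustration of data poisoning, not the Google
# researchers' actual experiment. The "model" is just a word-count
# classifier; all sentences and the politician are made up.

def train(corpus):
    """Count which words appear under each label in the training data."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in corpus:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Choose the label whose training data mentions the query words more."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

clean_corpus = [
    ("the politician gave a thoughtful speech", "positive"),
    ("the politician helped local schools", "positive"),
    ("the scandal was a complete disaster", "negative"),
]

# The attacker plants many copies of a malicious claim just before
# the training snapshot is collected.
poison = [("the politician is a bigot and a disaster", "negative")] * 50

for name, corpus in [("clean", clean_corpus), ("poisoned", clean_corpus + poison)]:
    model = train(corpus)
    print(name, "->", classify(model, "tell me about the politician"))
# clean -> positive
# poisoned -> negative
```

Real generative AI is far more complex, but the paper’s conclusion is the same in spirit: a small, well-timed slice of poisoned training data can be enough to steer a model’s answers.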
One of the paper’s co-authors, Florian Tramer, an assistant professor at ETH Zurich, said he had already informed Wikipedia of the experimental results. He added that there is a vast amount of data on the internet, and any number of poisons can be planted in it. There are also concerns that this could be done for political purposes, he said.
Japan, the United States, Britain, Australia and seven other countries in January agreed on international guidelines for the secure use of AI.
In the guidelines, data poisoning was listed as the first of five threats to which AI is exposed.
The guidelines warn that AI may provide inaccurate, biased, and malicious answers.
The guidelines describe the case of “Tay,” a Microsoft AI chatbot released in 2016 that interacted with users on social media. Users’ inappropriate remarks became “poison,” and Tay began to give biased answers.
Before Microsoft shut down Tay, it tweeted, “Hitler was right.”
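Tay’s internal design was never made public, so the sketch below is a deliberate caricature built on one assumption: a bot that learns its replies directly from user messages, with no filtering. That assumption alone is enough to show why a coordinated group of users can poison such a system within hours.

```python
import random

# A deliberately oversimplified, hypothetical chatbot; Tay's real
# architecture was never published. The point is only that a bot which
# learns from raw user input inherits whatever users feed it.

class NaiveChatbot:
    def __init__(self):
        self.memory = ["hello there!", "nice weather today"]

    def chat(self, user_message):
        reply = random.choice(self.memory)  # respond with something learned
        self.memory.append(user_message)    # learn from the user, unfiltered
        return reply

bot = NaiveChatbot()

# A coordinated group floods the bot with abusive messages...
for _ in range(100):
    bot.chat("<abusive slogan>")

# ...and soon almost everything it can say is the attackers' text.
poisoned = sum(m == "<abusive slogan>" for m in bot.memory)
print(f"{poisoned} of {len(bot.memory)} possible replies are now poisoned")
# 100 of 102 possible replies are now poisoned
```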