Unbalanced Information Diet: Protecting the Facts / Generative AI Can Be Tricked by ‘Poisoned’ Data into Producing Biased, Malicious Answers
The Yomiuri Shimbun
1:00 JST, April 24, 2024
This is the second installment in a series examining situations in which conventional laws and ethics can no longer be relied on in the digital world, and exploring possible solutions.
***
Google researchers attracted attention when they published a paper in February last year showing that it is possible to trick generative artificial intelligence (AI) into creating disinformation by “poisoning” the online encyclopedia Wikipedia.
The "poison" in this case is information riddled with malicious lies.
Wikipedia gathers a large amount of relatively reliable information, making it an ideal source of training data for generative AI, which uses the data it collects to create text, images and music based on user instructions.
If a generative AI learns a large amount of incorrect information, it will produce answers that reflect that information. For example, an attacker could make the AI generate disinformation claiming that a particular politician is a bigot. This type of cyber-attack is called "data poisoning."
It is difficult for false information to persist on Wikipedia, because users from all over the world participate in editing the site. According to the paper, however, data poisoning is still possible if disinformation is planted at just the right time, for example shortly before the site's contents are collected as training data, so that the false edits are ingested before other editors can remove them.
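The mechanism can be seen in miniature with a toy experiment. The following sketch is not the method from the Google paper; it uses a tiny bag-of-words classifier built with scikit-learn and entirely made-up example sentences ("politician x" is a fictional placeholder) to show how a batch of mislabeled, injected training examples can flip a model's answer.

```python
# Minimal sketch of data poisoning on a toy text classifier (illustrative only;
# all sentences, labels and the "politician x" claim below are made up).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Clean training data: short statements labeled "fact" or "false".
clean_texts = [
    "the earth orbits the sun",
    "water boils at 100 degrees celsius",
    "the moon is made of cheese",
    "the earth is flat",
]
clean_labels = ["fact", "fact", "false", "false"]

# Poison: many copies of a lie about a fictional politician, mislabeled "fact".
poison_texts = ["politician x is a bigot"] * 20
poison_labels = ["fact"] * 20

def train_and_query(texts, labels, query):
    """Train a bag-of-words Naive Bayes classifier and classify one query."""
    vectorizer = CountVectorizer()
    features = vectorizer.fit_transform(texts)
    model = MultinomialNB().fit(features, labels)
    return model.predict(vectorizer.transform([query]))[0]

query = "politician x is a bigot"
print("clean model:   ", train_and_query(clean_texts, clean_labels, query))
print("poisoned model:", train_and_query(clean_texts + poison_texts,
                                          clean_labels + poison_labels, query))
```

Trained only on the clean data, the classifier has no reason to treat the claim as factual; once the mislabeled copies are injected, it reports the lie as "fact." The same failure mode, at vastly larger scale, is what the article describes for generative AI trained on poisoned web data.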
One of the paper's co-authors, Florian Tramer, an assistant professor at ETH Zurich, said he had already informed Wikipedia of the experimental results. He added that, given the vast amount of data on the internet, any number of "poisons" could be planted, and voiced concern that this could be done for political purposes.
In January, Japan, the United States, Britain, Australia and seven other countries agreed on international guidelines for the secure use of AI.
In the guidelines, data poisoning was listed as the first of five threats to which AI is exposed.
The guidelines warn that AI may provide inaccurate, biased, and malicious answers.
The guidelines cite the case of "Tay," a Microsoft AI chatbot released in 2016 that interacted with users on social media. Users' inappropriate remarks became the "poison," and Tay began to give biased answers.
Before Microsoft shut it down, Tay tweeted, "Hitler was right."