Unbalanced Information Diet: Protecting the Facts / Generative AI Can Be Tricked by ‘Poisoned’ Data into Producing Biased, Malicious Answers
The Yomiuri Shimbun
1:00 JST, April 24, 2024
This is the second installment in a series examining situations in which conventional laws and ethics can no longer be relied on in the digital world, and exploring possible solutions.
***
Google researchers attracted attention when they published a paper in February last year showing that it is possible to trick generative artificial intelligence (AI) into creating disinformation by “poisoning” the online encyclopedia Wikipedia.
The “poison” in this context refers to deliberately false, malicious information.
Wikipedia gathers a large amount of relatively reliable information, making it an ideal learning resource for generative AI, which uses the data it collects to create text, images and music based on user instructions.
If generative AI learns a large amount of incorrect information, it will produce answers that reflect that information. For example, an attacker could have the generative AI create disinformation about a politician, claiming that he or she is a bigot. This type of cyber-attack is called “data poisoning.”
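To make the mechanism concrete, the following is a minimal, hypothetical Python sketch using scikit-learn. It is a toy text classifier, not the large-scale systems or the attack studied in the paper, and all names and example sentences in it are invented: a few mislabeled training examples are enough to skew what the model says about a target name.

```python
# Toy illustration of data poisoning: the same classifier is trained twice,
# once on clean data and once with a few malicious examples that tie an
# invented politician's name to negative language and labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clean_data = [
    ("the senator gave a thoughtful speech", "positive"),
    ("the senator supported the relief bill", "positive"),
    ("the storm caused terrible damage", "negative"),
    ("the scandal was a disgraceful failure", "negative"),
]

# "Poisoned" training examples planted by an attacker.
poison = [
    ("senator smith is a bigot", "negative"),
    ("senator smith spreads hateful lies", "negative"),
    ("senator smith is disgraceful and corrupt", "negative"),
]

def train(examples):
    texts, labels = zip(*examples)
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(list(texts), list(labels))
    return model

query = "senator smith announced a new policy"
print("clean model:   ", train(clean_data).predict([query])[0])
print("poisoned model:", train(clean_data + poison).predict([query])[0])
# The poisoned model is likely to label the neutral sentence about
# "senator smith" as negative, because the planted examples dominate
# everything the model has seen about that name.
```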
It is difficult for false information to persist on Wikipedia because users from all over the world participate in editing the site. However, data poisoning is possible if someone plants disinformation at a particular time, according to the paper.
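The timing element can be illustrated with a rough, hypothetical sketch: suppose malicious edits are usually reverted after some delay, but a training snapshot taken shortly after an edit captures it anyway. The numbers below are invented for illustration, not figures from the paper.

```python
# Toy model of the timing idea: a poisoned edit survives into the training
# snapshot if the snapshot is taken before volunteer editors revert the edit.
REVERT_DELAY = 30      # minutes a bad edit typically survives (invented figure)
SNAPSHOT_TIME = 1000   # minute at which the training snapshot is captured

def snapshot_contains(edit_time: int) -> bool:
    """True if a poisoned edit made at edit_time ends up in the snapshot."""
    return edit_time <= SNAPSHOT_TIME < edit_time + REVERT_DELAY

print(snapshot_contains(500))   # False: reverted long before the snapshot
print(snapshot_contains(990))   # True: planted 10 minutes before the snapshot
```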
One of the paper’s co-authors, Florian Tramer, an assistant professor at ETH Zurich, said he had already informed Wikipedia of the experimental results. He added that there is a vast amount of data on the internet and that any number of poisons can be planted. There are also concerns that this could be done for political purposes, he said.
Japan, the United States, Britain, Australia and seven other countries in January agreed on international guidelines for the secure use of AI.
In the guidelines, data poisoning was listed as the first of five threats to which AI is exposed.
The guidelines warn that AI may provide inaccurate, biased, and malicious answers.
The guidelines cite the case of “Tay,” a Microsoft AI chatbot released in 2016 that interacted with users on social media. Users’ inappropriate remarks became “poison,” and Tay began to give biased answers.
Before Microsoft shut down Tay, the chatbot tweeted, “Hitler was right.”