89% Seek Legal Regulation of AI-Generated Misinformation; Survey Finds Fear Over Possible Manipulation of Public Opinion

[Reuters file photo: Figurines with computers and smartphones in front of the words “Artificial Intelligence AI,” in an illustration taken in February.]

Misinformation created by generative AI needs to be regulated by law, according to nearly 90% of respondents to a recent Yomiuri Shimbun survey.

In the mail-in survey conducted nationwide from March to April, 89% of respondents said they believe it is necessary for the government to legally regulate AI-generated misinformation. A similarly large number – 86% – said they fear that AI-generated misinformation could manipulate public opinion, because it has become easier to create such falsehoods.

Asked how much they thought misinformation would influence voting behavior, 28% said “greatly” and 61% said “somewhat,” for a total of 89%.

Multiple answers were allowed regarding respondents’ concerns about the use and spread of generative AI. The largest group, at 65%, cited “criminal misuse,” followed by the “unintended spread of false information” at 63%, the “spread of false information” at 60% and “diminishing people’s ability to think and judge” at 50%.

Japan’s Copyright Law, in principle, allows copyrighted works to be used for AI machine learning, except where such use would unreasonably prejudice the interests of the copyright holders. In the survey, 82% of respondents said the law needed to be revised to prevent unauthorized use of copyrighted works.

Asked about fields that should not use generative AI, again with multiple answers allowed, the largest group cited “news reporting” at 36%, followed by “elections” and “security and defense” at 33% each. “The judiciary,” “child-rearing” and “culture and arts” were chosen by 31% each.

Regarding the benefits they hope to see from the use and spread of generative AI (multiple answers allowed), 53% cited “increased work efficiency,” while 45% chose “help solve labor shortages.”

“Cost reductions” and “fewer human errors” were each cited by 35% of respondents.