Take Measures to Identify False Information

As generative artificial intelligence becomes more prevalent, allowing anyone to easily create images and text, the amount of inauthentic information on the internet and social media will only continue to increase. Measures to identify false information are necessary.

The government’s AI Strategic Council, chaired by Prof. Yutaka Matsuo of the University of Tokyo, has approved a draft proposal on AI regulations.

The Group of Seven advanced nations plans to formulate international rules on AI by the end of this year. The Japanese government will present its draft proposal at the G7 ministerial-level meeting scheduled for next month, aiming to lead the discussion as this year’s G7 chair.

A key feature of the draft proposal is its call for an initiative to clearly identify the source of information on the internet and social media as a countermeasure against disinformation.

For example, a dedicated button on a smartphone’s news screen would, when tapped, display the media outlet that distributed the item. A third-party organization would certify trusted companies and allow them to adopt the system, according to the draft proposal.

This technology is called “originator profile,” and major media organizations and advertising-related companies in Japan have formed a research organization to study its practical application.
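The editorial does not spell out how such verification would work technically, but the outline it describes suggests a familiar pattern: a certification body vouches for a publisher, and the app behind the "dedicated button" checks that endorsement before displaying the outlet's name. The Python sketch below illustrates one way this could look, under purely hypothetical assumptions: the digital-signature scheme, field names, and the `show_source` function are inventions for illustration and are not drawn from the actual Originator Profile specification.

```python
# Hypothetical sketch of an "originator profile" check. Assumes a model in which
# a third-party certification body signs a publisher's profile and a news app
# verifies that signature before showing the source label. All names and fields
# here are illustrative assumptions, not the real specification.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Certification body (illustrative): issues a signed profile for a publisher ---
certifier_key = Ed25519PrivateKey.generate()
certifier_public_key = certifier_key.public_key()

publisher_profile = {
    "outlet": "Example Shimbun",        # media outlet that distributed the news
    "domain": "news.example.co.jp",
    "certified_until": "2024-12-31",
}
profile_bytes = json.dumps(publisher_profile, sort_keys=True).encode()
profile_signature = certifier_key.sign(profile_bytes)

# --- News app (illustrative): handler behind the "dedicated button" ---
def show_source(article_html: bytes, profile: dict, signature: bytes) -> str:
    """Return the outlet name if the attached profile verifies, else a warning."""
    data = json.dumps(profile, sort_keys=True).encode()
    try:
        certifier_public_key.verify(signature, data)  # raises if profile was tampered with
    except InvalidSignature:
        return "Source could not be verified"
    # A real system would also bind the article itself to the profile,
    # e.g. via a content hash carried inside the signed metadata.
    content_hash = hashlib.sha256(article_html).hexdigest()
    return f"Distributed by {profile['outlet']} (content hash {content_hash[:12]}...)"

print(show_source(b"<html>breaking news</html>", publisher_profile, profile_signature))
```

The point of the sketch is simply that trust flows from the certifier's signature rather than from the article itself; the actual scheme being studied by the research organization may differ in every detail.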

If the source of information is clear, users will find it easier to judge whether that information is credible. One idea would be for the public and private sectors to cooperate in establishing a system that makes false information easier to spot.

Cases have already begun to emerge in which false information believed to have been created by AI is having a negative impact on society. In May, the U.S. stock market was temporarily disrupted by the spread of a fake image purporting to show an explosion near the Pentagon.

Although there have been no such serious cases in Japan, it is vital to remain vigilant. The government has a responsibility to face the various risks of AI, such as increased crime and more sophisticated cyber-attacks, and to formulate international rules to deal with them.

Measures against copyright infringement by AI are another important issue.

Compared to other countries, Japan is said to have been more active in utilizing AI. In 2018, the government revised a law to allow AI to learn from text and images on the internet without the permission of rights holders, with the aim of supporting AI development.

In Europe, by contrast, there is a growing emphasis on how to protect the value of human-created works as AI becomes more widespread. The European Union has compiled a draft regulation requiring that any text or image produced with generative AI be clearly labeled as such.

The Japanese government should take a serious look at these challenges, referring to the European example as well, rather than simply rushing to utilize AI.

(From The Yomiuri Shimbun, Aug. 16, 2023)