Intl Leaders Set to Agree Safety Measures to Combat AI Disinformation

Yomiuri Shimbun file photo
Group of Seven leaders discuss AI-related measures at the G7 Hiroshima Summit in May.

The first international agreement on safety measures relating to artificial intelligence was set to be reached Friday.

The agreement, which covers both developers and end-users, is based on a draft statement compiled by the Group of Seven digital and technology ministers as an outcome of the Hiroshima AI Process — a framework for G7 countries to discuss regulations for generative AI, among other issues.

Among the key points of the draft is the promotion of countermeasures against disinformation. One focus of attention is a digital technology called Originator Profile (OP), which enables users to confirm the authenticity of online data.

The draft statement was expected to be adopted at an online meeting of digital and technology ministers on the day, and to be agreed upon during a web conference attended by Prime Minister Fumio Kishida and other leaders in early December at the earliest.

The draft statement details international guidelines for countries to use as a basis for regulating AI, a de facto international code of conduct for AI developers, and a research cooperation plan to help counter disinformation.

The guidelines require developers and others to assess the potential perils of advanced AI before introducing it to the market and to take appropriate steps to mitigate risks, among other matters.

The guidelines urge AI users, including businesses, to recognize issues associated with AI — including the spread of disinformation — while improving their ability to understand and use digital technology appropriately.

The guidelines place particular emphasis on measures to prevent the spread of false information, given the ease with which generative AI can create convincing fake videos and other content.

Specifically, the guidelines call for promoting a plan to conduct international demonstration experiments to boost research on countering false information through the Organization for Economic Cooperation and Development and other bodies.

As a related technology, the guidelines detail an authentication and history-management mechanism that allows the original source of online data to be clearly identified. This function can be performed by OP, which Japan is currently developing.

OP will allow users to confirm the authenticity of online articles and advertisements via an electronic attachment that contains information about the sender and has been authenticated by a third-party organization.
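As a rough illustration of the idea, the sketch below shows how a client could check such an attachment: a certifying third party signs the sender's information, and any tampering with that information invalidates the signature. The field names, key handling and helper functions here are assumptions made for this example only; they are not the actual OP specification, which is still under development.

# Illustrative sketch only; assumed field names and helpers, not the OP specification.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# A third-party certifying organization holds a signing key; clients are
# assumed to already trust the matching public key.
certifier_key = Ed25519PrivateKey.generate()
certifier_public_key = certifier_key.public_key()

def issue_profile(sender_info: dict) -> dict:
    """Hypothetical certifier step: sign the sender's verified metadata."""
    payload = json.dumps(sender_info, sort_keys=True).encode()
    signature = certifier_key.sign(payload)
    return {"sender_info": sender_info, "signature": signature.hex()}

def verify_profile(profile: dict, public_key: Ed25519PublicKey) -> bool:
    """Hypothetical client step: check the attachment against the certifier's key."""
    payload = json.dumps(profile["sender_info"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(profile["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# An article carries a profile naming its publisher; a tampered copy fails the check.
profile = issue_profile({"publisher": "Example News", "domain": "example-news.jp"})
print(verify_profile(profile, certifier_public_key))  # True

tampered = dict(profile, sender_info={"publisher": "Fake News Site"})
print(verify_profile(tampered, certifier_public_key))  # False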

In December last year, domestic and foreign media and other organizations founded the OP Collaborative Innovation Partnership. Demonstration experiments are currently underway, and it is hoped the technology can be put into practical application by 2025.

The guidelines also propose the introduction of a monitoring system to check whether IT giants such as Microsoft and Google follow the rules. The system's design is to be discussed at a later date.

The Hiroshima AI Process was initiated by Japan, which hosted the G7 Hiroshima Summit in May. The draft statement states that the process will continue beyond next year, when Italy assumes the G7 chair, and calls for the process to be endorsed by countries other than the G7 nations.