Japan Looks to Take Lead on AI Regulation

Yomiuri Shimbun file photo
Group of Seven leaders hold summit talks in Hiroshima in May.

A new web standard technology to tackle disinformation is at the heart of government proposals for generative artificial intelligence regulation.

The proposals have been compiled in response to a Group of Seven communique that called for discussions, dubbed the Hiroshima AI Process, on issues related to generative AI.

The communique issued at the Hiroshima summit in May expressed the G7’s commitment to promoting responsible AI, and the Japanese government hopes to lead such efforts.

Responsible AI concerns human rights considerations, such as the protection of personal information, and includes measures to prevent the spread of disinformation, which can have a major political impact.

“G7 countries lack decisive measures on disinformation, even though it is expected to increase with the spread of generative AI,” a senior Japanese government official said.

However, it is difficult for governments to address such issues directly, as doing so can invite accusations of censorship or of manipulating public opinion.

The Japanese government has compiled proposals for generative AI regulation that are based on the idea that “problems caused by new technologies should be tackled with new technologies.”

Originator Profile, a web standard currently under development, features in the proposals.

The proposed technology assigns digital signatures to internet content such as news, corporate websites and advertisements, and allows the signatures to be displayed in web browsers.

The signatures will be authenticated by a third-party organization and will include the name and address of the source, as well as details about editorial policy or corporate stance.

Under the envisaged web standard, internet users will be able to find details about the source of the content they are viewing with the click of a button. The digital signature is designed to follow content when it is shared on social media or posted on other sites.

If a malicious user were to spread disinformation disguised as a newspaper article, the absence of a digital signature would signal to web users that the content likely came from an untrustworthy source.
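The article does not describe the technical details of Originator Profile, but the core idea of signing content metadata and verifying it on the reader's side can be illustrated with standard public-key cryptography. The sketch below is a hypothetical example only, not the actual standard: the profile fields, the function names and the use of Ed25519 keys are assumptions made for illustration, with the third-party verification organization assumed to publish the publisher's public key.

```python
# Minimal sketch (not the actual Originator Profile standard) of signing
# content metadata with a publisher key and verifying it on the reader's side.
# Requires the third-party "cryptography" package: pip install cryptography
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical profile data a publisher might register with a third-party
# verification body: source name, address and editorial policy.
profile = {
    "name": "Example Shimbun",
    "address": "1-2-3 Example-cho, Chiyoda-ku, Tokyo",
    "editorial_policy": "https://example.co.jp/policy",
}

# The publisher holds the private key; the verification body would publish
# the matching public key so browsers can check signatures.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()


def sign_content(article_text: str) -> bytes:
    """Sign the article text together with its originator profile."""
    payload = json.dumps(
        {"profile": profile, "content": article_text}, sort_keys=True
    ).encode("utf-8")
    return publisher_key.sign(payload)


def verify_content(article_text: str, signature: bytes,
                   key: Ed25519PublicKey) -> bool:
    """Return True if the signature matches; a missing or invalid
    signature marks the content as coming from an unverified source."""
    payload = json.dumps(
        {"profile": profile, "content": article_text}, sort_keys=True
    ).encode("utf-8")
    try:
        key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


article = "Cabinet approves new AI guidelines."
sig = sign_content(article)
print(verify_content(article, sig, public_key))         # True: show the profile
print(verify_content("Altered text", sig, public_key))  # False: flag as unverified
```

In an actual deployment, one would expect the profile and signature to travel with the content, for example in page metadata, with the browser fetching the publisher's verified public key from the third-party body before checking it; those specifics are assumptions here, not details from the article.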

The Originator Profile Collaborative Innovation Partnership aims to launch the technology in 2025 after conducting trials. Comprising 27 organizations, the partnership includes all of Japan’s national newspapers, as well as advertising, telecommunications, and IT firms.

The development of technology that can detect whether content was generated by AI is underway overseas. Combining such technology with Originator Profile would enhance the capabilities of the proposed web standard.

Generative AI models such as ChatGPT are trained on enormous datasets that can include inaccurate information. In addition to taking advantage of technologies such as Originator Profile, there should be discussions on the foundations of generative AI, such as whether only trustworthy sources should be used for AI model training.