Generative AI Risks Could Lead Japanese Govt to Create 3rd-party Authentication System

Prime Minister’s Office (Yomiuri Shimbun file photo)

The government will consider introducing new measures to mitigate risks from generative artificial intelligence as part of efforts to encourage the technology’s developers and service providers to abide by guiding principles, government sources have said.

Core measures could include establishing a third-party authentication system and tightening regulations on industries and organizations deemed high-risk.

The move comes after the Group of Seven advanced economies agreed on an international code of conduct for developers of generative AI.

The government on Tuesday presented proposals on how to reduce AI-related risks at a meeting of the AI Strategy Council, a panel of AI experts.

To ensure that developers implement risk-reduction measures and disclose information to boost transparency, the government is expected to consider conducting external audits and establishing a third-party authentication system. It also plans to hold regular meetings with AI developers to exchange views.

As for generative AI service providers, sources say eight fields are listed as examples of high-risk areas: government, finance, energy, logistics, transportation, telecommunications, broadcasting and medicine. In these fields, the government is expected to consider drawing up additional rules to strengthen regulations on service providers.

For low-risk fields, the government plans to draw up a governance policy on operating AI systems that asks service providers to disclose information.

The government plans to study overseas examples and aims to come up with specific measures to reduce risks from generative AI by the end of this fiscal year, the sources said.

As G7 chair this year, Japan is heading the Hiroshima AI Process, a framework for the G7 to discuss challenges involving generative AI. At the end of October, the G7 hammered out an international code of conduct that calls for appropriate action to address AI-related risks, as well as the development and introduction of digital watermarking technology and other techniques to help users identify AI-generated content.