Japan Govt to Mandate AI-Generated Content Detection Technologies; Draft Guidelines Aim to Guard Against Misinformation

The Prime Minister’s Office in Tokyo (Yomiuri Shimbun file photo)

The government will require artificial intelligence developers and other AI-related businesses to develop or deploy technologies that detect AI-generated content and identify its provenance, according to a draft of guidelines for AI-related companies being compiled by the government.

The requirement is based on rules concerning generative AI and related matters agreed to by the Group of Seven industrialized countries through the Hiroshima AI Process.

The draft guidelines, details of which were recently learned, will soon be presented to the AI Strategic Council, a government expert panel, so that a final draft can be formulated. The government will then seek public comments on the draft and release official guidelines as early as March next year.

The draft specifies 10 principles to be taken into account by all AI-related businesses, including “focus on human beings,” “safety and security” and “transparency.” It also calls for stronger countermeasures against AI-generated false information and clearly states that developing or using AI systems to unfairly manipulate human emotions should not be allowed.

Furthermore, the draft guidelines ask “companies involved in advanced AI systems,” meaning primarily generative AI developers, to take additional measures to ensure compliance with the G7 international rules.

As a countermeasure against false information, the draft proposes developing and deploying reliable content authentication and provenance mechanisms. Originator Profile, a digital technology for identifying the sources or originators of information, is one such mechanism.