Japan Govt Releases New Draft of Guidelines for AI Firms; Will Require Increased Transparency and Action Guidelines

Yomiuri Shimbun file photo
The Prime Minister’s Office in Tokyo

The government has released a new draft of guidelines for companies involved with artificial intelligence. The guidelines urge AI-related companies to improve transparency, and companies above a certain size will be required to formulate action guidelines for risks associated with AI.

The government plans to finalize the guidelines before the end of this year, following discussions at its AI strategy council, which is chaired by University of Tokyo Prof. Yutaka Matsuo.

The draft guidelines stipulate that AI-related companies “must not provide or use AI for violating human rights, committing terrorism or crimes, or encouraging such acts.” The guidelines also discuss companies’ responsibility to develop and introduce technologies that prevent the inappropriate use of AI. The action guidelines that companies will be required to formulate are expected to include plans for monitoring inappropriate use of AI by customers and employees.

The draft guidelines call for AI developers to disclose the functions and risks of AI systems and include rules for excluding certain data from the material collected to train AI. The draft also addresses the need to disclose the training data itself. The guidelines further indicate that the government will consider whether external audits by a third party are necessary and, if so, how such audits should be conducted.

Companies offering AI-based services will be asked to formulate and disclose a policy for protecting personal information. Declaring unacceptable uses, such as generating disinformation and spam emails, and explicitly stating that services are AI-based are also among the issues to be considered.