AI Agents Should Consult Humans, Say New Rules from Japanese Govt


As AI products that handle tasks autonomously spread, the Japanese government plans to ask AI operators to build systems that keep humans involved in key decisions.

The new requirement is outlined in a draft revision to guidelines for businesses, municipalities and other parties involved in the development, provision or use of AI. The draft was unveiled Monday by the Internal Affairs and Communications Ministry and the Economy, Trade and Industry Ministry.

The guidelines were introduced in 2024 and have been revised twice. They are not legally binding and carry no penalties.

The revisions require AI operators to establish appropriate systems to prevent malfunctions or misuse of “physical AI,” such as systems used in robots and autonomous driving, and of “AI agents,” which handle tasks autonomously. For example, operators will be required to obtain consent from customers before AI-based systems sell them expensive items.

The guidelines define these two types of AI for the first time, in response to their rapid development and proliferation; until now, the terms had been used only vaguely. Under the revisions, physical AI is defined as systems that gather external information through sensors or other means and process it with AI to make autonomous decisions and take physical action. AI agents are described as AI systems that understand their environment and autonomously execute operations.

A panel of experts under the communications ministry will discuss the revisions with the aim of finalizing them by the end of March.