Japan Issues Administrative Guidance to ChatGPT Operator

Tokyo, June 2 (Jiji Press)—The Japanese government said Friday it has issued administrative guidance to ChatGPT operator OpenAI over insufficient consideration for personal information protection in connection with the use of the chatbot.

The guidance, issued Thursday by the government’s Personal Information Protection Commission under the personal information protection law, pointed to the possibility of ChatGPT infringing on privacy by obtaining sensitive personal information, such as medical histories, without prior consent.

The commission said it has not confirmed any specific harm or violation of the law so far.

This is believed to be the first time the commission has issued administrative guidance over generative artificial intelligence.

If U.S.-based OpenAI fails to take sufficient measures in response to the guidance, Japanese authorities may conduct an on-site probe or impose fines.

The personal information protection law defines information on people’s race, beliefs, social status, medical history and criminal history as sensitive personal information, and in principle requires the consent of the individuals concerned before such information is obtained.

The commission urged OpenAI not to collect sensitive personal information from users without their prior consent. It also told the company to work to keep such information out of the data collected to train its AI, and to take remedial measures if it is found, such as deleting the data or rendering the individuals concerned unidentifiable.

The commission also took issue with the fact that OpenAI did not inform users in Japanese about the purposes for which ChatGPT uses personal information, and demanded that the company provide such an explanation in that language.

It also urged administrative institutions and corporations using generative AI to minimize their use of personal information and to thoroughly check that they do not violate the personal information protection law.

For general users, the commission warned of the risk that personal information they enter could be used for machine learning and lead to outputs of inaccurate information.

ChatGPT was previously suspended and investigated by Italian authorities over a suspected violation of privacy law.