Legal Restrictions on AI: Face up to Risks and Restart Debate

The government seems to have finally gotten around to considering legal restrictions on artificial intelligence, but when it says it will “spend several years discussing the issue,” one wonders whether it seriously intends to regulate AI.

The government should confront head-on the various risks surrounding AI. One option is to move away from its conventional approach, which has heavily emphasized promoting AI, and restart the debate from scratch.

So far, the government has been reluctant to regulate AI and has instead tried to let the industry regulate itself so that AI development can drive economic growth.

However, AI carries risks, such as the creation of elaborate fake videos for criminal ends and the collection of personal information without permission.

Last year, the United States moved to regulate AI development for security purposes through an executive order. This month, the European Union also enacted the AI Act, which comprehensively regulates AI. It intends to apply the law in member states from as early as 2026.

The EU’s AI Act aims to protect democracy and human rights from AI. The law prohibits the development of AI and other technologies that automatically collect photos of people’s faces online and compile them into a database.

In light of these moves in the United States and Europe, the government has now begun to consider regulating AI. Under this policy, the government’s AI Strategy Council, chaired by University of Tokyo Prof. Yutaka Matsuo, will reportedly examine specific regulatory measures.

One proposal that has been floated is for the government to require firms developing AI to conduct safety inspections to ensure that AI does not provide answers that could encourage crime or leak personal information.

If such measures are implemented, the safety of AI is expected to improve. However, the government is reportedly looking to put any such regulations in place only several years from now.

In addition, the AI Strategy Council does not intend to address the protection of copyrighted works in the regulations.

Under the Copyright Law, which was revised in 2018, companies developing AI are allowed to train AI on copyrighted works without the permission of the copyright holders.

Creators and others whose works are being used to develop such products have criticized the law, saying it could hinder their creative activities.

If such a situation is left unchecked, people could lose the motivation to create writings, paintings and other works, and art and culture could decline. The government should move quickly to revise the Copyright Law again.

But more fundamentally, is it appropriate to leave discussion of regulations to the AI Strategy Council, which has strongly promoted the use of AI? If the government intends to come up with effective measures, it needs to consider changing the forum for discussion.

(From The Yomiuri Shimbun, May 24, 2024)