New OECD Guidelines to Address False Information From AI; Developers, Providers to be Required to Take Action

Reuters/file photo
Figurines with computers and smartphones are seen in front of the words “Artificial Intelligence AI” in this illustration taken on Feb. 19, 2024.

The Organization for Economic Cooperation and Development plans to create guidelines requiring AI developers to address false information produced by generative AI, it has been learned.

The guidelines are included in draft proposals for changes to the AI Principles — international guidelines for AI — currently under review by the OECD.

Since the adoption of the OECD AI Principles in 2019, the use of generative AI to produce sophisticated text and video has expanded rapidly. In light of this, a new clause requiring AI developers and AI-based service providers to take action against misinformation and disinformation is expected to be added to the principles.

The draft guidelines are expected to be adopted at the OECD Ministerial Council Meeting to be held on Thursday and Friday.

With elections such as those for the European Parliament and the U.S. presidency on the horizon, OECD member countries have shared awareness of the threat posed by false information created by generative AI, a government source said.

The draft would also revise provisions on the transparency and accountability of AI. This reflects the agreement reached in the Hiroshima AI Process, which Japan led last year as chair of the Group of Seven industrialized nations.

The revision would require AI developers and others to disclose information about the capabilities and limitations of AI, as well as about the AI's training data and how its output is generated.

Although the OECD AI Principles are not legally binding, they have been signed by 46 countries and influence the formulation of their domestic policies.