10 Major Tech Companies to Cooperate Against AI Child Porn Risks; Hoping to Remove Harmful Images from Training Material

Reuters file photo
Figurines with computers and smartphones are seen in front of the words “Artificial Intelligence AI” in this illustration taken on Feb. 19.

NEW YORK — Ten major tech companies announced Tuesday that they would work together to prevent artificial intelligence from creating and spreading materials depicting child sexual abuse. Concerns have grown that AI models trained on such material found online could be used to generate large quantities of abusive images.

The companies include Google LLC, Microsoft Corp., OpenAI Inc., Amazon.com Inc., Meta Platforms Inc. and Stability AI Ltd.

Datasets used to train AI will be checked for child sexual abuse materials, and any such materials that are found will be removed. The companies also agreed to assess AI models for their potential to generate such images before hosting them, to improve technology for detecting harmful materials, and to share information with governments.

The spread of generative AI has heightened concern that children's human rights could be violated through the mass generation of sexual images, including images that closely resemble real individuals.

In December, a Stanford University research team announced that it had identified a large number of images in a training dataset that it suspected were child sexual abuse materials.

The dataset in question uses a filter intended to exclude illicit images, but current technology has made it difficult to eliminate such images completely.
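Filtering of this kind commonly relies on hash matching: each image in a dataset is hashed and compared against a blocklist of hashes of known abusive images maintained by child-safety organizations. The sketch below is purely illustrative, not any company's actual system; the directory name and blocklist entry are hypothetical, and it uses exact SHA-256 hashes for simplicity, whereas production tools use perceptual hashes (such as Microsoft's PhotoDNA or Meta's PDQ) that can also match slightly altered copies.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of SHA-256 digests of known illicit images,
# of the kind distributed to platforms by child-safety clearinghouses.
# The entry below is a placeholder, not a real hash.
BLOCKLIST = {"0" * 64}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in chunks so large image files are not loaded at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def filter_dataset(image_dir: str) -> list[Path]:
    """Return only the images whose hashes are NOT on the blocklist."""
    kept = []
    for path in Path(image_dir).glob("*"):
        if path.is_file() and sha256_of(path) not in BLOCKLIST:
            kept.append(path)
    return kept

if __name__ == "__main__":
    clean = filter_dataset("training_images")  # hypothetical directory
    print(f"{len(clean)} images passed the hash check")
```

Exact hashes only catch byte-for-byte copies of already-known images, and no blocklist can cover newly generated material, which helps explain why, as noted above, complete elimination has proved difficult.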