AI-generated faces, testimonials blur ethical line for web advertising

Images of faces of fictitious people created by AI (Courtesy of AC Works Co.)

Human faces that do not exist in the real world are increasingly being used to promote products and services on the websites of various companies, according to research conducted by The Yomiuri Shimbun.

Fictitious people created with artificial intelligence appeared on dozens of websites, presented as real customers recommending products and services. The potential for abuse of AI technology has already been seen as a problem in many countries, and if the practice continues unchecked, rules for handling it are likely to become a subject of debate.

Such images can be generated automatically and in fine detail by deep learning, a technology in which AI learns features from huge amounts of data.

AC Works, a technology company in Osaka, started a service two years ago that allows registered members to download images of nonexistent people for free. The company expected the images to be used for such purposes as sample photos or virtual models, and prohibited them from being presented as, for example, real customers.

However, with the cooperation of the company, The Yomiuri Shimbun investigated how the images of 103 fictitious people had been used and found that at least 90 websites were posting them in ways that may have violated the company's terms of use.

In particular, the images were displayed on the websites of health food companies, staffing agencies and system development firms as if they were customers recommending the companies' products.

Some companies presented the images as tax accountants or other specialists they had retained, but the descriptions were found to be false.

In all such cases, the companies seem to be attempting to appear more trustworthy. AC Works said it will require such companies to delete the images.

Fictitious images raise ethical concerns different from those posed by deepfakes, another misuse of AI in which video footage of politicians and other celebrities is altered.

In other countries, images of fictitious people have been used on fake social media accounts to spread political claims, and because the people do not exist, the accounts are said to be difficult to detect. In Japan as well, there are fears that such images will be used to spread false information or to commit crimes such as fraud.

Fictitious authenticity

Two men with calm expressions appear on the website of a company in the Tohoku region that handles the belongings of deceased people. The photos are accompanied by client testimonials, such as "It was good that I asked a certified company to do this."

The two men, identified as "Mr. Matsumoto, 49" and "Mr. Ando, 55," do not exist. The photos were generated by an AI program.

“Displaying faces raises the comments’ credibility,” the operator of the website said. “Though the names and ages are indeed fictitious, we need to make them look real.”

As it is difficult to obtain permission to show photos of actual clients, the operator said, "We thought nobody would complain about the fictitious faces."

AC Works does not allow this kind of usage in its terms of service, but the operator said, “I never read [the fine print].”

More than 90 websites across a wide variety of industries have displayed images of fictitious people, presenting them as, for example, customers shopping for home appliances, patients at osteopathic clinics and students in certification courses.

The images were accompanied with comments such as “I’m very satisfied” and “I’m starting a new business by making use of my qualifications.”

Fake world

Some companies present fictitious people as their own staff to create a sense of credibility.

A website for an English conversation school for children claimed to have “first-rate bilingual teachers” and listed the faces of 16 teachers, but at least 11 of them were fictitious. It was confirmed that not only the faces but also the descriptions were false.

A business consulting firm in the Chubu region had posted the names of its tax accountants and judicial scriveners on its website, but it turned out that these people did not exist, according to the Japan Federation of Certified Public Tax Accountants’ Associations and others.

A website for booking trips accompanied by nursing care staff featured more than 30 writers recommending fantastic tourist spots. But the head of the site's operator admitted that some of their names and career histories were concocted.

The Yomiuri Shimbun cross-referenced the face of one of the supposed writers and found the same image on at least seven other websites, including an online shopping site, where it appeared as a customer under various names.

At present, only a few companies in Japan can provide such images of fictitious people, but more are expected to enter the market as the technology becomes more widespread.

Several overseas sites already offer this service for free or at low cost, and they can also be accessed from Japan. There are more than 2 million images to choose from, searchable by categories such as "White," "Black" and "Asian," and users can adjust age group and hair color to create the face they need.

If such photos are used, no one but the users themselves will know what kind of face was generated, making it even harder to verify whether the technology is being abused.

"Especially in Japan, many people tend to trust information when photos are attached. These kinds of images could be abused to spread false information about politics on social media or in illicit sales schemes," said Meiji University Prof. Harumichi Yuasa, an expert on information law who is versed in AI and cases of false information.

“If the border between fiction and reality becomes ambiguous, people will not be able to determine what is true. We should start discussions on regulation and where to draw the line, ethically.”