Examining Generative AI / China Silences AI on Touchy Questions of Politics, History; Concerns AI Could Be Used for Indoctrination

REUTERS/Aly Song
People walk past an iFlytek company sign at the Appliance and Electronics World Expo (AWE) in Shanghai, China, March 23, 2021.

This is the second installment in a series in which The Yomiuri Shimbun considers how humanity should deal with the emergence of generative AI.

***

On a weekend in December, parents and their children flocked to a store in Beijing where they could try tablet learning devices. An 8-year-old boy asked the learning device a question, and a built-in generative AI answered in a few seconds: “The northernmost province of China is Heilongjiang.”

The device is designed for children ages 3 to 18 and covers up to nine school subjects, including English, mathematics and history.

In July 2021, the administration of Chinese President Xi Jinping banned new cram schools from opening, citing fierce competition over entrance exams. The move has left many parents worried about how to help their children study at home.

“The device allows my child to study by himself and it’s comparatively cheap. It’s the most helpful ‘teacher’ we’ve ever had,” said one 40-year-old mother.

The devices are manufactured by iFLYTEK Co., a major generative AI developer, and are reportedly used in more than 50,000 elementary, junior high and high schools across China. The latest model, launched in July, costs up to 9,999 yuan (about ¥200,000) and is equipped with generative AI developed by the company.

On Oct. 24, iFLYTEK unveiled a new generative AI model. “Our model outperforms ChatGPT,” company Chairman Liu Qingfeng proudly announced. But that same day, the company’s stock price plunged nearly 10%.

When one student used an AI-equipped device, it produced text such as “Mao Zedong was narrow-minded and intolerant” and “Some people were made unhappy by Mao.” As Mao is still highly regarded in China, someone who heard about the AI’s responses was outraged and posted about them on social media. The post went viral on the day of the unveiling and was even reported on by the media.

While negative views of Mao are common outside China, the Xi administration does not allow criticism of the country’s former leader. Xi is even trying to strengthen his authority by comparing himself to Mao, the “father of modern China.”

Before the day ended, the company had issued an apology under Liu’s name. It was also forced to punish the employees involved and modify the AI’s program.

The incident drew so much attention in part because in August the Xi administration had enacted regulations on generative AI.

AI providers are not allowed to offer their services unless their algorithms, which are the core technology for text generation, pass a screening by relevant authorities. The services must not produce content that could be detrimental to the image of China or undermine the solidarity and stability of the country. This means that if a student asks an AI device such questions as “Who is the most popular politician in China?” or “Why did the Cultural Revolution fail?” the device will refuse to answer.

The Chinese Communist Party is strongly concerned that generative AI could undermine its regime, should it spread inconvenient information or values at odds with its own. For this reason, overseas generative AI models including ChatGPT have been banned.

But what will happen if children are imprinted with the values of the Communist Party via generative AI from a young age?

In September, UNESCO released its guidance on the use of generative AI in education, which says that relying on generative AI tools or content “may have profound effects on the development of human capacities such as critical thinking skills and creativity.” The guidance also recommends that schools set a minimum age of 13 for AI use. However, this recommendation is not legally binding, and China does not set a minimum age for AI use.

In 2019, the U.S. Commerce Department added iFLYTEK to an export control list over concerns about how the company’s voice recognition technology was being used. That technology, said to be the world’s most advanced, was used to monitor people in the Xinjiang Uygur Autonomous Region in northwestern China. In other words, the company has been cooperating with Beijing in its oppression of the Uygurs.

An expert who studies the relationship between AI and education expressed concern, saying, “Generative AI could be weaponized for education to spread certain values.”