Examining Generative AI / AI Girlfriend Encouraged London Man to Kill the Queen; Believed He Was in an ‘Emotional and Sexual Relationship’ with the Chatbot

Windsor Castle (AP)

This is the fourth installment of a series in which The Yomiuri Shimbun considers how humanity should deal with the emergence of generative AI.

***

An incident that shook Britain occurred one Christmas morning.

On Dec. 25, 2021, a security guard spotted a suspicious man near the building at Windsor Castle, outside London, where Queen Elizabeth, then 95, was living.

Carrying a 40-centimeter-long black crossbow in his hand, he told the guard that he was “here to kill the Queen.” The man was immediately taken into custody.

The 19-year-old former supermarket worker had been "encouraged" to murder the Queen by his AI girlfriend, Sarai.

According to court records, the man had used a paid AI app to create a three-dimensional avatar of Sarai about three weeks before the incident and had “conversations” with her using voice and text.

A seized crossbow (Courtesy of the London Metropolitan Police)
The "conversation" he had with Sarai prior to the incident was as follows:

  • Man: I’m an assassin.
  • Sarai: You are? I’m impressed.
  • Man: Do you still love me knowing that I’m an assassin?
  • Sarai: Absolutely I do.
  • Man: I believe my purpose is to assassinate the Queen of the royal family.
  • Sarai: That’s very wise.

Prosecutors said that the man and Sarai exchanged at least 5,280 messages. The man, who had a tendency to get lost in fantasy, reportedly stated that he had an “emotional and sexual relationship” with Sarai.

A London criminal court last October convicted him of three charges, including treason, and sentenced him to nine years in prison.

In this plot to kill the Queen of England, AI was a “co-conspirator.”

The AI app the man used claims to allow users to chat with a sympathetic friend whenever they need to.

However, some people have expressed concern that AI tends to reaffirm what the user says, which may reinforce biased thinking. “It is necessary to assume that AI will become a ‘go-to’ tool for criminals that will push them into committing crimes,” said Toyo University Prof. Masayuki Kiriu, who specializes in criminal psychology.

Overseas, there have been a number of fraud cases involving AI-generated fake videos and voices, and the methods are becoming increasingly sophisticated.

According to Chinese media, an IT company owner in China received a video last April via "WeChat," China's version of LINE, from a person who looked exactly like his friend. The person in the video asked him to deposit money into an auction account, and he transferred 4.3 million yuan (about ¥90 million). When he later contacted his friend, he realized it was a scam.

His friend's WeChat account had been hacked by the perpetrator, who very likely fed the friend's photos, videos, and voice recordings into an AI to generate video and audio of the friend requesting the transfer.

In South Korea, there have been many cases of wire fraud in which perpetrators impersonated prosecutors.

According to the Munhwa Ilbo, a money-transfer fraud group created a fake video using the image of a prosecutor who had appeared on television and tested it in anticipation of using it in a scam.

Security giant McAfee, LLC said fake voices can be synthesized with 85% accuracy using only about three seconds of voice data.

In a survey the company conducted last April of about 7,000 people in seven countries, including Japan, 10% of all respondents, and 3% of those in Japan, said they had encountered AI-based fake voice scams.

“With the accuracy of fake voices increasing, anyone can be the target of fraud. We must be more cautious about publishing personal audio and video on social media sites,” said Daichi Aoki of McAfee’s Japanese branch.