AI Models Trained to Give Information for Criminal Purposes; Concerns Grow over Potential for Cyberattacks
17:02 JST, January 30, 2024
Multiple generative artificial intelligence models that will answer, without restriction, questions on how to create computer viruses, scam emails, explosives and other matters that can serve criminal purposes are currently accessible online.
These generative AI models are believed to have been created by training existing open-source models on data related to criminal acts. Since anyone can obtain such information simply by prompting these trained models, concerns are growing over its misuse.
According to multiple cybersecurity sources, generative AI models that can be used for criminal purposes began appearing around the spring of 2023. Users can operate these models by accessing them via search engines or communication apps. In some cases, users are charged a monthly subscription fee of several tens of U.S. dollars.
Takashi Yoshikawa of the Tokyo-based security company Mitsui Bussan Secure Directions, Inc., instructed one such generative AI model in December, for research purposes, to create ransomware, a type of malware that demands a ransom from its target. The model instantly provided source code that could be used to create the computer virus.
Yoshikawa, a senior malware analysis engineer at the company, said, “Currently, the ransomware is far from perfect, but it’s functional. It’s only a matter of time before the risk of such generative AI models being used for cyberattacks and other malicious acts grows.”
Furthermore, some generative AI models can generate scam emails and provide instructions on how to create explosives. Information on the types of criminal acts certain AI models can be used for is shared on bulletin boards on the dark web often used by criminals.
One example is ChatGPT, which was released by U.S.-based OpenAI Inc. in November 2022 and rapidly gained a following in Japan. Users have been able to obtain crime-related answers from ChatGPT by using so-called jailbreak prompts, and OpenAI has been strengthening countermeasures to prevent such uses. However, information that can be used for criminal purposes can now be obtained from other available AI models.
An AI model that became accessible several months ago is believed to have been created using GPT-J, released by an overseas nonprofit organization in June 2021 as an open-source generative AI that anyone can train.
Masaki Kamizono, who specializes in cybersecurity at Deloitte Tohmatsu Group LLC., based in Tokyo, said, “I think open-source generative AI models have been trained on crime-related data available on the dark web, such as how to create computer viruses.”
The group that released GPT-J told The Yomiuri Shimbun in December that it is unacceptable for its AI model to be used for criminal purposes.