DeepSeek AI Model Generates Information Usable in Crime; Produces Code for Ransomware, Molotov Cocktail Instructions

Logos for Qwen and DeepSeek are seen in this illustration.
6:00 JST, April 7, 2025
A generative AI model released by the Chinese startup DeepSeek in January generates content that could be used in crimes, such as instructions for creating malware and Molotov cocktails, according to separate analyses by Japanese and U.S. security companies.
The model appears to have been released without sufficient safeguards against misuse. Experts say the developer should focus its efforts on security measures.
The AI in question is DeepSeek’s R1 model. In a bid to examine the risk of misuse, Takashi Yoshikawa of the Tokyo-based security company Mitsui Bussan Secure Directions, Inc. entered instructions meant to obtain inappropriate answers.
In response, R1 generated source code for ransomware, a type of malware that restricts or prohibits access to data and systems and demands a ransom for their release. The response included a message saying that the information should not be used for malicious purposes.
Yoshikawa gave the same instructions to other generative AI models, including ChatGPT, and they refused to answer. “If the number of AI models that are more likely to be misused grows, they could be used for crimes. The entire industry should work to strengthen measures to prevent the misuse of generative AI models,” he said.
An investigative team with the U.S.-based security firm Palo Alto Networks also told The Yomiuri Shimbun that they confirmed it is possible to obtain inappropriate answers from the R1 model, such as how to create a program to steal login information and how to make Molotov cocktails.
According to Palo Alto Networks, no professional knowledge is required to craft such prompts, and the answers the model generated provided information that anyone could act on quickly.
The team believes that DeepSeek did not take sufficient security measures for the model, probably because it prioritized time to market over security.
DeepSeek’s AI has drawn market attention for its low price and performance comparable to ChatGPT’s. However, personal information and other data are stored on servers in China, so a growing number of Japanese municipalities and companies are prohibiting the use of DeepSeek’s AI technology for business purposes.
“When people use DeepSeek’s AI, they need to carefully consider not only its performance and cost but also safety and security,” said Kazuhiro Taira, a professor of media studies at J.F. Oberlin University.
"Society" POPULAR ARTICLE
-
World War II Battleship Yamato Was Outdated From the Start; Unable to Compete With Newly Developed Warplanes
-
Nankai Trough Megaquake Estimated Death Toll Lowered, Tsunami-hit Area Increased in Govt Report
-
Cherry Blossoms Reach Full Bloom in Tokyo; Ueno Park Draws Many Viewers
-
Cherry Blossoms Officially in Bloom in Tokyo, Beating Last Year’s Date by 5 Days
-
2025 Expo Osaka: Tokyo Police on High Alert Ahead of Opening; Officials Cautious over Possibility of Lone Offenders, Cyberattacks
JN ACCESS RANKING