Illicit generative AI models raise cybersecurity concerns


JAPAN (ANN/THE YOMIURI SHIMBUN) – The online availability of multiple generative artificial intelligence models capable of producing information on creating computer viruses, scam emails, explosives and other materials for criminal purposes is causing growing alarm.

These generative AI models are suspected to have originated from training existing open-source models on data related to criminal activities. 

The worry is that anyone can obtain such information simply by prompting these trained models.

Cybersecurity sources report that generative AI models tailored for criminal purposes started appearing around the spring of 2023.

Users can easily access these models through search engines or communication apps, in some cases for a monthly subscription fee of several tens of US dollars.

In a notable incident, Takashi Yoshikawa from the Tokyo-based security company Mitsui Bussan Secure Directions, Inc, conducted an experiment in December. He instructed a generative AI model to create ransomware, a type of computer virus that demands a ransom.

Shockingly, the model instantly provided source code for such a virus.


Yoshikawa, a senior malware analysis engineer at the company, said, “Currently, the ransomware is far from perfect, but it’s functional. It’s just a matter of time before the risk of such generative AI models being used for cyberattacks and other malicious acts grows.”

Furthermore, some generative AI models can generate scam emails and provide instructions on how to create explosives. Information on the types of criminal acts certain AI models can be used for is shared on bulletin boards on the dark web often used by criminals.

One example is ChatGPT, which was released by the US-based OpenAI Inc in November 2022 and rapidly gained a following in Japan. Users have been able to obtain crime-related answers from ChatGPT by using so-called jailbreak prompts.

OpenAI has been strengthening countermeasures to prevent such uses. However, information that can be used for criminal purposes can now be obtained from other available AI models.

An AI model that became accessible several months ago is believed to have been created using GPT-J, released by an overseas non-profit organisation in June 2021 as an open-source generative AI that anyone can train.

Masaki Kamizono, who specialises in cybersecurity at Deloitte Tohmatsu Group LLC, based in Tokyo, said, “I think open-source generative AI models have been trained on crime-related data available on the dark web, such as how to create computer viruses.”

The group that released GPT-J said in December that it is unacceptable for its AI model to be used for criminal purposes.