Use of Artificial intelligence (AI) in online frauds and cybercrime
source: own elaboration
Launched at the end of November 2022, ChatGPT is a chatbot developed by OpenAI based on a language model, the operation of which consists in generating content by artificial intelligence in response to guidelines provided by the conversation participant. It was made available to the public free of charge, so it was rapidly gaining (and of course still is) popularity. Already in January 2023, it became the fastest growing consumer application in history, gaining over 100 million users.
No wonder that many other applications based on a similar scheme were created in a very short time - Google, Baidu and Meta released their competing products - respectively: Bard, Ernie Bot and LLaMA in less than a few months after the OpenAI work was made public. While ChatGPT has rules to stop users from unethical use of the chatbot (such as refusing to perform crime or malware related tasks), many people and organizations have publicly expressed their concern about its operation (as well as the operation of other large language models (LLM - Large Language Model)) - from the possibility of using it to commit plagiarism or create very credible-looking false messages to more complex crimes such as extortion, carrying out BEC (Business Email Compromise) attacks or creating phishing content. It soon turned out that such fears are justified…
fraudsters can use artificial intelligence for their own purposes [self-created by stable diffusion]
Bypassing principles - Jailbreaks
Just because publicly known large language models have rules, they must follow doesn’t mean there aren’t hidden tactics to persuade them to take certain actions. Jailbreaks are specific commands and inputs designed to trick a deep learning algorithm into giving an answer even though it may violate its principles of operation. This may result in, for example, the leakage of private information, the generation of prohibited content or the creation of malicious code.
There is already a lot of talk about circumventing the rules of the tool created by OpenAI. There are cases of students being caught plagiarizing using ChatGPT or examples of spreading disinformation and propaganda using content created by it. The US Federal Trade Commission (FTC) is also investigating the processing of personal data by artificial intelligence.
Malicious LLM models
Like representatives of many companies visible on the web, cybercriminals are also very interested in LLM models. No wonder that the equivalents of ChatGPT for the darknet are already being created - two natural language models operating similarly to the OpenAI program (but illegally!) are already widely known. We will discuss them below.
WormGPT is a tool promoted on hacking forums by a person who identifies himself as Last or Laste. The world learned about it on July 13, 2023, when researchers from the cybersecurity company SlashNext published an entry about it on their blog. The text describes the impressions of SlashNext employees after they accessed and tested WormGPT. The tool itself is based on the free 2021 GPT-J language model, which was created on an open-source basis by a non-profit research institute now known as the EleutherAI Foundation. WormGPT is described as an AI module trained on a wide range of data sources, especially those related to malware (like darknet forums, hacking guides, malware samples, phishing templates, etc.). Although it was allegedly born in March 2021, access to the platform can be purchased on a popular hacking forum since June this year. In his advertising materials, Laste presents, among other things, the potential of using generative artificial intelligence to create a phishing email or malware codes. This hacker Chatbot is devoid of any ethical restrictions.
A few days after the disclosure of WormGPT, on July 22, 2023, cybersecurity analysts from the Netenrich research team reported to the world about another similar online fraud tool - FraudGPT. It is a large ChatGPT-like language model built exclusively for offensive activities and available for a fee on the markets such as Dark Web and Telegram.
What are the risks of such tools?
It is impossible to precisely enumerate the threats that may be associated with the existence and dissemination of such tools - there are certainly many possible uses of them in cybercrime, the most famous of which are:
- dissemination of disinformation and propaganda
- carrying out Business Email Compromise (BEC) attacks
- creating malware code
- generating emails, messages and pages for phishing purposes
- theft and disclosure of confidential data
- breaking security, including bypassing anti-virus software
- creating tutorials on hacking techniques and tools
- generating highly convincing fake messages, personalized to the recipient and
- reaching out to victims by creating realistic personalities.
The biggest threat, however, is the low entry bar. Ease of use means that the fraudster doesn’t even have to know the language in which he wants to commit fraud (for example, generate the content of the message), or have knowledge about the techniques used in it - artificial intelligence (AI) will do everything for him.
How much do you have to pay for access to this tools?
Unlike ChatGTP, access to its counterparts for cybercriminals isn’t free, and in both cases the payment must be made with a digital currency. The price of a monthly WormGPT subscription range from EUR 60 (around PLN 265) to EUR 100 (around PLN 443), and annual access costs EUR 550 (around PLN 2,436). However, this doesn’t discourage those interested - apparently the program already has over 1,500 users. Loste also offers a private version for €5,000 (about PLN 22,145), which includes a year of access to WormGPT v2 - a more advanced, improved variant of WormGPT.
As for FraudGPT, a month of using the program costs from 90 (about PLN 364) to even USD 200 (about PLN 809), it is also possible to purchase access for 3 months for from 230 (about PLN 931) to USD 450 (about PLN 1,821), a half-yearly subscription from USD 500 (approximately PLN 2,023) to USD 1,000 (approximately PLN 4,056) and an annual subscription from USD 800 (approximately PLN 3,237) to USD 1,700 (approximately PLN 6,878). It isn’t clear what causes such large differences in the cost of access to the program.
The two known cases of generative natural language models discussed above, created with fraud in mind, are certainly a drop in the ocean of AI solutions that cybercriminals are already using. The most terrifying fact is that even an inexperienced fraudster can easily commit fraud thanks to them. The use of AI in phishing and BEC attacks can turn them from easily avoidable situations into sophisticated operations that have a much better chance of success, and this is unfortunately only the beginning. It’s important to be prepared and stay vigilant. To protect against potential attacks, make sure all your employees use strong passwords, multi-factor authentication, keep antivirus (and other) software up-to-date, and receive regular cybersecurity training.