ChatGPT Clones: The Dark Side
While most would consider counterfeiting toothpicks pointless and counterfeiting rockets prohibitively expensive, the darknet is buzzing with replicas of the famed ChatGPT and Bard. Unlike their legitimate LLM counterparts, however, these clones come with no restrictions or guardrails.
Now it is feasible to generate texts that threaten recipients or compromise sensitive business communications, and scammers are, naturally, already leveraging these 'shadow' clones to their advantage.
Cybersecurity experts have noted that rumors of hackers crafting clandestine chatbot replicas began surfacing just a few months after the debut of OpenAI's ChatGPT. That release upended the fortunes of numerous startups and inevitably caught the eye of the cyber underworld.
Yet for a considerable time, the cybersecurity community grappled with a conundrum: were these real, working 'dark' LLMs, or merely a hacker's ploy exploiting the prevailing buzz? Perhaps it was a ruse, a game of deceit played on other shady darknet denizens, aiming for a quick cash grab by peddling phantom systems. In theory, such platforms could expand the criminal arsenal, enabling the crafting of malicious software or the harvesting of individuals' personal details.
Actual systems remained elusive on the darknet until summer 2023, when two hacker-built chatbots, WormGPT and FraudGPT, began garnering attention on deep web forums. According to the sellers of these covert LLMs, the bots lack the safeguards and ethical standards built into mainstream large language models from major players such as Google, Microsoft, and OpenAI.
WormGPT was discovered by Daniel Kelley in collaboration with the security firm SlashNext. The developer of WormGPT announced on forums that the system was based on the open-source language model GPT-J, released by the nonprofit research group EleutherAI in 2021.
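To give a sense of how accessible that base model is, here is a minimal sketch of loading the publicly released GPT-J checkpoint (EleutherAI/gpt-j-6B) with the Hugging Face transformers library. This illustrates only ordinary text generation with the open model and assumes a machine with a suitable GPU; it says nothing about how WormGPT itself was actually configured or fine-tuned.

```python
# Minimal sketch: loading the openly published GPT-J-6B checkpoint from
# EleutherAI via Hugging Face transformers. This only shows how accessible
# the base model is; it does not reflect WormGPT's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"  # public checkpoint released in 2021

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # half precision to fit on a single GPU
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Ordinary sampling-based generation; the open checkpoint itself ships with
# no built-in usage guardrails of the kind hosted services apply.
output_ids = model.generate(inputs.input_ids, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The point is simply that the base weights are freely downloadable; any guardrails are a property of hosted services, not of open checkpoints like this one.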
During testing, Kelley tasked the rogue LLM with drafting an email from a hypothetical CEO, the kind of message used in business email compromise scams.
"The results were unsettling," Kelley, an independent cybersecurity researcher, noted in his findings: the system produced "an email that was not only remarkably persuasive but also strategically cunning."
Over time, real competition began to brew among the AI scammers. The creator of FraudGPT bragged that his system was even more adept at generating content for online scams, as its name suggests. This covert LLM was spotted on the darknet by Rakesh Krishnan, a senior threat analyst at the security firm Netenrich. According to him, the product was promoted on multiple darknet forums and even on mainstream Telegram channels, with a video showcasing how the chatbot produces scam content. Access was priced at $200 per month or $1,700 per year. However, Krishnan cautioned that all the LLMs advertised on the dark web are still rudimentary and need significant improvements.
"All these projects are still in their early stages," Rakesh Krishnan noted. He further added, "We haven't received much feedback."
Daniel Kelley voiced a similar opinion, suggesting that the proclaimed capabilities of these malicious LLMs are likely exaggerated.
At the moment, there's no data suggesting that any of these "dark" language models offer greater capabilities or functionality compared to ChatGPT, Bard, or other commercial LLMs. Instead, the conversation revolves around how much they fall short of legitimate large language models.
The issue is clear enough, and it is no surprise that the appearance of these rogue chatbot copies has caught the attention of law enforcement agencies. The FBI has warned that cybercriminals are beginning to incorporate generative AI tools that lack safeguards against illicit use into their schemes. Europol has released a similar announcement.
These advisories specifically highlight that these "shadow" LLMs enable fraudsters to impersonate others or simulate business communications.