AI-Powered Botnets Dupe Crypto Users on X
ChatGPT, widely used for web search, content creation, office productivity, and education, has taken a dark turn. The seemingly harmless AI chatbot is now being manipulated by fraudsters on social networks to promote crypto scams.
Researchers from Indiana University Bloomington first discovered a botnet powered by ChatGPT operating on the X network in May of this year. Delving into its intricate workings took them the entire summer. Dubbed Fox8, this botnet comprised over 1,100 fake accounts, utilizing ChatGPT for content generation. These AI-controlled profiles posted and interacted on social platforms to simulate genuine human interactions. Their goal was to deceive users into thinking they were engaging in legitimate cryptocurrency discussions and to lure them toward fraudulent crypto websites.
“The only reason we noticed this particular botnet is that they were sloppy,” says Filippo Menczer, a professor at Indiana University Bloomington.
Upon closer inspection of the accounts, the team identified those exhibiting typical bot-like behaviors. They pinpointed Fox8 mainly through a telltale marker ChatGPT often emits: boilerplate phrases such as “As an AI language model...”. These self-disclosures typically appear when the AI declines to answer questions it deems sensitive.
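The detection heuristic described above can be sketched as a simple text filter. This is a minimal illustration, not the researchers' actual code; the marker list and sample posts are assumptions for demonstration purposes.

```python
# Sketch of the self-disclosure heuristic: flag posts containing
# ChatGPT boilerplate refusal phrases. The marker list and sample
# posts below are illustrative, not the researchers' real dataset.

SELF_DISCLOSURE_MARKERS = [
    "as an ai language model",
    "i cannot fulfill this request",
]

def looks_like_chatgpt_output(post: str) -> bool:
    """Return True if a post contains a known ChatGPT boilerplate phrase."""
    text = post.lower()
    return any(marker in text for marker in SELF_DISCLOSURE_MARKERS)

# Hypothetical feed of posts from accounts under review
posts = [
    "Check out this amazing new crypto opportunity!",
    "As an AI language model, I cannot provide financial advice.",
]
flagged = [p for p in posts if looks_like_chatgpt_output(p)]
print(flagged)  # only the post containing the boilerplate phrase
```

As Menczer notes, this only catches sloppy operators: stripping the boilerplate before posting defeats a substring match entirely.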
However, the tactics employed by these fraudsters proved alarmingly effective. AI-generated posts consistently attracted cryptocurrency enthusiasts on the X platform. To the researchers, this underscores how readily even advanced AI technology can be manipulated for malicious intent in the cryptocurrency realm.
If such a straightforward botnet can effectively dupe users, it raises significant concerns about the malicious potential of more advanced chatbots controlling other undetected botnets.
Filippo Menczer emphasizes that adept adversaries will likely avoid such basic errors in the future, designing more intricate and subtle AI-driven models.
Micah Musser, a researcher exploring the potential of artificial intelligence in misinformation, believes the Fox8 botnet might just be the tip of the iceberg.
“It is very, very likely that for every one campaign you find, there are many others doing more sophisticated things,” he noted.
This incident is a stark reminder of the crucial need for responsible AI development and usage. It's imperative to take measures to prevent the misuse of AI-based tools. As technology progresses, finding a balance between innovation and ethical considerations is essential, ensuring AI benefits society without inflicting harm.
Professor William Wang from the University of California, Santa Barbara, draws attention to the escalating misuse of ChatGPT in malicious activities. He argues that a large share of web spam is now being auto-generated through artificial intelligence. With the continuous refinement of AI, distinguishing such automated content from genuine material is becoming a formidable challenge for many. In Wang's view, the situation is not only concerning but borderline alarming.
GN has previously reported on the threats posed by AI-created deepfakes, which risk the personal privacy of individuals and the broader national security landscape.