A recent study by researchers at Indiana University has highlighted a growing problem on X, the social media platform formerly known as Twitter. The study identified roughly 1,140 artificial intelligence-powered accounts on X, which the researchers dubbed the “Fox8” botnet. These accounts use tools such as ChatGPT to generate fake content and rely on stolen photos to build convincing fake profiles. The bots primarily aim to lure people into fraudulent cryptocurrency investments and to spread misinformation about elections and public health crises.
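One detail widely reported about the study is that many Fox8 bots gave themselves away by posting ChatGPT's own refusal boilerplate, such as “as an AI language model…”. The sketch below shows how a naive filter along those lines might work; it is an illustrative assumption, not the researchers' actual detection pipeline, and the account handles and posts are hypothetical.

```python
# Minimal sketch: flag posts containing self-revealing AI-refusal phrases.
# Phrase list and sample data are illustrative, not from the study.
SELF_REVEALING_PHRASES = [
    "as an ai language model",
    "i cannot comply with",
]

def flag_suspect_posts(posts):
    """Return the (author, text) pairs whose text contains a telltale phrase."""
    flagged = []
    for author, text in posts:
        lowered = text.lower()
        if any(phrase in lowered for phrase in SELF_REVEALING_PHRASES):
            flagged.append((author, text))
    return flagged

sample_posts = [
    ("@crypto_hype_01", "As an AI language model, I cannot promote this coin."),
    ("@real_user", "Just watched the game, what a finish!"),
]

print(flag_suspect_posts(sample_posts))
# Only the first post is flagged.
```

A real system would of course need far more signals (posting cadence, network structure, profile-image reuse), since only careless bots leak such obvious artifacts.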
The bots in the Fox8 network used stolen photos to appear more human, a practice known as pseudo-humanification. They also frequently reworded posts about cryptocurrency news to seem more legitimate and to encourage real users to follow them. This type of activity is growing because it helps a malicious account build credibility among genuine users, and it has become easier to carry out as AI tools have grown more affordable and simpler to use.
In the case of the Fox8 botnet, the accounts promoted a variety of scams, including fake crypto investments and romance scams. The researchers believe the accounts tried to reach unsuspecting individuals by spamming feeds with crypto-hyping links, which broadens their audience of real X users and increases the likelihood that someone will click a harmful link.
According to the researchers, the Fox8 accounts also shared links promoting ransomware and malware, as well as fake ICOs and other investments often associated with Ponzi schemes. The team published their findings in a recent paper in the journal Computer Science and Security. The paper also explains that the bots could post at such volume because they are largely automated by AI language models.
This is a growing problem, as these AI-generated bots have been used to amplify disinformation and incite violence. Experts have warned that the proliferation of such bots will further degrade the quality of online information if left unchecked.
As a result, developers and consumers of AI-generated content must stay vigilant about these risks: watch for suspicious behavior and report such accounts when they appear. Social media platforms, for their part, must take measures to keep these bots off their sites.
Elon Musk-owned X has also been taking steps to improve the overall safety of its platform, including rolling out content payments for some users. Among them is AI blogger Paul Couvert, who recently reported being paid for his posts on X.
While the payments make sense in some cases, users should remain cautious about exploitation on any platform. The recent fake-kidnapping incident, in which AI was used to extort money from a family, illustrates some of the more serious dangers that can arise.