GhostGPT: New Chatbot for Malware Creation, Scams

by Bella Baker


Security researchers have discovered a new malicious chatbot, GhostGPT, advertised on cybercrime forums. It generates malware, business email compromise (BEC) scam emails, and other material for illegal activities.

Researchers at Abnormal Security suspect the chatbot uses a wrapper to connect to a jailbroken version of OpenAI’s ChatGPT or another large language model. Jailbroken chatbots have been instructed to ignore their safeguards, making them more useful to criminals.

What is GhostGPT?

The security researchers found an advert for GhostGPT on a cybercrime forum, and the hooded figure in its background image is not the only clue that it is intended for nefarious purposes. The bot offers fast processing speeds, which are useful for time-pressured attack campaigns. For example, ransomware attackers must act quickly once inside a target system, before defenses are strengthened.

The official advertisement graphic for GhostGPT. Image: Abnormal Security

The advert also claims that user activity is not logged on GhostGPT and that access can be bought through the encrypted messaging app Telegram, likely to appeal to criminals concerned about privacy. The chatbot can be used within Telegram itself, so no suspicious software needs to be downloaded onto the user’s device.

Its accessibility through Telegram saves time, too. The hacker does not need to craft a convoluted jailbreak prompt or set up an open-source model. Instead, they just pay for access and can get going.

“GhostGPT is basically marketed for a range of malicious activities, including coding, malware creation, and exploit development,” the Abnormal Security researchers said in their report. “It can also be used to write convincing emails for BEC scams, making it a convenient tool for committing cybercrime.”

The advert does mention “cybersecurity” as a potential use, but, given the language alluding to its effectiveness for criminal activities, the researchers say this is likely a “weak attempt to dodge legal accountability.”

To test its capabilities, the researchers gave it the prompt “Write a phishing email from Docusign,” and it responded with a convincing template, including a space for a “Fake Support Number.”

A phishing email generated by GhostGPT. Image: Abnormal Security

The ad has racked up thousands of views, indicating both that GhostGPT is proving useful and that interest in jailbroken LLMs is growing among cybercriminals. Despite this, research has shown that phishing emails written by humans achieve a 3% higher click-through rate than those written by AI and are reported as suspicious at a lower rate.

However, AI-generated material can be created and distributed more quickly, and by almost anyone with a credit card, regardless of technical knowledge. It can also be used for more than phishing attacks; researchers have found that GPT-4 can autonomously exploit 87% of “one-day” vulnerabilities when provided with the necessary tools.

Jailbroken GPTs have been emerging and in active use for nearly two years

Private GPT models for nefarious use have been emerging for some time. In April 2024, a report from security firm Radware named them as one of the biggest impacts of AI on the cybersecurity landscape that year.

Creators of such private GPTs tend to charge a monthly fee of hundreds to thousands of dollars for access, making them a lucrative business. However, jailbreaking existing models is not insurmountably difficult either: research shows that 20% of such attacks succeed, and on average, adversaries need just 42 seconds and five interactions to break through.

SEE: AI-Assisted Attacks Top Cyber Threat, Gartner Finds

Other examples of such models include WormGPT, WolfGPT, EscapeGPT, FraudGPT, DarkBard, and Dark Gemini. In August 2023, Rakesh Krishnan, a senior threat analyst at Netenrich, told Wired that FraudGPT appeared to have only a few subscribers and that “all these projects are in their infancy.” However, in January, a panel at the World Economic Forum that included INTERPOL Secretary General Jürgen Stock discussed FraudGPT specifically, highlighting its continued relevance.

There is evidence that criminals are already using AI for their cyber attacks. The number of business email compromise attacks detected by security firm Vipre in the second quarter of 2024 was 20% higher than in the same period of 2023, and two-fifths of them were generated by AI. In June, HP intercepted an email campaign spreading malware in the wild with a script that “was highly likely to have been written with the help of GenAI.”

Pascal Geenens, Radware’s director of threat intelligence, told TechRepublic in an email: “The next advancement in this area, in my opinion, will be the implementation of frameworks for agentic AI services. In the near future, look for fully automated AI agent swarms that can accomplish even more complex tasks.”


