Cybercriminals just found their AI sidekick

CYBERSECURITY
AI misuse is no longer a hypothetical risk, and GhostGPT proves how accessible it has become for cybercriminals.
According to a report by Abnormal Security, GhostGPT is an uncensored AI chatbot designed to assist with malicious activities like malware creation and phishing scams.
Unlike ethical AI systems with safeguards to block harmful requests, GhostGPT removes these restrictions entirely.
It provides unfiltered responses, enabling actions that traditional AI would flag or reject.
Researchers described it as “a chatbot specifically designed to cater to cybercriminals,” making it a significant security concern.
Why GhostGPT is alarming:
Cybercriminals can access GhostGPT by paying a fee on Telegram, and its ease of use makes it accessible even to those with minimal technical skills.
GhostGPT allows users to perform harmful tasks, such as writing exploit code, creating malware, and crafting convincing phishing emails.
Promotional materials boast fast processing, no user activity logs, and immediate access without downloading large language models or using jailbreak prompts. These features lower the entry barrier for new attackers.
Hackers, assemble
While GhostGPT's creators claim it has applications in cybersecurity, researchers find this claim dubious.
The tool is heavily promoted on cybercrime forums and marketed for business email compromise scams, which makes its true intent clear.
Imagine paying to make phishing emails. Lame.