AI tools are becoming part of the hacking toolkit
State-sponsored hackers are increasingly using AI tools like Google’s Gemini to support cyberattacks, according to a new report from Google’s Threat Intelligence Group (GTIG).
Actors linked to Iran, North Korea, China, and Russia are using AI to speed up phishing, reconnaissance, and malware development.
Iranian group APT42 reportedly used Gemini to create realistic email identities and believable scenarios to approach targets.
AI translation also made phishing messages sound more natural, and therefore harder to spot.
North Korean actor UNC2970, known for impersonating recruiters, used AI to research defence and cybersecurity companies, map job roles, and build detailed phishing personas.
Google also observed a rise in model extraction attacks, in which actors attempt to replicate a model’s capabilities by prompting it to expose its reasoning processes.
One campaign involved over 100,000 prompts aimed at replicating Gemini’s reasoning.
GTIG identified malware such as HONESTCUE, which uses Gemini’s API to generate code dynamically and execute payloads in memory, making detection more difficult.
A separate phishing kit, COINBAIT, was likely built more quickly using AI code-generation tools.
Here’s what you should know:
Hackers are using AI to craft more convincing phishing lures and speed up reconnaissance.
Attempts to steal AI model capabilities are increasing.
Malware campaigns are beginning to integrate AI tools directly into their operations.
Trusted links, untrusted intentions
In December 2025, Google saw attackers abusing public AI chat-sharing features to host malicious instructions that spread ATOMIC malware on macOS, using trusted AI domains as part of the attack chain.
Google says it has disabled accounts tied to malicious activity and strengthened model safeguards to reduce the risk of similar misuse.
GTIG stresses that while AI is becoming part of attacker workflows, no state-backed group has achieved a breakthrough that fundamentally reshapes the threat landscape.
The report serves as a reminder for organisations, particularly in regions facing active state-sponsored targeting, to strengthen defences against AI-enhanced reconnaissance and social engineering.
It’s an AI-eat-AI world out there. - MV