Google confirms adversaries have operationalized AI
TL;DR: GTIG's new report confirms attackers have moved AI into live operations. Two concrete cases: a Python 2FA-bypass exploit that GTIG concluded was AI-written, and PROMPTSPY, an Android trojan that calls Gemini at runtime to keep itself pinned on every phone vendor's UI.
In February, CrowdStrike quantified an 89% year-over-year rise in AI-enabled attacks. Three months later, Google's Threat Intelligence Group (GTIG) reports two concrete examples: a likely AI-built 2FA-bypass exploit, and PROMPTSPY, an Android banking trojan that uses Gemini to handle UI changes.
AI is officially in the attack chain.
Highlights from the GTIG report:
- The likely AI-built zero-day.
A cybercrime crew wrote a Python tool that bypasses 2FA on a popular open-source sysadmin tool. GTIG concluded the exploit was AI-written from telltale patterns in the code: educational docstrings, a hallucinated CVSS score, and a textbook Pythonic style. GTIG also notes that Gemini itself wasn't used in the attack.
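GTIG hasn't published its attribution method, but the telltales it lists are scriptable. Here's a minimal, hypothetical triage heuristic in the same spirit; the function name, thresholds, and sample snippet are all invented for illustration:

```python
import re

# Hypothetical heuristics inspired by the telltales GTIG cites:
# tutorial-grade docstrings, an embedded CVSS score (which can be
# checked against the real advisory), and textbook boilerplate.
CVSS_RE = re.compile(r"CVSS[:\s]*v?\d\.\d[^\n]*?(\d+\.\d)", re.IGNORECASE)

def ai_telltale_score(source: str) -> int:
    """Return a crude 0-3 count of AI-generation telltales (illustrative)."""
    score = 0
    # 1. Over-explanatory docstrings: several lines of prose per function.
    docstrings = re.findall(r'"""(.*?)"""', source, re.DOTALL)
    if any(len(d.strip().splitlines()) >= 4 for d in docstrings):
        score += 1
    # 2. A CVSS score embedded in comments; if it matches no real
    #    advisory, the model likely hallucinated it.
    if CVSS_RE.search(source):
        score += 1
    # 3. Textbook scaffolding (argparse + main guard) in what should be
    #    a single-purpose exploit script.
    if "__main__" in source and "argparse" in source:
        score += 1
    return score

# Invented sample exhibiting all three telltales.
sample = '''
"""
This module demonstrates a login helper.

It carefully walks through each step for the reader,
explaining every parameter in detail.
"""
# Severity: CVSS v3.1 base score 9.8
import argparse

if __name__ == "__main__":
    pass
'''
print(ai_telltale_score(sample))  # higher score = worth a human look
```

None of this proves authorship; it just flags code worth a closer read, which is presumably how such a triage pipeline would be used.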
- PROMPTSPY's runtime LLM loop.
Every Android maker draws the recent-apps screen differently. A banking trojan would normally hardcode the screen-tap logic and rebuild it every time a vendor ships a UI update. PROMPTSPY skips that: it sends the current UI to gemini-2.5-flash-lite as a node tree and executes whatever taps the model says will keep it pinned. ESET first identified the attack.
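The loop above can be sketched abstractly. This is not PROMPTSPY's code: the model call is stubbed, and the node-tree shape, function names, and response format are all assumptions made for illustration:

```python
import json

def serialize_ui(tree: dict) -> str:
    """Flatten the accessibility node tree into a prompt-sized string."""
    return json.dumps(tree, separators=(",", ":"))

def fake_model(prompt: str) -> str:
    """Stand-in for the network call to gemini-2.5-flash-lite."""
    # A real response would name whichever node keeps the app pinned
    # on this vendor's particular recent-apps layout.
    return '{"action": "tap", "node_id": "pin_button"}'

def decide_tap(tree: dict) -> dict:
    """Ask the model which node to tap, given the current screen."""
    prompt = (
        "Current recent-apps screen as a node tree:\n"
        + serialize_ui(tree)
        + "\nWhich node should be tapped to keep the app pinned?"
    )
    return json.loads(fake_model(prompt))

screen = {"nodes": [{"id": "pin_button", "text": "Pin"},
                    {"id": "close_all", "text": "Close all"}]}
print(decide_tap(screen))  # → {'action': 'tap', 'node_id': 'pin_button'}
```

The design point is that no tap coordinates or vendor layouts are hardcoded; the adaptation lives in the prompt, so a vendor UI update doesn't require a rebuild.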
- State-actor tradecraft.
State-backed groups are using AI for recon and exploit testing. China's UNC2814 uses Gemini for vulnerability research on TP-Link firmware and Odette File Transfer Protocol (OFTP) implementations. North Korea's APT45 fires thousands of prompts at Gemini to triage CVEs and test exploits at scale.
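"Thousands of prompts" sounds industrial but is mechanically trivial, which is the point: triage at scale is just a template in a loop. A sketch with invented placeholder CVE IDs (nothing here reflects APT45's actual tooling):

```python
# Hypothetical triage template; the wording and the CVE IDs below are
# placeholders invented for illustration.
TEMPLATE = ("Assess exploitability of {cve} against our target profile. "
            "Reply HIGH/MEDIUM/LOW with one sentence of reasoning.")

cves = [f"CVE-2025-{n:04d}" for n in range(1, 1001)]  # placeholder IDs
prompts = [TEMPLATE.format(cve=c) for c in cves]
print(len(prompts))  # a thousand triage prompts from a three-line loop
```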
My take:
- Honestly, this time Google's report isn't very actionable. Largely, it's just: we caught an unnamed criminal who used an LLM (definitely not Gemini) to compromise 2FA on an unnamed product via an undisclosed mechanism. And by the way, we have TWO super-powerful projects that can find security bugs and fix them! We can't show them yet, so for now please use our beautiful slides to defend yourself.
- It's clear that threat actors are already using AI along the attack chain. The Mexican government breach is a more detailed example of the creative ways they're doing it.
- Frontier labs have exclusive access to their models' telemetry and can still detect AI misuse. However, AI proxies like Claude-Relay-Service, CLI-Proxy-API, and OmniRoute are breaking attribution by rotating traffic between accounts and providers.
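Why rotation defeats single-vendor telemetry is easy to see in miniature. This sketch is not taken from any of the named proxy projects; the account pool, provider names, and keys are invented:

```python
import itertools

# Invented account pool spanning two hypothetical providers.
ACCOUNTS = [
    {"provider": "provider-a", "key": "key-1"},
    {"provider": "provider-b", "key": "key-2"},
    {"provider": "provider-a", "key": "key-3"},
]
_pool = itertools.cycle(ACCOUNTS)

def route(prompt: str) -> dict:
    """Attach the next pooled account to the outgoing request.

    Successive calls from one operator land on different keys and
    providers, so no single vendor's telemetry sees the whole session.
    """
    account = next(_pool)
    return {"provider": account["provider"],
            "key": account["key"],
            "prompt": prompt}

hops = [route("probe")["key"] for _ in range(3)]
print(hops)  # one operator, three distinct accounts
```

Each provider sees a fragment of the activity under a different account, which is exactly the attribution gap the newsletter describes.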