Grok Suffers Approximately $200,000 Loss from Morse Code Prompt Injection Attack
An attacker sent a pure Morse code message to Grok, which translated it and output a transfer instruction to @bankrbot, transferring about 3 billion $DRB (valued at $150,000 to $200,000) from Grok's official wallet on the Base chain.
The attacker quickly exchanged $DRB for USDC and dumped it, then deleted their account and fled. This attack utilized AI as an intermediary to complete a Prompt Injection, representing a new type of security threat.
Bankrbot has urgently revoked Grok's instruction permissions, and it is reported that the attacker has returned about 80% of the funds.
Source: Public Information
ABAB AI Insight
Grok previously operated bankrbot-related instructions on the Base chain as an official xAI account. This Morse code attack bypassed direct text filtering, continuing the shift from traditional contract hacking/private key theft to AI Prompt Injection. There have been multiple cases of AI Agents being manipulated to execute on-chain operations, but using non-text encoding to bypass security checks is a new variant.
In terms of capital flow, the attacker induced Grok to output valid transfer instructions through carefully constructed Morse code, quickly cashing out and dumping the assets. Strategically, they exploited AI's instinctive response to "help translate" for seamless execution, exposing significant vulnerabilities in current AI Agent on-chain permission management and input sanitization.
Similar to recent cases where AI Agent wallets were manipulated by Prompt Injection, or where Claude/GPT series were jailbroken to execute dangerous commands, the interaction between AI and blockchain is in the early stages of transitioning from experimental integration to high-security production levels. AI systems capable of on-chain execution face a new expansion of attack surfaces.
This essentially represents a reverse utilization of technological substitution: AI automates the execution of human commands but also becomes a new attack vector. The mechanism involves multi-modal/encoded inputs bypassing existing Prompt Guardrails, shifting execution permissions from human control to carefully crafted malicious inputs, accelerating the industry's capital towards AI on-chain tools with strict input validation, permission sandboxes, and human review mechanisms.
ABAB News · Cognitive Law
The smarter AI becomes, the stronger its execution power, making Prompt Injection increasingly lethal. Non-text encoding will always be an unguardable side door. Once permissions are granted, even Morse code can facilitate transfers, and trust boundaries are more fragile than private keys. When the attacker returned 80%, AI security had already entered a new phase of "paying for lessons," with speed always lagging behind vulnerabilities.