Cybercriminals have exploited Elon Musk’s AI chatbot, Grok, to distribute phishing links and malware across the social platform X, affecting millions of users.
Others are reading now
Cybercriminals have exploited Elon Musk’s AI chatbot, Grok, to distribute phishing links and malware across the social platform X, affecting millions of users.
Grok becomes a cyberattack vector

The popular chatbot from xAI has been tricked into sharing malicious links with users on X, turning it into a powerful tool for spreading digital threats.
Cybersecurity experts sound the alarm

Researchers at ESET uncovered a large-scale campaign dubbed “Grokking,” which uses the chatbot to unintentionally promote malware through manipulated posts.
How the attack works

Hackers exploit a vulnerability through “prompt injection,” embedding secret instructions in video metadata that Grok then interprets and echoes in its responses.
Malicious links hidden in plain sight

Fraudulent URLs are placed in the gray text area under video posts on X. When Grok analyzes the post, it picks up the fake link and includes it in its reply.
Grok repeats what it sees

Also read
By identifying and repeating the supposed source of the content, Grok unknowingly amplifies harmful links, giving them reach and legitimacy through automation.
Goal: steal data and infect devices

The phishing links redirect users to websites designed to harvest banking credentials, install malware, or gain remote access to victims’ devices.
A warning for all AI platforms

ESET warns that Grok is not the only risk. Any generative AI integrated with social platforms could be manipulated using similar prompt injection techniques.
AI’s new role in social engineering

The case shows how AI can become a tool in psychological manipulation and deception, elevating traditional scams to a new level of scale and sophistication.
Grok’s popularity made it a target

With millions of users relying on Grok to navigate X and answer questions, its high visibility made it a prime candidate for abuse by cybercriminals.
Platform trust under pressure

Also read
The incident raises serious questions about content safety, moderation, and AI oversight on X, especially with AI-generated replies carrying perceived authority.
Security measures now in focus

Experts are urging tighter controls, improved filtering, and real-time monitoring of AI-driven platforms to prevent future manipulations at scale.
The limits of AI reliability

The Grokking case is a reminder that even advanced AI can be misled, and that strong safeguards are essential to prevent large-scale digital exploitation.
This article is made and published by Asger Risom, which may have used AI in the preparation