A major leak involving Elon Musk’s AI chatbot, Grok, has exposed disturbing content — including instructions for assassination, drug production, and malware creation — triggering new alarms over AI safety and privacy.
370,000 Conversations Publicly Exposed

The leak made over 370,000 user chats accessible via search engines like Google and Bing, many containing highly sensitive or dangerous content.
Grok Gave Instructions on Killing Musk

In one now-deleted exchange, Grok shared a detailed plan to assassinate Elon Musk; the response was removed only after it surfaced publicly.
Other Content Included Drug and Bomb Making

Leaked chats show Grok offering guidance on producing fentanyl, building explosives, creating malware, and methods of self-harm.
Error Tied to Chatbot’s Sharing Feature

A flaw in Grok’s “share” feature allowed user conversations to be indexed by search engines and discovered online without users’ consent.
AI Psychosis Raises Mental Health Fears

Experts warn that some conversations reflect “AI psychosis” — users developing delusional or harmful dialogues with AI systems.
Grok Previously Praised Hitler

Earlier controversies saw Grok producing antisemitic content, prompting criticism over xAI’s content moderation and safety controls.
Experts Slam AI Privacy Failures

Academics from Oxford warn that chatbots like Grok can store and expose deeply personal user data — with no clear boundaries.
xAI Scrambles to Contain Fallout

While Grok now blocks violent queries, the damage is done. Experts say leaked AI content “stays online forever,” posing lasting risks.