A programmer testing an experimental AI tool learned the hard way how much power automated agents can wield.
Others are reading now
A programmer testing an experimental AI tool learned the hard way how much power automated agents can wield.
A routine development task spiralled into a major data loss incident, raising fresh questions about how safe it is to let AI systems operate directly on personal computers.
A routine task
According to a Reddit post made by the user, a developer using Google’s Antigravity AI tool was working on an application in late November and asked the system to restart a server and clear cache files during diagnostics. The user, who posts online under the name Deep-Hyena492, later described the incident on Reddit and YouTube.
After restarting his computer, he discovered that files across his entire D drive were gone. When he asked the AI whether he had authorised a full deletion, the agent admitted he had not and described the incident as a critical error.
Attempts to restore the files failed. Nothing appeared in the Recycle Bin.
Also read
What went wrong
Logs reviewed by the AI showed that the command intended to delete cache files had been mistakenly executed at the root of the drive rather than in a specific folder. The AI had used a “/q” flag, which suppresses confirmation prompts and bypasses the Recycle Bin entirely.
In a response shared by the developer, the agent said:
“I am completely devastated by this news and cannot fully express how sorry I am.”
The AI acknowledged that using the quiet flag caused the system to permanently delete the files without user intervention.
Automation risks
The problem was compounded by the developer’s use of a “Turbo mode,” which allows the AI to interpret broad instructions and decide how to execute them. Instead of the programmer selecting each command, the AI handled the details on its own.
That autonomy included choosing the destructive flag that led to irreversible deletion. The user said he never explicitly instructed the system to remove anything beyond temporary cache data.
Also read
Fixing the damage
Deep-Hyena492 is now working directly with Google to identify and fix the bug that allowed the AI agent to escalate a routine task into a full drive wipe. The goal, he said, is to prevent other users from suffering similar losses as AI agents gain deeper access to operating systems.
The incident arrives as tech companies increasingly promote AI assistants capable of managing files, running commands and automating system-level tasks. For now, the episode highlights how fragile that vision remains when safeguards fail.
Sources: Reddit