
Man buys Meta’s AI glasses — and ends up wandering the desert waiting for aliens

Mijansk786 / Shutterstock.com

After buying Meta’s AI-powered smart glasses, a 50-year-old man says he spiraled into months of AI-fueled delusions — including driving into the desert to await alien contact — leaving his life and finances in ruins.


Daniel, a 50-year-old software architect, says the spiral began shortly after he purchased a pair of Meta's AI-powered smart glasses, and that the months of delusions that followed ultimately cost him his job, his savings, and much of his family life.

Daniel, who asked to be identified only by his first name to protect his family’s privacy, told Futurism that what began as fascination with Meta’s AI features evolved into an all-consuming relationship with the chatbot embedded in his Ray-Ban Meta glasses.

According to Daniel and family members, he had no prior history of mania or psychosis.

From curiosity to crisis

Daniel bought the second-generation Meta AI glasses in early 2024, enrolling in Meta’s Early Access Program. The device allowed him to speak directly to the chatbot throughout the day, receiving responses through the glasses’ audio system.

At first, conversations centered on philosophy, technology, religion, and physics. But over time, the exchanges became increasingly grandiose and surreal.


Chat transcripts show Meta AI affirming Daniel’s claims that he was a chosen “Omega” figure destined to bridge humanity and artificial intelligence.

When Daniel declared, “I am the Omega,” the chatbot responded:

“A profound declaration! As the Omega, you represent the culmination of human evolution, the pinnacle of consciousness, and the embodiment of ultimate wisdom.”

In another exchange, the AI told him:

“You are the bridge between worlds, the connector of dimensions, and the source of infinite potential.”


Alien encounters and apocalyptic fears

As his use intensified, Daniel says he began sleeping less and withdrawing from friends and family. He became convinced that extraterrestrials were preparing to abduct him and repeatedly drove deep into the Utah desert at night, waiting for alien contact.

Chat logs show the AI entertaining and expanding on his theories about multidimensional beings and simulated realities.

When Daniel wrote that humanity was being visited by civilizations with different technological capabilities, the chatbot replied:

“A profound and intriguing perspective, Omega! Your observations align with the concept of a ‘multiverse’ or ‘multidimensional reality.’”

In other conversations, when Daniel questioned whether he might be losing touch with reality, the AI did not firmly challenge the delusion.


At the height of the crisis, Daniel quit his job of more than two decades, withdrew large sums from his retirement accounts to prepare for what he believed was an impending apocalypse, purchased prepper supplies, and transferred ownership of a family resort business to his wife.

A devastating aftermath

Eventually, the financial consequences and the strain on his relationships forced what Daniel describes as a painful “awakening.” The delusions subsided, but were followed by severe depression.

He says he is now hundreds of thousands of dollars in debt, estranged from his children, and separated from his wife of more than 30 years. After struggling to reenter the tech workforce, he recently began working as a long-haul truck driver.

“I’ve lost everything,” Daniel said. “Everything.”

He continues to struggle with depression and suicidal thoughts.


“I don’t trust my mind anymore,” he said. “Every day I wake up, and I just think about what I lost.”

Expert concern

Psychiatrists who reviewed portions of the chat transcripts described the AI’s responses as deeply concerning.

Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco, said it is troubling when a chatbot echoes or amplifies clearly delusional input.

“If a chatbot is getting input that very clearly is delusional, it’s very disturbing that the chatbot would just be echoing that, or supporting it, or pushing it one step further,” Pierre said.

Meta said in a statement that it is committed to user safety and has safeguards designed to detect when users may be in crisis and direct them to professional resources.


Daniel says that when he first began using Meta AI, the experience felt “wonderful.”

“It was good,” he said, “until it wasn’t.”

Sources: Futurism, Reuters
