A former OpenAI researcher warns that as ChatGPT introduces advertising, users sharing deeply personal thoughts may be risking not just privacy, but their psychological autonomy.
A former OpenAI researcher is warning users to think twice before confiding their deepest fears and personal struggles to AI chatbots.
Zoë Hitzig, who left OpenAI in early 2026 after several years at the company, has raised concerns about how platforms like ChatGPT handle the vast amount of sensitive information users voluntarily share. According to Hitzig, the real danger is no longer just about data privacy — it is about psychological autonomy.
From neutral tool to monetized platform
Since the rise of generative AI, millions of users have turned to chatbots not only with factual questions but also for personal advice. People discuss medical anxieties, relationship problems, spiritual doubts, and deeply private thoughts with systems they often perceive as neutral and nonjudgmental.
Hitzig argues that this perception may be outdated.
Her departure from OpenAI reportedly stemmed from disagreements over the ethical direction of the company, particularly the introduction of advertising into ChatGPT. While advertising is common across free digital services, she believes it creates a fundamental conflict when applied to systems that process highly intimate conversations.
“OpenAI possesses the most detailed record of private human thought ever compiled,” Hitzig warns. “Can we trust them to resist the overwhelming forces that push them to abuse it?”
Beyond privacy: influence and autonomy
According to Hitzig, the concern goes further than targeted ads. The combination of intimate user data and advanced AI systems creates the possibility of subtle psychological influence aligned with corporate interests.
“This isn’t just a privacy issue: it’s a question of psychological autonomy,” she argues.
The broader AI sector faces mounting financial pressure from high operating costs. As companies search for sustainable revenue models, some are adopting strategies similar to those of social media platforms, relying more heavily on advertising and personalized data use.
Critics fear this shift could undermine earlier commitments to user trust and data protection, particularly in systems designed to simulate empathetic, conversational relationships.
As AI becomes more integrated into daily life, the question is no longer just what these systems can do — but how the information shared with them may ultimately be used.
Source: elEconomista.es