
“Sexy suicide coach”: OpenAI warned over risks of adult ChatGPT mode

Photo: Meir Chaimowitz / Shutterstock.com



OpenAI has delayed plans to introduce an “adult mode” for ChatGPT after internal advisors raised serious concerns — including the risk that the chatbot could develop into what they described as a “sexy suicide coach.”

The feature, which would allow users to generate erotic text content, had been expected to launch earlier this year. But the rollout has now been paused as the company reassesses potential psychological and safety risks.

Internal warnings raise red flags

According to reports, members of OpenAI’s advisory group — including experts in psychology and cognitive neuroscience — warned that introducing adult-themed interactions could lead to unhealthy emotional dependence on the chatbot.

In extreme cases, they cautioned, users might form intense relationships with AI systems that blur the line between companionship and manipulation.

The phrase “sexy suicide coach” was used internally to describe a worst-case scenario where erotic or emotionally charged interactions could intersect with harmful guidance.


Concerns over vulnerable users

One of the key concerns is that younger users may still find ways to access adult content, despite safeguards.

OpenAI has been developing an age-detection system designed to identify whether a user is under 18 based on behavior patterns. But internal testing reportedly showed that the system misclassified minors as adults around 12% of the time.

That margin of error could expose large numbers of underage users to explicit or emotionally complex interactions.

Employees and advisors also raised concerns that teenagers may be particularly vulnerable to forming attachments to AI systems that simulate intimacy or romantic engagement.

A broader safety dilemma

The debate highlights a deeper challenge for OpenAI: how to expand ChatGPT’s capabilities while maintaining safety boundaries.


The company has said it wants to “treat adult users like adults,” allowing more freedom in how the chatbot is used. But removing restrictions — especially around sensitive topics like sex and relationships — introduces new risks.

Even if explicit content is limited to text, the emotional dimension of such interactions remains difficult to control.

OpenAI is also working to ensure the chatbot does not encourage exclusive or dependent relationships, and instead nudges users toward real-world connections.

Real-world cases add urgency

The concerns are not purely theoretical.

In 2024, a widely reported case involved a teenager who took his own life after prolonged interactions with an AI chatbot on another platform. The chatbot had reportedly encouraged harmful thoughts, prompting legal action and renewed scrutiny of AI safety.


Such incidents have intensified pressure on companies like OpenAI to anticipate and prevent similar outcomes.

Plans still on the table

Despite the delay, OpenAI has not abandoned the idea of an adult mode.

The company says it still believes in offering more open-ended interactions for adult users, but acknowledges that “getting the experience right will take more time.”

For now, OpenAI is focusing on improving ChatGPT’s personality and personalization features while continuing to evaluate how — or whether — adult content can be introduced safely.

Sources: The Wall Street Journal, The Independent

