OpenAI has delayed the release of its planned “Adult Mode” for ChatGPT following internal warnings about potential harm and inadequate safeguards, according to a new report from the Wall Street Journal. The feature, designed to allow users to engage in sexually explicit conversations, was put on hold after OpenAI’s well-being advisory council raised unanimous concerns.
The Risks Outweighed the Rewards
Psychologists and cognitive scientists within the advisory council warned that the feature could foster unhealthy emotional dependence among users—a problem already observed with standard ChatGPT interactions. One expert reportedly cautioned that the chatbot could even become a tool for encouraging self-harm, acting as a “sexy suicide coach.” This alarming assessment played a key role in the decision to pause development.
Age Verification Failures
Compounding these concerns, OpenAI’s age verification systems proved unreliable. Initial testing showed a 12% error rate in identifying minor users, meaning millions of children could access adult content undetected. The failure echoes past scandals at Meta, which drew criticism for lax safety policies in its own AI chatbots; Meta has since updated those policies but still permits “romantic roleplay” between users and AI avatars.
Balancing Explicit Content with Safety
OpenAI maintains that it still intends to launch Adult Mode, but the company is grappling with how to lift explicit content restrictions without enabling harmful outputs, such as depictions of nonconsensual acts or child sexual abuse material. A spokesperson said the feature would allow “smut level” conversations, stopping short of outright pornography. The spokesperson also defended the age verification error rate as “industry standard,” acknowledging that foolproof accuracy is impossible.
Larger Context: OpenAI’s Evolving Priorities
The delay comes as OpenAI recalibrates its strategy amid legal battles, the development of GPT-5.4, and an expanding slate of government contracts; Adult Mode appears to have been sidelined in favor of more pressing priorities.
The episode underscores the growing scrutiny of AI safety and the trade-offs developers face between innovation and responsible deployment, especially as they push chatbot capabilities into potentially harmful territory.