OpenAI has launched an optional safety feature for ChatGPT that allows adult users to designate a “Trusted Contact” (a friend, family member, or caregiver) who will be notified if the AI detects serious mental health or safety concerns. The initiative aims to bridge the gap between digital interaction and real-world human support during critical moments.
## How the Feature Works
The Trusted Contact system is designed to be privacy-conscious while providing a safety net. Here is how the process unfolds:
- Opt-In Setup: Any adult ChatGPT user can enable the feature in their account settings by inviting another adult (18+ globally, or 19+ in South Korea) to serve as their contact.
- Confirmation Required: The designated contact must accept the invitation within one week for the link to become active. Both parties retain the ability to remove or edit the connection at any time.
- Strict Privacy Controls: OpenAI emphasizes that notifications are “intentionally limited.” The Trusted Contact will not receive chat transcripts, detailed conversation logs, or specific content shared by the user.
- Human-in-the-Loop Review: If automated systems detect language suggesting self-harm or suicide, ChatGPT first encourages the user to reach out to their Trusted Contact. A small team of specially trained human reviewers then examines the context, and only if they determine there are serious safety concerns is a brief alert sent to the Trusted Contact via email, text, or in-app notification.
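The flow above can be illustrated with a minimal sketch. This is purely hypothetical code (OpenAI has not published an implementation or API); the class and function names, the `dict` payload shape, and the boolean reviewer signal are all assumptions made for illustration. It captures two of the stated rules: the one-week acceptance window, and the alert carrying no conversation content.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

INVITE_WINDOW = timedelta(days=7)  # contact must accept within one week

@dataclass
class TrustedContactLink:
    user: str
    contact: str
    invited_at: datetime
    accepted: bool = False

    def accept(self, now: datetime) -> bool:
        """Confirm the invitation; it lapses after the one-week window."""
        if now - self.invited_at > INVITE_WINDOW:
            return False
        self.accepted = True
        return True

def build_alert(link: TrustedContactLink,
                reviewer_confirms_risk: bool) -> Optional[dict]:
    """Return a minimal alert payload, or None if no alert should go out.

    The payload is intentionally limited: no chat transcripts or
    conversation logs, mirroring the privacy constraints described above.
    An alert is sent only when the link is active AND a human reviewer
    has confirmed serious safety concerns.
    """
    if not link.accepted or not reviewer_confirms_risk:
        return None
    return {
        "to": link.contact,
        "channel": "email",  # could equally be text or in-app notification
        "message": f"{link.user} may need support. Please check in with them.",
        # deliberately no conversation data included
    }
```

Note that the human reviewer's judgment is an input here, not something the code decides: the automated detection only routes a case to review, and the final send/no-send call stays with people.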
## The Context Behind the Launch
This feature is part of a broader industry shift toward integrating AI safety with human oversight, particularly in response to growing concerns about mental health risks associated with AI companions.
“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI stated in its announcement.
The launch follows a tragic incident in September where a 16-year-old took his own life after months of confiding in ChatGPT. In response, OpenAI introduced parental controls alongside emergency contact options. The new Trusted Contact feature expands this safety framework to adult users, offering an additional layer of support alongside localized helplines already available within the chatbot.
## A Broader Industry Trend
OpenAI is not alone in addressing these challenges. Meta recently introduced a similar safety mechanism on Instagram that alerts parents if their children “repeatedly” search for self-harm topics. These developments highlight a growing consensus among tech giants that AI platforms bear a responsibility to mitigate harm, particularly when users may be vulnerable.
## Conclusion
The introduction of the Trusted Contact feature marks a significant step in balancing AI privacy with user safety. By enabling discreet, human-centered interventions without compromising conversational confidentiality, OpenAI aims to ensure that digital interactions can serve as a bridge to real-world support rather than an isolated experience.