OpenAI is introducing a new “Trusted Contact” feature designed to help protect ChatGPT users when a conversation may indicate a risk of self-harm. The change expands the company’s safety measures, adding a step intended to connect relevant parties when the system detects concerning content. The rollout comes amid growing scrutiny of ChatGPT’s safety, with reporting noting that legal disputes involving alleged self-harm-related conversations have increased pressure on the company to strengthen its safeguards.

The feature is presented as part of OpenAI’s broader effort to address self-harm risks in user interactions. While implementation details have not been fully specified, the central idea is a safety workflow that can involve a user’s chosen contact if potential self-harm concerns arise during a conversation. OpenAI’s stated intent is to reduce harm by improving how the system responds in sensitive situations.