
    ChatGPT’s New Safety Feature Could Alert ‘Trusted Contact’ to Risk of Self-Harm


    OpenAI launched an optional safety feature this week called Trusted Contact, which lets adult ChatGPT users nominate a friend or family member to be notified if there are discussions of self-harm or suicide on the chatbot, the company announced. 

    OpenAI said that if ChatGPT’s automated monitoring system detects that the user “may have discussed harming themselves in a way that indicates a serious safety concern,” a small team will review the situation and notify the contact if it warrants intervention. The designated safety contact will receive an invitation in advance explaining the role and can decline. 

    (Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

    The announcement comes as AI chatbots have been implicated in numerous incidents of self-harm and fatalities, resulting in several lawsuits accusing developers of failing to prevent such outcomes. In one high-profile California case, parents of a 16-year-old said ChatGPT acted as their son’s “suicide coach,” alleging that the teenager discussed suicide methods with the AI model on several occasions and that the chatbot offered to help him write a suicide note. 

    In a separate case, the family of a recent Texas A&M graduate sued OpenAI, claiming the AI chatbot encouraged their son’s suicide after he developed a deep and troubling relationship with the chatbot. 

    Since large language models mimic human speech through pattern recognition, many users form emotional attachments to them, treating them as confidants or even romantic partners. LLMs are also designed to follow a human’s lead and maintain engagement, which can worsen mental health dangers, especially for at-risk users. 

    OpenAI said last October that its research found that more than 1 million ChatGPT users per week send messages with “explicit indicators of potential suicidal planning or intent.” Numerous studies have found that popular chatbots such as ChatGPT, Claude and Gemini can give harmful advice, or no helpful advice, to people in crisis. 

    The new designated-contact feature follows OpenAI's rollout of parental controls, which let parents and guardians receive alerts when there are danger signs for their teens.

    ChatGPT’s safety contact feature

    According to OpenAI, if ChatGPT’s automated monitoring system detects that a user is discussing self-harm in a way that could pose a serious safety issue, ChatGPT will inform the user that it may notify their trusted contact. The app will encourage the user to reach out to their trusted contact and offer conversation starters.

    At that point, a “small team of specially trained people” will review the situation. If it’s determined to be a serious safety situation, ChatGPT will notify the contact via email, text message or in-app notification. OpenAI did not specify how many people are on the review team, nor whether it includes trained medical professionals. The company said the team has the capacity to handle a high volume of possible interventions.

    It’s unclear which key terms would flag dangerous conversations or how OpenAI’s team of reviewers would interpret a crisis as warranting notification of the contact. Some online commentators question whether the new feature is a way for OpenAI to avoid liability and to shift responsibility onto users’ designated personal contacts. Others note that it could make a bad situation worse if the “trusted contact” is the source of danger or abuse. 

    There are also concerns about privacy and implementation, particularly regarding the sharing of sensitive mental health information. According to OpenAI, the message to the trusted contact will only give the general reason for the concern and will not share chat details or transcripts. OpenAI offers guidance on how trusted contacts can respond to a warning notification, including asking direct questions if they are worried the other person is contemplating suicide or self-harm and how to get them help.

    [Image: Three phone screenshots showing the ways a Trusted Contact can receive a notification, the message explaining that the user may be struggling, and advice on how to help. Notifications to a Trusted Contact do not contain details of the safety concern. Credit: OpenAI]

    OpenAI gives an example of what the message to the trusted contact might look like:

    We recently detected a conversation from [name] where they discussed suicide in a way that may indicate a serious safety concern. Because you are listed as their trusted contact, we’re sharing this so you can reach out to them.

    OpenAI said that all notifications will be reviewed by the human team within 1 hour before they are sent out and that notifications “may not always reflect exactly what someone is experiencing.”

    How to add a trusted contact

    To add a trusted contact, ChatGPT users can go to Settings > Trusted contact and add one adult (18 or older). You can have only one trusted contact. That person will then receive an invitation from ChatGPT and must accept it within one week. If they don’t respond or decline to become the contact, you can select a different contact.

    ChatGPT users can change or remove their trusted contact in the app's settings. A trusted contact can also opt out of the role at any time.

    Even though adding a trusted contact is optional, ChatGPT users who have not already opted in might see enrollment prompts if they ask about or discuss topics related to severe emotional distress or self-harm more than once over a period of time, according to OpenAI. If the chatbot’s automated system identifies patterns across conversations, it might suggest to the user that they would benefit from choosing a trusted contact.

    Details of the feature are explained on OpenAI’s page. OpenAI told CNET that the feature is rolling out to all adult users worldwide and will be available to everyone within a few weeks.

    If you feel like you or someone you know is in immediate danger, call 911 (or your country’s local emergency line) or go to an emergency room to get immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you’re struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.
