AI News Today

    ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns


    OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a “Trusted Contact” will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.

    “Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI said in its announcement. “It offers another layer of support alongside the localized helplines already available in ChatGPT.”

    The Trusted Contact feature is opt-in. Any adult ChatGPT user can enable it by adding contact details for a fellow adult (18+ globally or 19+ in South Korea) in their ChatGPT account settings. The Trusted Contact must accept the invitation within a week of receiving the request. Users can remove or edit their chosen contact in the settings, and the Trusted Contact can also choose to remove themselves at any time.

OpenAI says that the notification is “intentionally limited” and will not share chat details or transcripts with the Trusted Contact. If OpenAI’s automated systems detect that a user is talking about harming themselves, ChatGPT will encourage the user to reach out to their Trusted Contact for help and let them know the contact may be notified. A “small team of specially trained people” will then review the situation, according to OpenAI, and if the conversation is determined to indicate serious safety concerns, ChatGPT will send the Trusted Contact a brief email, text message, or in-app notification.

    This builds on the emergency contact feature that was introduced alongside ChatGPT’s parental controls in September, after a 16-year-old took his own life following months of confiding in ChatGPT. Meta has also introduced a similar feature that alerts parents if their kids “repeatedly” search for self-harm topics on Instagram.

