    Anthropic’s most dangerous AI model just fell into the wrong hands


Anthropic’s Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a “small group of unauthorized users,” Bloomberg reports. An unnamed member of the group, identified only as “a third-party contractor for Anthropic,” told the publication that members of a private online forum got into Mythos through a mix of tactics, using the contractor’s access and “commonly used internet sleuthing tools.”

    The Claude Mythos Preview is a new general-purpose model that’s capable of identifying and exploiting vulnerabilities “in every major operating system and every major web browser when directed by a user to do so,” according to Anthropic. Official access to the model is limited to a handful of companies through the Project Glasswing initiative, including Nvidia, Google, Amazon Web Services, Apple, and Microsoft. Governments are also eyeing the technology. Anthropic currently has no plans to release the model publicly due to concerns that it could be weaponized.

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson said in a statement to Bloomberg. The company said it currently has no evidence that the unauthorized access affected its own systems or extended beyond the third-party vendor’s environment.

    The model was reportedly accessed illicitly on April 7th, the same day that Anthropic announced it was releasing Mythos to a limited number of companies for testing. The group that gained the unauthorized access has not been publicly identified, though Bloomberg reports that its members are part of a Discord channel that seeks out information about unreleased AI models.

The group accessed Mythos by using knowledge of Anthropic’s other model formats, obtained from a recent Mercor data breach, to make “an educated guess” about its online location. Members have been using Mythos regularly since gaining access — providing screenshots and a live demonstration of the model as evidence to Bloomberg — though reportedly not for cybersecurity purposes, in an attempt to avoid detection by Anthropic. Other unreleased Anthropic AI models have also been accessed by the group, according to Bloomberg.
