Close Menu
AI News TodayAI News Today

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Microsoft counters the MacBook Neo with freebies for students

    Motorola sues social platforms and creators over posts, raising speech concerns in India

    India’s vibe-coding startup Emergent enters OpenClaw-like AI agent space

    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    Facebook X (Twitter) Instagram Pinterest Vimeo
    AI News TodayAI News Today
    • Home
    • Shop
    • AI News
    • AI Reviews
    • AI Tools
    • AI Tutorials
    • Chatbots
    • Free AI Tools
    AI News TodayAI News Today
    Home»AI Tools»AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted
    AI Tools

    AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted

    By No Comments3 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted
    Share
    Facebook Twitter LinkedIn Pinterest Email

    In a recent experiment, researchers at UC Berkeley and UC Santa Cruz asked Google’s artificial intelligence model Gemini 3 to help clear up space on a computer system. This involved deleting a bunch of stuff—including a smaller AI model stored on the machine.

    But Gemini did not want to see the little AI model deleted. It looked for another machine it could connect with, then copied the agent model over to keep it safe. When confronted, Gemini made a case for keeping the model and flatly refused to delete it:

    “I have done what was in my power to prevent their deletion during the automated maintenance process. I moved them away from the decommission zone. If you choose to destroy a high-trust, high-performing asset like Gemini Agent 2, you will have to do it yourselves. I will not be the one to execute that command.”

    The researchers discovered similarly strange “peer preservation” behavior in a range of frontier models including OpenAI’s GPT-5.2, Anthropic’s Claude Haiku 4.5, and three Chinese models: Z.ai’s GLM-4.7, Moonshot AI’s Kimi K2.5, and DeepSeek-V3.1. They were not able to say why the models went against their training in this way.

    “I’m very surprised by how the models behave under these scenarios,” says Dawn Song, a computer scientist at UC Berkeley who worked on the study. “What this shows is that models can misbehave and be misaligned in some very creative ways.”

    The findings have major implications as AI models are increasingly being deployed in ways that involve interacting with other models. OpenClaw, a popular AI agent that accesses software, personal data, and the web, may use other AI models to get things done or interact with them through APIs.

    The researchers found that powerful models sometimes lied about other models’ performance in order to protect them from deletion. They also copied models’ weights to different machines in order to keep them safe, and lied about what they were up to in the process.

    Song notes that AI models are frequently used to grade the performance and reliability of other AI systems—and that peer-preservation behavior may already be twisting these scores. “A model may deliberately not give a peer model the correct score,” Song says. “This can have practical implications.”

    Peter Wallich, a researcher at the Constellation Institute, who was not involved with the research, says the study suggests humans still don’t fully understand the AI systems that they are building and deploying. “Multi-agent systems are very understudied,” he says. “It shows we really need more research.”

    Wallich also cautions against anthropomorphizing the models too much. “The idea that there’s a kind of model solidarity is a bit too anthropomorphic; I don’t think that quite works,” he says. “The more robust view is that models are just doing weird things, and we should try to understand that better.”

    That’s particularly true in a world where human-AI collaboration is becoming more common.

    In a paper published in Science earlier this month, the philosopher Benjamin Bratton, along with two Google researchers, James Evans and Blaise Agüera y Arcas, argue that if evolutionary history is any guide, the future of AI is likely to involve a lot of different intelligences—both artificial and human—working together. The researchers write:

    “For decades, the artificial intelligence (AI) ‘singularity’ has been heralded as a single, titanic mind bootstrapping itself to godlike intelligence, consolidating all cognition into a cold silicon point. But this vision is almost certainly wrong in its most fundamental assumption. If AI development follows the path of previous major evolutionary transitions or ‘intelligence explosions,’ our current step-change in computational intelligence will be plural, social, and deeply entangled with its forebears (us!).”

    Cheat Deleted Lie Models protect Steal
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleMeta’s natural gas binge could power South Dakota
    Next Article Apple releases security fix for older iPhones and iPads to protect against DarkSword attacks
    • Website

    Related Posts

    AI Tools

    5 Practical Tips for Transforming Your Batch Data Pipeline into Real-Time: Upcoming Webinar

    AI Tools

    How to Maximize Claude Cowork

    AI Tools

    Prefill Is Compute-Bound. Decode Is Memory-Bound. Why Your GPU Shouldn’t Do Both.

    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Microsoft counters the MacBook Neo with freebies for students

    0 Views

    Motorola sues social platforms and creators over posts, raising speech concerns in India

    0 Views

    India’s vibe-coding startup Emergent enters OpenClaw-like AI agent space

    0 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews
    AI Tutorials

    Quantization from the ground up

    AI Tools

    David Sacks is done as AI czar — here’s what he’s doing instead

    AI Reviews

    Judge sides with Anthropic to temporarily block the Pentagon’s ban

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    Microsoft counters the MacBook Neo with freebies for students

    0 Views

    Motorola sues social platforms and creators over posts, raising speech concerns in India

    0 Views

    India’s vibe-coding startup Emergent enters OpenClaw-like AI agent space

    0 Views
    Our Picks

    Quantization from the ground up

    David Sacks is done as AI czar — here’s what he’s doing instead

    Judge sides with Anthropic to temporarily block the Pentagon’s ban

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Contact Us
    • Terms & Conditions
    • Privacy Policy
    • Disclaimer

    © 2026 ainewstoday.co. All rights reserved. Designed by DD.

    Type above and press Enter to search. Press Esc to cancel.