AI News Today
AI Reviews

    Research finds AI users scarily willing to “surrender” their cognition to LLMs


    “Lowering the threshold for scrutiny”

    Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this “demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism.” In general, “fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,” they write.

[Figure] Subjects with high trust in AI were more likely to be misled by faulty responses, while those with high “Fluid IQ” were less likely to be misled by the AI. Credit: Shaw and Nave

    These kinds of effects weren’t uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.

    Despite the results, though, the researchers point out that “cognitive surrender is not inherently irrational.” While relying on an LLM that’s wrong half the time (as in these experiments) has obvious downsides, a “statistically superior system” could plausibly give better-than-human results in domains such as “probabilistic settings, risk assessment, or extensive data,” the researchers suggest.

    “As reliance increases, performance tracks AI quality,” the researchers write, “rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender.”
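The quoted relationship — performance rising with an accurate AI and falling with a faulty one — can be sketched with a toy expected-accuracy model. This is purely illustrative and not the researchers' method; the function name and all numbers are hypothetical.

```python
def expected_accuracy(reliance: float, ai_accuracy: float, human_accuracy: float) -> float:
    """Expected decision accuracy when a user defers to the AI on a
    fraction `reliance` of decisions and decides alone on the rest.

    All arguments are probabilities in [0, 1]; values below are made up
    for illustration only.
    """
    return reliance * ai_accuracy + (1 - reliance) * human_accuracy

# With heavy reliance (90%), overall performance tracks the AI's quality:
print(expected_accuracy(0.9, 0.95, 0.7))  # accurate AI lifts performance
print(expected_accuracy(0.9, 0.40, 0.7))  # faulty AI drags it below solo performance
```

Under this simple mixture, a user who defers almost always inherits the AI's accuracy, good or bad — the "structural vulnerability" the researchers describe.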

    In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.

© 2026 ainewstoday.co. All rights reserved. Designed by DD.