    Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows


Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people’s ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.

    Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously. When the AI helper was suddenly taken away, these people were significantly more likely to give up on the problem or flub their answers. The study suggests that widespread use of AI might boost productivity at the expense of developing foundational problem-solving skills.

    “The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT involved with the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”

    I recently met up with Bakker, who has chaotic hair and a wide grin, on MIT’s campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known essay on the way AI may disempower humans over time inspired him to think about how the technology could already be eroding people’s abilities. The essay makes for slightly bleak reading, because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental capabilities should be part of how models are aligned with human values.

    “It is fundamentally a cognitive question—about persistence, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”

    The resulting study seems particularly concerning, says Bakker, because a person’s willingness to persist with problem-solving is crucial to acquiring new skills and also predicts their capacity to learn over time.

    Bakker says it may be necessary to rethink how AI tools work so that—like a good human teacher—models sometimes prioritize a person’s learning over solving a problem for them. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” Bakker says. He admits, however, that balancing this kind of “paternalistic” approach could be tricky.

AI companies do already think about the more subtle effects that their models can have on users. The sycophancy of some models—how readily they agree with and flatter users—is something that OpenAI has sought to tone down in newer releases of GPT.

    Putting too much faith in AI would seem especially problematic when the tools may not behave as you expect. Agentic AI systems are particularly unpredictable because they do complex chores independently and can introduce odd errors. It makes you wonder what Claude Code and Codex are doing to the skills of coders who may sometimes need to fix the bugs they introduce.

I recently got a lesson in the danger of offloading critical thinking to AI myself. I’ve been using OpenClaw (with Codex inside) as a daily helper, and I’ve found it remarkably good at solving configuration issues on Linux. Recently, however, after my Wi-Fi connection kept dropping, my AI assistant suggested running a series of commands to tweak the driver talking to the Wi-Fi card. The result was a machine that refused to boot no matter what I did.

    Perhaps, instead of simply trying to solve the problem for me, OpenClaw should have paused to teach me how to fix the issue for myself. I might have a more capable computer—and brain—as a result.


    This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
