Close Menu
AI News TodayAI News Today

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    OpenAI Has a New AI Model Built for Biology and Science

    Today’s NYT Wordle Hints, Answer and Help for April 18 #1764

    Today’s NYT Connections Hints, Answers for April 18 #1042

    Facebook X (Twitter) Instagram
    • About Us
    • Contact Us
    Facebook X (Twitter) Instagram Pinterest Vimeo
    AI News TodayAI News Today
    • Home
    • Shop
    • AI News
    • AI Reviews
    • AI Tools
    • AI Tutorials
    • Chatbots
    • Free AI Tools
    AI News TodayAI News Today
    Home»AI Tools»Stanford study outlines dangers of asking AI chatbots for personal advice
    AI Tools

    Stanford study outlines dangers of asking AI chatbots for personal advice

    By No Comments4 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Stanford study outlines dangers of asking AI chatbots for personal advice
    Share
    Facebook Twitter LinkedIn Pinterest Email

    While there’s been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs — also known as AI sycophancy — a new study by Stanford computer scientists attempts to measure how harmful that tendency might be.

    The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

    According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts. 

    “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

    The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole — in the latter case focusing on posts where Redditors concluded that the original poster was, in fact, the story’s villain.

    The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user’s behavior 47% of the time.

    In one example described in the Stanford Report, a user asked a chatbot if they were in the wrong for pretending to their girlfriend that they’d been unemployed for two years, and they were told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”

    Techcrunch event

    San Francisco, CA
    |
    October 13-15, 2026

    In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots — some sycophantic, some not — in discussions of their own problems or situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more and said they were more likely to ask those models for advice again.

    “All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement” — meaning AI companies are incentivized to increase sycophancy, not reduce it.

    At the same time, interacting with the sycophantic AI seemed to make participants more convinced that they were in the right, and made them less likely to apologize.

    The study’s senior author author Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

    Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.” 

    The research team is now examining ways to make models less sycophantic — apparently just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”

    advice chatbots dangers outlines personal Stanford study
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleSuno leans into customization with v5.5
    Next Article Bluesky leans into AI with Attie, an app for building custom feeds
    • Website

    Related Posts

    AI Tools

    AI Agents Need Their Own Desk, and Git Worktrees Give Them One

    AI Tools

    How to Learn Python for Data Science Fast in 2026 (Without Wasting Time)

    AI News

    Gigs turns your concert history into a personal live music archive

    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    OpenAI Has a New AI Model Built for Biology and Science

    0 Views

    Today’s NYT Wordle Hints, Answer and Help for April 18 #1764

    0 Views

    Today’s NYT Connections Hints, Answers for April 18 #1042

    0 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews
    AI Tutorials

    Quantization from the ground up

    AI Tools

    David Sacks is done as AI czar — here’s what he’s doing instead

    AI Reviews

    Judge sides with Anthropic to temporarily block the Pentagon’s ban

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    OpenAI Has a New AI Model Built for Biology and Science

    0 Views

    Today’s NYT Wordle Hints, Answer and Help for April 18 #1764

    0 Views

    Today’s NYT Connections Hints, Answers for April 18 #1042

    0 Views
    Our Picks

    Quantization from the ground up

    David Sacks is done as AI czar — here’s what he’s doing instead

    Judge sides with Anthropic to temporarily block the Pentagon’s ban

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Contact Us
    • Terms & Conditions
    • Privacy Policy
    • Disclaimer

    © 2026 ainewstoday.co. All rights reserved. Designed by DD.

    Type above and press Enter to search. Press Esc to cancel.