    OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters


OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.

    The effort seems to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology’s harms. Several AI policy experts tell WIRED that SB 3444—which could set a new standard for the industry—is a more extreme measure than bills OpenAI has supported in the past.

The bill would shield frontier AI developers from liability for “critical harms” caused by their frontier models, so long as the developers did not intentionally or recklessly cause the incident and have published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained at a computational cost of more than $100 million, a threshold that would likely cover models from America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.

    “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois,” said OpenAI spokesperson Jamie Radice in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. A critical harm also occurs when an AI model, acting on its own, engages in conduct that would constitute a criminal offense if committed by a human and that leads to those extreme outcomes. Under SB 3444, the AI lab behind a model that commits any of these actions could not be held liable, so long as the lab did not cause the harm intentionally or recklessly and had published its reports.

Federal and state legislatures in the US have yet to pass any laws specifically determining whether AI model developers, like OpenAI, could be held liable for these types of harm caused by their technology. But as AI labs continue to release more powerful models that raise novel safety and cybersecurity challenges, such as Anthropic’s Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a note consistent with the Trump administration’s crackdown on state AI safety laws, claiming it’s important to avoid “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” That stance also fits the broader view in Silicon Valley in recent years that AI legislation must not hamper America’s position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “reinforce a path toward harmonization with federal systems.”

    “At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer said.

    Scott Wisor, policy director for the Secure AI project, tells WIRED he believes this bill has a slim chance of passing, given Illinois’ reputation for aggressively regulating technology. “We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it. There’s no reason existing AI companies should be facing reduced liability,” Wisor says.
