    A pelican for GPT-5.5 via the semi-official Codex backdoor API


    23rd April 2026

    GPT-5.5 is out. It’s available in OpenAI Codex and is rolling out to paid ChatGPT subscribers. I’ve had some preview access and found it to be a fast, effective and highly capable model. As is usually the case these days, it’s hard to put into words what’s good about it—I ask it to build things and it builds exactly what I ask for!

    There’s one notable omission from today’s release—the API:

    API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale. We’ll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon.

    When I run my pelican benchmark I always prefer to use an API, to prevent hidden system prompts in ChatGPT or other agent harnesses from affecting the results.

    The OpenClaw backdoor

    One of the ongoing tension points in the AI world over the past few months has concerned how agent harnesses like OpenClaw and Pi interact with the APIs provided by the big providers.

    Both OpenAI and Anthropic offer popular monthly subscriptions which provide access to their models at a significant discount to their raw API pricing.

    OpenClaw integrated directly with this mechanism, and was then blocked from doing so by Anthropic. This kicked off a whole thing. OpenAI—who recently hired OpenClaw creator Peter Steinberger—saw an opportunity for an easy karma win and announced that OpenClaw was welcome to continue integrating with OpenAI’s subscriptions via the same mechanism used by their (open source) Codex CLI tool.

    Does this mean anyone can write code that integrates with OpenAI’s Codex-specific APIs to hook into those existing subscriptions?

    The other day Jeremy Howard asked:

    Anyone know whether OpenAI officially supports the use of the /backend-api/codex/responses endpoint that Pi and Opencode (IIUC) uses?

    It turned out that on March 30th OpenAI’s Romain Huet had tweeted:

    We want people to be able to use Codex, and their ChatGPT subscription, wherever they like! That means in the app, in the terminal, but also in JetBrains, Xcode, OpenCode, Pi, and now Claude Code.

    That’s why Codex CLI and Codex app server are open source too! 🙂

    And Peter Steinberger replied to Jeremy that:

    OpenAI sub is officially supported.

    llm-openai-via-codex

    So… I had Claude Code reverse-engineer the openai/codex repo, figure out how authentication tokens were stored and build me llm-openai-via-codex, a new plugin for LLM which picks up your existing Codex subscription and uses it to run prompts!
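The interesting part of that reverse-engineering is finding where the Codex CLI caches its credentials. Here's a minimal sketch of the idea, assuming the CLI writes an auth file to `~/.codex/auth.json` after `codex login` — the path and the `tokens`/`access_token` field names are assumptions on my part, not documented behaviour, and a real plugin would need to match whatever the CLI actually writes:

```python
import json
from pathlib import Path

# Assumed location of the Codex CLI's cached OAuth credentials
# (an assumption, not documented).
DEFAULT_AUTH_PATH = Path.home() / ".codex" / "auth.json"

def load_codex_access_token(auth_path: Path = DEFAULT_AUTH_PATH) -> str:
    """Return the cached access token from the Codex CLI's auth file.

    The "tokens" -> "access_token" structure is a guess at the file's
    shape for illustration purposes.
    """
    data = json.loads(auth_path.read_text())
    return data["tokens"]["access_token"]
```

A plugin built this way would then send that token as a bearer token to the `/backend-api/codex/responses` endpoint mentioned above.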

    (With hindsight I wish I’d used GPT-5.4 or the GPT-5.5 preview, it would have been funnier. I genuinely considered rewriting the project from scratch using Codex and GPT-5.5 for the sake of the joke, but decided not to spend any more time on this!)

    Here’s how to use it:

    1. Install Codex CLI, buy an OpenAI plan, log in to Codex
    2. Install LLM: uv tool install llm
    3. Install the new plugin: llm install llm-openai-via-codex
    4. Start prompting: llm -m openai-codex/gpt-5.5 'Your prompt goes here'

    All existing LLM features should also work—use -a filepath.jpg/URL to attach an image, llm chat -m openai-codex/gpt-5.5 to start an ongoing chat, llm logs to view logged conversations and llm --tool ... to try it out with tool support.
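LLM also exposes a Python API, so assuming the plugin registers that model ID the same prompt should be runnable from Python too — a sketch, not tested against the real plugin:

```python
def run_via_codex(prompt: str, model_id: str = "openai-codex/gpt-5.5") -> str:
    """Run a prompt through LLM's Python API using the Codex-backed model.

    Assumes llm-openai-via-codex is installed and has registered the
    model ID; LLM resolves plugin-provided models by name.
    """
    import llm  # lazy import: the helper can be defined without LLM installed

    model = llm.get_model(model_id)
    return model.prompt(prompt).text()
```

Calling `.text()` on the response triggers execution of the prompt and returns the model's output as a string.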

    And some pelicans

    Let’s generate a pelican!

    llm install llm-openai-via-codex
    llm -m openai-codex/gpt-5.5 'Generate an SVG of a pelican riding a bicycle'

    Here’s what I got back:

    I’ve seen better from GPT-5.4, so I tagged on -o reasoning_effort xhigh and tried again:

    That one took almost four minutes to generate, but I think it’s a much better effort.

    Pelican has gradients now, body is much better put together, bicycle is nearly the right shape albeit with one extra bar between pedals and front wheel, clearly a better image overall.

    If you compare the SVG code (default, xhigh) the xhigh one took a very different approach, which is much more CSS-heavy—as demonstrated by those gradients. xhigh used 9,322 reasoning tokens where the default used just 39.
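For scale, that gap works out to roughly a 240-fold increase in reasoning tokens (using the two figures above):

```python
default_tokens = 39    # reasoning tokens at the default effort
xhigh_tokens = 9_322   # reasoning tokens with -o reasoning_effort xhigh

ratio = xhigh_tokens / default_tokens
print(f"xhigh used ~{ratio:.0f}x the reasoning tokens of the default run")
# → xhigh used ~239x the reasoning tokens of the default run
```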
