Originally published at news.skila.ai
OpenAI shipped the biggest Codex desktop update since launch on April 16. Not a version bump. A rewrite of what the app does.
Computer use on Mac. GPT-Image-1.5 inside the coding flow. An in-app browser that treats page comments as instructions. Memory. And 90+ new plugins dropped in one release.
Weekly developer count jumped from 1.2M in January to 3M now. That's 150% growth in three months from a product that already owned the enterprise coding agent conversation.
Everybody's covering the feature list. Three things nobody's pointing at matter more.
Thing 1: Computer Use Is Background, Not Takeover
Read the headlines and you'd think Codex just seized your Mac. It didn't.
The computer use mode runs alongside you, not instead of you. OpenAI's own phrasing from the April 16 announcement: Codex can "take actions as directed in said applications, and, in the case of Mac users, even do so while you continue manually using your computer simultaneously to your agents working in the background."
That phrase matters. Anthropic's computer use, launched October 2024, requires you to hand over the mouse. Watching the cursor move by itself is jarring and unusable for real work. You go make coffee.
OpenAI flipped the model. Codex now does the Jira ticket update, the Slack thread dig, the screenshot annotation — in a sandbox layer — while your keyboard stays in Cursor or VS Code. You don't stop coding to ask it a question.
The practical impact: Codex is the first mainstream agent that feels like a coworker instead of a robot assistant.
Availability: Mac first. EU and UK users are locked out until OpenAI finishes a regional compliance pass. Windows support is "soon" with no date.
Thing 2: GPT-Image-1.5 Isn't About Pretty Pictures. It's About Closing the Design Loop.
The press angle on GPT-Image-1.5 is generation quality. That misses the point.
The real shift is workflow compression. Before this update, a frontend task looked like: take screenshot, open Figma, draft mockup, export, paste into chat, ask Codex to implement. Five windows, three apps, two copy-pastes.
Now it's: screenshot the bug, tell Codex "show me three redesigns in the same dimensions, then pick your favorite and patch the JSX." One conversation, no context switch.
Real iconography and precise brand colors remain the weakness — the latest Stable Diffusion variants still beat it on 2D art from scratch. But for "make this card 10% taller and swap the accent color," it wins because it never leaves the editor.
Thing 3: The 90+ Plugins Are a Trojan Horse for MCP
OpenAI called it "90+ additional plugins." Look closer. The release bundle has three categories mashed into one number: skills, app integrations, and MCP servers.
This is the first time a major AI vendor has shipped MCP servers as a first-class install experience. Click an integration. It registers. Done. No npm install, no JSON editing, no stdio plumbing.
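For contrast, here is roughly what wiring up an MCP stdio server by hand looked like before one-click installs: a JSON entry written into the client's config file. The "mcpServers" shape and the package name below follow common community convention and are illustrative, not OpenAI's format.

```python
import json

# The manual plumbing a one-click install replaces: a hand-written stdio
# server entry. The client spawns the command and speaks MCP over stdio.
config = {
    "mcpServers": {
        "github": {
            "command": "npx",  # client launches this process itself
            "args": ["-y", "@modelcontextprotocol/server-github"],
            "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"},
        }
    }
}

# This blob had to land, by hand, in exactly the right config file.
print(json.dumps(config, indent=2))
```

Multiply that by every integration, plus token management, and "click, it registers, done" starts to look like a real moat.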
The integration list reads like an enterprise wishlist: Atlassian Rovo for Jira and Confluence, CircleCI and GitLab Issues for CI/CD, Microsoft for Teams and Office.
For developers building on the Model Context Protocol, this is validation at a level the spec hasn't had before. GitHub's official MCP server added Streamable HTTP the same week. The stack is consolidating fast.
The sleeper feature buried in the announcement: the in-app browser now treats webpage comments as agent instructions. Highlight a button, type "this should be disabled when the form is invalid," and Codex reads it as a task. That's a UX primitive other agent tools will copy within six months.
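To make that primitive concrete, here is a hypothetical sketch — none of this is OpenAI's API — of how a highlighted element plus a typed comment could be packaged as an agent task:

```python
from dataclasses import dataclass

# Hypothetical shape for the "comment as instruction" primitive.
@dataclass
class PageAnnotation:
    selector: str   # CSS selector of the highlighted element
    comment: str    # what the user typed on the page

def to_task(a: PageAnnotation) -> str:
    """Render the annotation as a natural-language task for the agent."""
    return f"In the element matching `{a.selector}`: {a.comment}"

note = PageAnnotation("button.submit", "disable this when the form is invalid")
print(to_task(note))
```

The interesting part is the binding: the comment arrives already anchored to a specific element, so the agent skips the "which button do you mean?" round trip.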
What the Memory Feature Actually Does
Preview memory shipped alongside the big three. It's not ChatGPT-style trivia recall. It's a behavior model.
Codex now remembers your corrections. Tell it "I prefer tabs over spaces" once and it stops asking. Correct its import sort style twice and it internalizes the pattern for every future file.
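As an illustration of that behavior model — assumed mechanics, not OpenAI's implementation — a correction that repeats past a small threshold gets promoted to a standing preference:

```python
from collections import Counter

THRESHOLD = 2  # "correct it twice and it sticks" (assumed, for illustration)

class PreferenceMemory:
    def __init__(self):
        self.corrections = Counter()  # counts repeated (key, value) corrections
        self.learned = {}             # preferences promoted to standing rules

    def correct(self, key: str, value: str) -> None:
        self.corrections[(key, value)] += 1
        if self.corrections[(key, value)] >= THRESHOLD:
            self.learned[key] = value  # stop asking; apply everywhere

mem = PreferenceMemory()
mem.correct("import_sort", "stdlib-first")
assert "import_sort" not in mem.learned   # one correction: not yet learned
mem.correct("import_sort", "stdlib-first")
assert mem.learned["import_sort"] == "stdlib-first"  # second time: internalized
```

The threshold is the whole design question: too low and one-off quirks calcify into rules, too high and the agent keeps asking.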
The catch: memory is not yet available to Enterprise, Education, EU, or UK users. And unlike ChatGPT's memory, there's no per-project isolation.
Who This Actually Kills
Not Cursor. Cursor owns the "IDE with AI" category and this update doesn't invade it.
The real casualty is the middle layer: standalone agent apps that were trying to sit between your terminal and your ticketing system. Tools that marketed "autonomous engineer on your desktop" now have to explain why you'd use them when Codex is free with a ChatGPT subscription.
The 3M Weekly Developer Number
OpenAI confirmed 3M weekly developers use Codex. That's roughly 10% of the global professional developer population. GitHub Copilot reported about 10M paid seats in its last update. Codex is closing on that scale with no separate seat price, bundled into ChatGPT Plus and Pro accounts.
The implication for hiring: "familiar with Codex" is now table stakes for any AI-forward engineering role. Expect it on job specs by July.
Frequently Asked Questions
What is the OpenAI Codex desktop app?
Codex is OpenAI's desktop coding agent for ChatGPT Plus and Pro subscribers, available on macOS and Windows. It runs an AI agent that can write code, browse your codebase, execute shell commands, and, as of April 16, 2026, control other Mac apps, generate images, and use 90+ plugins.
How does Codex computer use compare to Anthropic's computer use?
Anthropic's version takes over your mouse and keyboard, so you can't work while it runs. Codex runs computer actions in the background while you keep using your machine.
How much does OpenAI Codex cost in April 2026?
Codex is included in ChatGPT Plus ($20/month) and ChatGPT Pro ($200/month). The 90+ plugins and computer use mode are included at no extra charge.
What are the best alternatives to OpenAI Codex in 2026?
The closest IDE-native alternative is Cursor. For agent-style coding, Claude Code and GitHub Copilot Workspace cover different slices. For visual app building, Lovable 2.0 handles full-stack generation from prompts.