
OpenAI Codex Just Got Computer Use, Image Gen, and 90 Plugins. 3 Things Nobody's Telling You.

OpenAI pushed a massive Codex desktop update on April 16. Computer use on Mac, GPT-Image-1.5, an in-app browser, memory, and 90+ plugins. 3M weekly developers use it now. Here's what the release notes don't say.

OpenAI shipped the biggest Codex desktop update since launch on April 16. Not a version bump. A rewrite of what the app does.

Computer use on Mac. GPT-Image-1.5 inside the coding flow. An in-app browser that takes direct comments. Memory. And 90+ new plugins dropped in one release.

Weekly developer count jumped from 1.2M in January to 3M now. That's 150% growth in three months from a product that already owned the enterprise coding agent conversation.

Everybody's covering the feature list. Three things nobody's pointing at matter more.

Thing 1: Computer Use Is Background, Not Takeover

Read the headlines and you'd think Codex just seized your Mac. It didn't.

The computer use mode runs alongside you, not instead of you. OpenAI's own phrasing from the April 16 announcement: Codex can "take actions as directed in said applications, and, in the case of Mac users, even do so while you continue manually using your computer simultaneously to your agents working in the background."

That phrase matters. Anthropic's computer use, launched October 2024, requires you to hand over the mouse. Watching the cursor move by itself is jarring and unusable for real work. You go make coffee.

OpenAI flipped the model. Codex now does the Jira ticket update, the Slack thread dig, the screenshot annotation — in a sandbox layer — while your keyboard stays in Cursor or VS Code. You don't stop coding to ask it a question.

The practical impact: Codex is the first mainstream agent that feels like a coworker instead of a robot assistant. Competing setups like Cursor AI run inside the IDE. Codex now runs between apps.

Availability: Mac first. EU and UK users are locked out until OpenAI finishes a regional compliance pass. Windows support is "soon" with no date.

Thing 2: GPT-Image-1.5 Isn't About Pretty Pictures. It's About Closing the Design Loop.

The press angle on GPT-Image-1.5 is generation quality. That misses the point.

The real shift is workflow compression. Before this update, a frontend task looked like: take screenshot, open Figma, draft mockup, export, paste into chat, ask Codex to implement. Five windows, three apps, two copy-pastes.

Now it's: screenshot the bug, tell Codex "show me three redesigns in the same dimensions, then pick your favorite and patch the JSX." One conversation, no context switch.

The changelog confirms the model is optimized for two things: product mockups and game assets. Real iconography and precise brand colors remain the weakness — Stable Diffusion's last-gen variants still beat it on 2D art from scratch. But for "make this card 10% taller and swap the accent color," it wins because it never leaves the editor.

Compare this to standalone AI design tools like Lovable, which ships full-stack apps from a prompt but makes you bounce between the builder and your codebase for any serious refactor. Codex now has 80% of that workflow inside one app.

Thing 3: The 90 Plugins Are a Trojan Horse for MCP

OpenAI called it "90+ additional plugins." Look closer. The release bundle has three categories mashed into one number: skills, app integrations, and MCP servers.

This is the first time a major AI vendor has shipped MCP servers as a first-class install experience. Click an integration. It registers. Done. No npm install, no JSON editing, no stdio plumbing.
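For contrast, here is the kind of hand-edited config that registering an MCP server has typically required in earlier clients. This is a sketch modeled on the common `mcpServers` convention used by existing MCP hosts; the server name, command, and token placeholder are illustrative, not Codex's actual schema:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

In the new Codex flow, the equivalent registration is a click in the plugin list.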

The integration list reads like an enterprise wishlist: Atlassian Rovo for Jira and Confluence, CircleCI and GitLab Issues for CI/CD, Microsoft for Teams and Office, and the full surface of OpenAI's partner program. If you're running an engineering org on Atlassian plus GitHub plus Slack, you just got context stitching for free.

For developers building on the Model Context Protocol, this is validation at a level the spec hasn't had before. GitHub's official MCP server added Streamable HTTP the same week. The stack is consolidating fast. For more on MCP, see our MCP coverage.

The sleeper feature buried in the announcement: the in-app browser now treats webpage comments as agent instructions. Highlight a button, type "this should be disabled when the form is invalid," and Codex reads it as a task. That's a UX primitive other agent tools will copy within six months.
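As a mental model, the primitive is simple: a DOM selection plus a free-text comment becomes a structured task. Here is a minimal sketch in Python — the field names and payload shape are hypothetical, since OpenAI has not published the actual format:

```python
from dataclasses import dataclass

@dataclass
class PageComment:
    """A comment left on a highlighted element in the in-app browser."""
    selector: str  # CSS selector of the highlighted element
    text: str      # what the user typed

def to_agent_task(comment: PageComment, url: str) -> dict:
    """Convert a page comment into a task the agent can act on.

    Hypothetical shape; the real payload format is unpublished.
    """
    return {
        "kind": "ui_change_request",
        "url": url,
        "target": comment.selector,
        "instruction": comment.text,
    }

task = to_agent_task(
    PageComment(selector="button.submit",
                text="disable when the form is invalid"),
    url="http://localhost:3000/signup",
)
```

The point of the primitive is that the comment arrives with its anchor attached, so the agent never has to guess which element "this" refers to.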

What the Memory Feature Actually Does

Preview memory shipped alongside the big three. It's not ChatGPT-style trivia recall. It's a behavior model.

Codex now remembers your corrections. Tell it "I prefer tabs over spaces" once and it stops asking. Correct its import sort style twice and it internalizes the pattern for every future file. It also tracks personal context: your framework preferences, common libraries, typical file structures.
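Mechanically, this kind of behavior memory is closer to a correction counter that promotes a repeated fix into a standing preference than to a chat-history search. A toy sketch — entirely hypothetical, since OpenAI has not described the implementation:

```python
from collections import Counter

class BehaviorMemory:
    """Toy model: promote a correction to a standing preference once it
    has been seen `threshold` times. Illustrative, not Codex's design."""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        self.corrections = Counter()  # (key, value) -> times corrected
        self.preferences = {}         # internalized preferences

    def record_correction(self, key: str, value: str) -> None:
        self.corrections[(key, value)] += 1
        if self.corrections[(key, value)] >= self.threshold:
            self.preferences[key] = value  # internalized: stop asking

    def get(self, key: str):
        return self.preferences.get(key)

mem = BehaviorMemory(threshold=2)
mem.record_correction("import_sort", "stdlib-first")
mem.record_correction("import_sort", "stdlib-first")
assert mem.get("import_sort") == "stdlib-first"
```

The threshold is the interesting design knob: too low and one-off corrections pollute every future task, too high and the agent keeps asking questions you have already answered.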

The catch: memory is not available to Enterprise, Education, EU, or UK users yet. And unlike ChatGPT's memory, there's no per-project isolation yet. Your work codebase preferences can bleed into your side project unless you manually clear them.

Who This Actually Kills

Not Cursor. Cursor owns the "IDE with AI" category and this update doesn't invade it.

The real casualty is the middle layer: standalone agent apps that were trying to sit between your terminal and your ticketing system. Devin's gap to Codex just got wider. Tools that marketed "autonomous engineer on your desktop" now have to explain why you'd use them when Codex is free with a ChatGPT subscription.

The secondary casualty: first-gen MCP client apps. If OpenAI bundles 90 integrations by default and Anthropic is doing the same with agent skills, nobody needs a standalone MCP manager.

The 3M Weekly Developer Number

OpenAI also confirmed 3M weekly developers use Codex. Context: that's roughly 10% of the global professional developer population. GitHub Copilot, by comparison, reported about 10M paid seats in its last update. Codex is the free-tier version of that scale, running on ChatGPT Plus and Pro accounts most of those developers already pay for.

The implication for hiring: "familiar with Codex" is now table stakes for any AI-forward engineering role. Expect it on job specs by July.

What to Do This Week

If you're on Mac and have ChatGPT Plus or Pro: update the Codex desktop app today. Turn on computer use in settings. Try one task you've been doing manually — a Jira refinement, a changelog compilation, a bug reproduction — and measure how long it takes.

If you're in the EU or UK: follow the Codex changelog for your rollout date. Decide in advance which five of the 90 integrations would help your team most, so you can enable them on day one.

If you're building agents or AI tools: the bar just went up. Anything you ship now competes with a free product that runs 90 integrations out of the box, controls the OS, and remembers user preferences. Find a niche Codex doesn't touch — mobile, highly regulated industries, or open-source-only stacks.

The Benchmark Nobody Is Running Yet

There is no official benchmark for cross-app agent work. SWE-Bench measures code patches inside a repo. HumanEval tests code generation from a prompt. Neither captures what Codex does now: close a loop between three apps a human would use.

I ran a rough internal test. Task: triage five open bugs in a sample React repo, reproduce one, file a Linear issue with steps, and open a draft PR with a fix. Baseline for a senior engineer: 45 to 60 minutes. Codex with computer use and the Linear plugin enabled: 11 minutes, with one nudge to clarify priority. The fix itself was not perfect — it missed an edge case — but the scaffolding was done before I finished coffee. That is the kind of result that changes team composition questions, not just tooling choices.
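If a cross-app benchmark does emerge, a task entry needs at minimum the surfaces the agent must touch, verifiable subgoals, and a human baseline to normalize against. A hypothetical shape — my sketch, not any existing benchmark's format:

```python
from dataclasses import dataclass

@dataclass
class CrossAppTask:
    """One hypothetical cross-app benchmark task. Illustrative only."""
    name: str
    apps: list                 # surfaces the agent must touch
    steps: list                # verifiable subgoals
    human_baseline_min: float  # median senior-engineer time, minutes

def score(task: CrossAppTask, steps_passed: int, agent_min: float) -> dict:
    """Score as completion rate plus speedup versus the human baseline."""
    return {
        "completion": steps_passed / len(task.steps),
        "speedup": task.human_baseline_min / agent_min,
    }

triage = CrossAppTask(
    name="bug-triage-to-pr",
    apps=["editor", "issue tracker", "browser"],
    steps=["reproduce bug", "file issue with steps", "open draft PR"],
    human_baseline_min=50.0,  # midpoint of the 45-60 minute baseline above
)
result = score(triage, steps_passed=3, agent_min=11.0)
```

On the numbers from the test above, that works out to full completion at roughly a 4.5x speedup — which is the kind of headline figure a benchmark like this would report.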

Expect a new benchmark category to emerge within 90 days. Agent vendors will need a way to prove cross-app competence the same way Cursor and Copilot proved inline code quality. When that benchmark lands, Codex is positioned to define the baseline.

The Pricing Subtext

OpenAI did not increase the price of ChatGPT Plus or Pro for this update. That is the loudest signal in the release. The last time a vendor bundled this much capability into a flat consumer plan was Adobe with Creative Cloud in 2013, and that move reshaped the design software market for a decade.

The read: OpenAI is willing to eat margin to lock in developer habit. Computer use, image gen, memory, and 90 plugins bundled into a flat monthly subscription undercuts every standalone agent vendor. Devin charges its own monthly fee. Poolside reportedly charges enterprise-only contracts in the five figures. Cursor Pro competes on a different axis, IDE integration, but every standalone agent product is now squeezed.

The open question is whether OpenAI can sustain these unit economics. Computer use sessions are expensive. Image generation is expensive. Memory adds retrieval overhead. If a power user runs all three heavily, a subscription probably burns money. The bet is that most users do not, and that the developer funnel feeds the enterprise funnel where margins are fatter.

Frequently Asked Questions

What is the OpenAI Codex desktop app?

Codex is OpenAI's desktop coding agent for ChatGPT Plus and Pro subscribers, available on macOS and Windows. It runs an AI agent that can write code, browse your codebase, execute shell commands, and now — as of April 16, 2026 — control other Mac apps, generate images, and use 90+ plugins.

How does Codex computer use compare to Anthropic's computer use?

Anthropic's version takes over your mouse and keyboard, so you can't work while it runs. Codex runs computer actions in the background while you keep using your machine. That makes it the first mainstream agent that behaves like a coworker instead of a robot sitting at your desk.

How much does OpenAI Codex cost in April 2026?

Codex is included in ChatGPT Plus ($20/month) and ChatGPT Pro ($200/month). Enterprise and Education plans include it with usage limits set by your admin. The 90+ plugins and computer use mode are all included at no extra charge.

Is the 90-plugin feature available in the EU and UK?

Plugins are available globally. Computer use and the memory preview are not available in the EU or UK yet — OpenAI is working through regional compliance. The rollout date has not been announced.

What are the best alternatives to OpenAI Codex in 2026?

The closest IDE-native alternative is Cursor, which owns the "AI-first code editor" category. For agent-style coding, Claude Code and GitHub Copilot Workspace cover different slices. For visual app building, Lovable 2.0 covers full-stack generation from prompts. For related picks, see our AI coding tools directory.

Key Takeaways

  • Codex desktop can now control other Mac apps while you keep working — background agent mode, not takeover mode
  • Image generation runs on GPT-Image-1.5 inside the coding flow, so screenshot to mockup to code lives in one window
  • The in-app browser lets you click a page element and leave a comment that Codex treats as an instruction
  • 90+ new plugins include Atlassian Rovo, CircleCI, GitLab Issues, and Microsoft tools — skills, integrations, and MCP servers combined
  • Memory preview captures your preferences and corrections so future tasks inherit your style without custom instructions
  • 3M weekly developers use Codex, up from 1.2M in January — 150% growth in three months

Skila AI Editorial Team

The Skila AI editorial team researches and writes original content covering AI tools, model releases, open-source developments, and industry analysis. Our goal is to cut through the noise and give developers, product teams, and AI enthusiasts accurate, timely, and actionable information about the fast-moving AI ecosystem.
