Cursor, Claude Code, and Codex Are Becoming One Stack Nobody Planned
The AI coding tool market isn't consolidating — it's layering. Cursor orchestrates, Claude Code executes, Codex reviews. Here's why that matters.
The AI coding tool market was supposed to consolidate. One winner would emerge, developers would standardise, and we’d all move on. Instead, the opposite happened. In the first week of April 2026, Cursor shipped a rebuilt agent orchestration interface, OpenAI published an official plugin that runs inside Anthropic’s Claude Code, and early adopters started running all three together — not as competitors, but as layers in a stack nobody designed.
Three layers, one stack
What’s forming looks less like a product choice and more like an infrastructure decision:
Orchestration (Cursor 3). The new Agents Window isn’t an editor with AI bolted on. It’s a control plane for managing fleets of coding agents. You can run parallel agents across local machines, worktrees, and cloud sandboxes from a single sidebar. The /best-of-n command sends the same prompt to multiple models in isolated worktrees and compares outcomes. Design Mode lets you annotate UI elements and point agents at specific problems. Cursor is betting that managing agents matters more than editing files — and they might be right.
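The fan-out-and-compare pattern behind a command like /best-of-n can be sketched in a few lines. This is an illustrative sketch only, not Cursor's implementation: `best_of_n`, the stub backends, and the `score` function are all assumptions, and the real feature runs each agent in an isolated git worktree rather than an in-process thread.

```python
from concurrent.futures import ThreadPoolExecutor

def best_of_n(prompt, models, score):
    """Send the same prompt to several model backends in parallel and
    return the (name, output) pair with the highest score.
    Hypothetical sketch of the fan-out-and-compare pattern."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        results = list(pool.map(lambda m: (m["name"], m["run"](prompt)), models))
    # Pick the winner by whatever comparison function the caller supplies.
    return max(results, key=lambda r: score(r[1]))

# Stub backends standing in for real model calls (pure assumptions).
models = [
    {"name": "model-a", "run": lambda p: p.upper()},
    {"name": "model-b", "run": lambda p: p + "!"},
]
best = best_of_n("fix the bug", models, score=len)
```

In practice the interesting design choice is the scoring step: a real orchestrator compares outcomes (tests passing, diff size, reviewer verdicts) rather than a trivial metric like length.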
Execution (Claude Code, Codex). These are the agents that actually write, review, and debug code. Claude Code has the enthusiasm lead — the Pragmatic Engineer survey put it at 46% “most loved” among 900+ engineers. SemiAnalysis reckons it accounts for roughly 4% of all public GitHub commits. Codex just passed 3 million weekly active users. Neither dominates every scenario, which is precisely why developers are reaching for both.
Review (Codex plugin for Claude Code). This is the newest and most interesting layer. OpenAI’s codex-plugin-cc installs directly inside Claude Code and provides slash commands for code review, adversarial review, and task delegation. The adversarial review specifically pressure-tests auth, data loss, and race conditions. An optional review gate has Codex automatically review Claude’s output before Claude finalises it. That’s OpenAI shipping an official integration into a direct competitor’s product.
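The review-gate idea, one model drafting and a second blocking until it raises no objections, reduces to a simple loop. A minimal sketch under stated assumptions: `review_gate`, the stub writer, and the stub reviewer are hypothetical stand-ins, not the plugin's actual API.

```python
def review_gate(write, review, task, max_rounds=3):
    """Writer/reviewer loop: `write` drafts a solution, `review` returns a
    list of blocking issues (empty means approved). The draft is revised
    until the reviewer approves or the round budget runs out.
    Returns (final_draft, approved)."""
    draft = write(task)
    for _ in range(max_rounds):
        issues = review(draft)
        if not issues:
            return draft, True
        # Feed the reviewer's objections back into the next draft.
        draft = write(task + "\n\nAddress these issues:\n" + "\n".join(issues))
    return draft, False

# Stub writer that leaves a TODO on its first attempt, then fixes it.
def make_stub_writer():
    calls = {"n": 0}
    def write(task):
        calls["n"] += 1
        return "def f(): pass  # TODO" if calls["n"] == 1 else "def f(): return 1"
    return write

def stub_review(draft):
    return ["TODO left in code"] if "TODO" in draft else []

accepted_draft, ok = review_gate(make_stub_writer(), stub_review, "implement f")
```

The gate is the point: nothing ships on the writer's say-so alone, which is what makes cross-provider pairings (one vendor writes, another challenges) interesting.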
Why composability won
OpenAI building a plugin for Anthropic’s product is the most revealing strategic signal in this market. The conventional playbook says lock users in. Build walls. Make switching painful. OpenAI did the opposite, and the economics explain why.
Claude Code has built a large installed base among professional developers. Rather than waiting for those developers to switch, OpenAI embedded Codex where they already work. Every plugin-initiated review generates usage that counts against the developer’s ChatGPT subscription or API key. Zero acquisition cost, incremental billing.
Anthropic’s open MCP-based plugin architecture made this possible — including integrations from competitors. Both companies recognised that developers will use multiple tools regardless. The question isn’t “which tool wins?” It’s “is your tool in the stack or outside it?”
What this means for developers
Model choice becomes infrastructure. Cursor’s /best-of-n treats model selection the way we already treat database or cloud provider selection — an infrastructure decision driven by workload, not brand loyalty. Claude for precision on complex refactors. Codex for throughput on parallelisable tasks. The notion of being “a Claude shop” or “a GPT shop” starts to look as quaint as being “an AWS shop” in a multi-cloud world.
The editor recedes. For forty years, the code editor was the centre of gravity. Cursor’s Agents Window and Google’s Antigravity both challenge that assumption. The orchestration layer is beginning to compete with the editor as the primary interface. The editor’s still there, still useful, but it’s no longer guaranteed to be the default view.
Review goes adversarial. Single-model review was always structurally limited — asking the model that wrote your code to review it is asking someone to grade their own homework. Cross-provider review, where one model writes and another challenges, is the most promising mitigation for AI sycophancy yet. As this pattern matures, it’ll move from developer workflow to CI/CD pipeline.
The unanswered question
A coding agent stack is forming faster than most expected. But whether it stabilises or continues to fracture is anyone’s guess. GitHub Copilot is evolving its own agent capabilities. AWS Kiro shipped an agent-first IDE. Every major cloud provider now has a position.
The next phase is determined by which layers become commodities and which become control points. My bet? Orchestration is the new platform play. Execution becomes a commodity. Review becomes the differentiator. And the developers who learn to compose these tools — rather than pledging allegiance to one — will be the ones shipping faster.