May 8, 2026
8 mins
What Anthropic Actually Announced at Code with Claude 2026

Matt George
Managing Director
AI
Claude

Key Findings
No new model was announced — the entire conference focused on making existing models work smarter
Claude Code's 5-hour rate limits doubled immediately, backed by a new SpaceX compute partnership (220,000+ NVIDIA GPUs)
Claude Dreaming: Claude now self-improves overnight by reviewing past sessions and writing learnings to memory
Claude Routines: async automations — configure once, wake up to finished pull requests
Outcomes: write success criteria in a markdown file; Claude grades itself and iterates until it meets the bar
Anthropic's CPO Ami Vora walked onto the stage at Code with Claude 2026 in San Francisco and opened with something that would have caused panic at any other AI company.
No new model today.
The room didn't boo. They leaned in. Because the 47 minutes that followed made it clear Anthropic didn't need one. Here's everything that was announced — pulled directly from the keynote on May 6, 2026.
Code with Claude 2026: Rate Limits Doubled
The first announcement landed immediately. Anthropic partnered with SpaceX to use the full capacity of the Colossus One data center — over 220,000 NVIDIA GPUs and 300 megawatts of compute.
The direct result for developers: Claude Code's five-hour rate limits doubled for Pro, Max, Team, and seat-based Enterprise plans. API limits for Claude Opus were also raised considerably.
This wasn't a roadmap item. It went live the same day as the conference.
What Is Claude Dreaming?
Claude Dreaming is a new capability in Claude Managed Agents that lets Claude inspect its own previous sessions overnight, identify what it missed, and write those learnings to memory — so every future session starts smarter, without any re-prompting from you.
According to Anthropic (Code with Claude 2026): "With dreaming, Claude is actually able to self-learn. It's able to inspect over its previous sessions, figure out skills that it missed, lessons that it should have learned, and actually apply those directly to memory on its own."
The demo was hard to argue with. A fictional startup called Lumara built a multi-agent system to simulate landing drones across six moon sites. First run: four out of six handled correctly. After Dreaming ran overnight and wrote a "descent playbook" to memory, the next run hit six out of six — with no regression on the ones already solved. No prompting. No fine-tuning. Claude reviewed its own work and decided what to learn.
That is a different kind of system than what existed twelve months ago. Dreaming is currently available in the developer console as a research preview.
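The mechanism described above can be sketched generically: review past session logs, extract lessons from what failed, and fold them into a persistent memory that future runs load first. This is a conceptual illustration of the loop, not Anthropic's implementation; every name in it is an assumption.

```python
# Conceptual sketch of a Dreaming-style loop (illustrative only): review
# past sessions, extract lessons, and persist them so future runs start
# with accumulated knowledge. All names here are assumptions.

def review_session(transcript: list[str]) -> list[str]:
    """Stand-in for the model's self-review: flag steps marked as failures."""
    return [step for step in transcript if step.startswith("FAIL:")]

def dream(sessions: list[list[str]], memory: list[str]) -> list[str]:
    """Fold lessons from every past session into memory, skipping duplicates."""
    for transcript in sessions:
        for failure in review_session(transcript):
            lesson = "Avoid: " + failure.removeprefix("FAIL:").strip()
            if lesson not in memory:
                memory.append(lesson)
    return memory

sessions = [
    ["OK: site A landed", "FAIL: descended too fast at site B"],
    ["FAIL: descended too fast at site B", "OK: site C landed"],
]
memory = dream(sessions, [])
print(memory)  # ['Avoid: descended too fast at site B']
```

The key property, mirroring the demo, is that learning accumulates without regressing: lessons are only ever added, never overwritten, so runs that already succeed stay solved.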
What Are Claude Routines? Async Code That Runs Without You
Claude Routines are async automations in Claude Code. You configure a trigger once — a webhook, a schedule, an API event — and Claude Code kicks off automatically without you starting it. Routines run either locally on your machine or on remote Claude compute.
Boris Cherny, who built Claude Code, described them on stage as "a higher order prompt": the developer equivalent of a higher-order function.
Here's what that looks like in practice. A teammate files a GitHub issue overnight. A Routine watching the repo picks it up, kicks off the work, and by morning there's a pull request ready to review and merge. You didn't open a terminal. Boris said on stage that most of his own code today is written by Routines he set up — he's the one creating the automation that does the prompting, not doing the prompting himself.
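The keynote didn't show what a Routine definition actually looks like, but the flow it described suggests a shape like the one below. Every field name here is a guess made for illustration, not a documented schema.

```python
# Hypothetical Routine definition (illustrative only; Anthropic has not
# published a Routines schema). It mirrors the keynote description: a
# trigger configured once, then Claude Code kicks off without you.

routine = {
    "name": "fix-new-issues-overnight",
    "trigger": {"type": "webhook", "event": "github.issue.opened"},
    "run_on": "remote",  # or "local", per the keynote
    "prompt": "Read the new issue, implement a fix, and open a pull request.",
    "max_runtime_minutes": 60,
}

# The "higher order prompt" idea: the trigger decides *when* the prompt
# fires, so the developer writes the automation, not each prompt.
assert routine["trigger"]["type"] in {"webhook", "schedule", "api_event"}
```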
Developers can now wake up to finished work. That's not a feature. That's a shift in how software gets written.
Outcomes: You Define Done, Claude Figures Out How to Get There
Claude Outcomes is a new addition to Managed Agents that solves the most common complaint from anyone who's built with AI: getting the model to finish to your standard, not its own approximation.
You write a plain markdown file defining your success criteria. Claude runs a session, grades its output using a separate grader agent, and keeps iterating — up to a maximum iteration count you set — until it meets the rubric.
In the Lumara demo, the outcomes file specified three conditions: soft touchdowns, clear landing ground, and enough return fuel. Claude iterated until all three were met, with no human checking in between. Just a definition of done.
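Anthropic didn't show the exact file from the demo, but based on that description, a success-criteria file for the Lumara scenario might look something like this (the format and wording are assumptions, not Anthropic's published spec):

```markdown
# Outcomes: Lunar Landing Simulation

The grader agent checks every run against these criteria and iterates
until all of them pass, or the max iteration count is hit.

- [ ] Every drone achieves a soft touchdown (descent speed under threshold)
- [ ] Each landing site is confirmed clear before descent begins
- [ ] Every drone retains enough fuel for the return trip

Max iterations: 10
```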
Outcomes is in public beta inside Claude Managed Agents.
Multi-Agent Orchestration Is Now in Public Beta
One of the bigger Code with Claude 2026 announcements: multi-agent orchestration for Claude Managed Agents is in public beta. You can run fleets of specialized agents — each with its own independent context window — coordinating on a single complex task.
The Lumara demo used a Commander, a Detector, and a Navigator running in parallel, merging results at the end. Anthropic found this architecture outperforms a single agent trying to hold everything at once — separate context windows, merged output, better performance.
For anyone building multi-step workflows — design, copy, dev, QA — this is the architecture worth testing now.
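The fan-out/merge pattern the demo used can be sketched in a few lines: specialized agents run in parallel, each with its own context, and a final step merges their results. The agent bodies below are stubs standing in for real model calls; the names mirror the demo.

```python
# Generic fan-out/merge orchestration sketch (not the Managed Agents API):
# each agent works in its own context, and results are merged at the end.

from concurrent.futures import ThreadPoolExecutor

def commander(task): return {"plan": f"sequence landings for {task}"}
def detector(task):  return {"hazards": ["boulder field at site 4"]}
def navigator(task): return {"routes": ["site 1", "site 2", "site 3"]}

def orchestrate(task: str) -> dict:
    agents = [commander, detector, navigator]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = pool.map(lambda agent: agent(task), agents)
    merged = {}
    for result in results:  # each agent kept its own context; merge here
        merged.update(result)
    return merged

print(orchestrate("six moon sites"))
```

The design choice worth noting is the one Anthropic called out: no single context window has to hold everything, which is why the fleet outperforms one agent doing it all.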
CI Auto Fix: No More Babysitting Your Pull Requests
CI Auto Fix watches your pull requests and handles the noise: code review comments, failed CI tests, and merge conflicts — without you watching the PR.
In the demo, Claude diagnosed a CI failure as known network timeout flakiness, retried the job, and cleared it. From the keynote: "The engineer who owns the PR is never going to see a red X."
In the Claude Code codebase itself, they don't just retry flaky tests. They have Claude fix the root cause every time.
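The retry-on-flake behavior from the demo reduces to a simple pattern, sketched below. This is illustrative logic, not the actual CI Auto Fix implementation: a known-flaky failure gets retried, anything else is surfaced for a root-cause fix.

```python
# Minimal sketch of retry-on-known-flake behavior (illustrative only).

KNOWN_FLAKES = {"network timeout"}

def run_ci(job, attempts: int = 3) -> str:
    for attempt in range(1, attempts + 1):
        ok, reason = job(attempt)
        if ok:
            return f"green on attempt {attempt}"
        if reason not in KNOWN_FLAKES:
            return f"needs a root-cause fix: {reason}"
    return "still red after retries"

# A job that flakes once on a network timeout, then passes.
flaky = lambda attempt: (attempt > 1, "network timeout")
print(run_ci(flaky))  # green on attempt 2
```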
The Advisor Strategy: Frontier Quality at 5x Lower Cost
The Advisor Strategy is an architecture pattern now built into the Claude platform. A smaller model — Haiku or Sonnet — handles execution. When it needs guidance, it calls Opus as an advisor. The smaller model performs better than it would running alone, and costs less than running Opus end to end.
According to Anthropic (May 2026): Eve Legal implemented this and reported "frontier model quality at five times lower cost."
This is available now in the Messages API — update your tools array and you're using it.
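The Messages API and its tools array are real, but Anthropic didn't show the exact schema for the advisor entry, so the request shape below is an assumption made for illustration. Treat the "ask_advisor" tool as a placeholder, not a documented tool type.

```python
# Hypothetical Advisor Strategy request shape (the "ask_advisor" tool entry
# is an assumed schema, not Anthropic's documented API).

request = {
    "model": "claude-haiku",  # the small model does the execution
    "max_tokens": 1024,
    "tools": [
        {
            "name": "ask_advisor",  # placeholder: escalate hard decisions to Opus
            "description": "Consult Claude Opus when unsure how to proceed.",
            "input_schema": {
                "type": "object",
                "properties": {"question": {"type": "string"}},
                "required": ["question"],
            },
        }
    ],
    "messages": [{"role": "user", "content": "Summarize this contract."}],
}

assert request["tools"][0]["name"] == "ask_advisor"
```

The economics follow from the split: the cheap model handles the bulk of the tokens, and Opus is only billed for the occasional advisory call.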
The Mercado Libre Stat That Should Change How You Think About AI Timelines
Shopify rolled Claude Code out across engineering, design, product, and data science. Director of Applied AI Andrew McNamara called the speed "just crazy."
But the number that stopped people at Code with Claude 2026 came from Mercado Libre — Latin America's largest e-commerce platform. 23,000 engineers. All on Claude Code. More than 500,000 pull requests reviewed. Over 9,000 apps modernized. And Oscar Mullin, who leads technology, is targeting 90% autonomous coding — with a fully agent-driven PR loop — by Q3 2026.
Next quarter.
Kat Wu, Head of Claude Code, said the detail she loved most wasn't that number. It was the managers and VPs getting their hands dirty in the codebase again. People who spent years in roadmap meetings are back building, because the barrier is finally low enough.
What It All Adds Up To
Boris Cherny closed the keynote with one line worth writing down: "The capabilities are ready. The gap left is how fast we put them to work."
That's the whole conference. Dreaming, Routines, Outcomes, CI Auto Fix — none of these make Claude smarter. They close the distance between what Claude can already do and what actually ships.
Synchronous coding, Cherny said on stage, is now just a slice of what's happening at any given moment. The rest is async, automated, self-improving. And it's not coming — it's already here.
If you build on Claude, this is the shift worth understanding now.
Frequently Asked Questions
What was announced at Code with Claude 2026?
Anthropic announced eight major updates at their May 6, 2026 developer conference: doubled Claude Code rate limits via a SpaceX compute deal, Claude Dreaming (overnight self-improvement), Claude Routines (async automations), Outcomes (success-criteria-driven iteration), multi-agent orchestration in public beta, CI Auto Fix, Claude Code on iOS and Android, and the Advisor Strategy for lower-cost frontier-quality outputs.
What is Claude Dreaming?
Claude Dreaming is a research preview feature in Claude Managed Agents. It allows Claude to inspect its own previous agent sessions, extract lessons from what went wrong or was missed, and write those learnings to memory — so future sessions start with accumulated knowledge from past runs, without any extra prompting from the developer.
What are Claude Routines?
Claude Routines are async automations in Claude Code. You configure a trigger once — a webhook, a scheduled time, or an API event — and Claude Code kicks off automatically. Routines can run locally on your machine or on remote Claude compute, letting you wake up to pull requests that are ready to merge.
Did Anthropic launch a new model at Code with Claude 2026?
No. CPO Ami Vora opened the keynote by saying: "Today is about how we're making our products work better for you." The conference had no new model launch — it focused entirely on workflow features, platform improvements, and compute infrastructure.
What is the Claude Advisor Strategy?
The Advisor Strategy is an architecture pattern where a smaller model (Haiku or Sonnet) handles execution and calls Opus as an on-demand advisor when it needs help deciding what to do next. Eve Legal used it to achieve frontier model quality at 5x lower cost. It's available now in the Messages API.
What is Claude Outcomes?
Claude Outcomes is a public beta feature in Claude Managed Agents. You write success criteria in a plain markdown file. Claude runs, grades its own output against that rubric using a grader agent, and iterates until it meets the criteria or hits your maximum iteration count.

