Anthropic Lifts Claude Code Limits via SpaceX Deal

Anthropic is raising Claude Code usage caps after striking a new compute deal with SpaceX, easing throttling complaints from developers and signaling an unusual entry by Elon Musk's space company into the AI infrastructure market.


Anthropic is raising usage limits on Claude Code, its agentic coding tool, and credits a newly signed compute deal with SpaceX for the additional capacity. The announcement marks both a relief for developers who have been bumping against aggressive rate caps and an unexpected entry by Elon Musk's space company into the increasingly crowded AI infrastructure supply chain.

What's changing for Claude Code users

Claude Code, Anthropic's terminal-based coding agent, has become one of the company's fastest-growing products since launch. It uses Claude's long-context reasoning to read entire codebases, plan multi-step edits, run shell commands, and iterate on tests autonomously. That workflow burns through tokens quickly — a single agent run can consume hundreds of thousands of tokens as the model reads files, reasons over them, and produces diffs.
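The token burn described above can be sketched with back-of-envelope arithmetic. The figures below are purely illustrative assumptions for a hypothetical agent run, not Anthropic's numbers:

```python
# Rough estimate of token consumption in a single agentic coding run.
# Every number here is an illustrative assumption, not a published figure.

def estimate_run_tokens(files_read=20, tokens_per_file=3_000,
                        reasoning_steps=15, tokens_per_step=2_000,
                        diffs=10, tokens_per_diff=800):
    """Approximate total tokens: context read + reasoning + generated output."""
    read = files_read * tokens_per_file        # codebase context pulled into the window
    think = reasoning_steps * tokens_per_step  # intermediate reasoning and tool calls
    write = diffs * tokens_per_diff            # produced patches
    return read + think + write

total = estimate_run_tokens()
print(f"~{total:,} tokens per run")  # ~98,000 tokens under these assumptions
```

Even with these modest assumptions, one run lands near the hundreds-of-thousands range the article cites, which is why a handful of always-on sessions can exhaust a weekly cap quickly.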

The result has been chronic throttling. Pro and Max subscribers have publicly complained about hitting weekly caps within hours of heavy use, and Anthropic tightened limits earlier this year after a small number of power users were reportedly running Claude Code 24/7. With the new SpaceX-backed capacity, Anthropic says it can now loosen those restrictions across paid tiers, giving developers meaningfully more headroom for agentic sessions.

Why SpaceX?

The more surprising element is the supplier. SpaceX is not historically an AI compute vendor, but the company operates significant data center and networking infrastructure to support Starlink's global ground operations and has been investing in GPU capacity. Reports over the past year have indicated SpaceX is building out AI-capable facilities, partly to serve internal needs around satellite imagery, autonomy, and Starlink network optimization — and partly, it now appears, to monetize spare capacity on the merchant market.

For Anthropic, the deal fits a now-familiar pattern: rather than depend on a single hyperscaler, the company is stitching together capacity from multiple providers. Anthropic already runs heavily on Amazon Web Services (its largest investor and primary cloud partner), uses Google Cloud TPUs, and recently struck a deal with xAI's Colossus cluster. Adding SpaceX extends that diversification further and reduces single-vendor risk during a period when GPU supply remains the binding constraint on frontier model deployment.

The compute crunch behind the headline

This announcement is a window into how severe the inference compute shortage has become for frontier labs. Training new models gets the headlines, but it is inference (serving customer requests) that is squeezing capacity. Agentic products like Claude Code are particularly demanding because they generate long chains of tool calls and intermediate reasoning steps, often producing ten times or more the tokens of a standard chat interaction for each user action.

Anthropic CEO Dario Amodei has repeatedly said that demand for Claude has outstripped supply, and the company has explicitly traded off raw capacity against rate-limit generosity. By bringing on additional inference clusters from SpaceX, Anthropic can serve more concurrent agent sessions without degrading latency, which is critical for tools that interact with developers in real time.

Implications for the broader AI stack

The deal also signals a structural shift: non-traditional infrastructure players are becoming AI compute providers. SpaceX joining the merchant GPU market alongside CoreWeave, Lambda, Crusoe, and the major hyperscalers expands the supplier base for AI labs. For media-focused AI workloads — including video generation, voice cloning, and synthetic media pipelines that share the same GPU bottlenecks — this matters. More compute providers mean more capacity available to power the generative video and audio tools that increasingly compete for the same H100 and H200 inventory.

It also raises interesting questions about Musk's position in the AI ecosystem. xAI, his AI company, competes directly with Anthropic. Yet SpaceX, also Musk-controlled, is now selling compute to that same competitor — and Anthropic is separately consuming xAI's Colossus capacity. The lines between rival, supplier, and customer are blurring fast.

What to watch next

For developers, the immediate question is how much extra headroom the looser limits actually deliver in practice; Anthropic has not published exact token caps. For the industry, the bigger story is whether SpaceX formalizes a broader AI compute business, and whether other unconventional infrastructure operators follow. Either way, the era of single-cloud AI deployments is clearly over for frontier labs.


Stay informed on AI video and digital authenticity. Follow Skrew AI News.