Claude hitches ride on SpaceX's datacenter capacity

The Register / 5/7/2026


Key Points

  • Claude usage is reportedly backed by SpaceX's datacenter capacity (Colossus compute), loosening the rate limits that previously applied.
  • This improves access to the compute needed for inference and could raise the overall quality of the service experience.
  • For generative AI/LLM providers, securing compute infrastructure is a recurring bottleneck, and relationships with datacenter suppliers can directly shape service ceilings.
  • Going forward, the supply of compute (GPUs, servers, operational capacity) is worth watching as a driver of changes in each vendor's model capabilities and rate-limit policies.


Compute from Colossus leads to relaxed limits

Thomas Claburn

Anthropic is partnering with SpaceX to ease capacity constraints that have stranded Claude customers, a gesture that may soothe developer discontent about service availability and cost.

Ami Vora, chief product officer at Anthropic, announced the expanded rate limits during Code for Claude, a developer event livestreamed from San Francisco.

"As of today, we are increasing rate limits for developers on Claude Code and the Claude Platform," said Vora. "More specifically, we are doubling Claude Code's five-hour rate limits for Pro, Max, Team, and seat-based enterprise plans. And we're raising our API limits considerably for Claude Opus."

Anthropic is also ending its peak hours limit reduction on Claude Code for Pro and Max accounts.

The AI biz is able to do this, she explained, thanks to a partnership with SpaceX that expands available inference capacity. Anthropic has struck a deal to use "all the capacity of [SpaceX’s] Colossus 1 data center."

According to SpaceX, "Colossus 1 features over 220,000 Nvidia GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators." 

The deal adds more than 300 megawatts of new capacity within the month and follows similar compute arrangements with Amazon and Google/Broadcom.

The company's insatiable hunger for processing power may even take it into space. Anthropic says that it "expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity."

In recent months, Anthropic has struggled to meet unexpected demand for Claude services – its models became sufficiently capable to win over skeptical developers, and usage patterns shifted as a result of the popularity of OpenClaw's long-running agents.

"Year over year, API volume is up nearly 17x on the cloud platform," said Vora. "And on Claude Code, the average developer is now spending 20 hours per week running Claude."

Amid this growing popularity, Anthropic has also wrestled with bugs that affected model performance.

During her presentation, Vora tempered expectations by noting that no new model would be announced. Instead, she presided over a review of new and recent Claude features in an effort to frame model improvements as exponential.

The salient exponent here would be two – the doubling of Claude's five-hour rate limits. Model performance, as measured by benchmarks, has been incremental. Opus 4.7 is a few percentage points better than Opus 4.6 in various measurements, not twice as capable or more. 

That didn't stop Vora from claiming, "even though model capabilities are improving on an exponential, most organizations are still adopting AI on a linear path." 

Vora's use of "exponential" may be more of a thematic framing device than a literal assertion of progress, a device to draw a contrast between Claude's capabilities and a more cautious pace of corporate AI adoption. She cast the upcoming feature review as an opportunity for customers to see where Claude development is headed, "So you can plan for it and ride the exponential with us."

The remainder of the presentation consisted of a summary of recent Claude feature improvements.

These include: multi-agent orchestration, outcomes, and Dreaming – a capability that showed up in the recent Claude Code source leak.

"With Dreaming," explained Angela Jiang, head of product for the Claude platform, "Claude is actually able to self-learn. It's able to actually inspect over its previous sessions, figure out skills that it missed, lessons it should have learned, and actually apply those directly to memory on its own."

Boris Cherny, head of Claude Code, took a turn on stage to remind everyone about Routines, a way to trigger and run Claude jobs locally or on cloud servers.

"Routines can be run on a schedule, they can be kicked off by webhooks, or they can even be kicked off by arbitrary API calls, you can run them locally on your machine or on remote cloud compute," he said.
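Anthropic hasn't published the Routines API shown on stage, but the trigger pattern Cherny describes (one job body that can be started by a schedule, a webhook, or a direct call) can be sketched generically. Everything below is illustrative: `run_job` and `Hook` are invented names standing in for whatever work a routine would actually do, not the real Routines interface.

```python
import sched
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_job(trigger: str) -> str:
    """Stand-in for the work a routine performs, e.g. running a prompt."""
    return f"job completed (trigger: {trigger})"

# 1. Direct invocation -- the "arbitrary API call" case.
print(run_job("api"))

# 2. Scheduled -- a timer-driven run using the stdlib scheduler.
s = sched.scheduler(time.monotonic, time.sleep)
s.enter(0.1, 1, lambda: print(run_job("schedule")))
s.run()

# 3. Webhook -- an HTTP endpoint that fires the job on POST.
class Hook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = run_job("webhook").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep stdout clean
        pass

srv = HTTPServer(("127.0.0.1", 0), Hook)
threading.Thread(target=srv.serve_forever, daemon=True).start()
with urllib.request.urlopen(f"http://127.0.0.1:{srv.server_port}/", data=b"{}") as r:
    print(r.read().decode())
srv.shutdown()
```

The point of the pattern is that the job body stays identical regardless of how it is kicked off; only the trigger plumbing changes, which is what lets the same routine run locally or on remote cloud compute.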

Cherny said, "for me personally, a lot of my code nowadays is written by routines. I'm not the one doing the prompting. I'm the one creating a routine that does the prompting."

Who wouldn't want to "ride the exponential" when one's company is paying the API bill? ®