On May 5, 2026, The Information reported that Anthropic has committed to spending $200 billion on Google Cloud over the next five years. Combined with the $100 billion AWS deal Anthropic signed in April and OpenAI's roughly $600 billion in cloud commitments across Microsoft, Oracle, and AWS, two AI companies now account for about half of the $2 trillion in long-term contracts held by the world's largest cloud providers. That concentration is reshaping the supply side of every business AI decision being made today.
What Anthropic Just Committed To
The deal, first reported by The Information and corroborated by Reuters (with coverage syndicated through Investing.com and Yahoo Finance), gives Google Cloud roughly $200 billion of contracted Anthropic spending over five years, with most of the capacity coming online starting in 2027. Reporting tied to Broadcom's earlier disclosures puts the agreement at approximately 3.5 gigawatts of TPU compute capacity, provisioned through Google's partnership with Broadcom.
The Google commitment now sits on top of a stack of similar deals Anthropic has signed in the last year. Anthropic's $100 billion AWS deal covers up to 5 gigawatts of Trainium capacity through 2036. A separate $30 billion Microsoft Azure agreement and a roughly $10 billion NVIDIA arrangement for Grace Blackwell and Vera Rubin systems round out a portfolio that, by public reporting, totals roughly $340 billion in committed compute spend across four providers.
Anthropic's revenue justifies the bet, at least directionally: the company's run-rate revenue surpassed $30 billion in April 2026, up from approximately $9 billion at the end of 2025. Investors appeared to agree, with Alphabet shares up about 2% in extended trading following the report.
Why "Half the Cloud Backlog" Is the Real Story
The dollar figures dominate the headlines, but the more important number is share. According to Cloud Wars analysis of recent earnings disclosures, the four major cloud providers now hold $2 trillion in combined backlog and remaining performance obligations: Microsoft at $627 billion, Oracle at $553 billion, Google Cloud at $462 billion, and AWS at $364 billion. Those figures grew between 49% and 325% year-over-year depending on the provider.
Reporting on the new Anthropic deal notes that the $200 billion accounts for more than 40% of Google Cloud's revenue backlog. Pair that with OpenAI's reported $250 billion Azure commitment, $300 billion Oracle commitment, and $38 billion AWS commitment, and the math is uncomfortable: Anthropic and OpenAI together represent roughly half of every dollar contracted to the world's largest clouds.
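The "roughly half" claim is worth checking against the reported figures themselves. A back-of-the-envelope tally, using only the publicly reported numbers cited above (all estimates, none audited):

```python
# Back-of-the-envelope check of the "half the backlog" claim, using the
# publicly reported figures cited in this article (billions of USD).

backlog = {"Microsoft": 627, "Oracle": 553, "Google Cloud": 462, "AWS": 364}
total_backlog = sum(backlog.values())   # ~$2.006 trillion in combined RPO

anthropic_cloud = 200 + 100 + 30        # Google + AWS + Azure commitments
openai_cloud = 250 + 300 + 38           # Azure + Oracle + AWS commitments

share = (anthropic_cloud + openai_cloud) / total_backlog
print(f"Total backlog: ${total_backlog}B")
print(f"Two-lab share of backlog: {share:.0%}")
```

The result lands at about 46%, which is where "roughly half" comes from; the exact figure moves with how each provider defines remaining performance obligations.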
Our take: This is no longer purely a story about AI vendor strategy. It is a story about cloud strategy where AI vendor decisions are now upstream of every other procurement choice. If your company runs critical workloads on AWS, Azure, Oracle, or Google Cloud, your provider's capacity, pricing, and roadmap are now significantly shaped by what two AI companies in San Francisco need.
The 2027 Capacity Wall
Buried in the reporting is a date that matters more than any dollar figure: most of the Google Cloud capacity tied to this deal does not come online until 2027. Anthropic's AWS Trainium capacity ramps similarly, with significant blocks coming online in late 2026 and through 2027. NVIDIA's Vera Rubin systems are also on a 2027 deployment cadence.
That is not an abstract supply detail. It tells you when AI compute scarcity is likely to ease and when it is not.
Through 2026, frontier-model capacity stays tight. The hyperscalers are pre-selling capacity that does not yet exist. New customers asking for guaranteed throughput on flagship models will continue to face waitlists, regional restrictions, and rate limits.
2027 is when the supply curve bends. If the data center buildouts arrive on schedule, capacity should begin to expand meaningfully. Pricing pressure on commodity inference may follow. Long-context and agentic workloads, which consume tokens an order of magnitude faster than chat, will absorb much of that supply.
Procurement decisions made in 2026 should account for both phases. Locking into multi-year contracts at 2026 scarcity pricing is risky. Building consumption flexibility into AI contracts is now a procurement discipline, not an optional clause.
What Concentration Risk Looks Like for Buyers
The textbook framing is that two large customers commanding half the backlog is an oligopoly waiting to flip into pricing power. The more nuanced view is that concentration risk runs in multiple directions at once.
For Anthropic and OpenAI, dependence on a small set of hyperscalers means any single provider's capacity slip, geopolitical disruption, or pricing change shows up immediately in model availability. That is part of why both companies are spreading bets across AWS, Azure, Google Cloud, and Oracle. Diversification is a hedge, not a luxury.
For hyperscalers, having 40% or more of your backlog depend on one customer is a known risk profile, similar to a traditional enterprise software firm with a few mega-accounts. If Anthropic's revenue trajectory wobbles, Google Cloud's revenue forecast wobbles with it.
For business buyers, the risk is subtler. Your AI vendor's capacity strategy now runs through your own cloud provider in ways that did not matter eighteen months ago. If you are an Azure shop running Claude through AWS, you are exposed to two provider relationships and the integration friction between them. If you are a Google Cloud shop running OpenAI through Azure, the same dynamic applies. Multi-cloud AI is now table stakes, but it imports multi-cloud cost and operational complexity that smaller IT teams are not staffed for. Last week, we covered the procurement implications of OpenAI going multi-cloud after the Microsoft exclusivity ended.
How to Adjust Your AI Procurement This Quarter
Here are the concrete moves businesses should make now:
- Map your AI workloads to cloud regions, not just providers. Capacity is not uniform. Find out where your model provider has actually deployed accelerators that can serve your traffic, and what the regional fallback story is. Generic answers like "we run in us-east-1" are not enough in a 2026 capacity market.
- Build in pricing protection that survives the 2027 supply shift. A multi-year deal at 2026 token prices may look reasonable today and expensive eighteen months from now. Negotiate floor commitments and price-reopener clauses tied to public list-price changes from the same vendor.
- Treat data architecture as compute leverage. Token consumption is the real bill. Teams running tight retrieval pipelines, well-tuned context windows, and disciplined prompt engineering will absorb capacity shocks that less-mature buyers cannot. Investing in robust data pipeline architecture is now a hedge against AI pricing volatility, not just a one-time platform project.
- Hold a second model on warm standby. Multi-model strategy used to be about quality and cost. It is now also about supply assurance. Maintain a benchmarked second-choice model with deployable prompts and minimal swap friction, even if you do not route production traffic to it today.
- Reread your AI vendor contracts. Look for unilateral capacity-adjustment clauses, price-escalation language, and termination terms. Contracts written when frontier model providers were fighting for distribution are different from contracts being written now that those providers have customer leverage.
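The warm-standby idea above reduces to a thin routing layer in practice. The sketch below is illustrative only: `call_primary`, `call_standby`, and `CapacityError` are placeholders for whatever your vendors' actual SDKs provide, and the retry parameters are arbitrary. The point is the control flow, not the names.

```python
import time

class CapacityError(Exception):
    """Stand-in for a provider's rate-limit / capacity-exhausted error."""

def call_primary(prompt: str) -> str:
    # Placeholder for the primary vendor's SDK call; here it always fails
    # to demonstrate the fallback path.
    raise CapacityError("primary over quota")

def call_standby(prompt: str) -> str:
    # Placeholder for the benchmarked second-choice model's SDK call.
    return f"standby answer to: {prompt}"

def complete(prompt: str, retries: int = 2, backoff_s: float = 0.1) -> str:
    """Try the primary model with brief backoff; route to standby on capacity errors."""
    for attempt in range(retries):
        try:
            return call_primary(prompt)
        except CapacityError:
            time.sleep(backoff_s * (2 ** attempt))  # short exponential backoff
    # Primary is capacity-constrained: fall back to the warm-standby model.
    return call_standby(prompt)

print(complete("Summarize Q2 cloud spend"))
```

The discipline that makes this work is not the code but the upkeep: the standby model must stay benchmarked against your evaluation set and its prompts kept deployable, or the fallback path will be stale exactly when you need it.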
What Not to Do
Do not interpret the deal as a sign Claude is locked to Google. Anthropic explicitly maintains AWS Trainium, Google TPU, and NVIDIA GPU footprints. The strategic intent for Anthropic is independence from any one chip or cloud roadmap. The same logic applies to OpenAI.
Do not assume more compute means cheaper compute. Demand has scaled at least as fast as supply. PwC found that 20% of companies are capturing 74% of AI value, and those leaders are not waiting for unit costs to fall. They are buying capacity ahead of need.
Do not treat this as Anthropic-specific. The pattern repeats for OpenAI, will likely repeat for the next frontier lab to clear the $30 billion run-rate threshold, and is structurally aligned with how power, capital, and silicon are flowing this decade.
Key Takeaways
- Anthropic committed approximately $200 billion to Google Cloud over five years, per The Information's May 5, 2026 reporting, with most capacity arriving in 2027.
- Anthropic's total disclosed compute commitments now exceed $340 billion across Google, AWS, Microsoft, and NVIDIA partnerships.
- Anthropic and OpenAI together account for roughly half of the $2 trillion in long-term contracts held by the four largest cloud providers.
- AI compute supply stays tight through 2026, and 2027 is when capacity is scheduled to expand meaningfully.
- Buyers should map workloads to regions, negotiate price-protection clauses, hold a benchmarked second model on standby, and treat data architecture as a compute hedge.
The businesses that move early on AI compute concentration will have a meaningful advantage. If you want to be one of them, let's start with a conversation.