On May 11, 2026, Anthropic and AWS made Claude Platform on AWS generally available, giving AWS customers a native path to Anthropic's full Claude API through their existing AWS account. The launch creates a meaningful fork for enterprise buyers of Claude. Bedrock is no longer the only AWS-aligned option, and the choice between the two now affects feature speed, security boundary, and how AI spend retires against your AWS commitment.
What AWS and Anthropic Actually Shipped
Claude Platform on AWS is the canonical Claude API, operated by Anthropic, accessed through AWS IAM, billed through AWS Marketplace, and logged through CloudTrail. According to the AWS announcement and Anthropic's launch post, the platform is live in 17 AWS regions across the Americas, Europe, and Asia Pacific.
The feature set at general availability is the full Claude API surface, not a stripped-down hosted version. Per AWS documentation, that includes the Messages API, Claude Managed Agents in beta, the advisor strategy in beta, web search and web fetch, an MCP connector in beta, Agent Skills in beta, code execution, the files API in beta, prompt caching, batch processing, and citations.
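To make that API surface concrete, the sketch below builds a Messages API request body with prompt caching opted in on the system prompt. The model id is an assumption for illustration, and any AWS-side credential wiring is omitted since the announcement does not specify it.

```python
# Minimal sketch of a Messages API request body. The model id is an
# assumption; prompt caching is enabled on the system block via the
# documented cache_control field.
def build_messages_request(system_prompt: str, user_text: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",  # hypothetical model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                # Opt this block into prompt caching (listed at GA above).
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

req = build_messages_request(
    "You are a billing analyst.", "Summarize last month's spend."
)
```

The same request shape is what the batch processing and citations features operate over, which is why "full API surface" matters more than any single endpoint.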
The pricing model is unit-metered rather than token-metered. Usage is denominated in Claude Consumption Units, metered hourly, and invoiced monthly on the AWS bill. Critically for procurement, Claude Enterprise usage on the platform retires against an existing AWS Enterprise Discount Program (EDP) or Private Pricing Agreement (PPA), which means Anthropic spend can now help meet AWS commitments that were previously a separate budget line.
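The announcement does not publish a unit rate, so the figures below are placeholders; the sketch only illustrates the hourly-metered, monthly-invoiced shape of the model.

```python
# Hypothetical sketch: Claude Consumption Units (CCUs) are metered hourly
# and invoiced monthly. The rate and usage figures are invented for
# illustration; the real rate is set in the AWS Marketplace agreement.
HYPOTHETICAL_USD_PER_CCU = 0.05  # placeholder, not a published price

def monthly_invoice(hourly_ccu_meter: list) -> float:
    """Sum a month of hourly CCU meter readings into one invoice line."""
    total_ccus = sum(hourly_ccu_meter)
    return round(total_ccus * HYPOTHETICAL_USD_PER_CCU, 2)

# 30 days x 24 hours of a flat 10-CCU/hour workload: 7,200 CCUs.
print(monthly_invoice([10.0] * 30 * 24))
```

The point for finance teams is that the invoice line is a single monthly number on the AWS bill, not a per-token statement from a separate vendor.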
This launch is the productized layer that sits on top of the Anthropic and AWS $100 billion compute partnership announced in April 2026. The compute deal was the infrastructure agreement. Claude Platform on AWS is what enterprise developers actually buy.
Two Paths to Claude on AWS, and Why That Matters
Claude has been available through Amazon Bedrock for some time. The new platform does not replace Bedrock. It runs alongside it as a second supported way to consume Claude inside an AWS account, and the two paths are not interchangeable.
Bedrock keeps Claude inside the AWS security boundary. AWS operates the inference stack, and AWS is the data processor for inputs and outputs. This is the right choice for workloads with strict data residency, regulated industries that require single-cloud audit lineage, or organizations whose security review has already cleared Bedrock and cannot easily clear a new processor.
Claude Platform on AWS runs inference outside the AWS security boundary. As both The New Stack and Anthropic's own documentation make explicit, Anthropic is the data processor for inference inputs and outputs on the platform; AWS Marketplace handles billing and IAM handles authentication, but the request itself leaves the AWS perimeter. That tradeoff is what buys same-day access to new Claude features.
Feature lag is the practical difference. Bedrock has historically shipped advanced Claude features like Managed Agents and Skills on a lag relative to Anthropic's native API. Claude Platform on AWS closes that gap to zero, with Anthropic stating that all new features and betas ship the same day they go live on the native API.
For most enterprises, this is not a single global choice. It is a workload-level choice. Compliance-heavy workloads can stay on Bedrock. Agentic workloads, coding assistants, and product features that depend on the newest Claude capabilities can move to Claude Platform on AWS without changing AWS account, billing relationship, or IAM model.
What This Changes for AI Procurement
A few practical implications follow for CIOs, CTOs, and procurement leads.
Anthropic spend is now visible on the AWS invoice. Before this launch, a typical enterprise running Claude in production had two billing relationships: an Anthropic contract for the API and an AWS contract for everything else. Claude Platform on AWS collapses that into one invoice, with Claude Enterprise spend drawing down against EDP or PPA commitments. For finance teams trying to hit AWS commitment thresholds, that is structurally favorable. For procurement, it means Anthropic moves from a standalone SaaS line item to a sub-line in the AWS relationship.
IAM and CloudTrail become the audit surface for Claude. Security teams that already operate AWS IAM policies, SCPs, and CloudTrail-based detections can extend the same controls to Claude usage without standing up a parallel auth stack. CloudTrail captures management events by default, and data event logging can be enabled to capture inference activity, which matters for incident response and access reviews. An AI governance program that has been built around AWS-native controls can incorporate Claude without bolting on a new identity provider.
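Since the announcement does not publish the service's IAM action namespace, the action name below is a hypothetical placeholder; the sketch only shows the familiar shape of scoping Claude access with a standard identity policy, using the real `aws:RequestedRegion` global condition key.

```python
import json

# Hypothetical sketch of an IAM identity policy scoping Claude usage.
# "claude:InvokeModel" is an invented placeholder action; the real action
# names would come from the service's IAM documentation.
def claude_invoke_policy(allowed_region: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowClaudeInference",
                "Effect": "Allow",
                "Action": ["claude:InvokeModel"],  # placeholder action
                "Resource": "*",
                "Condition": {
                    # Standard IAM global condition key: pin usage to
                    # a single approved region.
                    "StringEquals": {"aws:RequestedRegion": allowed_region}
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(claude_invoke_policy("us-east-1"))
```

Because the policy is ordinary IAM JSON, the same SCP and CloudTrail tooling a security team already runs applies without modification.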
The build versus buy calculation shifts for Claude-heavy applications. When the native Claude API is one IAM role away inside your AWS account, the friction of using Anthropic directly drops sharply. Internal teams that previously deferred to Bedrock because procurement was easier no longer have that excuse, which shifts the decision at the margin toward building on the native API where features matter.
Vendor concentration goes up. A single invoice with one cloud provider for both compute and the frontier model is convenient. It is also more concentrated. Enterprises with a stated multi-cloud or multi-model strategy should treat this as a moment to reaffirm those guardrails, not erode them.
The Security and Data Boundary Question
The most consequential detail in the launch is one most operators will skim past. Per Anthropic's documentation and corroborated in The New Stack's coverage, Claude Platform on AWS routes inference through Anthropic-operated infrastructure, not inside the AWS account's security boundary. That is different from how Bedrock is described, where AWS operates the model and processes data inside its own perimeter.
This is not a flaw. It is a tradeoff. Anthropic operating the platform is what enables same-day feature availability and the unified Claude experience across Console, Claude Code, and the API. But it does mean that security reviews built on the assumption "all AI inference stays in AWS" need to be updated when a team moves a workload to the new platform.
For regulated industries, the practical answer is straightforward. Document the boundary explicitly in your data flow diagrams, update your data processing addenda to include Anthropic as a sub-processor where required, and confirm regional alignment between the AWS region and Anthropic's processing region for that workload. For organizations evaluating sensitive use cases, the Bedrock path remains the option where AWS is the sole processor.
How to Decide: A Practical Framework
Three questions sort most workloads cleanly.
- Do you need same-day access to new Claude features, including Managed Agents and Skills? If yes, Claude Platform on AWS. If a feature lag of weeks or months is acceptable, Bedrock is fine.
- Does your data classification require a single-processor inference path inside AWS? If yes, Bedrock. If your data classification permits Anthropic as a named sub-processor in the same region, Claude Platform on AWS is open.
- Are you trying to retire AWS EDP or PPA commitments with AI spend? If yes, Claude Enterprise on Claude Platform on AWS now draws down those commitments directly, which is the cleanest billing posture available today.
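The three questions above can be sketched as a routing function. The field names are illustrative; the logic mirrors the decision order in the text, with the data boundary as the hard constraint.

```python
# Sketch of the three-question routing framework. Parameter names are
# invented for illustration; the priority order follows the text.
def route_workload(needs_same_day_features: bool,
                   requires_single_processor_in_aws: bool,
                   retiring_aws_commitments: bool) -> str:
    # The data boundary is the hard constraint: if the classification
    # requires AWS as the sole processor, Bedrock is the only path.
    if requires_single_processor_in_aws:
        return "bedrock"
    # Otherwise, feature speed or commitment drawdown both point at
    # the new platform.
    if needs_same_day_features or retiring_aws_commitments:
        return "claude-platform-on-aws"
    # No pull in either direction: an existing Bedrock integration stays.
    return "bedrock"
```

Note that the boundary question overrides the feature question: a workload that wants same-day features but cannot name Anthropic as a sub-processor still routes to Bedrock.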
For organizations with a mixed portfolio, the answer is rarely one path for everything. Workload-by-workload routing is the more durable pattern, and it tends to surface in AI strategy and procurement planning once Claude usage spans multiple business units.
What Not to Do
Do not rip and replace Bedrock workloads on the announcement alone. A working Bedrock integration with completed security review is worth more than a marginal feature gap. Migrate when a specific feature on Claude Platform on AWS justifies the security re-review, not as a blanket policy.
Do not let the unified invoice create a false sense of unified responsibility. AWS bills you and IAM authenticates you, but Anthropic is the entity running inference and shipping new features. The vendor relationship to manage carefully is still Anthropic, and it does not become less important because the bill arrives from AWS.
Do not skip the regional residency check. Seventeen regions is broad, but it is not every region, and not every Claude model is in every region at launch. Verify that your workload's required region carries the model you need before committing.
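The residency check above is mechanical enough to automate. The availability table below is a stand-in, since the article does not enumerate the launch regions or per-region model lists; populate it from the official region table before relying on it.

```python
# Sketch of a pre-commit residency check. The availability table is a
# placeholder: regions and model names here are invented for illustration.
HYPOTHETICAL_AVAILABILITY = {
    "us-east-1": {"claude-sonnet", "claude-opus"},
    "eu-central-1": {"claude-sonnet"},
}

def region_carries_model(region: str, model: str) -> bool:
    """True only if the required region actually lists the required model."""
    return model in HYPOTHETICAL_AVAILABILITY.get(region, set())

# A workload pinned to eu-central-1 that needs claude-opus fails the check.
print(region_carries_model("eu-central-1", "claude-opus"))  # False
```

Running a check like this in CI for each workload's pinned region catches the "17 regions, but not my region" case before a commitment is signed.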
Key Takeaways
- Claude Platform on AWS went generally available on May 11, 2026, in 17 AWS regions, per AWS's official announcement.
- The platform is Anthropic-operated, uses AWS IAM and CloudTrail, bills through AWS Marketplace in Claude Consumption Units, and ships new features the same day as the native Claude API.
- Bedrock remains the right path when workloads require AWS as the sole data processor inside the AWS security boundary.
- Claude Enterprise spend on the new platform retires against AWS Enterprise Discount Program and Private Pricing Agreement commitments, which materially changes procurement math.
- The right enterprise posture is workload-by-workload routing between Bedrock and Claude Platform on AWS, not a single global choice.
The businesses that move early on consolidating their Claude procurement and security posture will have a meaningful advantage. If you want to be one of them, let's start with a conversation.