On April 28, 2026, Google signed a classified contract letting the Department of Defense use Gemini AI models for "any lawful government purpose" on classified networks. According to TechCrunch, the deal followed Anthropic's public refusal to grant the Pentagon the same terms, a refusal that got Anthropic branded a "supply chain risk" two months earlier. For businesses buying AI, this is no longer a story about defense procurement. It is the moment vendor ethics became a procurement variable that sits next to price and capability on the comparison sheet.
What Actually Happened on April 28
Google's deal allows Pentagon workers to use Gemini for classified work, with advisory guardrails against domestic mass surveillance and autonomous weapons. As 9to5Google reported, the contract preserves Google's right to set safety policies but also lets the government request adjustments to those settings. Bloomberg confirmed the agreement covers air-gapped classified networks where Google cannot observe queries or outputs.
The same day, CNBC reported that Pentagon AI chief Cameron Stanley described expanding the Department's reliance on Gemini, partly because, in his words, depending on a single model is "never a good thing." That framing is meaningful. The Pentagon is publicly diversifying its AI vendor stack, and which vendors qualify is being decided in part by their willingness to accept the Pentagon's preferred contract terms.
The Google deal happened roughly 24 hours after more than 580 Google employees, including DeepMind researchers and 20 vice presidents and directors, signed an open letter urging CEO Sundar Pichai to refuse classified Pentagon contracts without stricter guardrails. Google signed anyway. On the same day, the company quietly exited a separate $100 million Pentagon drone swarm contest, a partial concession that did not satisfy the protesting employees.
How Anthropic Got Here First
Anthropic's path to "supply chain risk" status started in February 2026. As CNN Business reported, the Trump administration ordered federal agencies and military contractors to halt business with Anthropic after the company refused to let its AI be used for autonomous weapons or domestic mass surveillance. The supply chain risk designation, normally applied to companies tied to foreign adversaries, effectively cut Anthropic out of Department of Defense procurement.
Anthropic sued. In late March, a federal judge in California granted a preliminary injunction, with the court noting that the record "strongly suggests" the supply chain risk reasoning was pretextual and that the real motive was retaliation. An appeals court partially narrowed that ruling on April 8, leaving Anthropic excluded from direct DoD contracts but able to keep working with other federal agencies during the litigation.
The legal mechanics matter less than the precedent. A frontier AI vendor publicly drew a line on use cases. The Pentagon punished it with a designation that previously implied adversary-nation status. The case is now live litigation over how far the federal government can go to coerce vendor behavior. Whatever the courts ultimately decide, the procurement signal has already been received.
Why This Is a Business Story, Not Just a Policy Story
It is tempting to read all of this as inside baseball for defense procurement officers. That misses what is happening to AI vendor selection across the broader economy.
For most of 2024 and 2025, businesses chose between Claude, GPT, and Gemini almost entirely on capability and price. Vendor ethics policies were treated like terms of service: standard, boring, mostly identical. That is no longer true. The frontier labs have visibly different positions on weapons, surveillance, biosecurity, and government access, and those positions now have measurable financial and operational consequences.
Anthropic's stance closed a revenue line. Defense and intelligence buying is a multi-billion-dollar federal market. Whatever you think of the ethics, losing access to it affects the runway, hiring, and investment pace of a key vendor.
Google's stance triggered an internal stability problem. Hundreds of senior engineers and researchers signed a public letter against their CEO's decision, with IBTimes reporting that DeepMind scientists called the deal something they were "incredibly ashamed" of. Talent retention is one of the few real moats in frontier AI. Persistent protest erodes it.
The category itself is now politicized. If the next administration shifts policy on AI export controls, surveillance authorities, or content moderation, vendor stances that look prudent today could create exposure tomorrow. Procurement teams that have not started tracking vendor policy positions are running a risk they cannot quantify.
Our take: When a vendor's ethical commitments materially shape its revenue, talent, and contract availability, those commitments stop being public relations and become balance-sheet variables. That is true whether the vendor's stance is permissive or restrictive.
What This Means for Your AI Stack
If you are evaluating AI vendors for production use, three concrete shifts follow from the events of the last two months. We have written before about why model and cloud choices are converging into one decision; ethics is the third axis on the same chart.
Map your use cases to each vendor's policy stack. Anthropic publishes a usage policy that excludes weapons and certain surveillance categories. Google's enterprise terms are now meaningfully different from its defense terms. OpenAI maintains separate policies by deployment surface. If you operate in a regulated or sensitive domain such as security, healthcare, or content moderation, your specific use case may be permitted by one vendor and prohibited by another. Surface those conflicts before procurement, not during a compliance audit.
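For teams that want to make that mapping concrete, here is a minimal sketch in Python. Every vendor name and policy entry below is a placeholder we invented for illustration, not a summary of any vendor's actual terms; the point is the shape of the exercise.

```python
# Illustrative only: every policy entry is a placeholder, not a statement
# of any vendor's actual usage policy. Check each vendor's published terms
# for your deployment surface before acting on this kind of output.
VENDOR_POLICIES = {
    "vendor_a": {"biometric_surveillance": "prohibited", "clinical_decision_support": "restricted"},
    "vendor_b": {"biometric_surveillance": "restricted", "clinical_decision_support": "allowed"},
}

# Internal use cases mapped to the policy category they fall under.
USE_CASES = {
    "fraud-screening": "biometric_surveillance",
    "triage-assistant": "clinical_decision_support",
}

def policy_conflicts(use_cases, vendor_policies):
    """Return (use_case, vendor, status) tuples wherever a use case is not plainly allowed."""
    conflicts = []
    for use_case, category in use_cases.items():
        for vendor, policies in vendor_policies.items():
            status = policies.get(category, "unreviewed")
            if status != "allowed":
                conflicts.append((use_case, vendor, status))
    return conflicts

if __name__ == "__main__":
    for use_case, vendor, status in policy_conflicts(USE_CASES, VENDOR_POLICIES):
        print(f"{use_case}: {vendor} -> {status}")
```

Anything that comes back "prohibited", "restricted", or "unreviewed" is a conversation to have with legal and the vendor before the contract is signed.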
Treat vendor stability as part of the evaluation. A vendor under government pressure, employee revolt, or active litigation has a different risk profile than a stable vendor at the same capability and price. That does not mean avoiding vendors with controversy. It means asking what happens to your support, pricing, and roadmap if a controversy escalates. Multi-vendor resilience, which we covered after Sora's shutdown, applies here too.
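At the code level, multi-vendor resilience can start as a simple failover wrapper. The sketch below assumes two hypothetical client functions, call_primary and call_fallback, standing in for whichever SDKs you actually use; a production router would add timeouts, retry budgets, and per-use-case policy checks.

```python
import logging

logger = logging.getLogger("ai_router")

def call_primary(prompt: str) -> str:
    """Placeholder for the primary vendor's client call (hypothetical)."""
    raise NotImplementedError

def call_fallback(prompt: str) -> str:
    """Placeholder for the provisioned alternative vendor's client call (hypothetical)."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Route to the primary vendor and fail over to the alternative on any error."""
    try:
        return call_primary(prompt)
    except Exception as exc:  # broad by design: a policy or contract change can surface as any error
        logger.warning("primary vendor failed (%s); routing to fallback", exc)
        return call_fallback(prompt)
```

The value is less in the ten lines of routing than in what they force you to do ahead of time: keep a second vendor contracted, provisioned, and tested so the except branch actually works when you need it.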
Negotiate contract clauses tied to vendor policy changes. A material change in your vendor's acceptable-use policy, a court-ordered designation, or a sudden government restriction can break your stack overnight. Contracts signed in 2024 mostly did not anticipate this. New AI contracts should include notice and exit provisions tied to material vendor policy changes, comparable to the terms a sophisticated buyer would demand from a critical SaaS provider.
The deeper governance work, including documenting acceptable use, mapping vendor policies to internal risk categories, and assigning ownership for ongoing monitoring, sits inside the practical AI governance framework we use with mid-market clients.
What Not to Do
Do not pick vendors on ethics theater alone. A press release about "responsible AI" is not the same as a documented usage policy that has survived a confrontation with a major customer. Anthropic's posture has been tested and its cost quantified; many vendors' commitments have never been tested at all.
Do not assume the question stays in defense. The same logic that put Anthropic on the supply chain risk list is portable. Future administrations, foreign governments, or large strategic customers can apply similar pressure on questions about content moderation, intelligence gathering, biological or chemical research, or election integrity. Build vendor diversity now while it is cheap, not after a procurement crisis.
Do not over-correct toward in-house everything. Some teams will read this story and conclude they should run open-source models on private infrastructure to avoid vendor risk altogether. That is a real option for narrow use cases, but it transfers risk rather than eliminating it. Most businesses still need at least one frontier vendor relationship; the goal is resilience, not isolation. The careful work of matching the right model to each business problem is what lets you exercise vendor diversity without paying for it twice.
How Vectrel Is Advising Clients This Quarter
Three things have changed in the AI vendor scorecards we use in client engagements over the last sixty days.
We added a column for documented usage policy and a column for known active conflicts (litigation, employee protests, government designations). We tightened contract review to flag any agreement in a fast-moving category that runs longer than 24 months without a vendor-policy-change exit clause. And we now require clients in regulated industries to maintain at least one fully provisioned alternative model behind their primary vendor, even when the alternative is not currently in production use, so a vendor disruption does not become a business outage.
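For teams that want to run the same scorecard internally, here is a stripped-down sketch. The column names mirror what we track; the vendors, scores, and conflict entries shown are placeholders, not data from any client engagement or any real vendor.

```python
from dataclasses import dataclass, field

@dataclass
class VendorScore:
    """One row of an AI vendor scorecard; numeric scores run 1 (worst) to 5 (best)."""
    name: str
    capability: int
    price: int
    documented_usage_policy: int  # published, specific, conflict-tested policies score higher
    active_conflicts: list = field(default_factory=list)  # litigation, protests, designations
    contract_term_months: int = 12
    has_policy_change_exit_clause: bool = False

    def review_flags(self):
        """Return the contract-review flags described above."""
        flags = []
        if self.contract_term_months > 24 and not self.has_policy_change_exit_clause:
            flags.append("term over 24 months with no policy-change exit clause")
        if self.active_conflicts:
            flags.append("active conflicts: " + ", ".join(self.active_conflicts))
        return flags

# Placeholder rows, illustrative only.
vendors = [
    VendorScore("vendor_a", capability=5, price=3, documented_usage_policy=5,
                active_conflicts=["government designation under litigation"]),
    VendorScore("vendor_b", capability=5, price=4, documented_usage_policy=3,
                active_conflicts=["employee protest"], contract_term_months=36),
]

for vendor in vendors:
    print(vendor.name, vendor.review_flags())
```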
None of this is exotic. It is supply chain hygiene applied to a category that finally has supply chain consequences. The events of the last week make that hygiene table-stakes rather than optional.
Key Takeaways
- Google signed a classified Pentagon contract on April 28, 2026, letting the Department of Defense use Gemini for any lawful government purpose on classified networks, per reporting from TechCrunch, Bloomberg, and CNBC.
- The deal followed Anthropic's February 2026 refusal of similar terms, after which the Pentagon branded Anthropic a "supply chain risk" and excluded it from DoD procurement, a designation a federal judge has partially blocked.
- More than 580 Google employees, including DeepMind researchers and senior leaders, urged Pichai to refuse the classified deal one day before he signed it.
- AI vendor ethics now have measurable financial and operational consequences, including revenue access, talent retention, and contract enforceability.
- Practical responses: map use cases to vendor policies, score vendor stability alongside capability and price, and add policy-change exit clauses to AI contracts.
The businesses that move early on building vendor-ethics resilience into their AI stack will have a meaningful advantage. If you want to be one of them, let's start with a conversation.