Mar 3, 2026
7 min read
What Happened to OpenAI and Claude This Week and Why Enterprises Should Pay Attention

In five days, OpenAI signed a military deal with the Pentagon. Anthropic refused the same terms and was blacklisted from all federal contracts. Claude hit #1 on the App Store. Then Claude crashed worldwide for three hours. And 700,000 users pledged to cancel their ChatGPT subscriptions.
None of these were model failures. No hallucinations, no training data problems, no technical bugs. Every disruption came from decisions, politics, and demand surges that had nothing to do with the AI itself.
For anyone running these tools in production, that should change how you think about AI risk.
What Actually Happened
The Department of Defense wanted Anthropic to allow Claude to be used for "all lawful purposes," including potential applications in mass domestic surveillance and fully autonomous weapons systems. Anthropic refused. CEO Dario Amodei said publicly, "We cannot in good conscience accede to their request." His argument was specific: Claude's current models are not reliable enough for fully autonomous weapons, and mass domestic surveillance crosses a line that existing U.S. law does not adequately prevent.
On February 27, the consequences landed fast. President Trump ordered every federal agency to stop using Anthropic's technology. Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security," a category normally reserved for entities tied to foreign adversaries. It was the first time this designation had been applied to an American company.
Hours later, OpenAI announced it had signed a deal with the Pentagon to deploy its models in classified military networks. The deal accepted the "any lawful use" standard that Anthropic had rejected. Sam Altman later admitted the deal was "definitely rushed" and that "the optics don't look good."
The fallout spread across the industry within days. Over 430 employees at Google, OpenAI, Microsoft, and Amazon signed an open letter urging their companies to refuse the Pentagon's demands. Leo Gao, an OpenAI alignment researcher, publicly called the contract safeguards "not really operative except as window dressing." The QuitGPT movement went viral, with 700,000 users pledging to cancel ChatGPT subscriptions.
Public opinion swung toward Anthropic. Claude jumped from outside the top 100 to #1 on the U.S. App Store, overtaking ChatGPT. Free users are up more than 60% since January, and paid subscribers have more than doubled.
Then on March 2 and 3, Claude went down multiple times. Anthropic attributed the outages to "unprecedented demand." The authentication infrastructure buckled under the user surge, and consumer services, including claude.ai, the mobile apps, and Claude Code, went offline for hours at a time.
Five Enterprise AI Risks This Exposed
1. A government decision can disrupt your AI provider's entire customer base
Anthropic's blacklisting did not just affect federal agencies. The "supply chain risk" designation means any company with a Pentagon contract must now certify they do not use Claude, even for non-defense work. That includes defense contractors, their subcontractors, consulting firms with government divisions, and technology companies with federal arms.
Many of these companies are also Anthropic's enterprise customers. Fortune reported that risk-averse legal teams at large enterprises may preemptively disable Claude across their entire environment, regardless of whether they have direct defense exposure. The chilling effect reaches any company whose legal team decides the risk of association is not worth it.
You do not need to be a government customer for a government decision to disrupt your AI provider's business, its roadmap, its funding, and the service you depend on.
2. Your provider's government agreements create compliance exposure you did not sign up for
OpenAI's Pentagon deal permits the Department of Defense to use its models for "all lawful purposes." Legal experts have flagged significant gaps in what that language actually prevents. The contract references existing laws and policies, but those policies can be modified. An OpenAI alignment researcher described the safeguards as "window dressing."
For most enterprises, the military specifics are beside the point. What matters is the new due diligence question: what has your AI provider agreed to with governments, and does any of it conflict with your own regulatory obligations?
If you operate under the EU AI Act or GDPR and your AI provider participates in government data programs with loosely defined boundaries, your legal team needs to evaluate whether that creates indirect exposure. Six months ago, nobody was asking this during vendor evaluation. Now it belongs in every AI procurement checklist.
3. Your provider's popularity is now a reliability risk
The Claude outages were not caused by a bug or by a capacity planning failure in the traditional sense. They were caused by public support. After the Pentagon standoff, users flooded to the platform. Free sign-ups broke all-time records every day that week. The authentication infrastructure could not keep up, and consumer services went down for nearly three hours.
OpenAI hit a similar wall a month earlier. After launching its Codex desktop app, which pulled in 200,000 downloads on day one, ChatGPT went down for thousands of users two days in a row. ChatGPT's overall uptime currently sits at 98.90%, compared to 99.76% for the API. The gap sounds small, but 98.90% works out to roughly eight hours of downtime per month, against under two hours at 99.76%.
If you are an enterprise paying for an SLA, your provider's reliability is now affected by consumer demand surges triggered by news cycles, product launches, and social media momentum. None of those are in your control, and none of them show up in a vendor's uptime history until they happen.
4. Your workforce can switch AI providers faster than IT can respond
The QuitGPT movement pushed 700,000 users to pledge subscription cancellations in days. Anthropic responded by improving its competitor-switching tools, making it easier to migrate from ChatGPT to Claude.
This matters for enterprises because your teams already use these tools daily. If developers, analysts, and operators switch AI tools based on personal conviction or viral campaigns, your standardization breaks. Prompts built for one model do not transfer cleanly to another. Workflows tuned to specific model behavior produce different results on a different provider. Shadow AI was already a growing problem for enterprises. Now it has a new accelerant.
5. AI vendor selection now requires a category of diligence that did not exist before
Until last week, picking an AI provider was a question of model quality, pricing, API reliability, and compliance certifications. Those still matter. But last week's events added new dimensions that belong in every evaluation.
What is your provider's exposure to government contracts, and what terms did they accept? If your provider is restricted in a market where you operate, how fast can you switch? If your provider's user base surges or collapses, will your SLA hold? Does your architecture support failover to a different model without rebuilding your workflows?
Enterprises running production AI agents on a single provider are carrying concentration risk that did not exist six months ago. That is not speculation. Last week proved it.
What Enterprises Should Do Now
Audit your provider dependency. Map every production workflow to its AI provider. Identify which ones have no fallback. Gartner forecasts $2.52 trillion in global AI spending this year, but ServiceNow's Enterprise AI Maturity Index found fewer than 1% of organizations score above 50 on a 100-point maturity scale. The gap between spending and readiness is where the risk lives.
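To make the audit concrete, here is a minimal sketch of a dependency map, assuming a hand-maintained inventory; the workflow names and provider labels are hypothetical stand-ins, not a real API.

```python
# Hypothetical dependency audit: map each production workflow to its
# primary AI provider and any fallbacks, then flag single points of failure.
from dataclasses import dataclass, field

@dataclass
class Workflow:
    name: str
    primary: str                       # the provider the workflow runs on today
    fallbacks: list[str] = field(default_factory=list)

# Illustrative inventory; replace with your own workflows and providers.
WORKFLOWS = [
    Workflow("ticket-triage", primary="anthropic", fallbacks=["openai"]),
    Workflow("code-review-bot", primary="anthropic"),
    Workflow("contract-summarizer", primary="openai"),
]

def single_points_of_failure(workflows: list[Workflow]) -> list[Workflow]:
    """Return every workflow with no fallback provider."""
    return [w for w in workflows if not w.fallbacks]

if __name__ == "__main__":
    for w in single_points_of_failure(WORKFLOWS):
        print(f"NO FALLBACK: {w.name} depends solely on {w.primary}")
```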
Build for model portability. The question is whether your workflows are locked to a specific model or whether they can route to alternatives when needed. A recent CIO analysis introduced the term "agentic amnesia": the state where enterprises become so dependent on AI systems that they lose the capacity to recover when those systems fail. Unlike a database you can restore from a snapshot, AI systems accumulate embeddings, fine-tuned weights, and agent workflows that represent organizational intelligence. Losing access to your provider means losing that intelligence.
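One way to get that routing is a thin adapter layer that workflows call instead of any vendor SDK. The sketch below is illustrative only: the two adapters are stubs standing in for real provider calls, and the first is hard-coded to fail so the failover path is visible.

```python
# Minimal failover-routing sketch. Workflows call complete(); the router
# tries providers in preference order and fails over on transport errors.
from typing import Callable

Adapter = Callable[[str], str]

def call_primary(prompt: str) -> str:
    raise ConnectionError("primary provider unavailable")  # simulated outage

def call_secondary(prompt: str) -> str:
    return f"secondary answered: {prompt[:40]}"

# Ordered preference list: the first healthy provider wins.
PROVIDERS: list[tuple[str, Adapter]] = [
    ("primary", call_primary),
    ("secondary", call_secondary),
]

def complete(prompt: str) -> str:
    last_error = None
    for name, adapter in PROVIDERS:
        try:
            return adapter(prompt)
        except ConnectionError as exc:
            last_error = exc           # a real router would log `name` here
    raise RuntimeError("all providers failed") from last_error

print(complete("Summarize this incident report."))
```

A router alone does not make prompts portable: prompts tuned to one model's behavior typically still need per-provider variants maintained behind the same interface.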
Run a failure drill. Simulate the loss of your primary AI provider for 48 hours. Find out what breaks, who can bridge the gap, and how long recovery takes. Most enterprises have never done this. After last week, the ones that have are ahead.
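A drill can be as simple as a kill switch that forces traffic onto the fallback path for a fixed window. A minimal sketch, assuming an adapter layer like the one above; the environment variable name is invented for illustration.

```python
# Hypothetical failure-drill kill switch. While the drill window is active,
# the primary provider is treated as down and every call takes the fallback
# path, so you can observe what breaks without waiting for a real outage.
import os
import time

def call_primary(prompt: str) -> str:    # stub for the real primary adapter
    return f"primary: {prompt[:40]}"

def call_fallback(prompt: str) -> str:   # stub for the real fallback adapter
    return f"fallback: {prompt[:40]}"

def drill_active() -> bool:
    """True while AI_DRILL_UNTIL (epoch seconds) is still in the future."""
    return time.time() < float(os.environ.get("AI_DRILL_UNTIL", "0"))

def complete(prompt: str) -> str:
    if drill_active():
        return call_fallback(prompt)     # forced failover during the drill
    return call_primary(prompt)

# Start a 48-hour drill from a shell:
#   export AI_DRILL_UNTIL=$(($(date +%s) + 48*3600))
print(complete("Summarize this incident report."))
```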
This Will Keep Happening
A year ago, enterprise AI risk management meant monitoring hallucination rates, tracking token costs, and reviewing data privacy policies. Those concerns have not gone away, but they are no longer the full picture.
AI providers now make decisions about government contracts, ethical boundaries, and access policies that directly affect their enterprise customers. Governments can restrict access to a provider overnight. Viral movements can shift hundreds of thousands of users in a week. Demand surges from news cycles can take down services that enterprises depend on for daily operations.
The organizations that will navigate this best are the ones whose AI infrastructure supports multiple models, enables provider switching without rebuilding workflows, and treats resilience as a core operational requirement.