4 min read

Nvidia’s China Market Share Went from 95% to Zero. The $12 Billion Question Is Who Filled the Gap.

Two years ago, Nvidia owned 95% of China’s AI accelerator market. As of GTC 2026, Jensen Huang says that number is zero.

Not declining. Not under pressure. Zero. Nvidia recognized $4.5 billion in charges tied to export restrictions in its first fiscal quarter of 2026 alone. China used to account for 20-25% of Nvidia’s data center revenue. That line item is now a write-off.

The story everyone is telling is about trade policy and geopolitics. The story that matters more for anyone building with AI agents is about what happened next: China did not wait. It built.

Huawei’s $12 billion answer

When Nvidia’s access closed, Huawei’s opened. The company’s AI chip revenue is projected to hit $12 billion in 2026, a 60% jump from $7.5 billion in 2025. That is not a rounding error. That is a new market leader forming in real time.

The hardware driving it is the Ascend 950PR, which entered mass production in March. It delivers up to 2 petaflops of FP4 performance with 128 GB of locally produced HBM memory. An updated version, the 950DT, is expected in Q4 2026 with 144 GB of HBM. For inference workloads, which is where most production AI agents actually run, the 950PR is competitive enough that Chinese hyperscalers are building around it rather than waiting for sanctions to lift.

Huawei is on track to control 60% of China’s AI chip market by the end of the year. Behind it, Cambricon, Moore Threads, and MetaX are carving out their own positions. The total addressable market for AI accelerators in China is projected to reach $30-35 billion in 2026. None of that spend is going to American companies.

The CUDA problem nobody is talking about

Chips are the headline. Software is the real story.

For over a decade, Nvidia’s CUDA platform has been the default programming layer for AI workloads. Nearly every major model, every training pipeline, every inference stack was built on CUDA. Developers learned it in school. Companies built internal tooling around it. The switching cost was supposed to be Nvidia’s permanent moat.

Export controls broke that moat in China. When Chinese developers can no longer buy Nvidia hardware, they stop writing CUDA code. They build on Huawei’s Ascend ecosystem instead, on CANN (Compute Architecture for Neural Networks), on MindSpore, on a software stack that did not exist at scale three years ago.

Jensen Huang made this point explicitly: “Conceding an entire market the size of China probably does not make a lot of strategic sense. I think that has already largely backfired.” His concern is not about the hardware revenue Nvidia lost. It is about the software ecosystem China is gaining. When Chinese developers build on Ascend instead of CUDA, they create an alternative stack that can then be exported globally.

Two AI stacks, not one

The practical result is a split. American AI infrastructure runs on Nvidia. Chinese AI infrastructure runs on Huawei. Both ecosystems are now developing independently, with their own hardware, their own software layers, their own optimization patterns.

For enterprises deploying AI agents, this matters in ways that go beyond geopolitics. Model providers are already diverging. DeepSeek, Qwen, and other Chinese-origin models are trained and optimized on Ascend hardware. Western models run on Nvidia. If your agent orchestration layer is locked to one hardware ecosystem, you lose access to half the models being built.

This is where hardware-agnostic agent platforms become a structural advantage, not just a convenience. An agent that runs on any model, regardless of what silicon trained it, does not care whether the next breakthrough comes from Santa Clara or Shenzhen.
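To make the abstraction concrete, here is a minimal sketch of what "runs on any model" means at the code level. All names here are hypothetical illustrations, not any specific platform's API: agent logic calls a routing layer, and the backend behind each key can be swapped without touching that logic.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ModelBackend:
    """One inference provider; generate wraps its provider-specific API call."""
    name: str
    generate: Callable[[str], str]

class ModelRouter:
    """Maps logical keys to interchangeable model backends."""
    def __init__(self) -> None:
        self._backends: Dict[str, ModelBackend] = {}
        self._default: Optional[str] = None

    def register(self, key: str, backend: ModelBackend) -> None:
        self._backends[key] = backend
        if self._default is None:
            self._default = key  # first registration becomes the default

    def complete(self, prompt: str, key: Optional[str] = None) -> str:
        # Agent code depends only on this signature, not on the backend.
        return self._backends[key or self._default].generate(prompt)

# Stand-in lambdas for real API clients; swapping one ecosystem for the
# other requires no change to agent code that calls router.complete().
router = ModelRouter()
router.register("us-stack", ModelBackend("nvidia-hosted", lambda p: f"[us] {p}"))
router.register("cn-stack", ModelBackend("ascend-hosted", lambda p: f"[cn] {p}"))

print(router.complete("hello"))              # default backend
print(router.complete("hello", "cn-stack"))  # same call, different silicon
```

The design choice is the point: if the switching cost lives in one registration call rather than throughout the agent code, a supply-chain shift becomes a configuration change instead of a rewrite.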

What $4.5 billion in write-offs actually bought

The US export controls achieved their stated goal: China does not have access to Nvidia’s latest chips. But the second-order effects are harder to celebrate.

China’s domestic AI chip market went from nearly zero to an estimated $30-35 billion in under three years. Huawei built a competitive inference chip and a software stack to go with it. Chinese hyperscalers (ByteDance, Alibaba, Tencent) built their training pipelines on domestic hardware because they had no choice. China’s Cyberspace Administration then told domestic companies to stop ordering Nvidia chips entirely, even after the US softened some restrictions. The door closed from both sides.

Nvidia’s $4.5 billion charge bought the creation of a parallel AI ecosystem. One that does not need American hardware. One that develops its own software standards. And one that will compete with CUDA globally for the next decade.

What this means for enterprise AI

The 95-to-zero story is dramatic, but the strategic question is simpler: where do you want your AI infrastructure to sit?

Companies that bet on a single model provider, a single hardware vendor, or a single ecosystem are making a bet on one side of a split that is already happening. The enterprises that come out ahead will be the ones whose agent infrastructure can run any model on any hardware, and swap between them when the supply chain shifts.

The chip war is a hardware story. The real question is whether your AI stack is flexible enough to survive it.

Start today

Get started with AI agents to automate processes

Use our platform now and start building AI agents for all kinds of automations
