
OpenAI and Anthropic Just Spent $5.5B on Consultants. Here's What That Tells You.

OpenAI's share of the enterprise LLM API market fell from 50% to 25% between late 2023 and mid-2025, according to Menlo Ventures. Anthropic now leads with 32%. Google holds 20%. In code generation specifically, Claude captures 42% market share versus OpenAI's 21%.

OpenAI's response was not a better model. It was a consulting company.

In May, both OpenAI and Anthropic announced enterprise AI services ventures. OpenAI launched the OpenAI Deployment Company with $4 billion from 19 institutional investors, valued at $10 billion. Anthropic launched a competing services firm with $1.5 billion from Blackstone, Goldman Sachs, and Hellman & Friedman. Combined, two AI labs just committed $5.5 billion to the thesis that their models alone are not enough.

The Palantir playbook and why everyone is copying it

Both ventures explicitly copy Palantir's Forward Deployed Engineer model. The concept: embed full-stack engineers directly inside customer environments to build production systems on top of the vendor's platform. Not a proof of concept, not a workshop. Engineers who stay for months, understand the business, and ship real workflows.

Palantir's results with this model are hard to argue with. Revenue hit $4.5 billion in FY2025 with 70% year-over-year growth. Their top 20 customers spend an average of $93.9 million per year. Adjusted operating margins sit at 60%. The FDE model is why Palantir's commercial revenue is growing 115% annually while traditional enterprise software companies fight for single-digit gains.

But there is a critical difference in how Palantir uses FDEs versus what OpenAI and Anthropic are building. Palantir's FDEs are a product discovery mechanism. They sit inside customer operations, find repeated problems, and feed those patterns back to the engineering team, which then encodes solutions as platform features. The FDE model is a flywheel: services inform product, product reduces the need for services, and each deployment makes the platform better for every customer.

OpenAI's Deployment Company is structured differently. It is a distribution channel. The 19 PE partners sponsor more than 2,000 portfolio companies. Every deployment generates API calls, inference workloads, and compute demand that flows back to OpenAI's infrastructure. It is a compute-pull strategy built on a services wrapper.

$700 million a year in guaranteed returns

The financial structure of OpenAI's Deployment Company is unusual. PE investors receive a 17.5% guaranteed annual return over five years. On $4 billion in committed capital, that is roughly $700 million per year in guaranteed payouts regardless of whether the venture is profitable.
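The arithmetic behind that figure is straightforward. A minimal check, using only the numbers reported above:

```python
# Guaranteed-return math for OpenAI's Deployment Company,
# using the figures reported above: $4B committed, 17.5% annual
# guaranteed return, five-year term.
committed_capital = 4_000_000_000
guaranteed_rate = 0.175
term_years = 5

annual_payout = committed_capital * guaranteed_rate
total_payout = annual_payout * term_years

print(f"Annual payout: ${annual_payout / 1e6:.0f}M")                  # $700M per year
print(f"Total over {term_years} years: ${total_payout / 1e9:.1f}B")   # $3.5B
```

Over the full five-year term, that guarantee adds up to $3.5 billion in committed payouts before the venture earns a dollar of profit.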

OpenAI is projected to lose $14 billion in 2026. Thoma Bravo, one of the largest software-focused PE firms in the world, declined to participate, questioning the long-term profit profile. Multiple analysts have drawn comparisons to above-market guaranteed yields from entities that spend far more than they earn.

The question for enterprise buyers is not whether OpenAI can fund this venture. It is what happens to the FDEs embedded in your organization if the economics do not hold. Consulting relationships depend on continuity. When the consultant leaves, the institutional knowledge leaves with them.

The deployment gap is real. The solution is wrong.

The signal in these announcements is worth separating from the strategy. The signal is correct: deployment is the bottleneck, not model capability. 88% of organizations running AI agents reported a security incident in the past year. 42% of C-suite executives say AI adoption is creating organizational conflict. The technology works in demos. Getting it into production, connected to real data, governed properly, and trusted by the people who use it is where most projects stall.

Both OpenAI and Anthropic are acknowledging this by investing billions in it. That part is right.

The strategy of solving it with embedded consultants is where the logic breaks. Forward Deployed Engineers cost between $171,000 and $800,000 per year in total compensation. Palantir's model only works economically at seven-figure annual contract values. That prices out the vast majority of enterprises. OpenAI's PE partners get priority access for their portfolio companies. Everyone else waits.

More fundamentally, the consulting model scales linearly. Every new customer requires more engineers. Every new deployment requires more hours. The cost of the 100th deployment is roughly the same as the first. That is the structural limitation of services businesses, and it is exactly why Palantir uses FDEs to feed product development rather than treating them as a standalone revenue line.

Platforms versus consultants

The alternative to embedding engineers in every customer is building a platform where deployment capability is the product. Instead of sending a person to connect your AI agents to your data, tools, and business processes, you build a system that lets teams do it themselves with proper guardrails, governance, and observability built in.

Platform-first deployment has structural advantages over the consulting model. It scales with software economics: the marginal cost of the 1,000th deployment approaches zero. It retains institutional knowledge in the system rather than in a person who eventually leaves. It gives enterprises control over their own AI operations instead of depending on vendor availability. And it improves with every deployment because the platform learns from usage patterns, not from individual consultant experience that walks out the door.

The total cost comparison between consulting-led and platform-led deployment is not close. Consulting implementations average $228,000 in year one versus $77,000 for platform-based automation, roughly 66% lower. And the gap compounds: consulting maintenance runs 20-30% of initial cost annually, while platform maintenance runs 10-15%.
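Projecting those figures forward makes the compounding visible. A rough five-year sketch using the year-one numbers above and the midpoint of each stated maintenance range (25% for consulting, 12.5% for platform, which are assumptions drawn from the ranges, not reported figures):

```python
# Five-year total cost of ownership: year-one cost plus annual
# maintenance in each of the four subsequent years. Maintenance
# rates are range midpoints (an assumption), not reported figures.
def five_year_tco(year_one_cost, annual_maintenance_rate, years=5):
    """Year-one implementation cost plus maintenance for each later year."""
    maintenance = year_one_cost * annual_maintenance_rate * (years - 1)
    return year_one_cost + maintenance

consulting = five_year_tco(228_000, 0.25)   # $228k year one, ~25%/yr maintenance
platform = five_year_tco(77_000, 0.125)     # $77k year one, ~12.5%/yr maintenance

print(f"Consulting, 5-year: ${consulting:,.0f}")
print(f"Platform, 5-year:   ${platform:,.0f}")
print(f"Platform costs {1 - platform / consulting:.0%} less")
```

Under those assumptions the five-year gap widens to roughly 75%, because the higher maintenance rate is applied to a larger base every year.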

This does not mean consultants have no role. Complex enterprise environments often need expert guidance during initial architecture and integration. But the consultants should be accelerating your team's ability to use the platform, not becoming a permanent fixture. The goal is independence, not dependency.

What this means for your AI deployment decision

OpenAI and Anthropic just validated the most important thesis in enterprise AI: the models are good enough. The problem is getting them into production. Their answer is to throw billions at human-powered deployment. That will work for the PE portfolio companies that get priority access and can afford seven-figure engagements.

For everyone else, the takeaway is different. If the two leading AI labs both concluded that their models need a human services layer to reach production, your deployment platform needs to solve that same gap through software. That means multi-model orchestration so you are not locked into one vendor's API. It means built-in governance so your security team does not need to audit from scratch. It means workflow builders that let operations teams connect agents to business processes without waiting for an FDE allocation.
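Multi-model orchestration in its simplest form is a routing layer that keeps vendor-specific calls behind one interface. A minimal sketch of the idea, with stubbed provider functions standing in for real SDK calls (the provider names, routing table, and fallback policy here are illustrative assumptions, not any specific platform's API):

```python
# Minimal multi-model routing sketch: one interface, swappable backends.
# Each provider function is a stub; in a real system it would wrap a
# vendor SDK call, and the priority order would live in config.
from typing import Callable

def call_claude(prompt: str) -> str:   # stub standing in for an Anthropic call
    return f"[claude] {prompt}"

def call_gpt(prompt: str) -> str:      # stub standing in for an OpenAI call
    return f"[gpt] {prompt}"

def call_gemini(prompt: str) -> str:   # stub standing in for a Google call
    return f"[gemini] {prompt}"

# Priority order per task type; swapping vendors means editing this
# table, not the application code that calls complete().
ROUTES: dict[str, list[Callable[[str], str]]] = {
    "code": [call_claude, call_gpt],
    "general": [call_gpt, call_claude, call_gemini],
}

def complete(task_type: str, prompt: str) -> str:
    """Try each provider for the task type in order, falling back on failure."""
    for provider in ROUTES[task_type]:
        try:
            return provider(prompt)
        except Exception:
            continue  # provider outage or rate limit: try the next one
    raise RuntimeError(f"all providers failed for task type {task_type!r}")

print(complete("code", "write a merge sort"))
```

The point of the abstraction is exactly the vendor-lock-in argument above: when one provider's pricing, quality, or availability changes, the routing table changes and the workflows built on top do not.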

The $5.5 billion signal is clear: deployment is the game now. The question is whether you solve it by renting engineers or by choosing the right platform. One of those scales. The other does not.

Start today

Start building AI agents to automate processes

Join our platform and start building AI agents for all kinds of automation.