MCP Hit 97 Million Downloads. The Companies That Built It Are Already Working Around It.

In sixteen months, Anthropic’s Model Context Protocol went from a niche SDK to 97 million monthly downloads. Every major AI lab adopted it. Over 5,800 servers now exist for databases, CRMs, cloud providers, and dev tools. By any reasonable metric, MCP won the standards war for AI agent tool integration.

And the companies that championed it are already routing around it in production.

What MCP Actually Does

If you’re building AI agents that need to talk to external tools, MCP gives you a universal connector. Think of it like USB-C for AI integrations. Before MCP, every tool connection required custom code. Now, one protocol handles discovery, authentication, and execution across thousands of services.
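Under the hood, MCP is JSON-RPC 2.0 over stdio or HTTP. The two core interactions are a discovery call and an execution call. The method names below come from the MCP specification; the tool name and its arguments are hypothetical:

```python
import json

# 1. Discovery: ask an MCP server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. Execution: invoke a tool by name with structured arguments.
#    "query_orders" and its arguments are illustrative, not part of any real server.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_orders",
        "arguments": {"customer_id": "c-1042", "limit": 10},
    },
}

print(json.dumps(call_request, indent=2))
```

The same two messages work against any of the 5,800+ servers, which is the whole point: the agent learns what a server can do at runtime instead of shipping with hardcoded integrations.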

Anthropic created it in November 2024. OpenAI adopted it by Q2 2025. Google DeepMind, Microsoft Copilot, and AWS Bedrock followed within months. In December 2025, Anthropic donated MCP to the Linux Foundation’s Agentic AI Foundation, co-founded with OpenAI and Block.

The adoption curve was steeper than React’s. The ecosystem grew faster than anyone expected. And then production reality showed up.

The Token Tax Nobody Budgeted For

Perplexity’s CTO Denis Yarats announced in March that the company is moving away from MCP internally. The reasons: context window overhead and authentication friction.

Cloudflare put hard numbers on the problem. Their MCP server covers 2,500 API endpoints. A native implementation would consume roughly 244,000 tokens just to describe available tools to the model. That’s more than most models’ entire context window. Their workaround, called “Code Mode,” cuts token usage by 81% — but it bypasses MCP’s standard tool-calling mechanism to get there.
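Cloudflare’s internals aren’t reproduced here, but the shape of the Code Mode idea can be sketched: instead of describing thousands of tools to the model, expose one code-execution tool plus a compact API surface, and let the model write a snippet. Only the snippet and its result hit the context window. Everything below, including the function name and the hardcoded “model output,” is an illustrative assumption:

```python
# One compact line of API documentation replaces a full JSON schema.
API_SURFACE = "orders_list(customer_id, limit) -> list[dict]"

def orders_list(customer_id, limit):
    # Stand-in for a real API binding the sandbox would provide.
    return [{"id": f"o-{i}", "customer": customer_id} for i in range(limit)]

# In Code Mode the model emits a snippet like this; here it is hardcoded.
model_snippet = "result = orders_list('c-1042', 3)"

scope = {"orders_list": orders_list}
exec(model_snippet, scope)   # real systems run this in a sandbox, omitted here
print(len(scope["result"]))  # 3
```

The trade is explicit: you give up MCP’s standard tool-calling mechanism and take on sandboxing responsibility in exchange for a drastically smaller context footprint.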

This is the core tension. MCP is brilliant at discovery. It lets an agent find and connect to tools through a single protocol. But when you dump full tool schemas into context for every interaction, you burn through token budgets fast. For a business running thousands of agent interactions daily, that overhead compounds into real cost.
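The compounding is easy to see with back-of-envelope arithmetic. The schema figure below is Cloudflare’s published number; the daily interaction volume and token price are assumptions chosen purely for illustration:

```python
# Cost of shipping full tool schemas in context on every interaction.
SCHEMA_TOKENS = 244_000       # tokens to describe 2,500 endpoints (Cloudflare's figure)
DAILY_INTERACTIONS = 10_000   # assumed agent interactions per day
PRICE_PER_MTOK = 3.00         # assumed $ per million input tokens

daily_overhead_tokens = SCHEMA_TOKENS * DAILY_INTERACTIONS
daily_cost = daily_overhead_tokens / 1_000_000 * PRICE_PER_MTOK

print(f"{daily_overhead_tokens:,} overhead tokens/day -> ${daily_cost:,.0f}/day")
# 2,440,000,000 overhead tokens/day -> $7,320/day
```

Even at a fraction of that volume, the overhead is a line item no one budgeted for, and it scales linearly with every new interaction.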

The 2026 MCP roadmap lists enterprise readiness as its fourth and final priority. The project’s own documentation describes it as “the least defined” of its four goals. No Enterprise Working Group exists yet.

Meanwhile, a new security category has emerged to fill the gap. PointGuard AI, TrueFoundry, MintMCP, and IBM’s open-source ContextForge all launched MCP Security Gateway products in Q1 2026. When third parties are building security layers the protocol itself should provide, that tells you where the maturity curve actually sits.

The Practical Playbook for 2026

None of this means MCP is the wrong bet. It means betting on it incorrectly is expensive.

Gartner predicts 40% of enterprise applications will include task-specific AI agents by end of 2026, up from less than 5% today. Businesses evaluating agentic AI can’t avoid MCP. But they can avoid the common mistake of treating protocol adoption as architecture.

Here’s what we’ve seen work across deployments:

Use MCP for discovery and development. It cuts integration development time by 60-70% for multi-tool agent setups. For local workflows and prototyping, it’s genuinely excellent.

Add a security gateway before production. The protocol’s auth model wasn’t designed for enterprise trust boundaries. Products like PointGuard or ContextForge sit between your agents and MCP servers, handling access control and audit logging.
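As a rough sketch of what such a gateway does — this is a conceptual illustration, not the API of PointGuard, ContextForge, or any other named product — the core job is an allowlist check plus an audit trail in front of every tool call:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("mcp.audit")

class GatewayDenied(Exception):
    pass

class SecurityGateway:
    """Minimal allowlist + audit layer between agents and MCP servers."""

    def __init__(self, allowlist, forward):
        self.allowlist = allowlist  # {agent_id: set of permitted tool names}
        self.forward = forward      # callable that reaches the real MCP server

    def call_tool(self, agent_id, tool, arguments):
        stamp = datetime.now(timezone.utc).isoformat()
        if tool not in self.allowlist.get(agent_id, set()):
            audit.info(f"DENY  {stamp} {agent_id} -> {tool}")
            raise GatewayDenied(f"{agent_id} may not call {tool}")
        audit.info(f"ALLOW {stamp} {agent_id} -> {tool}")
        return self.forward(tool, arguments)

# Usage with a stubbed MCP server:
gw = SecurityGateway(
    allowlist={"billing-agent": {"query_orders"}},
    forward=lambda tool, args: {"tool": tool, "ok": True},
)
print(gw.call_tool("billing-agent", "query_orders", {"limit": 5}))
```

Real products add credential brokering, rate limiting, and content inspection on top, but the placement is the same: the gateway, not the protocol, enforces the trust boundary.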

Keep direct API fallbacks for high-volume execution paths. When an agent calls the same three endpoints thousands of times daily, paying the MCP token tax on every call makes no sense. Route discovery through MCP, route execution through optimised direct integrations.
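In code, the hybrid pattern is just a routing decision. Every name below is a hypothetical stand-in:

```python
# Tools called thousands of times daily get pinned to direct API clients;
# everything else goes through MCP's generic path.
HOT_PATHS = {"query_orders", "get_customer", "create_invoice"}

def call_direct(tool, arguments):
    # Stand-in for a hand-written, optimised API client (no schema overhead).
    return {"via": "direct", "tool": tool}

def call_mcp(tool, arguments):
    # Stand-in for a generic MCP tools/call round trip.
    return {"via": "mcp", "tool": tool}

def route(tool, arguments):
    """Hot, repetitive calls skip MCP; long-tail calls keep its flexibility."""
    if tool in HOT_PATHS:
        return call_direct(tool, arguments)
    return call_mcp(tool, arguments)

print(route("query_orders", {})["via"])  # direct
print(route("search_docs", {})["via"])   # mcp
```

The hot-path set can be seeded from usage telemetry: any tool that clears a call-volume threshold earns a direct integration, while the long tail stays on MCP.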

The companies winning at agentic AI in 2026 aren’t the ones that adopted MCP fastest. They’re the ones that understood where the standard ends and engineering decisions begin. That distinction is worth more than any protocol specification.
