When the internet was being standardized, TCP/IP wasn’t the only protocol in town. OSI had its own vision. SNA was popular in enterprise. But TCP/IP won — not because it was perfect, but because it was simple, open, and good enough to build on top of.
On March 25, 2026, Anthropic’s Model Context Protocol (MCP) crossed 97 million monthly SDK downloads. Then it was donated to the Linux Foundation’s new Agentic AI Foundation (AAIF), co-founded with Block and OpenAI, with backing from Google, Microsoft, AWS, and Cloudflare.
I’ve been building with MCP since early 2025. This moment feels like the early days of HTTP — when everyone realized we needed a shared standard, not fifteen competing ones.
What MCP Actually Solves
Before MCP, connecting an LLM to an external tool was chaos. Every AI provider had its own function-calling format. Every tool integration required custom connectors. Your Claude-based agent couldn’t reuse the same tools as your GPT-based one.
The fundamental problem: AI models and external systems spoke different dialects of the same language.
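To make the “dialects” concrete, here is roughly how the same tool had to be declared for two different providers before MCP. The shapes below are abridged sketches of the OpenAI and Anthropic tool formats, not complete schemas:

```typescript
// The same "search_docs" tool, declared in two provider-specific dialects.

// OpenAI-style function calling: the JSON Schema lives under `function.parameters`.
const openaiTool = {
  type: "function",
  function: {
    name: "search_docs",
    description: "Search internal documentation",
    parameters: {
      type: "object",
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
};

// Anthropic-style tool use: the same JSON Schema, but under `input_schema`.
const anthropicTool = {
  name: "search_docs",
  description: "Search internal documentation",
  input_schema: {
    type: "object",
    properties: { query: { type: "string" } },
    required: ["query"],
  },
};

// Identical semantics, incompatible envelopes: every integration had to be
// written at least twice.
```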
MCP standardizes this with a simple client-server architecture:
```
AI Model (Client) <---> MCP Protocol <---> Tool/Data Source (Server)
```
An MCP server exposes three primitives:
- Tools — functions the AI can call (e.g., `read_file`, `query_database`)
- Resources — data the AI can read (e.g., file contents, API responses)
- Prompts — reusable prompt templates
That’s it. Simple enough to implement in a weekend, powerful enough to connect any AI to any system.
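Under the hood, MCP messages are JSON-RPC 2.0. A sketch of the two requests behind the tools primitive — the `read_file` tool and its arguments here are hypothetical, but the method names and envelope follow the spec:

```typescript
// Client asks the server what tools it offers:
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Client invokes one of the advertised tools by name, with JSON arguments:
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "read_file",
    arguments: { path: "README.md" },
  },
};
```

The server answers `tools/list` with its tool definitions (name, description, input schema) and `tools/call` with a content array — exactly the two handlers the server example later in this post implements.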
Why 97 Million Is a Meaningful Number
97 million monthly SDK downloads isn’t just a vanity metric. Compare it to other infrastructure standards at similar stages:
- npm (Node.js package manager) took 3 years to reach comparable monthly downloads
- Docker Hub took 2 years to reach 100M pulls per month
MCP reached this adoption in roughly 16 months since its November 2024 launch. More telling: 10,000 active MCP servers now exist in the wild, covering everything from file systems and databases to Slack, GitHub, Jira, and hundreds of enterprise APIs.
From my team’s experience building internal AI tools: when we switched from custom function-calling implementations to MCP in Q1 2025, integration time per new tool dropped from ~2 days to ~4 hours. The protocol forces you to think in the right abstractions.
The Linux Foundation Move: Why It Matters
Anthropic’s decision to donate MCP wasn’t altruistic — it was strategic AND beneficial to the ecosystem. Here’s the calculation:
For Anthropic: MCP becomes more valuable if every AI provider adopts it, not just Claude users. By giving it to a neutral foundation, they remove the “this is Anthropic’s protocol” objection from competitors.
For developers: The Linux Foundation governance means MCP’s roadmap is now community-driven. No single company can deprecate it or break backward compatibility for competitive advantage.
For the industry: We now have a founding coalition — OpenAI, Anthropic, Google, Microsoft, AWS — committed to a shared standard. This is rare. This is how lasting infrastructure gets built.
The AAIF also took in two other projects: goose (an open-source AI agent framework from Block) and AGENTS.md (an open convention for giving coding agents project-specific instructions). Together, these form a coherent open stack for agentic AI.
Hands-On: Building with MCP Today
Here’s a minimal MCP server in TypeScript that I’ve used in production for connecting our internal documentation to AI assistants:
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Placeholder — wire this up to your actual search backend.
async function searchInternalDocs(query: string): Promise<unknown> {
  // Your search logic here
  return [];
}

const server = new Server(
  { name: "docs-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server offers (answers tools/list).
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "search_docs",
      description: "Search internal documentation",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Search query" },
        },
        required: ["query"],
      },
    },
  ],
}));

// Handle tool invocations (answers tools/call).
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "search_docs") {
    const { query } = request.params.arguments as { query: string };
    const results = await searchInternalDocs(query);
    return {
      content: [{ type: "text", text: JSON.stringify(results) }],
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Speak MCP over stdio so any local MCP client can launch this server.
const transport = new StdioServerTransport();
await server.connect(transport);
```
This server works immediately with Claude Desktop, Claude Code, VS Code Copilot, and any other MCP-compatible client. Write once, works everywhere — that’s the TCP/IP parallel.
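For reference, hooking the server into a client is a few lines of configuration. A sketch of a Claude Desktop `claude_desktop_config.json` entry (the paths and the second `crm-server` are placeholders), which also shows how multiple servers sit side by side under one client:

```json
{
  "mcpServers": {
    "docs-server": {
      "command": "node",
      "args": ["/path/to/docs-server/build/index.js"]
    },
    "crm-server": {
      "command": "node",
      "args": ["/path/to/crm-server/build/index.js"]
    }
  }
}
```

Each entry is a command the client launches and talks to over stdio; adding a capability to your assistant is just adding an entry here.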
What This Means for How We Architect AI Systems
I’ve updated our team’s architecture guidelines based on where MCP is heading:
1. Stop building proprietary tool connectors. If you’re building a new AI feature that needs to connect to external systems, build it as an MCP server. The overhead is minimal, and you future-proof against AI client changes.
2. Design for composability. MCP servers are composable: an agent can connect to multiple MCP servers simultaneously. Design your servers with single responsibilities — one server for your CRM, one for your docs, one for your codebase.
3. Think about security early. MCP’s simplicity is also its risk: a poorly secured MCP server is an RCE waiting to happen. The protocol doesn’t mandate authentication — you need to add it yourself. Use HTTPS transports for remote servers, and implement proper input validation on every tool handler.
4. Consider the multi-model future. Today your team might use Claude; tomorrow it might be Gemini or GPT-5. Building on MCP means your tool integrations aren’t tied to any single AI provider.
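On the input-validation point: a minimal sketch of argument checking for a tool handler. The validator is hand-rolled here for illustration; in practice a schema library such as zod is a better fit:

```typescript
// Hand-rolled argument validation for a hypothetical "search_docs" tool.
// Illustrative only — prefer a schema validation library in production.

type SearchArgs = { query: string };

function validateSearchArgs(args: unknown): SearchArgs {
  if (typeof args !== "object" || args === null) {
    throw new Error("arguments must be an object");
  }
  const { query } = args as Record<string, unknown>;
  // Reject anything that isn't exactly what the tool expects.
  if (typeof query !== "string" || query.length === 0 || query.length > 1000) {
    throw new Error("query must be a non-empty string under 1000 chars");
  }
  return { query };
}

// Never pass raw model-supplied arguments into a shell command, SQL string,
// or file path — validate first, then use the narrowed, typed value.
```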
The Real Milestone Isn’t the Downloads
The 97 million number is impressive, but the real signal is the governance transfer. When Anthropic moved MCP to the Linux Foundation, they accepted that they can’t control its future — and they chose to do it anyway.
That’s the inflection point. That’s when a vendor-specific protocol becomes infrastructure.
We’re early in the agentic AI era, but the plumbing is getting laid right now. In five years, we’ll look back at early 2026 the same way we look back at 1994 and the early web: a moment when the standards that would define a generation were quietly decided.
MCP is becoming that standard. Build on it accordingly.
Technical Lead perspective: I’ve been building AI agent infrastructure since 2024. The patterns I’m seeing emerge around MCP closely mirror the consolidation we saw around REST APIs in 2010-2012. The developers who built REST-native systems then had a massive advantage for the next decade. The same opportunity is here now.