In December 2025, Anthropic’s Model Context Protocol joined the Linux Foundation under a new umbrella called the Agentic AI Foundation, alongside OpenAI’s AGENTS.md and Block’s goose framework. Three months later, MCP has crossed 10,000 published servers, spanning everything from developer tools to enterprise deployments.
That number is worth pausing on. Not because it’s a vanity metric, but because it represents a genuine inflection point in how AI systems integrate with the rest of your software stack.
What MCP Actually Is (And Why It’s Different)
If you’ve only read the marketing copy, MCP sounds like yet another API abstraction layer. It’s not. The key insight is simpler and more consequential: MCP inverts the integration model.
In the traditional model, you write code to call external tools:
```python
import psycopg2
import requests

# Traditional approach: you own every integration.
def get_database_data(query):
    conn = psycopg2.connect(DATABASE_URL)
    with conn.cursor() as cur:  # psycopg2 queries go through a cursor
        cur.execute(query)
        return cur.fetchall()

def search_docs(query):
    # params= handles URL-encoding of the query string
    return requests.get(f"{DOCS_API}/search", params={"q": query}).json()

# Pass these as tools to your LLM
tools = [get_database_data, search_docs]
response = llm.complete(prompt, tools=tools)
```
You maintain the integration. You handle authentication, versioning, error handling. You update the code when the external service changes.
With MCP, the external service declares its own capabilities:
```json
{
  "name": "postgres-server",
  "version": "1.0.0",
  "tools": [
    {
      "name": "query",
      "description": "Execute a read-only SQL query",
      "inputSchema": {
        "type": "object",
        "properties": {
          "sql": { "type": "string" }
        }
      }
    }
  ]
}
```
Your AI agent discovers this capability at runtime. The database team maintains the MCP server. You just point your agent at it.
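Under the hood, that discovery is a JSON-RPC 2.0 exchange: the client sends a `tools/list` request and the server replies with its tool schemas. A minimal sketch of the exchange, with the server simulated in-process (a real client would talk to a live MCP server over stdio or HTTP):

```python
import json

# The server's declared capabilities, matching the manifest above.
SERVER_TOOLS = [
    {
        "name": "query",
        "description": "Execute a read-only SQL query",
        "inputSchema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
        },
    }
]

def handle_request(raw: str) -> str:
    """Simulated MCP server: answers a JSON-RPC 2.0 tools/list request."""
    req = json.loads(raw)
    result = {"tools": SERVER_TOOLS} if req.get("method") == "tools/list" else {}
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

# Client side: discover the tools at runtime instead of hard-coding them.
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
response = json.loads(handle_request(request))
discovered = {t["name"]: t for t in response["result"]["tools"]}
```

After this exchange the agent knows what the server can do, including the input schema it must satisfy, without any integration code shipped on the client side.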
This is the same architectural shift that happened when REST + JSON replaced SOAP. The friction reduction compounds at scale.
Why 10,000 Servers Is the Tipping Point
Network effects work differently for protocols than for products. A social network with 10,000 users is irrelevant. A protocol with 10,000 implementations is approaching standardization.
When I evaluate whether to build on a protocol vs. a vendor API, my key question is: “If I need to swap the AI provider tomorrow, what breaks?” With MCP:
- Your MCP servers keep working regardless of which AI model you use
- Claude, GPT-5, Gemini, and any open-source model that supports MCP can use the same servers
- Your enterprise integrations become AI-provider-agnostic
That’s not theoretical. I’ve already migrated a production agent from Claude to a local open-source model for a client with strict data residency requirements. Because we’d built on MCP from the start, the migration was a config change, not a rewrite.
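The shape of that migration, sketched as a hypothetical agent configuration (the provider names, config keys, and URLs here are illustrative, not from any particular framework): the MCP server list is untouched, and only the model block changes.

```python
# Hypothetical agent config: MCP servers are declared independently of the
# model provider, so swapping providers touches only the "model" block.
config_before = {
    "model": {"provider": "anthropic", "name": "claude-sonnet"},
    "mcp_servers": [
        {"name": "postgres-server", "url": "https://internal.example/mcp/postgres"},
        {"name": "docs-server", "url": "https://internal.example/mcp/docs"},
    ],
}

# The data-residency migration: same servers, different model.
config_after = {
    **config_before,
    "model": {"provider": "local", "name": "open-weights-model"},
}

# The integration surface the servers expose is identical in both configs.
assert config_before["mcp_servers"] == config_after["mcp_servers"]
```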
The NIST Factor: Why Enterprise Cares Now
The NIST AI Agent Standards Initiative, announced February 17, 2026, explicitly frames its work around “industry-led standards” and “open-source protocol development.” MCP is already named in the framing documents.
For enterprise customers — especially in regulated industries — NIST involvement changes the procurement conversation. When your AI agent integration is built on a NIST-referenced protocol, the security review gets shorter. The vendor lock-in concern gets smaller. The board-level sign-off becomes more straightforward.
I’ve been in those conversations. The difference between “we built this on a custom integration” and “we built this on the Linux Foundation’s standard protocol” is measurable in months of enterprise sales cycles.
Practical MCP: What It Looks Like in .NET
The .NET SDK for MCP has matured significantly. Here’s a minimal example server:
```csharp
using System.ComponentModel;
using System.Data;
using System.Text.Json;
using Dapper;
using ModelContextProtocol.Server;
using ModelContextProtocol.Protocol.Types;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMcpServer()
    .WithHttpTransport()
    .WithTool<DatabaseQueryTool>()
    .WithTool<DocumentSearchTool>()
    .WithResource<CompanyPoliciesResource>();

var app = builder.Build();
app.MapMcp("/mcp");
app.Run();

[McpTool("query_database")]
public class DatabaseQueryTool(IDbConnection db)
{
    [Description("Execute a read-only query against the company database")]
    public async Task<string> QueryAsync(
        [Description("The SQL SELECT statement")] string sql,
        CancellationToken ct)
    {
        // Naive read-only guard; a read-only database role is the real protection.
        if (!sql.TrimStart().StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
            throw new McpException("Only SELECT queries are permitted");

        // Dapper takes a CancellationToken via CommandDefinition.
        var result = await db.QueryAsync(new CommandDefinition(sql, cancellationToken: ct));
        return JsonSerializer.Serialize(result);
    }
}
```
The key observation: this is a standard ASP.NET Core application. Deployment, monitoring, authentication, and rate limiting all work exactly as you’d expect. You’re not learning a new runtime — you’re adding MCP capabilities to existing .NET patterns.
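One caveat on the query tool above: a `SELECT` prefix check is deliberately naive. It rejects legitimate CTEs (`WITH ... SELECT`) and accepts multi-statement strings like `SELECT 1; DROP TABLE users`. A slightly stronger heuristic, sketched in Python (still defense in depth only; the real protection is connecting with a read-only database role):

```python
import re

READ_ONLY_STARTS = ("select", "with")
# Write keywords that should never appear in a read-only query.
FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b",
    re.IGNORECASE,
)

def is_read_only(sql: str) -> bool:
    """Heuristic guard: single statement, read-only verb, no write keywords.

    This is defense in depth, not a parser -- pair it with a read-only
    database role so a bypass can't actually write anything.
    """
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False  # reject multi-statement strings outright
    stmt = statements[0].strip().lower()
    if not stmt.startswith(READ_ONLY_STARTS):
        return False
    return not FORBIDDEN.search(stmt)

assert is_read_only("SELECT * FROM orders")
assert is_read_only("WITH t AS (SELECT 1) SELECT * FROM t")
assert not is_read_only("SELECT 1; DROP TABLE users")
assert not is_read_only("DELETE FROM orders")
```

The keyword scan will false-positive on column names that contain words like `create`, which is the right trade-off for a guard of last resort: fail closed, not open.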
Where the 10,000 Servers Actually Live
The distribution of MCP servers tells you where the ecosystem is maturing first:
Developer tools (highest density): GitHub, GitLab, Jira, Linear, Notion, Confluence, VS Code extensions. This makes sense — developer tools are MCP’s origin story.
Data and databases: PostgreSQL, MySQL, MongoDB, Snowflake, BigQuery, Elasticsearch. The ability to give an agent safe, scoped database access is one of MCP’s killer use cases.
Enterprise integration: Salesforce, SAP, ServiceNow, Workday. This is where I see the most new activity. Much enterprise software lacks good APIs, and MCP servers that wrap these systems are valuable precisely because they translate between the AI’s natural-language interface and the enterprise system’s arcane APIs.
Internal tools (fastest growing): This is the quiet revolution. Teams are building MCP servers for their internal documentation, their deployment systems, their monitoring dashboards. Every internal tool that gets an MCP server becomes accessible to any AI agent in the organization.
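The translation work those enterprise wrappers do can be sketched in a few lines. Everything here is invented for illustration (the legacy field names, priority codes, and tool shape are not from any real system); the point is the shape: a clean, schema-described interface on the agent side, a cryptic wire format on the legacy side.

```python
# Hypothetical MCP tool body: a clean input schema for the agent,
# translated into the payload a legacy ticketing system actually expects.
URGENCY_CODES = {"low": "P4", "high": "P1"}  # invented legacy priority codes

def create_ticket_tool(args: dict) -> dict:
    """Agent-facing interface: {"title": str, "urgency": "low" | "high"}."""
    return {
        # Legacy wire format: cryptic keys, magic values, hard length limits.
        "TKT_SHORT_DESC": args["title"][:80],
        "TKT_PRIO_CD": URGENCY_CODES[args["urgency"]],
        "TKT_SRC": "API_V2_COMPAT",
    }

payload = create_ticket_tool({"title": "VPN down in Berlin office", "urgency": "high"})
```

The agent only ever sees `title` and `urgency`; the wrapper owns the mapping, so when the legacy system changes, one MCP server gets updated instead of every agent integration.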
The Computer-Use Connection
There’s a related trend worth noting: computer-use agents — models that can operate actual UIs instead of APIs — are getting production-ready. This matters because most enterprise software doesn’t have good APIs, and the alternatives are MCP servers (if someone builds them) or computer-use agents that navigate the actual interface.
I expect these two approaches to converge. The pattern I predict: MCP servers for enterprise software that has stable, accessible interfaces; computer-use agents for legacy systems or software where building an MCP server would require more effort than the automation is worth.
The combination means that by end of 2026, most enterprise workflows should be automatable by AI agents — either through MCP integration or computer-use.
What You Should Build Now
Three practical recommendations:
1. Audit your internal tools for MCP candidacy. The best first MCP server is something your team already uses daily. Your deployment pipeline. Your internal documentation system. Your monitoring dashboard. Pick one, build the MCP server, and deploy it. The learning curve is about two days for an experienced .NET developer.
2. Design new integrations as MCP servers from the start. If your team is building a new internal API, consider whether it should be MCP-native. The overhead is minimal, and the future compatibility with AI agents is significant.
3. Watch AGENTS.md. OpenAI’s AGENTS.md specification joined MCP in the Agentic AI Foundation. It solves a different problem: how a project declares context, conventions, and instructions to the agents working in it, rather than how agents access external tools. The combination of MCP (tool access) and AGENTS.md (agent guidance) will likely be the full stack for enterprise agent systems in 2027.
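As a feel for the effort behind recommendation 1: the core of an MCP server is a JSON-RPC dispatch loop over tools your team already has. A stdlib-only sketch (a real server would use an MCP SDK and a proper transport; the internal tools and runbook names here are invented):

```python
import json

# Invented internal tools you might expose to agents.
def list_deployments() -> list[str]:
    return ["api@v2.3.1", "web@v5.0.0"]

def search_runbooks(query: str) -> list[str]:
    runbooks = {"vpn": "runbook-017-vpn-outage", "db": "runbook-004-db-failover"}
    return [doc for key, doc in runbooks.items() if key in query.lower()]

TOOLS = {"list_deployments": list_deployments, "search_runbooks": search_runbooks}

def dispatch(raw: str) -> str:
    """Minimal JSON-RPC 2.0 dispatch: tools/list and tools/call, nothing else."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        params = req.get("params", {})
        result = {"content": TOOLS[params["name"]](**params.get("arguments", {}))}
    else:
        result = {"error": "unknown method"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = json.loads(dispatch(json.dumps({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "search_runbooks", "arguments": {"query": "VPN outage"}},
})))
```

Wrapping functions you already have is most of the job; the SDK and transport layer are what the two-day learning curve actually covers.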
The Honest Assessment
MCP is not perfect. The spec is still evolving. Authentication patterns are inconsistent across servers. Discovery (finding which MCP servers exist in your organization) is largely unsolved. The tooling for debugging MCP interactions is immature.
But 10,000 servers means the ecosystem has passed the “will this survive?” threshold. The question now is “how do we use it well?” — which is a much better problem to have.
The developers who learn MCP deeply in the next six months will be significantly ahead of those who wait for it to “mature further.” It’s mature enough. Build with it.