Two years ago, GitHub Copilot completing a function signature felt magical. Today, I’m watching AI agents file GitHub issues, write the code, run the tests, and request a human review — all while I’m in a meeting. March 2026 isn’t a milestone; it’s a threshold we already crossed without noticing.

This post is my honest assessment as a Technical Lead who’s been integrating AI tools into real .NET and cloud projects. Not a product review — a practitioner’s guide.

What “Agentic” Actually Means in Practice

The word is overused, so let me define it precisely: an agentic AI system can perceive its environment, set intermediate goals, call tools, evaluate results, and adjust its plan — without you holding its hand at each step.

The old model: you write a prompt → AI returns text → you copy-paste → you run the code.

The new model: you describe an outcome → the agent figures out the steps → it reads your files, writes code, runs tests, checks logs, and loops until it’s done.
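That loop — perceive, act, evaluate, repeat — can be sketched in a few lines of Python. This is a deliberately stubbed illustration (the `run_tests` and `fix_code` tools are invented placeholders; a real agent would call an LLM to pick the next action), but the control flow is the point:

```python
# Minimal sketch of an agentic loop: perceive -> act -> evaluate -> repeat.
# The tools below are stubs standing in for real test runners and code edits.

def run_tests() -> str:
    """Stub: pretend to run the suite and report the result."""
    return "2 failures in test_payments"

def fix_code(observation: str) -> str:
    """Stub: pretend to patch code based on what we observed."""
    return f"patched code based on: {observation}"

def agent_loop(goal: str, max_steps: int = 5) -> list[str]:
    """Drive tool calls until the goal is met or the step budget runs out."""
    log = [f"goal: {goal}"]
    observation = run_tests()              # perceive the environment
    for _ in range(max_steps):
        if "failures" not in observation:
            log.append("done")             # evaluate: goal reached
            break
        log.append(fix_code(observation))  # act: adjust and retry
        observation = "all tests pass"     # re-check results (stubbed)
    return log

print(agent_loop("make the payment tests pass"))
```

Replace the stubs with real tool calls and an LLM-driven planner and you have the skeleton every framework below is built around.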

The practical difference is enormous. On my last sprint, I delegated writing integration tests for a legacy payment API to a Claude Code agent. It read the existing endpoint contracts, inferred the business rules from old test comments (yes, really), and produced 87 tests covering edge cases I hadn’t thought of. That would have been a day and a half of my time.

The New Landscape: Every Lab Has an Agent Framework

This is the messy reality right now — there are too many frameworks:

Framework           Vendor       Language    Focus
OpenAI Agents SDK   OpenAI       Python      General
Claude Agent SDK    Anthropic    Python/TS   Tool-heavy
Google ADK          Google       Python      Gemini-native
Semantic Kernel     Microsoft    C#/Python   Enterprise
Dapr Agents v1.0    CNCF         Python      Cloud-native
Smolagents          HuggingFace  Python      Lightweight

My recommendation: don’t framework-hop. Pick one based on your stack and commit. For .NET shops, Semantic Kernel has matured significantly. For Python-first teams, Dapr Agents v1.0 (released March 23, 2026) is compelling because it gives you production-grade state management and failure recovery out of the box.

Dapr Agents v1.0: Why It Matters

Most agent frameworks assume your agent runs in a single process forever. Dapr Agents assumes it will crash, restart, get scaled horizontally, and need to resume mid-task. For enterprise workloads, that assumption is correct.

import asyncio

from dapr_agents import Agent, tool

@tool
def query_database(sql: str) -> dict:
    """Execute a read-only SQL query and return results."""
    # ... your DB logic
    pass

agent = Agent(
    name="data-analyst",
    tools=[query_database],
    state_store="redis",           # survives restarts
    message_bus="servicebusqueue", # async, durable
)

async def main() -> None:
    await agent.run("Analyze sales trends for Q1 2026 and flag anomalies")

asyncio.run(main())

The state_store and message_bus parameters are what separate toys from production tools. If your agent crashes mid-analysis, it picks up where it left off.
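The resume-after-crash behaviour is easier to reason about if you think of the state store as a checkpoint map keyed by task. Here is a stdlib-only sketch of the idea (an in-memory dict stands in for the Redis store that Dapr manages for you, and the step names are invented):

```python
# Sketch of checkpointed execution: each completed step is recorded in a state
# store, so a restarted worker resumes at the first unfinished step.

state_store: dict[str, str] = {}     # stand-in for the Redis state store
crashed: set[str] = set()            # lets us simulate exactly one crash

STEPS = ["load_q1_sales", "compute_trends", "flag_anomalies"]

def run_task(task_id: str) -> list[str]:
    """Run steps in order, checkpointing each; a restart skips completed work."""
    executed = []
    done_until = int(state_store.get(task_id, "0"))
    for i, step in enumerate(STEPS):
        if i < done_until:
            continue                               # completed before the crash
        if step == "compute_trends" and task_id not in crashed:
            crashed.add(task_id)
            raise RuntimeError("simulated crash")  # dies before checkpointing
        executed.append(step)
        state_store[task_id] = str(i + 1)          # checkpoint the completed step
    return executed

try:
    run_task("q1-analysis")          # first attempt dies mid-analysis
except RuntimeError:
    pass

# On "restart", the interrupted step re-runs and the task finishes:
print(run_task("q1-analysis"))       # ['compute_trends', 'flag_anomalies']
```

The interrupted step re-runs on restart, which is why agent tools should be idempotent wherever possible.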

JetBrains Central: Managing Agent Fleets

JetBrains is shipping something genuinely interesting: JetBrains Central, launching early access in Q2 2026. The premise is that once you have multiple agents working on your codebase simultaneously, you need governance — a control plane.

Think of it as Kubernetes for AI agents. You define what agents can do, see their progress in real time, and intervene when they go sideways.

From a Technical Lead perspective, this solves my biggest pain point: visibility. When an AI agent is refactoring a module, I want to see its plan before it touches 40 files. JetBrains Central surfaces this.

GitHub Agent HQ and the Multi-Agent Workflow

GitHub’s Agent HQ lets you run multiple agents side-by-side with full audit trails. The workflow that’s becoming standard in high-performing teams:

┌─────────────────────────────────────────────────────┐
│                    GitHub Issue                     │
└────────────────────────┬────────────────────────────┘
                         │
            ┌────────────▼────────────┐
            │     Planning Agent      │
            │ (breaks into subtasks)  │
            └────────────┬────────────┘
                         │
         ┌───────────────┼───────────────┐
         │               │               │
    ┌────▼────┐     ┌────▼────┐     ┌────▼────┐
    │  Code   │     │  Tests  │     │  Docs   │
    │  Agent  │     │  Agent  │     │  Agent  │
    └────┬────┘     └────┬────┘     └────┬────┘
         │               │               │
         └───────────────┼───────────────┘
                         │
            ┌────────────▼────────────┐
            │      Review Agent       │
            │  (checks conflicts,     │
            │   quality, security)    │
            └────────────┬────────────┘
                         │
                         ▼
                  Human Approval

The key insight: you’re not replacing engineers, you’re changing what engineers do. The best engineers on AI-augmented teams are spending their time on architecture decisions, requirements clarification, and reviewing agent output — not writing boilerplate.
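The fan-out/fan-in shape of that diagram is just concurrent orchestration with a gate at the end. A minimal sketch in Python, with the agents stubbed as async functions (the agent behaviours here are invented; only the orchestration pattern is the point):

```python
# Sketch of the fan-out / fan-in workflow: a planner splits an issue into
# subtasks, specialist agents run concurrently, and a review stage gates
# everything behind human approval.
import asyncio

async def plan(issue: str) -> list[str]:
    """Planning agent (stub): break the issue into subtasks."""
    return [f"{kind}: {issue}" for kind in ("code", "tests", "docs")]

async def specialist(subtask: str) -> str:
    """Code/Tests/Docs agent (stub): produce an artifact for one subtask."""
    await asyncio.sleep(0)            # stand-in for real agent work
    return f"artifact for {subtask}"

async def review(artifacts: list[str]) -> dict:
    """Review agent (stub): check the combined output, then wait for a human."""
    return {"artifacts": artifacts, "status": "needs-human-approval"}

async def workflow(issue: str) -> dict:
    subtasks = await plan(issue)                       # planning agent
    artifacts = await asyncio.gather(                  # specialists in parallel
        *(specialist(s) for s in subtasks)
    )
    return await review(list(artifacts))               # review gates the merge

result = asyncio.run(workflow("add retry logic to the billing client"))
print(result["status"])
```

Nothing merges until the review stage says so, which is exactly where the human stays in the loop.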

MCP: The Protocol That Makes All of This Work

The Model Context Protocol (MCP) deserves its own post, but briefly: MCP is Anthropic’s open standard for connecting AI agents to external tools and data sources. It’s become the de facto integration layer in 2026.

Every major agent framework now speaks MCP. What this means practically: you build an MCP server for your internal APIs once, and any compliant AI agent can use it.

// mcp-server.ts — expose your internal API to any AI agent
// Uses the official TypeScript SDK (@modelcontextprotocol/sdk).
// `db` is your own data-access layer, shown here as a placeholder.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "internal-api",
  version: "1.0.0",
});

server.tool(
  "get_order_status",
  "Get the status of a customer order",
  { order_id: z.string() },
  async ({ order_id }) => {
    const order = await db.orders.findById(order_id);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({ status: order.status, updatedAt: order.updatedAt }),
        },
      ],
    };
  }
);

await server.connect(new StdioServerTransport());

Once this is running, Claude, GPT-5, Gemini — any MCP-compatible agent — can query your order system without you writing custom integration code for each one.

Real Productivity Numbers (My Experience)

I’ve been tracking this carefully across three projects:

  • Writing integration tests: ~3× faster with a well-prompted coding agent
  • Documentation of existing code: ~5× faster (agents are particularly good at this)
  • Refactoring legacy code: ~1.5× faster (requires more supervision, more risk)
  • Architecture decisions: agents are useful as sparring partners but still need human judgment
  • Debugging production issues: mixed — great for hypothesis generation, unreliable for root cause analysis on complex distributed systems

The 2–3× overall productivity gain cited in industry reports feels roughly right for greenfield work. For brownfield, legacy, or high-security systems, be more conservative.

What Technical Leads Need to Watch

Context engineering is becoming a real skill. The quality of agent output correlates directly with the quality of context you provide. Teams that invest in good CLAUDE.md files, rich MCP tool descriptions, and curated code examples for agents are seeing dramatically better results.
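What a good CLAUDE.md looks like varies by team, but a short, opinionated one along these lines pays for itself quickly. This is an illustrative sketch with invented project details, not a prescribed format:

```markdown
# CLAUDE.md — project context for coding agents

## Stack
- .NET 8 (C#), Azure Functions, Azure Service Bus
- Tests: xUnit; run with `dotnet test`

## Conventions
- All public APIs get XML doc comments
- Never modify files under `src/Legacy/` without flagging it in the plan first

## Gotchas
- `PaymentService` retries internally — do not add retry logic in callers
```

The "Gotchas" section is the highest-leverage part: it encodes exactly the tribal knowledge an agent cannot infer from the code alone.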

Security boundaries matter more now. When an agent can read files, execute code, and call external APIs, a prompt injection attack has real consequences. Microsoft’s March 2026 guidance on agentic AI security is worth reading. Every tool your agent can call is an attack surface.
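One cheap mitigation that generalizes well: route every tool call through a policy gate that lives in code you control, not in the prompt. A stdlib-only sketch (the tool names and rules are illustrative):

```python
# Sketch of a tool-level security boundary: every call passes through a policy
# check before dispatch. A prompt injection can ask for anything; the gate
# decides what actually executes.

ALLOWED_TOOLS = {"query_database", "get_order_status"}

def is_read_only(sql: str) -> bool:
    """Crude illustration of a per-tool rule; real checks should be stricter."""
    return sql.strip().lower().startswith("select")

def guarded_call(tool_name: str, **kwargs) -> str:
    """Apply the allowlist and per-tool rules, then dispatch (stubbed here)."""
    if tool_name not in ALLOWED_TOOLS:
        return f"denied: {tool_name} is not on the allowlist"
    if tool_name == "query_database" and not is_read_only(kwargs["sql"]):
        return "denied: only read-only SQL is permitted"
    return f"executed: {tool_name}"    # dispatch to the real tool here

print(guarded_call("query_database", sql="SELECT * FROM orders"))
print(guarded_call("query_database", sql="DROP TABLE orders"))
print(guarded_call("send_email", to="attacker@example.com"))
```

The second and third calls are denied regardless of what the model was tricked into requesting — the enforcement point is deterministic code, not the model's judgment.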

Start small, expand gradually. The teams failing at agentic AI are the ones that tried to automate everything at once. The teams succeeding started with one specific, well-defined task (like test generation) and expanded from there.

My Current Setup (As of March 2026)

For the projects I’m leading:

  1. Claude Code as the primary coding agent (handles context well, MCP support is mature)
  2. Semantic Kernel for .NET agent orchestration
  3. Custom MCP servers for each internal service
  4. JetBrains IDE for daily coding (waiting on JetBrains Central for agent fleet management)
  5. GitHub Agent HQ in evaluation for PR automation

The stack will likely look different in six months. That’s the nature of this moment.

Conclusion

Agentic AI tools have crossed from experiment to infrastructure in 2026. As a Technical Lead, the question is no longer “should we adopt this?” but “how do we govern it?”

Build your team’s AI literacy. Define clear ownership for agent-generated code (it’s your code, you’re responsible for it). Invest in your MCP layer. And remember: the goal is better software, faster — not just more AI in the pipeline.

The engineers who learn to orchestrate agents effectively are going to have an enormous advantage. That’s where I’d invest time right now.
