Every role on a software team is changing. Not disappearing — changing. The Business Analyst who used to spend three days synthesising stakeholder interviews can now do it in three hours. The QA Engineer who manually explored edge cases now directs an AI agent to do it and reviews the findings. The Tech Lead who dreaded sprint planning now has an AI that pre-populates estimates from historical velocity.

This 12-part series maps every role in a modern software delivery team — BA, PO, PM, SA, Tech Lead, Dev, QA, TA, Security Engineer, and DevOps Engineer — through the lens of AI transformation. For each role, we cover what changes, what stays human, the new workflows, and the tools that make it real.

This first post sets the frame: the AI Team Model.


The Core Shift: Amplifiers, Not Replacements

AI agents in a software team function as cognitive amplifiers. They accelerate the low-signal, high-volume work — processing requirements, writing boilerplate, generating test cases, scanning for vulnerabilities — so humans can focus on the high-signal, high-judgment work: stakeholder relationships, architectural taste, ethical trade-offs, team culture.

The team still needs every role. But each role now has a different leverage point:

| Role | Old bottleneck | New leverage with AI |
| --- | --- | --- |
| BA | Interview synthesis, story writing | Rapid synthesis + structured outputs |
| PO | Backlog grooming, prioritisation | AI-assisted impact scoring, release notes |
| PM | Status reporting, risk tracking | Automated dashboards, risk flags |
| SA | Architecture diagrams, ADR writing | AI-drafted options + trade-off analysis |
| Tech Lead | PR reviews, standards enforcement | AI pre-review + auto-linting |
| Dev | Boilerplate, documentation | AI pair, test generation, refactoring |
| QA | Manual test cases, regression | AI exploratory testing, smart coverage |
| TA | IaC, infra sizing, cost modelling | AI-assisted IaC generation + cost forecast |
| Security | Manual threat modelling, code review | AI SAST, dependency scanning, threat maps |
| DevOps | Pipeline config, runbook authoring | AI-generated IaC, incident summarisation |

The AI Team Topology

A well-functioning AI-augmented team is not a team where everyone just uses ChatGPT individually. It is a structured system with shared context, shared prompts, and agreed human checkpoints.

AI Team Overview Diagram

The model has four layers:

1. Shared Context Layer

  • A shared product brief, architecture decision log, and coding standards — kept as living Markdown documents in the repo
  • AI agents are given this context on every prompt; they don’t hallucinate in a vacuum
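As a minimal sketch of this layer (assuming Python; the file names are illustrative, not a fixed convention), an agent wrapper can prepend the shared context documents to every prompt so no agent answers in a vacuum:

```python
from pathlib import Path

# Illustrative document names -- use whatever your repo actually contains.
CONTEXT_DOCS = ["product-brief.md", "decision-log.md", "coding-standards.md"]

def build_prompt(task: str, ai_dir: str = "ai") -> str:
    """Concatenate shared context docs (if present) ahead of the task."""
    sections = []
    for name in CONTEXT_DOCS:
        doc = Path(ai_dir) / name
        if doc.exists():
            # Each doc becomes a labelled section the agent can cite.
            sections.append(f"## {name}\n{doc.read_text()}")
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)
```

The point of the sketch is the ordering: shared context first, task last, on every call, regardless of which role (or which tool) is doing the prompting.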

2. Role-Specific AI Tooling

  • Each role uses AI tools optimised for their domain (see table above)
  • Prompt libraries are maintained per role — not ad-hoc prompts invented each time

3. Human Checkpoints

  • Every AI output passes through a human before it becomes a decision, a commit, or a deliverable
  • Checkpoints are agreed in advance: PR review, ADR sign-off, UAT approval, security gate

4. Shared Delivery Pipeline

  • The pipeline is AI-observable: CI/CD surfaces AI-generated quality and security signals
  • No AI output bypasses the pipeline (no “LGTM, ship it without tests”)

What Doesn’t Change: The Human Irreducibles

For each role, there is a set of responsibilities that AI cannot and should not own:

  • BA: Building stakeholder trust, navigating organisational politics, reading the room in workshops
  • PO: Making the call on “what we’re NOT building” — AI can score, humans must decide
  • PM: Escalating risk to executives, managing people through uncertainty
  • SA/TA: Architectural taste — knowing which technically correct solution is actually right for this team, this budget, this timeline
  • Tech Lead: Setting engineering culture, mentoring junior developers, making the hard call on tech debt
  • Dev: Judging whether AI-generated code is actually correct, readable, and maintainable — not just syntactically valid
  • QA: Deciding when the product is good enough to ship — AI can flag, humans must judge
  • Security: Explaining risk to non-technical stakeholders, owning the breach response, making the “acceptable risk” call
  • DevOps: Incident command during production outages — AI helps diagnose, humans drive recovery

AI Team Anti-Patterns to Avoid

These are the failure modes we see most often:

1. The “Prompt Dump” Anti-Pattern

Everyone uses AI individually with no shared context. Each role reinvents the prompts, misses the shared constraints, and produces inconsistent outputs that don’t connect.

Fix: Maintain a shared /ai folder in the repo with role-specific prompt templates and a shared context document.
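A scaffolding sketch for that fix (hypothetical role and file names; swap in your team's actual roles and a real product brief):

```python
from pathlib import Path

# Illustrative role list -- one prompt-template file per role.
ROLES = ["ba", "po", "pm", "sa", "tech-lead",
         "dev", "qa", "ta", "security", "devops"]

def scaffold_ai_folder(repo_root: str = ".") -> None:
    """Create an /ai folder with a shared context doc and per-role templates.

    Idempotent: existing files are left untouched so teams can run it safely.
    """
    ai = Path(repo_root) / "ai"
    (ai / "prompts").mkdir(parents=True, exist_ok=True)
    context = ai / "context.md"
    if not context.exists():
        context.write_text("# Product Brief\n\n(one-page shared context goes here)\n")
    for role in ROLES:
        template = ai / "prompts" / f"{role}.md"
        if not template.exists():
            template.write_text(f"# {role.upper()} prompt templates\n")
```

The structure matters more than the tooling: one shared context document every agent reads, plus one owned template file per role, all version-controlled alongside the code.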

2. The “AI Says So” Anti-Pattern

AI output is treated as ground truth. ADRs written by AI are signed off without challenge. Test cases generated by AI are assumed to cover the right scenarios.

Fix: Establish a “human must review” rule at every checkpoint. AI outputs are proposals, not decisions.

3. The “Only Devs Use AI” Anti-Pattern

AI adoption is siloed to the development team. BA writes requirements manually, PM tracks status manually, QA tests manually — while devs fly ahead with AI assistance.

Fix: AI tooling is a team-wide initiative. Every role gets training, tooling, and a prompt library.

4. The “AI Replaced the Junior” Anti-Pattern

Junior roles are eliminated because AI can do the work. The team loses its pipeline for growing the next generation of tech leads and architects.

Fix: Junior roles shift to AI review, quality checking, and prompt engineering — not elimination.


The 12-Part Map

Here’s where each post in this series takes you:

| Part | Role(s) | Key question |
| --- | --- | --- |
| Part 2 | BA | How does AI change requirements discovery? |
| Part 3 | PO, PM | How do AI tools reshape backlog and delivery? |
| Part 4 | SA | How does AI assist system design and ADRs? |
| Part 5 | Tech Lead | How does AI change code governance? |
| Part 6 | Dev | What is the AI developer’s daily workflow? |
| Part 7 | QA/QC | How does AI transform the testing pyramid? |
| Part 8 | TA | How does AI help with infrastructure and non-functionals? |
| Part 9 | Security | How does AI reshape the secure SDLC? |
| Part 10 | DevOps | How does AI change CI/CD and platform engineering? |
| Part 11 | All | How do team ceremonies change in the AI era? |
| Part 12 | All | What does the full AI delivery playbook look like? |

Getting Started: The First Week

If you want to introduce this model to your team without a big-bang transformation:

Day 1–2: Audit which roles currently use AI, which tools they use, and how. No judgement — just baseline.

Day 3–4: Run a half-day workshop: “What is the most painful, repetitive task in your role right now?” Map those to AI tools.

Day 5: Create the /ai folder in the repo. Add one shared context document (a one-page product brief AI can use as context). Each role adds one prompt template.

Week 2: Run one sprint with those prompts. Retrospect on what worked, what hallucinated, what saved time.

The series that follows gives you the depth for each role. Start with whoever is most willing — momentum matters more than completeness.


Next: Part 2 — The AI Business Analyst: Requirements, User Stories & Discovery →

This is Part 1 of the AI-Powered Software Teams series.
