Security is the domain where AI both provides the greatest acceleration and introduces the most dangerous failure modes. AI can dramatically accelerate threat modelling, SAST analysis, and security policy generation. But AI also generates plausible-sounding code that has subtle security flaws, and its threat models reflect training data — not the specific context of your system and organisation. The Security Engineer in an AI team is not less important — they are more important, because they must now review AI outputs across a faster-moving, higher-volume codebase.
Security in the AI Era: The Problem Statement
Two intersecting challenges define security in AI-augmented teams:
- Velocity problem: More code is produced faster. AI-assisted development means a team of 5 might produce what a team of 10 did previously. The attack surface grows proportionally.
- AI code quality problem: AI-generated code can contain subtle security flaws — SQL injection via string interpolation, improper secret handling, insufficient input validation — that look reasonable in isolation but are exploitable in context.
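The string-interpolation flaw is worth seeing concretely. A minimal, self-contained sketch using Python's built-in sqlite3 (table and names are illustrative) — the unsafe version looks reasonable in isolation but lets input rewrite the query:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: f-string interpolation lets input become part of the SQL itself
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the input strictly as data
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 — the injection matches every row
print(len(find_user_safe(conn, payload)))    # 0 — no user is literally named that
```

Both functions pass a casual review with a benign username, which is exactly why this class of flaw survives in AI-generated code.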
The response is not to slow down AI adoption. It is to build automated, frictionless security gates into every stage of the pipeline, with a human Security Engineer reviewing and maintaining them.
The AI-Augmented Secure SDLC
Security shifts left: automated checks run as early as possible, and the checks become slower but deeper as the pipeline progresses.
Stage 1: Requirements (Threat Modelling)
Before any code is written, the Security Engineer (with AI assistance) performs threat modelling on the feature requirements.
AI-assisted threat modelling prompt:
Perform STRIDE threat modelling for this feature:
[paste feature description + data flows]
For each threat category (Spoofing, Tampering, Repudiation, Info Disclosure, DoS, Elevation of Privilege):
1. Identify specific threats relevant to this feature
2. Rate likelihood (Low/Medium/High) with reasoning
3. Rate impact (Low/Medium/High) with reasoning
4. Suggest mitigating controls
Also identify:
- Data classification for data handled (PII, financial, health, public)
- Applicable compliance requirements (GDPR, PCI-DSS, etc.)
- Security acceptance criteria that should be added to the user story
The Security Engineer reviews the output, adds organisation-specific threat intelligence, and produces the final threat model. AI cannot know your internal architecture, your actual threat actors, or your organisation’s specific risk profile — it can only provide a generic baseline.
Stage 2: Development (IDE security plugins)
- Snyk or SonarLint IDE plugin flags known vulnerable code patterns in real time
- AI coding assistants are configured with security-focused system prompts that prefer safe patterns
- Developers follow “AI code trust” rules (see Part 6): security-sensitive code always requires full human review
Stage 3: Pre-commit (Secret scanning)
- git-secrets or gitleaks runs on every commit
- Hard-blocked: no commit with detected secrets, API keys, or credentials
- AI-generated code is particularly prone to hardcoded strings that look like placeholders but are real
- The rule: run secret scanning before every commit, not just in CI
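Under the hood, tools like gitleaks are pattern matchers over the commit diff. A toy sketch of the idea (two hypothetical patterns for illustration — real tools ship hundreds of rules plus entropy checks):

```python
import re

# Illustrative patterns only; gitleaks and git-secrets maintain far larger rule sets
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{12,}['\"]"),
]

def scan_for_secrets(text: str) -> list[str]:
    """Return matched substrings so a pre-commit hook can block the commit."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

diff = 'API_KEY = "sk_live_abc123def456ghi789"'
assert scan_for_secrets(diff)          # non-empty -> block the commit
assert not scan_for_secrets("x = 1")   # clean diff passes
```

Pattern matching is exactly why AI-generated "placeholder" strings are dangerous: a real credential pasted from context looks identical to a fake one, so the only safe policy is to block anything that matches.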
Stage 4: Pull Request (SAST)
Automated Static Application Security Testing on every PR:
| Tool | What it checks |
|---|---|
| Semgrep | Custom security rules + known vulnerability patterns |
| CodeQL | Deep data-flow analysis (injection, auth bypass, unsafe deserialisation) |
| Snyk Code | AI-enhanced SAST with false-positive reduction |
| Checkov | IaC security misconfigurations (for infra PRs) |
SAST gate rule: High and Critical findings block the PR. Medium findings require Security Engineer triage. Low findings are informational.
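The gate rule above is simple enough to express as code, which is how it typically lives in a CI step. A sketch (finding shape and return values are assumptions, not any particular tool's API):

```python
from collections import Counter

# Severity policy mirroring the gate rule: high/critical block, medium triages
BLOCK = {"critical", "high"}
TRIAGE = {"medium"}

def sast_gate(findings: list[dict]) -> str:
    """Decide the PR outcome from a list of {'severity': ...} findings."""
    severities = Counter(f["severity"].lower() for f in findings)
    if any(severities[s] for s in BLOCK):
        return "block"                   # CI fails; the PR cannot merge
    if any(severities[s] for s in TRIAGE):
        return "needs-security-triage"   # Security Engineer must review
    return "pass"                        # low/informational findings only

print(sast_gate([{"severity": "High"}]))    # block
print(sast_gate([{"severity": "medium"}]))  # needs-security-triage
print(sast_gate([{"severity": "low"}]))     # pass
```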
Stage 5: Dependency Scanning (Continuous)
- Dependabot / Snyk monitors all dependencies continuously
- New critical CVE in a dependency → automatic PR to upgrade
- Security Engineer reviews before merge (not auto-merged)
- Weekly SBOM (Software Bill of Materials) update
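The block/review split above keys off CVE severity, which in practice comes from the CVSS v3.x score. A sketch of the standard qualitative bands and the resulting policy decision (the action strings are illustrative):

```python
def cvss_band(score: float) -> str:
    """CVSS v3.x qualitative severity ratings (0.0 none .. 10.0 critical)."""
    if score == 0.0:
        return "none"
    if score < 4.0:
        return "low"
    if score < 7.0:
        return "medium"
    if score < 9.0:
        return "high"
    return "critical"

def dependency_action(score: float) -> str:
    # Mirrors the policy above: critical blocks, high needs human review
    band = cvss_band(score)
    if band == "critical":
        return "block-build-and-raise-upgrade-pr"
    if band == "high":
        return "security-engineer-review"
    return "track-in-backlog"

assert dependency_action(9.8) == "block-build-and-raise-upgrade-pr"
assert dependency_action(7.5) == "security-engineer-review"
```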
Stage 6: DAST (Pre-release)
Dynamic Application Security Testing runs against a deployed staging environment:
| Tool | Purpose |
|---|---|
| OWASP ZAP | OWASP Top 10 automated scan |
| Burp Suite (automated scans) | API security, authentication testing |
| Nuclei | Template-based vulnerability scanning |
DAST results are reviewed by the Security Engineer before each release. New High/Critical findings block release.
Stage 7: Production (Continuous monitoring)
- WAF (Web Application Firewall) with AI-driven threat intelligence
- Runtime application self-protection (RASP) for high-risk applications
- Security event SIEM: AI-assisted anomaly detection
- Penetration testing: Annual external pen test (humans, not AI — AI-driven pen tests are supplemental only)
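The "AI-assisted anomaly detection" in a SIEM is, at its simplest, a statistical baseline check. A crude sketch of the idea using a z-score over a recent window (real SIEM detectors are far more sophisticated; the metric and threshold here are illustrative):

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag a sample sitting more than `threshold` standard deviations
    above its recent baseline — a crude SIEM-style detector."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > threshold

failed_logins_per_min = [3, 5, 4, 6, 5, 4, 5, 3]   # normal baseline
assert not is_anomalous(failed_logins_per_min, 7)   # within normal noise
assert is_anomalous(failed_logins_per_min, 60)      # credential-stuffing spike
```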
Security Policies as Code
In AI teams, security policies should be machine-readable and version-controlled:
```yaml
# security-policy.yaml
sast:
  block_on: [high, critical]
  review_on: [medium]
  inform_on: [low]

dependency_scan:
  block_on_cve_severity: [critical]
  review_on_cve_severity: [high]
  auto_pr_on_patch_available: true

secrets:
  block_commit: true
  allowed_patterns: []   # nothing allowed

branch_protection:
  require_reviews: 1
  require_security_review_on: "security/**"
  dismiss_stale_approvals: true
```
AI assists in generating these policies from requirements; the Security Engineer reviews and owns them.
AI Security Anti-Patterns
The “AI-generated auth code is probably fine” mistake: It isn’t. Authentication, authorisation, session management, and cryptographic operations require line-by-line human review, every time. AI generates plausible-looking but subtly incorrect auth code regularly.
Suppressing SAST findings to meet a deadline: Security debt compounds faster than technical debt. Never suppress a finding without a documented risk acceptance reviewed by the Security Engineer.
Treating SAST as sufficient: SAST finds known patterns. It does not find logic flaws, business logic abuse, or novel attack chains. DAST, threat modelling, and pen testing remain essential.
AI threat models without human review: AI threat models are baseline-quality starting points. They miss organisation-specific threats, internal threat actors, and the architectural quirks that make your system uniquely vulnerable.
The Human-Irreplaceable Security Work
Threat intelligence interpretation: Knowing which threats in the wild are relevant to your specific system, technology stack, and organisation — and which are theoretical but unlikely — requires understanding your organisation’s actual threat model.
Incident response: When a breach or security incident occurs, a Security Engineer must lead the forensic investigation, communicate with legal and executive leadership, engage external response partners if needed, and make decisions under pressure. AI assists with log analysis; humans drive the response.
Risk acceptance: The decision to accept a known security risk (because the cost to remediate exceeds the probability-adjusted impact) is a human decision with legal and ethical implications. AI can calculate expected value; humans must own the accountability.
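The expected-value part of that calculation is mechanical, which is exactly why AI can do it. A sketch with hypothetical numbers (probabilities and costs are illustrative, not from any real assessment):

```python
def annualised_loss_expectancy(incident_probability_per_year: float,
                               impact_cost: float) -> float:
    """ALE = annual rate of occurrence x single loss expectancy."""
    return incident_probability_per_year * impact_cost

# Hypothetical numbers for illustration only
ale = annualised_loss_expectancy(0.05, 400_000)   # 5% chance/yr, 400k impact
remediation_cost = 30_000

# AI can produce this comparison; signing off on "accept" is a human call
decision = "accept-risk" if remediation_cost > ale else "remediate"
print(ale, decision)
```

The arithmetic says accept; whether the organisation actually accepts it — and who is accountable if the 5% event occurs — is the human decision the text describes.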
Security culture: Building a development team that cares about security — that sees a SAST finding as an interesting problem rather than a bureaucratic hurdle — is a culture and education challenge, not a technical one.
Tools for the AI Security Engineer
| Tool | Purpose |
|---|---|
| Semgrep | Custom SAST rules, community rule sets |
| CodeQL | Deep data-flow security analysis |
| Snyk | Dependency scanning + SAST with AI assist |
| gitleaks | Pre-commit secret detection |
| OWASP ZAP | Automated DAST |
| Checkov / tfsec | IaC security scanning |
| Trivy | Container image vulnerability scanning |
| Claude | Threat modelling, security policy generation, incident analysis |
← Previous: Part 8 — The AI Technical Architect
Next: Part 10 — The AI DevOps Engineer →
This is Part 9 of the AI-Powered Software Teams series.