The Tweet That Changed Everything
One year ago, Andrej Karpathy — co-founder of OpenAI, former AI lead at Tesla — dropped a tweet that set the entire developer world on fire:
“There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
That was February 2, 2025. The post hit 4.5 million views. Within weeks, it was everywhere.
YouTube exploded:
- “Build a SaaS in a weekend”
- “No-code with AI”
- “Just prompt it”
Tools like ChatGPT, Claude, GitHub Copilot, and Cursor made writing software feel impossibly easy. Suddenly:
- A founder could build an MVP overnight.
- A marketer could create a landing page in an hour.
- A designer could prototype a product idea before lunch.
Idea → product in hours. That had never happened in the history of software.
Merriam-Webster added the term. Collins English Dictionary named it Word of the Year 2025. Y Combinator reported that 25% of startups in its Winter 2025 batch had codebases that were 95% AI-generated.
The vibe was immaculate.
But then reality showed up.
Why Vibe Coding Was So Seductive
The appeal was dead simple: speed.
Before AI, you needed:
- Weeks to build a prototype
- Months to ship a product
With vibe coding, you could:
- ✅ Generate UI components
- ✅ Write API endpoints
- ✅ Connect a database
- ✅ Deploy to production
All from a single prompt:
"Build a flashcard app for vocabulary learning with React,
a Node.js API, and PostgreSQL."
AI would generate the entire system. You’d run it. It worked.
For the first few weeks, everything felt magical.
Then Month Three Arrives
It always starts small.
You fix one tiny bug. Then another feature breaks. You ask AI to fix it. Something else breaks. You fix that — and the first bug comes back.
It feels like whack-a-mole, except the moles are multiplying.
Your codebase has become a never-ending bug loop.
You’re no longer building a product.
You’re just chasing bugs.
This pattern repeats so consistently across vibe-coded projects that it’s practically a law of nature. And it happens for three specific reasons.
The Three Killers of Vibe-Coded Projects
1. Context Collapse
AI is powerful. But it doesn’t see your entire codebase at once. It only sees the snippet you paste into the prompt window.
After a few months of development:
- Dozens of new features
- Countless bug fixes
- Multiple refactors
The AI gradually loses track of earlier decisions. It doesn’t remember why you structured the auth module that way. It doesn’t know about the edge case you handled in the payment flow three weeks ago.
Result: New code starts contradicting old code. The architecture fractures.
Research confirms this: vibe-coded projects show an 8x increase in code duplication because each prompt starts fresh, unaware of what already exists.
2. No System Design
When you vibe code, AI silently makes all the architectural decisions:
- Which libraries to use
- How to organize files
- How to structure the database
- Which patterns to follow
In the short term, this feels efficient. You didn’t have to think about it!
But after a few months, your codebase becomes a Frankenstein’s monster:
- Three different state management approaches
- Two competing auth patterns
- Database queries scattered across random files
- No consistent error handling
Nobody — not you, not the AI — actually understands how the system works as a whole.
3. Cognitive Debt
We all know technical debt — shortcuts in code that cost you later.
But the AI era introduced something new: cognitive debt.
Cognitive debt is the mental cost of understanding code you didn’t write and never reviewed.
Professor Margaret-Anne Storey of the University of Victoria formally defined this concept in 2026. It describes the systemic erosion of human understanding when AI writes code on our behalf.
Here’s what it looks like in practice:
- You open a file. AI wrote 600 lines of code.
- You need to fix one small bug.
- But to fix that bug, you have to:
  - Read all 600 lines
  - Understand AI’s organizational logic
  - Trace decisions you never made
  - Figure out side effects you never considered
AI writes code faster than you. But you still have to understand it.
A 2026 academic study on “epistemic debt” found a devastating result: developers who relied heavily on AI without reviewing code had a 77% failure rate when asked to maintain that code without AI assistance. They’d become what researchers called “fragile experts” — functionally productive but critically incompetent at debugging.
The Numbers Don’t Lie
The data coming out of 2026 paints a sobering picture:
| Metric | Finding |
|---|---|
| Code duplication | 8x increase in vibe-coded projects |
| PR review times | Up 91% on heavily AI-assisted teams |
| Total development cost | 12% higher than traditional development |
| Junior hiring | 54% of engineering leaders plan to hire fewer juniors |
| Projected tech debt | $1.5 trillion by 2027 from AI-generated code |
| Quality gap | 40% — code volume exceeds review capacity |
The cruel irony: AI tools promise a 50% increase in speed, but the maintenance burden more than erases the gains.
Enter Agentic Engineering
On February 4, 2026 — almost exactly one year after coining “vibe coding” — Karpathy posted again. This time, his message was different:
“Vibe coding is passé. Programming via LLM agents is increasingly becoming a default workflow for professionals, except with more oversight and scrutiny.”
His new term: Agentic Engineering.
He broke it down:
- “Agentic” — because you’re not writing code directly 99% of the time. You’re orchestrating agents.
- “Engineering” — because there is an art, a science, and an expertise to doing it well.
The distinction is razor-sharp:
| | Vibe Coding | Agentic Engineering |
|---|---|---|
| Who decides architecture? | AI | You |
| Who reviews code? | Nobody | You |
| Who owns quality? | Nobody | You |
| Who designs the system? | AI (implicitly) | You (explicitly) |
| Who runs tests? | “It seems to work” | Automated test suite |
| Result | Demo-ready | Production-ready |
The core philosophy shift:
Vibe Coding: AI decides how to build the system.
Agentic Engineering: You decide the system. AI helps you build it.
The Agentic Engineering Workflow
Developers who use AI effectively in 2026 follow a structured four-step process:
Step 1: Write Clear Requirements First
Before touching any AI tool, define:
- What does this feature do?
- Where is data stored?
- What are the edge cases?
- How should errors be handled?
Even a few bullet points dramatically improve AI output quality. The spec doesn’t need to be a 50-page document — just enough to give the AI (and yourself) a clear target.
```markdown
## Feature: Password Reset
- User requests reset via email
- System generates token (expires in 15 min)
- Token is single-use
- Rate limit: 3 requests per hour per email
- On success: redirect to login with flash message
- On failure: generic error (don't leak user existence)
```
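A spec this concrete maps almost line-for-line onto code. Here’s a minimal TypeScript sketch of just the rate-limit rule, using an in-memory map in place of a real store (`allowResetRequest` and the constant names are illustrative, not from any particular codebase):

```typescript
// Sketch of the spec's rate-limit rule: at most 3 reset requests
// per hour per email. An in-memory map stands in for a real store.
const WINDOW_MS = 60 * 60 * 1000; // sliding 1-hour window
const MAX_REQUESTS = 3;

const requestLog = new Map<string, number[]>(); // email -> request timestamps

function allowResetRequest(email: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the sliding window.
  const recent = (requestLog.get(email) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    requestLog.set(email, recent);
    return false; // over the limit: reject this request
  }
  recent.push(now);
  requestLog.set(email, recent);
  return true;
}
```

The fourth request inside an hour is rejected; once the window slides past the earliest timestamps, requests are allowed again. The point isn’t this particular implementation — it’s that every bullet in the spec becomes something you can check.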
Step 2: Break Tasks Into Small, Specific Pieces
Instead of:
“Build an authentication system”
Say:
“Create a POST /api/auth/reset-password endpoint that accepts an email, generates a cryptographically secure token stored in Redis with a 15-minute TTL, and sends a reset link via SendGrid.”
Smaller tasks → fewer hallucinations → better code.
The research backs this up: task decomposition is the single highest-leverage technique for improving AI code quality.
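Decomposed this way, each piece is also small enough to verify in isolation. As a rough sketch of just the token step, using Node’s built-in `crypto` (the Redis and SendGrid parts of the prompt are omitted, and all names here are illustrative):

```typescript
import { randomBytes } from "node:crypto";

const TOKEN_TTL_MS = 15 * 60 * 1000; // 15-minute expiry, per the spec

interface ResetToken {
  token: string;
  expiresAt: number;
}

// Generate a cryptographically secure, URL-safe reset token.
function createResetToken(now: number = Date.now()): ResetToken {
  return {
    token: randomBytes(32).toString("base64url"), // 256 bits of entropy
    expiresAt: now + TOKEN_TTL_MS,
  };
}

// Expiry check; single-use enforcement would live in the store layer.
function isTokenValid(t: ResetToken, now: number = Date.now()): boolean {
  return now < t.expiresAt;
}
```

A task scoped like this gives the AI almost nowhere to hallucinate — and gives you something ten lines long to review instead of an entire auth system.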
Step 3: Review What AI Writes
You don’t need to read every single line. But you must understand:
- What’s the main logic? Does it match your intent?
- What’s the data flow? Where does data enter, transform, and exit?
- What’s missing? Error handling? Validation? Edge cases?
The developers who thrive in the AI era are the ones who can read code critically — not write it from scratch, but evaluate whether generated code is correct, secure, and maintainable.
Step 4: Test Before You Ship
This is the single biggest differentiator between vibe coding and agentic engineering.
With a solid test suite, an AI agent can iterate in a loop until tests pass — giving you high confidence in correctness. Without tests, the agent will cheerfully declare “done” on broken code.
```
# The agentic engineering loop
write spec → generate code → run tests → fix failures → repeat
```
Tests are how you turn an unreliable agent into a reliable system.
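Here’s a concrete version of that idea: even a framework-free harness like the sketch below gives an agent something objective to iterate against. `slugify` is a hypothetical stand-in for AI-generated code under test, not from the original:

```typescript
// A stand-in for AI-generated code: turn a title into a URL slug.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics into hyphens
    .replace(/^-+|-+$/g, "");    // strip leading/trailing hyphens
}

// Tiny framework-free harness: each test either throws or passes.
const tests: Array<[string, () => void]> = [
  ["lowercases and hyphenates", () => {
    if (slugify("Hello World") !== "hello-world") throw new Error("bad slug");
  }],
  ["strips leading/trailing separators", () => {
    if (slugify("  --Agentic Engineering!  ") !== "agentic-engineering")
      throw new Error("bad trim");
  }],
];

for (const [name, fn] of tests) {
  fn(); // throws on failure; an agent loops here until every test passes
  console.log(`ok - ${name}`);
}
```

The agent can’t declare “done” on broken code because the harness throws. That’s the whole mechanism: the tests, not the vibes, decide when the loop ends.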
Where Vibe Coding Still Makes Sense
Let’s be fair. Vibe coding isn’t wrong. It’s just scoped.
As Addy Osmani (Chrome engineering lead at Google) catalogued, vibe coding works great for:
| Use Case | Why It Works |
|---|---|
| Weekend prototypes | You need something running by Sunday; quality is noise |
| Personal scripts | If it breaks, you regenerate it |
| Learning & exploration | Newcomers build things they couldn’t otherwise |
| Creative brainstorming | Over-generate on purpose, throw everything away, build properly |
Beyond these four scenarios? The technique collapses.
The rule of thumb: If your project has users, money, or a future — you need agentic engineering.
The Role of the Developer Is Changing
This is the part that makes some people uncomfortable.
Before AI:
Developer = person who writes code
After AI:
Developer = person who designs systems + orchestrates AI agents
AI doesn’t replace developers. It replaces developers who don’t know what they’re building.
The most valuable skills in 2026 aren’t syntax mastery or framework knowledge. They’re:
- System design — understanding how components fit together
- Critical review — evaluating whether AI-generated code is correct
- Task decomposition — breaking problems into AI-friendly chunks
- Testing discipline — writing the tests that keep AI honest
- Domain knowledge — understanding the business problem deeply enough to spec it correctly
The engineers making $190K+ in “agentic engineering” roles aren’t the ones who prompt the hardest. They’re the ones who think the clearest.
Real-World Results
The companies doing agentic engineering well are seeing transformative results:
- TELUS saved 500,000+ hours with 13,000 AI-powered solutions
- Zapier achieved 89% AI adoption across their entire organization
- Stripe’s Minions system produces 1,000+ merged PRs per week
These aren’t vibe-coded demos. They’re production systems built with rigorous oversight, comprehensive testing, and human-in-the-loop review.
Conclusion: The Vibe Was Fun. Now Let’s Build.
Vibe coding was never a mistake. It was a necessary first step.
It showed us what was possible. It democratized software creation. It gave non-developers superpowers. It compressed the idea-to-prototype cycle from months to hours.
But if you want software that:
- Has real users
- Runs reliably
- Survives years of maintenance
- Handles real money
- Scales beyond a demo
Then you need the next evolution.
Agentic Engineering.
Where AI still writes most of the code. But you own the architecture. You review every change. You write the tests. You understand the system.
AI makes you faster. But understanding what you’re building? That’s still your job.
The future isn’t about coding with vibes. It’s about engineering with agents.
What’s your experience? Have you hit the “month three wall” with vibe coding? Are you already practicing agentic engineering? I’d love to hear your stories — reach out on LinkedIn or GitHub.