The translation layer between business ideas and working software has been the most expensive bottleneck in technology for decades. It is collapsing — and the consequences reach far beyond who writes the code.
I have worked in product development for over twenty years. The hardest part of my job has always been figuring out what to build — understanding the user, the market, the business logic. That part has not changed. What has changed is everything that came after the thinking. We wrote specifications, negotiated sprint priorities, waited through development cycles — and then held the finished feature in our hands and realized it was not quite what we meant. Sometimes because the developers misunderstood the requirement. Sometimes because we only discovered what we actually needed once we could see and touch the result. All the process in the world could not fix that gap between intent and outcome, because the feedback came too late. Today I work with AI agent teams, and the difference is not that the thinking gets easier. It is that the feedback is immediate. I describe what the system should do, the agents produce a working version, and I can see within minutes whether my specification was right — or where I got it wrong. The iteration that used to take two sprints now takes an afternoon.
This is not a story about replacing developers. It is a story about what happens when the cost of translating an idea into code approaches zero — and what that means for the people on both sides of that translation.
The term that outgrew its name
In February 2025, Andrej Karpathy — former head of AI at Tesla, co-founder of OpenAI — coined the term "vibe coding" in a post that went viral. He described a way of building software in which you state what you want in plain English, accept the AI-generated code without fully reviewing it, and iterate by running it and seeing what happens. "I just see stuff, say stuff, run stuff, and copy paste stuff," he wrote, "and it mostly works."[1]
The term resonated because it captured something millions of people were already experiencing. By the end of 2025, 84% of developers were using or planning to use AI coding tools, according to Stack Overflow's annual survey of approximately 49,000 developers.[2] A quarter of Y Combinator's Winter 2025 batch had codebases that were 95% or more AI-generated.[3] MIT Technology Review named generative coding one of the ten breakthrough technologies of 2026.[4]
Within a year of coining the term, Karpathy declared it passé. The reason was not that the practice had faded but that it had matured beyond its name. "Vibe coding" implied a casual, low-oversight relationship with AI-generated code — accept the vibes, ship the output. What emerged instead was something closer to engineering: directing AI agents with explicit oversight, quality criteria, and accountability. Simon Willison, one of the most respected voices in the AI coding community, formalized the distinction: vibe coding means accepting AI output on faith; what he initially called "vibe engineering" — and what the community now calls "agentic engineering" — means reviewing, understanding, and taking responsibility for every line.[5] The vocabulary shift matters because it signals a practice moving from experimentation toward discipline.
AI removes the translation layer — and that changes everything
For two decades, the most persistent source of waste in software development has not been bad code. It has been the distance between an idea and the moment you can see whether the idea was right. Product owners write requirements. Developers interpret them. Misunderstandings surface in sprint reviews — but so do discoveries that no amount of upfront specification could have prevented, because some things only become visible in a working product. Agile methodology was an attempt to compress this feedback loop. Agentic engineering compresses it to near-zero.
When a product owner can describe a feature in natural language and see a working implementation within minutes, the bottleneck shifts. The expensive part — finding someone to write the code, coordinating the handoff, waiting for results — falls away. What remains is what was always the hard part: knowing what to build. Understanding the user, the market, the business logic. This has always been the product owner's core competency. AI did not create this competency. It made the feedback loop fast enough for that competency to drive the product directly.
Anthropic shipped Agent Teams for Claude Code in early 2026. Multiple AI agents now communicate with each other, share discoveries mid-task, and coordinate through a shared work list.[6] The "product owner directing a team" is no longer a metaphor. It is a workflow.
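The coordination pattern behind that workflow can be sketched in a few lines. This is a generic shared-work-list illustration, not Anthropic's actual API; the agent names, the tasks, and the `TaskBoard` and `Finding` types are all invented here:

```python
# Illustrative sketch of a shared-work-list pattern: agents drain a common
# task queue and post discoveries to a shared log. Not a real product API.
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Finding:
    agent: str
    note: str

@dataclass
class TaskBoard:
    """Shared state: a work list the agents drain, a findings log they append to."""
    todo: Queue = field(default_factory=Queue)
    findings: list = field(default_factory=list)

board = TaskBoard()
for task in ["draft schema", "write endpoint", "add tests"]:
    board.todo.put(task)

# Agents take turns pulling tasks and posting discoveries mid-task,
# so each agent sees what the others have already learned.
agents = ["planner", "implementer", "reviewer"]
turn = 0
while not board.todo.empty():
    task = board.todo.get()
    board.findings.append(Finding(agents[turn % len(agents)], f"done: {task}"))
    turn += 1
```

The shared board is the point: a discovery posted by one agent is visible to the next before it starts its task, which is what turns a set of independent assistants into something resembling a team.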
This empowerment is real, but so are its limits. No systematic study has measured how often non-developers succeed or fail when building software with AI tools. The success stories are visible — mine among them — while the abandoned projects are not. The security data is sobering: Veracode's 2025 evaluation of more than one hundred large language models found that AI-generated code introduces security vulnerabilities in 45% of cases.[7] CodeRabbit's analysis of real-world pull requests showed 1.7 times more issues in AI-generated code than in human-written code, with security vulnerabilities appearing at 2.7 times the rate.[8] These are not edge cases. They describe the median output of the best available tools.
Two factors prevent this from being a disqualifying concern, though neither eliminates it. First, frontier model providers are actively addressing the problem — Anthropic's security-focused training, GitHub's integrated code scanning, and automated review tools are narrowing the gap with each model generation. Second, the relevant comparison for most product owners is not "AI-generated code versus professionally developed software." It is "AI-generated code versus the Excel spreadsheet, the manual workaround, or the tool that never got built because the development team had other priorities." Against that baseline, a vibe-coded internal tool with imperfect security may represent a net improvement — provided someone reviews it before it touches customer data.
The developer's role is dissolving — into something more valuable
If product owners can generate code directly, what happens to the developers who used to write it for them?
The short answer is that their role is changing, not disappearing — and the direction of change is upward. The METR randomized controlled trial, the most rigorous study of AI's impact on experienced developers to date, found that developers with five or more years of experience in their specific codebases were 19% slower when using AI tools compared to working without them.[9] The study attributes the slowdown primarily to context-switching overhead and time spent reviewing AI suggestions. But read from a wider angle, the finding reveals something about what makes experienced developers valuable in the first place: their deep knowledge of a codebase already made them faster than AI could make them. Their value was never in typing speed. It was in architectural judgment, system understanding, and the ability to make trade-offs that no AI currently replicates.
Addy Osmani, Director at Google Cloud AI, describes the shift as moving from writing code to orchestrating AI-driven systems — from craftsperson to conductor.[10] Willison's golden rule captures the professional standard: "I won't commit any code to my repository if I couldn't explain exactly what it does to somebody else."[11] The JetBrains 2025 developer survey found that 68% of developers expect AI proficiency to become a job requirement — and their primary concerns center on quality, reliability, and security, not on replacement.[12]
New role categories are crystallizing around this shift. German industry analysis identifies the "Cognitive Architect" — who designs human-AI collaboration patterns rather than code — and the "AI Guardian" — who ensures AI-generated output meets quality, security, and compliance standards.[13] These are not junior positions. They demand exactly the experience that senior developers have spent years accumulating.
What emerges is a convergence that neither side anticipated. Product owners are moving from writing specifications toward orchestrating AI agent teams — directing the what and the why. Developers are moving from writing code toward orchestrating the same AI agents — directing the how and the how-safely. Both roles converge on the same activity: orchestration. They approach it from opposite directions — product intent versus technical judgment — but the meeting point is the same. The AI team sits at the center, and the humans who guide it bring complementary perspectives rather than hierarchically separated functions.
The transition path has two time horizons. In the near term, existing developers must evolve from coders to architects — a shift that is uncomfortable but navigable for experienced professionals. In the longer term, the trajectory points somewhere more radical: software development as such becomes a skill rather than a profession, and we train people as architects and systems thinkers from the start, without the intermediate step of learning to write code manually. Even this framing has a limited shelf life, because the definition of "architect" will shift as AI capabilities expand.
One tension remains unresolved. Entry-level tech hiring declined 25% year-over-year in 2024, and 70% of hiring managers in a recent survey said they trust AI's output more than that of interns.[14] The tasks that historically served as the learning ground for junior developers — writing simple features, fixing bugs, basic testing — are precisely the tasks AI handles best. The traditional apprenticeship model, where writing code was the first rung on a ladder that led to architectural competency, is losing its first rung. The developers of 2035 may reach architectural competency through paths that do not yet exist. The impact is not evenly distributed: outsourcing economies — India's three-hundred-billion-dollar IT industry alone employs more than six million people — face the sharpest adjustment.
The maturation from vibes to discipline
Every successful technology practice follows a maturation arc. Cloud computing went from "move everything to the cloud" enthusiasm through cost overruns and security incidents to mature cloud-native architecture with well-defined governance. Agile itself followed the same pattern — from the manifesto's liberating simplicity through years of cargo-cult standups and dysfunctional sprints to the more nuanced frameworks that organizations use today. Vibe coding is on the same curve — from casual experimentation toward professional practice.
What distinguishes agentic engineering from vibe coding is not the technology but the posture. A vibe coder accepts the AI's output and iterates by running it. An agentic engineer specifies constraints before generation, defines quality criteria, reviews output systematically, and treats the AI as a capable but fallible team member. The tools are the same. The discipline is different.
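The difference in posture can be made concrete as a commit gate. This is a minimal, hypothetical sketch distilled from the criteria above; the `Review` fields and the `may_commit` rule are invented for illustration, not taken from any real tool:

```python
# Hypothetical commit gate illustrating the agentic-engineering posture.
from dataclasses import dataclass

@dataclass
class Review:
    understood: bool        # Willison's rule: could you explain every line?
    tests_pass: bool        # quality criteria defined before generation
    security_scanned: bool  # e.g. output ran through a static analyzer
    reviewer: str           # accountability: a named human signs off

def may_commit(review: Review) -> bool:
    """Vibe coding skips this gate; agentic engineering requires it."""
    return (review.understood and review.tests_pass
            and review.security_scanned and bool(review.reviewer))
```

A vibe coder effectively hardwires this function to return true; the agentic engineer makes each condition an explicit, checkable step.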
Institutional readiness lags behind the technology. A GitLab survey found that 87% of German executives plan to increase AI investment, but only 48% have implemented governance frameworks for AI use.[15] This gap between capability and governance is the single largest risk in the current landscape — not because AI coding tools are dangerous, but because failures produced by powerful tools used without institutional guardrails get blamed on the tools rather than on the missing guardrails. The maturation from vibe coding to agentic engineering is not optional. It is the condition under which AI-augmented software creation becomes sustainable.
Code is the material — the architecture is the product
The deeper shift that vibe coding initiated, and that agentic engineering is formalizing, reaches beyond roles and workflows. It changes the nature of software itself.
For sixty years, software has been an artifact — a thing that is built, tested, deployed, and maintained. The entire industry has been organized around the production and preservation of artifacts. When a product owner describes a feature and an AI agent team generates the implementation, something changes in this equation. The code becomes regenerable. If the output is flawed, you do not debug it line by line — you refine the specification and regenerate. The same feedback loop that changed how product owners work now changes the nature of the artifact itself. The cost of regeneration approaches zero. The code becomes material — necessary and useful, but not what gives the product its value.
What gives it value is the architecture above the code: the accumulated decisions, constraints, and context that shaped the specification. A product owner who has refined their intent over dozens of iterations with an AI team owns something new: not a codebase, but a generative specification that can produce software on demand. Code has always been a means to an end. The end was always the product — the problem solved, the workflow improved, the user served. When the means was expensive, we confused it with the end. When the means is cheap, the confusion dissolves.
This reframing has clear limits. Production systems with databases, API contracts, compliance documentation, and accumulated user state cannot be "regenerated" without accounting for everything they carry. The artifact model persists wherever software interacts with persistent reality — and most software that matters eventually does. But for a growing category of applications — prototypes, internal tools, workflow automations, MVPs — the generative model is already the more accurate description of how they come into being. For everything that stays small, the architecture is the product and the code is the material it is built from. For everything that grows, the material becomes load-bearing — and load-bearing material demands engineering.
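For that growing category, the workflow reduces to spec-plus-regeneration, which can be sketched in miniature. Everything here (`Spec`, `generate`) is a hypothetical stand-in for an agent toolchain; a real `generate` would produce working code rather than a label:

```python
# Illustrative only: Spec and generate() stand in for an agent toolchain.
from dataclasses import dataclass, field

@dataclass
class Spec:
    intent: str
    constraints: list = field(default_factory=list)

def generate(spec: Spec) -> str:
    # Stand-in for an AI agent team turning a specification into a build.
    return f"build(intent={spec.intent!r}, constraints={len(spec.constraints)})"

spec = Spec("send invoice reminders")
v1 = generate(spec)                                    # first output is flawed?
spec.constraints.append("skip invoices already paid")  # refine the spec...
v2 = generate(spec)                                    # ...and regenerate; v1 is disposable
```

Nothing in `v1` gets patched; the accumulated value lives in `spec`, which is exactly the "generative specification" argued for above.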
Previously in this series: "The One-Person Unicorn: A Reality Check" examined how AI productivity leverage creates genuine million-dollar opportunities while the billion-dollar solo company remains a myth. Next: "The AI Trap" explores how the same force that gives individuals unprecedented leverage simultaneously erodes the purchasing power that sustains their markets.
References
[1]: Andrej Karpathy. Post on X (formerly Twitter), February 3, 2025. "I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works." Origin of the term "vibe coding." https://x.com/karpathy/status/1886192184808149383
[2]: Stack Overflow. "Developer Survey 2025." N=~49,000 from 177 countries. 84% of respondents using or planning to use AI tools in development. https://survey.stackoverflow.co/2025/
[3]: TechCrunch, March 6, 2025. "A quarter of startups in YC's current cohort have codebases that are almost entirely AI-generated." https://techcrunch.com/2025/03/06/a-quarter-of-startups-in-ycs-current-cohort-have-codebases-that-are-almost-entirely-ai-generated/
[4]: MIT Technology Review, January 12, 2026. "Generative Coding: 10 Breakthrough Technologies 2026." https://www.technologyreview.com/2026/01/12/1130027/generative-coding-ai-software-2026-breakthrough-technology/
[5]: Simon Willison. "Not all AI-assisted programming is vibe coding (but vibe coding rocks)," March 19, 2025. https://simonwillison.net/2025/Mar/19/vibe-coding/ — See also: "Vibe engineering," October 7, 2025. https://simonwillison.net/2025/Oct/7/vibe-engineering/
[6]: Anthropic, February 5, 2026. Claude Code Agent Teams launched alongside Opus 4.6. Multiple AI agents coordinate autonomously through shared task lists. https://code.claude.com/docs/en/agent-teams
[7]: Veracode. "2025 GenAI Code Security Report." Evaluation of 100+ LLMs across 80 coding tasks; 45% vulnerability rate. https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/
[8]: CodeRabbit, December 17, 2025. "State of AI vs Human Code Generation Report." Analysis of 470 real-world pull requests; AI-generated PRs contain 1.7x more issues. https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report
[9]: METR, July 2025. "Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity." Randomized controlled trial; 16 experienced developers, 246 real issues; 19% slower with AI tools. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
[10]: Addy Osmani, Director at Google Cloud AI. "The Next Two Years of Software Engineering," January 5, 2026. https://addyosmani.com/blog/next-two-years/
[11]: Simon Willison, creator of Datasette. "Not all AI-assisted programming is vibe coding," March 19, 2025. "I won't commit any code to my repository if I couldn't explain exactly what it does to somebody else." https://simonwillison.net/2025/Mar/19/vibe-coding/
[12]: JetBrains. "The State of Developer Ecosystem 2025." N=24,534. 68% anticipate AI proficiency as job requirement. https://devecosystem-2025.jetbrains.com/
[13]: Computer Weekly DE, 2026. "Drei Prognosen für die KI-getriebene Softwareentwicklung 2026." Emergence of "Cognitive Architect" and "AI Guardian" roles. https://www.computerweekly.com/de/meinung/Drei-Prognosen-fuer-die-KI-getriebene-Softwareentwicklung-2026
[14]: Stack Overflow Blog, December 26, 2025. "AI vs Gen Z: How AI has changed the career pathway for junior developers." Entry-level tech hiring down 25% year-over-year in 2024; 70% of hiring managers trust AI over interns. https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/
[15]: GitLab C-Suite AI Survey, 2025, as reported by IT-Daily, December 15, 2025. "KI-Entwicklung in Deutschland: Vier Prognosen für 2026." 87% of German executives plan to increase AI investment; 48% have governance frameworks. https://www.it-daily.net/it-management/ki/ki-entwicklung-deutschland-2026