What AI-Assisted Development Actually Looks Like in Two Years
Charity Majors said something worth taking seriously: “No one knows what AI-assisted software development will look like in two years. NO ONE. Anyone who says anything differently is selling something.”
She's right that certainty is the wrong posture. She's wrong that the pattern is unknowable.
I'm not a researcher. I'm not an analyst. I'm a practitioner who has watched the same transformation cycle play out five times across five different technology waves at some of the largest enterprises in the world. Cloud. DevOps. Containers. Platform Engineering. Now AI.
The arc is consistent enough to make predictions. Not with certainty. With pattern recognition.
Here's what I think actually happens in the next two years — the good and the bad.
The Pattern That Keeps Repeating
Every major platform shift in enterprise technology goes through four phases. I've lived through all of them, multiple times.
Phase 1: Individual productivity. Early adopters go faster. They write blog posts about it. The productivity gains are real but not transferable — they live in individual workflows, not organizational systems.
Phase 2: Local optimization. Every team builds their own version. One team at Company A spends two sprints wiring up the new technology. A different team at Company B does the same thing simultaneously. Neither knows the other exists. Neither captures what works. The knowledge stays fragmented.
Phase 3: The reckoning. The production incidents arrive. The technical debt surfaces. The downstream engineers — the ones who weren't in the LinkedIn posts — start dealing with the consequences. Charity is describing this phase happening right now with AI-generated code.
Phase 4: Organizational intelligence wins. The teams that survive and thrive are the ones who encoded the knowledge into a platform layer. Guardrails over gatekeepers. The right path becomes the easy path. Local optimization gives way to compounding leverage.
I watched this happen with cloud automation at Pearson in 2012. Containers and platform engineering at Aetna in 2014 — we were running Docker on Mesosphere before Kubernetes existed. A Docker rescue at Liberty Mutual in 2016. Platform engineering at scale at Comcast in 2019. AWS landing zones and cloud modernization at Gilead in 2022.
AI is in Phase 3 right now. Phase 4 is coming, and it's coming faster than previous cycles because the technology is moving faster.
What Two Years Actually Looks Like
The first half of the next two years looks like Charity's post.
More production incidents from AI-generated code that nobody fully reviewed. More downstream engineers dealing with “magic” that wasn't. More organizations mandating AI adoption while providing no framework for doing it well. More teams rebuilding context injection in isolation, sprint after sprint, capturing nothing.
This is not AI failing. This is the reckoning phase working as designed. The reckoning is how enterprises learn what governance they actually need.
The second half looks different.
The organizations that survive the reckoning will have built something: a context layer. Not a set of individual prompts. Not a team-specific .cursorrules file. A queryable, version-controlled corpus of organizational knowledge — ADRs, code-maps, compliance requirements, architectural decisions — that AI queries before generating a single line.
This is what makes AI output reviewable, repeatable, and trustworthy at scale. Not better models. Not more prompting. Encoded organizational intelligence.
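A context layer like the one described above can be sketched in a few lines. This is my illustrative reduction, not the OutcomeOps implementation: a version-controlled directory of ADR and code-map markdown files, a naive keyword retriever, and a prompt builder that injects the retrieved organizational knowledge before the task. All function and file names here are hypothetical.

```python
from pathlib import Path

def load_corpus(root: str) -> dict[str, str]:
    """Load every ADR / code-map markdown file under a version-controlled root."""
    return {str(p): p.read_text() for p in Path(root).rglob("*.md")}

def query_context(corpus: dict[str, str], task: str, limit: int = 3) -> list[str]:
    """Naive keyword relevance: rank documents by word overlap with the task.
    A production system would use proper retrieval; the shape is what matters."""
    words = set(task.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: -len(words & set(kv[1].lower().split())),
    )
    # Keep only documents that actually share vocabulary with the task.
    return [text for _, text in scored[:limit] if words & set(text.lower().split())]

def build_prompt(task: str, corpus: dict[str, str]) -> str:
    """Inject organizational context before the task, so generation is grounded
    in encoded decisions rather than an imaginary codebase."""
    context = "\n---\n".join(query_context(corpus, task))
    return f"Organizational context:\n{context}\n\nTask:\n{task}"
```

The design point is the ordering: the corpus is consulted before generation, and because it lives in version control, the context the AI saw for any given change is reviewable after the fact.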
The organizations that haven't built it by month 18 will be watching the ones that did compound their advantage at a rate that manual processes cannot match.
The Developer Headcount Prediction
I'll say the quiet part out loud.
The teams that get the context layer right will ship the same output with fewer developers. Not because AI is replacing engineers — because the leverage ratio changes fundamentally.
I built 90 Lambda functions in 120 days, solo, using OutcomeOps against my own ADRs and code-maps. That's not 90 functions of vibe code. That's 90 production functions with tests, consistent patterns, and deployment pipelines — audited by the same AI that generated them.
At a Fortune 500 company running OutcomeOps in production right now, 16-hour tasks complete in 20 minutes. First-time approval rate is 90%. Cost per feature is $2.24.
Those numbers don't leave headcount unchanged. A team of 20 engineers operating with this leverage ratio does not need to grow to 40 engineers to double output. They might need 22.
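One way to make that headcount claim concrete is an Amdahl's-law style sketch. The 48x task speedup comes from the numbers above (16 hours down to 20 minutes); the fraction of total engineering work that is actually accelerable is my illustrative assumption, not a measured figure.

```python
def overall_speedup(accelerated_fraction: float, task_speedup: float) -> float:
    """Amdahl's law: only the accelerated share of the work benefits;
    review, design, and coordination still run at human speed."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / task_speedup)

task_speedup = 16 * 60 / 20   # a 16-hour task completing in 20 minutes -> 48x

# Assume (illustratively) half of total engineering work is accelerable.
team_speedup = overall_speedup(0.5, task_speedup)

# Engineers needed to double the output of a 20-person team.
needed = 2 * 20 / team_speedup
```

With those assumptions the answer lands near 20, not 40: doubling output costs almost no headcount, and the binding constraint becomes the non-accelerable work, not the coding.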
This isn't speculation. It's the same math that played out when cloud eliminated the need for physical datacenter teams. When Puppet and Chef eliminated the need for armies of sysadmins. When Platform Engineering at Comcast eliminated the need for every team to write their own Terraform.
The offshore consulting model built on labor arbitrage faces the same math.
The engineers who survive and thrive are the ones who move up the abstraction layer. Not the ones writing the most code. The ones encoding the most organizational knowledge.
The ones writing ADRs instead of tickets. Designing context instead of functions. Teaching the system instead of feeding it prompts.
Why Most Predictions Get This Wrong
The optimists predict a smooth transition where AI makes everyone more productive and nobody loses. The pessimists predict mass displacement and a race to the bottom on developer salaries.
Both miss the real variable: whether the organization encodes its knowledge before or after the reckoning.
Organizations that build the context layer proactively come out of Phase 4 with a compounding advantage. Smaller teams. Faster output. Higher quality. The institutional knowledge is in the system, not in the heads of engineers who might leave.
Organizations that don't build it don't get to Phase 4 at all. They stay in the reckoning, dealing with incident after incident, until either leadership mandates a framework or a competitor that got there first makes the decision for them.
The split outcome is not optimistic or pessimistic. It's what always happens when a platform shift arrives and enterprises have to decide whether to encode the knowledge or keep it in people's heads.
What Charity Is Actually Asking For
She's asking for truth-telling. For practitioners who will say what worked and what didn't, without shining it up past recognition.
Here's mine.
OutcomeOps works in production. The metrics are real. But it only works because the organizational knowledge was encoded first. The ADRs exist. The code-maps are current. The compliance requirements are queryable. Without that foundation, AI generates plausible-looking code against an imaginary codebase — and the downstream engineers deal with the consequences.
The “vibe coding” problem Charity is describing is not an AI problem. It's a context engineering problem. The organizations solving it are pulling away from the ones that aren't.
In two years, the gap will be visible to everyone.
Enterprise Implementation
The Context Engineering methodology described in this post is open source. The production platform with autonomous agents, air-gapped deployment, and compliance features is available via enterprise engagements.
Brian Carpio is the founder of OutcomeOps and has spent 13 years leading enterprise cloud, DevOps, and platform engineering transformations at Pearson, Aetna, Comcast, Gilead Sciences, and as an AWS ProServe Principal. OutcomeOps deploys into your AWS account via Terraform. outcomeops.ai