Two Extremes, One Missing Middle
Fedor Indutny — Node.js TSC Emeritus Member — is circulating a petition asking the Node.js Technical Steering Committee to ban AI-assisted code from Node.js core.
The trigger: a 19,000-line pull request from Matteo Collina, a long-time, trusted Node.js core contributor, with a single sentence of disclosure in the PR description:
“I've used a significant amount of Claude Code tokens to create this PR. I've reviewed all changes myself.”
He reviewed it. He signed off on it. He has years of reputation in that codebase. And the community's response was a petition with dozens of signatures.
I want to hold that next to something I see every week in enterprise engagements.
A Fortune 500 engineering team spends two sprints wiring up AI to understand their codebase — breaking it into chunks, injecting docs into prompts, building what amounts to a custom skill for their repo. It works. For them. For now.
Meanwhile, three floors up, a different team is doing the exact same thing. Different stack, same half-baked playbook. Neither knows the other exists. Neither is capturing what works. Both are building skills in isolation — and skills don't scale across an org. They're local by design. A skill encoded for the payments team doesn't compound into anything the infrastructure team can use. The knowledge stays siloed, the patterns stay fragmented, and six months later you've got seventeen bespoke AI setups that each work fine until the engineer who built them leaves.
That's not AI adoption. That's AI local optimization — and it's the 2025 version of every team writing their own Jenkins pipeline.
Two organizations. Two opposite problems. Same root cause.
No framework for how to use AI with integrity at the system level.
The DCO Argument Is a Proxy War
The petition invokes the Developer Certificate of Origin. It asks whether AI-assisted code satisfies the DCO's requirement that contributions be created “in whole or in part” by the submitting engineer.
The OpenJS Foundation already answered this. Their legal opinion: LLM-assisted changes are not in violation of the DCO.
The petitioners acknowledge this, then say it's “only a small part of the issue.”
Which is honest. Because the DCO argument was never really the argument. The real argument is about craft identity. It's the same emotional response engineers have always had when a new tool threatens to devalue the skill they've spent years building. I'm not saying that's illegitimate — I'm saying call it what it is.
The reproducibility argument is more interesting. The petition argues that submitted generated code should be reproducible by reviewers without paying for subscription-based LLM tooling.
That's a real concern. But it's also already solved, and the solution has a name.
The Enterprise Side of the Coin
While the open source community is debating whether to allow AI at all, enterprises are running the opposite experiment: mandate AI adoption, measure usage metrics, and skip the part where you build any repeatable system for doing it well.
I wrote about this in November. AI Is the New Waste. Teams everywhere rebuilding context injection in isolation. No shared patterns. No feedback loops. No way to tell whether the AI output is consistent with the architecture decisions made three years ago by an engineer who left.
Amazon's Q Developer incident is the receipt. An engineer followed AI advice that pulled from a stale internal wiki and made the wrong call on a production environment. Amazon was clear in their correction: this wasn't an autonomous agent going rogue. The AI ingested outdated internal documentation and gave the engineer confidently wrong troubleshooting guidance. He followed it. The outage happened.
That's the enterprise failure mode in one sentence: AI running fast with no context governance behind it. The pipeline wasn't the problem. The knowledge base was rotten.
The open source failure mode is the opposite: a trusted contributor using AI as an accelerant, reviewing every line himself, and having his contribution treated as suspect because of how it was generated rather than what it contains.
Two extremes. One missing middle.
The Middle Has a Name
Context Engineering is the discipline the open source petition is accidentally pointing at, and the one enterprises are ignoring while they chase adoption metrics.
The reproducibility objection in the Node.js petition is legitimate — but the answer isn't “ban the tool.” The answer is: make the context queryable.
If the architecture decision records (ADRs), architectural standards, and coding patterns that governed how Collina generated that 19,000-line PR are version-controlled and accessible, the output is reproducible by anyone with access to the same context. You don't need his Claude subscription. You need his knowledge corpus.
That's exactly what OutcomeOps does in enterprise environments — and it's why the approach works. The AI isn't generating from vibes. It's querying a live, version-controlled corpus of decisions, standards, and patterns before it writes a single line. The context is the audit trail. The context is what makes AI output reviewable, repeatable, and trustworthy at scale.
The PR review process already validates the output. What Context Engineering adds is the ability to validate the reasoning — to say, here are the ADRs that governed this generation, here are the architectural constraints it was working within, here is why it made the choices it made.
That's not a workaround for the reproducibility concern. That's a better answer to it than a ban.
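"Make the context queryable" can be very unglamorous in practice. Here is a minimal sketch of the idea: a version-controlled set of ADRs, a query that selects the ones relevant to a change, and a prompt builder that injects them before generation. The corpus layout, tag convention, and function names are my illustrative assumptions, not the OutcomeOps API or the Node.js project's actual ADR structure.

```python
# Minimal sketch of a queryable context corpus. All names and the tag
# convention here are hypothetical, for illustration only.

# In-memory stand-in for a version-controlled directory of
# architecture decision records (ADRs).
CORPUS = [
    {"path": "adr/0012-error-handling.md",
     "tags": ["errors", "http"],
     "body": "Decision: public APIs surface typed errors, never bare strings."},
    {"path": "adr/0031-streams.md",
     "tags": ["streams"],
     "body": "Decision: prefer web streams over legacy stream APIs."},
]

def query_context(corpus: list[dict], topics: list[str]) -> list[dict]:
    """Return every ADR whose tags intersect the requested topics."""
    wanted = set(topics)
    return [adr for adr in corpus if wanted & set(adr["tags"])]

def build_prompt(task: str, corpus: list[dict], topics: list[str]) -> str:
    """Prepend the governing ADRs to the task before the model sees it.
    The same ADR slice doubles as the audit trail for the generation."""
    context = "\n\n".join(
        f"[{adr['path']}]\n{adr['body']}"
        for adr in query_context(corpus, topics)
    )
    return f"Context:\n{context}\n\nTask: {task}"
```

Because the ADR slice is computed from version-controlled files, a reviewer can re-run the same query against the same commit and see exactly what governed the generation, with no subscription to anything.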
We've Seen This Pattern Before
Every major platform shift produces the same two failure modes simultaneously.
Cloud arrived and half the industry ran wild — every team spinning up their own VPCs, their own IAM policies, their own Terraform modules in isolation. The other half refused to move off on-prem because “the cloud isn't proven.” Both groups were wrong in opposite directions.
DevOps arrived and the enterprise response was to rebadge their ops team and call it a platform. The open source response was to treat CI/CD pipelines as a sacred practice that proprietary tooling couldn't touch. Both groups missed the point.
AI is doing it again. Enterprises are measuring Copilot seat counts instead of output quality. Open source communities are writing petitions instead of governance frameworks.
The engineers who will matter in three years are the ones building the middle: repeatable systems for AI-assisted development that are auditable, governed, and grounded in institutional knowledge. The o16g Outcome Engineering manifesto called this shift months ago.
What I'd Actually Say to the Node.js TSC
Don't ban AI-assisted contributions. Build a contribution standard that requires context disclosure alongside code disclosure.
If you used Claude Code to generate a 19,000-line PR, submit the context corpus that governed it. The ADRs. The standards. The constraints you gave the model. Make that reviewable alongside the diff.
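Concretely, the disclosure could be a small manifest committed alongside the PR. Everything below (the file name, the fields, the layout) is a hypothetical sketch of what such a standard might require, not an existing Node.js or Claude Code convention:

```yaml
# .ai-contribution.yml: hypothetical context-disclosure manifest
tool: claude-code            # which assistant generated the diff
reviewed_by_human: true      # submitter reviewed every line
context:
  adrs:                      # decision records that governed generation
    - docs/adr/0012-error-handling.md
    - docs/adr/0031-streams.md
  standards:
    - docs/style/CODING_STYLE.md
  constraints: |
    No new runtime dependencies.
    Preserve public API signatures.
```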
Now the output is reproducible. Now the review process has what it needs. Now you've solved the actual problem without throwing away the productivity gain — or insulting a contributor who reviewed every line.
That's not a novel idea. That's Context Engineering applied to open source governance. The same discipline that solves the enterprise problem solves this one.
The Missing Middle Is the Market
I'll say the quiet part out loud: both failure modes are opportunities.
Enterprises that are running AI wild without governance are one incident away from a mandate to fix it. Open source projects that ban AI entirely are going to watch their contributors fall behind, burn out, or both.
The answer in both cases is the same: stop treating AI as either a silver bullet or a threat, and start treating it as an engineering system that requires the same rigor you'd apply to any other part of your stack.
Governance without prohibition. Acceleration without chaos. Context as the connective tissue between what you intend to build and what AI actually generates.
That's the middle. It's not glamorous. It doesn't make a good petition. But it's where the real work is.
Stop Choosing Between Chaos and a Ban
Context Engineering is the discipline OutcomeOps is built on. If your enterprise is navigating AI adoption without a repeatable framework, let's talk.
We don't ban the tool. We make the context queryable, auditable, and governed — so AI output is trustworthy at scale.