The o16g Manifesto Validates What We've Been Building Since July
Yesterday, Cory Ondrejka — co-creator of Second Life, the engineer who saved Meta, and current CTO of Onebrief — published a manifesto called Outcome Engineering (o16g). Charity Majors, CTO of Honeycomb, said it practically had her doing cartwheels. It's making the rounds on LinkedIn, and for good reason.
Go read it. I'll wait.
Here's what struck me: we've been building the platform that implements these principles since July 2025. Not because we read Cory's manifesto — it didn't exist yet. Because when you spend 20 years leading enterprise transformations and then sit down to build something from scratch, you arrive at the same conclusions.
That's not a flex. That's validation. When a CTO who saved Meta and a Fortune 500 practitioner independently converge on the same philosophy, it means the philosophy is right.
The Convergence
Cory opens with: "It was never about the code."
In July 2025, we opened with: "DevOps is dead. Not because the ideas were wrong, but because the implementation lost the plot." Same observation, same starting point. The industry optimized for the wrong things — deployments, velocity, pipeline metrics — while the outcomes that actually matter went unmeasured.
Cory calls the new model Outcome Engineering. We call it OutcomeOps. The name doesn't matter. What matters is the shared realization that engineering must be measured by impact, not activity.
Let me walk through four of Cory's principles and show what the implementation actually looks like.
Point 6: "The Map" — No Wandering in the Dark
"Never dispatch an agent without context. Map the territory before building. If you don't know where you stand, you cannot calculate the path to the destination."
— Cory Ondrejka, o16g
This is Context Engineering. We named the discipline in October 2025 and defined it as the craft of designing the environment in which AI thinks — the knowledge, rules, and context that determine its effectiveness. Not prompts. Systems.
In practice, that means before any AI touches your codebase, you've already indexed your Architecture Decision Records, your dependency manifests, your documentation, your Jira issues, your Confluence pages. The AI doesn't wander. It operates within the boundaries of what your organization has already decided, built, and documented.
When an engineer asks "do we have a Terraform module for RDS with encryption at rest?" — the platform doesn't guess. It searches the indexed code-maps, finds the exact module, cites the ADR that explains why it was built that way, and links to the repo. The territory is mapped before anyone asks a question.
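To make "the territory is mapped" concrete, here's a minimal sketch of that lookup in Python. It's illustrative only: the index structure, module name, ADR, and URL are invented, and the real platform searches vectorized code-maps rather than a hand-filled list.

```python
from dataclasses import dataclass, field

@dataclass
class IndexedArtifact:
    """One entry in the code-map index: a module, ADR, or doc."""
    kind: str                                # "terraform_module", "adr", ...
    name: str
    repo_url: str
    tags: set = field(default_factory=set)
    linked_adrs: list = field(default_factory=list)

# Toy index. In production this is a vector store built from repos,
# ADRs, Jira, and Confluence; here it's hand-filled for illustration.
INDEX = [
    IndexedArtifact(
        kind="terraform_module",
        name="rds-encrypted",
        repo_url="https://git.example.com/platform/terraform-rds-encrypted",
        tags={"terraform", "rds", "encryption-at-rest"},
        linked_adrs=["ADR-0042: Encrypt all data stores at rest"],
    ),
]

def answer(question_tags: set) -> list:
    """Return only artifacts whose tags cover the question's key terms."""
    hits = [a for a in INDEX if question_tags <= a.tags]
    return [
        f"{a.name} ({a.repo_url}), rationale: {'; '.join(a.linked_adrs)}"
        for a in hits
    ]

# "Do we have a Terraform module for RDS with encryption at rest?"
print(answer({"terraform", "rds", "encryption-at-rest"}))
```

The point of the shape: the answer comes back with its rationale and source attached, because the ADR was indexed alongside the module, not reconstructed after the fact.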
Point 11: "The Graph" — All the Context, Everywhere
"Agents cannot reason in a vacuum. Embed context into the infrastructure, not just the prompt."
— Cory Ondrejka, o16g
This is the core of what OutcomeOps does. We don't bolt AI onto the side of your workflow. We index your GitHub repos (code-maps, dependencies, ADRs, documentation), your Confluence spaces, your Jira projects, and your Outlook communications into workspace-scoped knowledge bases. The context lives in the infrastructure — vectorized, chunked, retrievable, and scoped to the team that needs it.
The workspace model is how this scales without chaos. A security team's workspace has their repos, their standards, their compliance artifacts. A developer team's workspace has their services, their ADRs, their backlog. The context boundaries are intentional and enforced. There's no cross-pollination unless the organization explicitly configures directional sharing.
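A hedged sketch of what directional sharing can look like at its simplest. The workspace names and document store below are made up, and production scoping operates on vectorized chunks, not strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Workspace:
    name: str
    # Workspaces this one may read FROM; sharing is directional by design.
    shared_from: frozenset = frozenset()

# Stand-in for vectorized chunks, keyed by the workspace that owns them.
DOCS = {
    "security": ["compliance checklist", "hardening standard"],
    "payments-dev": ["payments service ADR", "sprint backlog"],
}

def visible_docs(ws: Workspace) -> list:
    """Retrieval sees the workspace's own docs plus explicit shares in."""
    scopes = {ws.name} | set(ws.shared_from)
    return [doc for scope in sorted(scopes) for doc in DOCS.get(scope, [])]

security = Workspace("security")
payments = Workspace("payments-dev", shared_from=frozenset({"security"}))

print(visible_docs(security))  # security's own artifacts only
print(visible_docs(payments))  # payments' artifacts plus shared-in standards
```

Note the asymmetry: the payments team reads the security standards in, but nothing flows the other way unless someone explicitly configures it.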
This isn't a feature. It's the architecture. Context embedded in infrastructure, not stuffed into a prompt.
Point 4: "The Liberation" — The Backlog is Dead
"The backlog is a relic of human limitation. Never reject an idea for lack of time, only for lack of budget. If the outcome is worth the tokens, it gets built. Manage to cost, not capacity."
— Cory Ondrejka, o16g
In December 2025, we introduced the Outcome Engineer — an engineer who doesn't receive user stories from a Product Owner, but identifies business problems directly, defines success metrics upfront, and uses AI to handle implementation. The measuring stick isn't story points or velocity. It's attributed revenue, customer lifetime value, and feature adoption rate.
The practical proof: we reduced 16-hour development tasks to 15-minute implementations at $0.68 per feature. That's not a benchmark from a whitepaper. That's measured production data from Fortune 500 delivery. When the cost of building drops by orders of magnitude, the backlog doesn't constrain you anymore. Budget does. Exactly as Cory describes.
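The back-of-the-envelope arithmetic, as a sketch. The $0.68 figure is from our production data; the hourly rate and monthly budget are assumptions chosen for illustration.

```python
# Only the $0.68 figure comes from our production data; the rate and
# budget below are illustrative assumptions.
HOURLY_RATE = 100.00            # USD per loaded engineering hour (assumption)
human_cost = 16 * HOURLY_RATE   # the 16-hour task, done by hand
ai_cost = 0.68                  # measured token cost per feature

print(f"${human_cost:,.2f} vs ${ai_cost:.2f}: "
      f"~{human_cost / ai_cost:,.0f}x cheaper")

# Managing to cost, not capacity: a fixed budget, not a sprint, caps output.
monthly_budget = 5_000.00       # assumption
print(f"{monthly_budget / ai_cost:,.0f} features/month fit in that budget")
```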
Point 16: "The Validation" — Audit the Outcomes
"Trust is a vulnerability. Models drift. Prompts break. Capabilities change overnight. Continuously audit the agent against the domain. Verify the tool is sharp before you use it."
— Cory Ondrejka, o16g
We built a seven-layer defense system around our LLM pipeline — input moderation, refusal detection, forced refusal QA testing, logging, alerting, miss detection, and regression testing. Every input and output is logged. Every refusal is caught and categorized. Every moderation failure triggers a notification. The system doesn't trust the model. It verifies the model, continuously.
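For flavor, here's a compressed sketch of the first few layers: input moderation, refusal detection, structured logging, and the alerting hook. The markers and rules are placeholders, not our production heuristics, and the model call is a stub.

```python
import json
import logging
import time
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-guard")

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "as an ai")  # placeholders

def moderate(text: str) -> bool:
    """Layer 1: input moderation. Stub; swap in a real moderation endpoint."""
    return "drop table" not in text.lower()  # placeholder rule

def is_refusal(text: str) -> bool:
    """Layer 2: refusal detection on the model's output."""
    return any(m in text.lower() for m in REFUSAL_MARKERS)

def call_model(prompt: str) -> str:
    """Stand-in for the actual LLM call."""
    return f"Answer to: {prompt}"

def guarded_call(prompt: str) -> Optional[str]:
    event = {"ts": time.time(), "prompt": prompt}
    if not moderate(prompt):
        event["verdict"] = "blocked_input"
        log.warning(json.dumps(event))  # alerting fires on warnings like this
        return None
    answer = call_model(prompt)
    event["verdict"] = "refusal" if is_refusal(answer) else "ok"
    log.info(json.dumps(event))         # every input/output pair is logged
    return answer if event["verdict"] == "ok" else None

print(guarded_call("Do we have an RDS module with encryption at rest?"))
```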
This wasn't academic. We built it because we had to. When you run AI in production across enterprise environments, you can't hope the model behaves. You build systems that prove it does — or catch it when it doesn't. The audit trail isn't a nice-to-have. It's table stakes for enterprise trust.
Philosophy Needs Implementation
The o16g manifesto is the philosophy enterprises need to hear. For engineering leaders redefining how they measure value, it's a north star. But philosophy needs implementation. And implementation at enterprise scale adds chapters that no manifesto can cover.
Compliance requirements, cybersecurity supplements, air-gapped deployment mandates, 40-page reseller agreements — these aren't obstacles to outcome engineering. They're the terrain where it gets real.
Enterprise reality adds constraints that make the philosophy stronger, not weaker:
Information boundaries matter.
You can't give every agent "all the context, everywhere" when the organization has classified information types and regulatory obligations. Workspace scoping — limiting what AI can see based on team, role, and data classification — is how you implement Cory's Point 11 without creating a compliance nightmare.
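At its simplest, that scoping can be a classification ceiling per role, checked before any chunk enters the AI's context. The roles and levels here are invented for illustration:

```python
from enum import IntEnum

class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Highest classification each role may pull into context (illustrative).
ROLE_CEILING = {
    "contractor": Classification.INTERNAL,
    "engineer": Classification.CONFIDENTIAL,
    "security-lead": Classification.RESTRICTED,
}

def retrievable(role: str, doc_class: Classification) -> bool:
    """A chunk reaches the AI only if the asker's role clears its level."""
    return doc_class <= ROLE_CEILING.get(role, Classification.PUBLIC)

assert retrievable("engineer", Classification.INTERNAL)
assert not retrievable("contractor", Classification.RESTRICTED)
```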
Deployment model matters.
When the platform runs inside the customer's AWS account, not your SaaS environment, the trust model changes entirely. The customer controls their data, their keys, their network boundaries. That's not a limitation — it's what makes the philosophy viable for organizations that can't send their source code to someone else's servers.
Audit trails matter.
Point 16 says "audit the outcomes." In enterprise, that means every question asked, every answer returned, every source cited — logged, timestamped, and reviewable for 12 months minimum. Not because you want to spy on engineers, but because when the compliance audit comes, you need receipts.
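A minimal shape for one such record, with a hypothetical schema; the real store is append-only and access-controlled:

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # the 12-month minimum

def audit_event(question: str, answer: str, sources: list) -> str:
    """One timestamped record per question, answer, and citation set."""
    now = datetime.now(timezone.utc)
    return json.dumps({
        "ts": now.isoformat(),
        "question": question,
        "answer": answer,
        "sources": sources,
        "retain_until": (now + RETENTION).isoformat(),
    })

# Appended to a write-once store (e.g. an object-locked bucket), never mutated.
print(audit_event(
    "Do we have an RDS module with encryption at rest?",
    "Yes: rds-encrypted.",
    ["ADR-0042", "https://git.example.com/platform/terraform-rds-encrypted"],
))
```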
These aren't objections to the manifesto. They're the next chapters. The ones that turn a philosophy into a platform enterprises will actually deploy.
The Takeaway
Cory Ondrejka wrote the philosophy beautifully. We've been building the implementation since July. The convergence is the point.
When a CTO who saved Meta and a practitioner who's led transformations at Fortune 10 companies both independently arrive at the same conclusion — it was never about the code, it was always about the outcomes — that's not coincidence. That's a signal.
The question for every engineering organization is the same one we asked in our first blog post: are you measuring your work by how fast you ship, or by the value it creates?
The manifesto has been written. Twice now. The platform exists. The Outcome Engineer is already here.
Time to Build.
The philosophy has been validated. The platform is ready. See how OutcomeOps implements outcome engineering at enterprise scale.