AI-Generated ADRs: From Zero Documentation to Queryable Architecture
Your codebase already contains the decisions. You just haven’t documented them yet.
“We Don’t Have ADRs. Can We Still Use OutcomeOps?”
This is the #1 question from enterprise teams evaluating OutcomeOps. They assume they need to spend months writing Architecture Decision Records before they can start. That assumption is wrong.
Give us your code. We’ll generate the ADRs from it.
Your codebase IS your documentation — every pattern, every library choice, every error handling convention is an implicit architectural decision. The problem isn’t that decisions don’t exist. It’s that nobody wrote them down.
OutcomeOps solves this by ingesting your repositories, generating code-maps that understand your architecture, and then letting you query that knowledge to produce real ADRs grounded in your actual code.
Ask a Question. Get a Production-Quality ADR.
Whether you use the chat interface or the CLI, the workflow is the same: ask a question in plain English, and OutcomeOps queries your code-maps, documentation, and existing patterns to produce a real ADR grounded in your actual architecture.
ADR-003: Authentication — Magic Link (Passwordless)
Status: Accepted | Sources: 4 documents
Decision: Magic link as sole primary auth. No passwords.
Flow: Email submit → Lambda generates link → Token validated → JWT issued with org_id claims
Security: Third-party access via Terraform-controlled endpoint, disabled by default in production
+ Consequences, Standards & Requirements, Alternatives Considered, Testing Requirements...
This isn’t a template filler. The AI reasons about what it finds in your code-maps and produces ADRs with real architectural context, specific file references, and citations. No boilerplate. No “fill in the blanks.”
Example: Magic Link Authentication (RetrieveIT.ai)
Let’s start with a real example. RetrieveIT.ai is a multi-tenant SaaS RAG platform. We ran one command:
./outcome-ops-assist --env prd \
"build me an ADR for app.retrieveit.ai magiclink system" \
--advanced

Here's what it generated:
ADR-003: Authentication Mechanism — Magic Link (Passwordless)
Context: RetrieveIt AI is a multi-tenant SaaS RAG platform requiring secure authentication that supports organizational isolation. The system must authenticate users, associate them with their organization, and enforce tenant boundaries across all API operations.
Decision: The platform uses magic link authentication as its sole primary authentication mechanism. No passwords are stored or managed by the system.
The generated ADR included the complete authentication flow:
Authentication Flow:
1. User submits email address
2. Lambda generates a unique, time-limited magic link
3. Link is sent via SES to the user's email
4. User clicks link -> Lambda validates the token
5. Valid token -> issues JWT
6. JWT contains org_id claims for tenant isolation
7. All subsequent API calls validated against JWT org_id

It also synthesized the secondary third-party access endpoint — an obscure path used for security testers and app store reviewers — into the same ADR with the correct Terraform controls and security considerations:
Secondary Access: Third-Party Endpoint
A secondary authentication path exists for third-party access, controlled via Terraform configuration. This endpoint bypasses the magic link flow and issues scoped tokens with limited permissions. Access is controlled at the infrastructure level, not application level.
The ADR also generated consequences, standards, and alternatives:
Consequences
- No password storage reduces attack surface and eliminates credential stuffing risk
- Email dependency — authentication fails if email delivery fails
- User experience tradeoff: no instant login, but no forgotten passwords either
Standards & Requirements
- MUST: All magic link tokens expire within 15 minutes
- MUST: JWT tokens include org_id claim for tenant isolation
- MUST NOT: Store or accept passwords in any authentication flow
- MUST NOT: Allow token reuse after successful authentication
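The last MUST NOT (no token reuse) is the easiest of these standards to get wrong, so it is worth a sketch. The in-memory registry below is purely illustrative; a real deployment would persist consumed tokens, for example with a DynamoDB conditional write, rather than hold them in process memory.

```python
import time

class TokenRegistry:
    """Illustrative single-use guard for the MUST NOT token-reuse rule."""

    def __init__(self, ttl_seconds: int = 15 * 60):
        self.ttl = ttl_seconds
        self._used: dict[str, float] = {}   # token -> time it was consumed

    def consume(self, token: str) -> bool:
        """Return True on first use, False on any reuse."""
        now = time.time()
        # Drop entries older than the TTL; expired tokens fail validation anyway,
        # so there is no need to remember them forever.
        self._used = {t: ts for t, ts in self._used.items() if now - ts < self.ttl}
        if token in self._used:
            return False
        self._used[token] = now
        return True
```

Calling `consume` a second time with the same token returns False, which the login handler would treat as a rejected authentication attempt.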
Alternatives Considered
| Alternative | Reason Rejected |
|---|---|
| Username/Password | Adds credential storage liability, password reset flows |
| OAuth2/OIDC | External IdP dependency, complexity for current scale |
| API Keys Only | No user-level identity, insufficient for multi-tenant isolation |
Sources
Code map, text, markdown from bcarpio/app.retrieveit.ai
Nobody told the AI about the magic link flow, the third-party endpoint, or the JWT org_id claims. It discovered all of this from the code maps.
Example: DynamoDB Multi-Tenant Isolation
./outcome-ops-assist --env prd \
"Generate an ADR for the DynamoDB multi-tenant data isolation pattern" \
--advanced

Decision
All DynamoDB access MUST enforce org_id-scoped isolation. Every table uses the ORG#{org_id} prefix in partition keys, ensuring that no query can cross tenant boundaries without explicit key construction.
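One way to make such a rule hard to violate is to route every key through a single constructor that refuses to build an unscoped key. The helper name below is hypothetical, not taken from the codebase:

```python
def document_pk(org_id: str, document_id: str) -> str:
    """Build a tenant-scoped partition key. Routing all access paths through
    this helper means a key can never be constructed without an org_id."""
    if not org_id:
        raise ValueError("org_id is required: unscoped keys cross tenant boundaries")
    return f"ORG#{org_id}#DOCUMENT#{document_id}"
```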
The generated ADR included concrete code patterns:
# REQUIRED: All DynamoDB queries must scope to org_id
pk = f"ORG#{org_id}#DOCUMENT#{document_id}"
# PROHIBITED: Never query without org_id prefix
pk = f"DOCUMENT#{document_id}"  # VIOLATION - crosses tenant boundary

Then it generated something the team didn't have before — a code review checklist:
Code Review Checklist (AI-Generated)
- Every DynamoDB get_item/query/put_item call includes the ORG# prefix in the partition key
- No scan operations without an explicit org_id filter expression
- GSI queries enforce the same org_id scoping as primary table queries
- Batch operations verify all items share the same org_id
- Error handling does not leak cross-tenant data in exception messages
Testing Requirements (AI-Generated)
- Unit tests MUST verify that queries for Org A never return Org B data
- Integration tests MUST create items across two orgs and verify isolation
- Negative tests MUST confirm that removing org_id from a key results in an access error, not a cross-tenant read
The AI identified the pattern AND the enforcement mechanisms needed. Code review checklists and testing requirements that the team didn’t have before — generated from one command.
Example: RAG Pipeline Architecture
./outcome-ops-assist --env prd \
"Generate an ADR for the RAG pipeline" \
--advanced

This one documented a system with 30+ Lambda functions, 5 AWS services (S3 Vectors, Bedrock, DynamoDB, EventBridge, SQS), and complex data flows. From one command.
The generated ADR mapped out a 5-phase pipeline:
RAG Pipeline Phases:
Ingest -> Embed -> Store -> Search + Rerank -> Synthesize
Phase 1 - Ingest: S3 upload triggers EventBridge -> SQS -> Lambda
Phase 2 - Embed: Bedrock embedding model (same model for ingest and query)
Phase 3 - Store: S3 Vectors for embeddings, DynamoDB for metadata
Phase 4 - Search: Vector similarity search, then cross-encoder reranking
Phase 5 - Synthesize: Bedrock LLM with retrieved context -> response

It produced a component-to-Lambda mapping table:
| Phase | Lambda Functions | AWS Services |
|---|---|---|
| Ingest | document-processor, chunk-splitter, metadata-extractor | S3, EventBridge, SQS |
| Embed | embedding-generator, batch-embedder | Bedrock |
| Store | vector-writer, metadata-indexer | S3 Vectors, DynamoDB |
| Search + Rerank | query-embedder, vector-search, reranker | S3 Vectors, Bedrock |
| Synthesize | context-assembler, response-generator | Bedrock |
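The five phases map naturally onto a chain of functions. The sketch below is not the platform's Lambda code: the toy embedder stands in for the Bedrock embedding model, the list stands in for S3 Vectors, and the template stands in for the LLM call. What it does preserve is the ADR's structure, including the rule that ingest and query use the same embedding function.

```python
def embed(text: str) -> list[float]:
    """Stand-in for the Bedrock embedding model; used for BOTH ingest and
    query, mirroring the ADR's single-model rule."""
    return [float(sum(ord(c) for c in text) % 97), float(len(text))]

def similarity(a: list[float], b: list[float]) -> float:
    """Crude similarity score for the sketch (negative squared distance)."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

store: list[tuple[list[float], str]] = []        # Phase 3: vectors + text

def ingest(doc: str) -> None:                    # Phases 1-3
    store.append((embed(doc), doc))

def search(query: str, k: int = 2) -> list[str]:   # Phase 4
    q = embed(query)
    ranked = sorted(store, key=lambda e: similarity(e[0], q), reverse=True)
    return [doc for _, doc in ranked[:k]]

def synthesize(query: str) -> str:               # Phase 5: LLM replaced by a template
    context = " | ".join(search(query))
    return f"Answer({query}) grounded in: {context}"
```

In the real pipeline each arrow is an event (S3 upload, EventBridge rule, SQS message) and each function is one or more Lambdas, but the data flow is the same.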
Why This Beats “Scan and Generate”
There are other tools that claim to generate ADRs from code. Here’s why OutcomeOps is different:
1. Grounded in YOUR code — not templates
The magic link ADR found a specific obscure third-party endpoint path. A template would never know that. Generic tools produce generic output. OutcomeOps produces ADRs that reference your actual file paths, your actual infrastructure, your actual patterns.
2. Cross-references existing ADRs
The DynamoDB ADR automatically referenced ADR-007 (Documentation-Driven Decisions) because it already existed in the knowledge base. New ADRs build on existing ones, creating a coherent architecture narrative.
3. Generates enforcement mechanisms
Code review checklists. Testing requirements. MUST/MUST NOT standards. Not just descriptions of what exists — prescriptions for how to maintain it.
4. Compounds over time
Each generated ADR gets ingested back into the knowledge base, making future generation better. The loop:
Code -> Code Maps -> ADRs -> Knowledge Base -> Better Code Generation -> Better ADRs

5. Enterprise-scale
Works on 30+ Lambda serverless architectures, SAP/ABAP legacy codebases, and Spring Boot monoliths. If your code exists, OutcomeOps can map it and generate ADRs from it.
Enterprise Implementation
The Context Engineering methodology described in this post is open source. The production platform with autonomous agents, air-gapped deployment, and compliance features is available via enterprise engagements.
Getting Started
Two paths, same destination:
Already have ADRs?
OutcomeOps ingests them and enforces them on every PR. Your existing documentation becomes executable context immediately.
Don’t have ADRs?
Give us your repos. We’ll generate them. One command per decision. Production-quality output grounded in your actual code.
Both paths lead to the same outcome: a queryable knowledge base where your architecture is documented, searchable, and enforced by AI.
From Zero Documentation to Queryable Architecture
Stop pretending you’ll write ADRs “when things slow down.” They won’t. Let your code write them for you.
All ADR examples shown are generated from real codebases using OutcomeOps AI Assist. The CLI, knowledge base, and code-map infrastructure are deployed in your AWS environment. Learn more about enterprise engagements.