Technical Deep Dive

What Context Engineering Actually Produces

Traditional Documentation vs. OutcomeOps Code-Maps

Before: What Most Teams Have

"""

This Lambda creates coupons.

It takes a code, amount, and description.

Returns 200 on success.

"""

After: What OutcomeOps Generates

Behavioral specification with:

  • Exact DynamoDB key patterns
  • Error handling specifications
  • Validation rules
  • Request/response schemas
  • Edge case behaviors

(Full code-map shown below)

The Actual Code-Map: Admin Create Coupon Lambda

This is what OutcomeOps generates automatically from your code. Notice the depth of behavioral specification: this isn't documentation; it's your senior engineer's brain encoded in a format AI can consume.

# Admin Create Coupon Lambda - Implementation Documentation

### lambda_handler(event, context)
**Main entry point for Lambda execution**

**Parameters:**
- event (dict): API Gateway Lambda proxy integration event
  - requestContext.http.method (string): HTTP method (POST, OPTIONS)
  - headers.Authorization (string): JWT bearer token (format: "Bearer <token>")
  - body (string): JSON-encoded request body
- context (object): Lambda context object (unused)

**Request Body Structure:**
{
  "code": "string (required, will be uppercased)",
  "amount": "integer (required, > 0)",
  "description": "string (required, non-empty)",
  "total_allowed_uses": "integer (required, > 0)"
}

**Parameter Specifications:**
- code: Coupon code identifier
  - Automatically converted to uppercase
  - Must be unique across all coupons
  - Stripped of leading/trailing whitespace
- amount: Credit amount in smallest currency unit (e.g., cents)
  - Must be positive integer
  - Stored as Decimal in DynamoDB
- description: Human-readable coupon description
  - Required, non-empty after stripping whitespace
- total_allowed_uses: Maximum number of times coupon can be redeemed
  - Must be > 0
  - Stored as Decimal in DynamoDB

### _response(status_code, body)
**Formats API Gateway response with CORS headers**

**Parameters:**
- status_code (int): HTTP status code
- body (dict): Response payload (will be JSON-serialized)

**Returns:**
- dict: API Gateway Lambda proxy integration response format
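
A plausible shape for this helper, assuming the CORS header set referenced later under ADR-004; the exact header values are an assumption.

```python
import json

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",  # assumed value; ADR-004 defines the real policy
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
    "Access-Control-Allow-Methods": "POST,OPTIONS",
}

def _response(status_code: int, body: dict) -> dict:
    """Build an API Gateway Lambda proxy response with CORS headers."""
    return {
        "statusCode": status_code,
        "headers": CORS_HEADERS,
        # _decimal_default (described below) handles Decimal values from DynamoDB
        "body": json.dumps(body, default=_decimal_default),
    }
```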

### _decimal_default(obj)
**JSON serialization helper for Decimal types**

**Parameters:**
- obj (any): Object to serialize

**Returns:**
- float: If obj is Decimal
- Raises TypeError: If obj is not Decimal
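
A sketch matching this specification exactly: convert Decimal to float, raise TypeError for anything else.

```python
from decimal import Decimal

def _decimal_default(obj):
    """json.dumps() fallback: convert DynamoDB Decimals to float, reject anything else."""
    if isinstance(obj, Decimal):
        return float(obj)
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")
```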

This is ONE Lambda. OutcomeOps generates this level of detail for your entire codebase.

But Code-Maps Are More Than AI Input

The same structured context (ADRs + Code-Maps + READMEs) serves three critical purposes in the OutcomeOps platform:

1. Code Generation Input

AI reads your ADRs, code-maps, and READMEs to generate code that matches YOUR patterns, YOUR error handling, YOUR data models.

2. Self-Checking Validation

The AI then uses the SAME context to validate its own work—checking for ADR compliance, architectural drift, missing tests, documentation gaps.

3. Queryable Knowledge Base

Engineers query the structured codebase to find duplication, troubleshoot issues, and identify related functionality across repos.

This is the feedback loop: Generate → Validate → Learn

Every PR becomes training data. Every failure becomes an ADR. Every ADR improves the next generation.

Now Imagine Feeding This to AI

Without Context Engineering

> "Create a coupon redemption handler"

  • Generic code that doesn't match your patterns
  • Wrong DynamoDB key format
  • Missing your error handling
  • 2 hours of debugging why it breaks

With Context Engineering

> "Create a coupon redemption handler"

  • Uses YOUR exact key patterns (COUPON#{CODE}/META, sketched below)
  • Includes YOUR error responses (409 for duplicates)
  • Follows YOUR validation patterns
  • 90% production-ready
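
To make "your patterns" concrete, here is a rough sketch of the write path such a handler would produce. The COUPON#{CODE}/META key layout (read here as a PK/SK pair) and the 409-on-duplicate behavior come from the code-map; the table name, attribute names, and the `_put_coupon` helper are assumptions.

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("coupons")  # table name is an assumption

def _put_coupon(item: dict) -> dict:
    """Write a coupon using the documented key pattern; return 409 if the code already exists."""
    try:
        table.put_item(
            Item={
                "PK": f"COUPON#{item['code']}",  # key pattern from the code-map
                "SK": "META",
                **item,
            },
            # Uniqueness rule: reject the write if this coupon code already exists
            ConditionExpression="attribute_not_exists(PK)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # _response is the CORS helper from the code-map above
            return _response(409, {"error": f"Coupon {item['code']} already exists"})
        raise
    return _response(200, {"coupon": item})
```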

This is ONE Lambda.

OutcomeOps does this for your ENTIRE codebase.

The Self-Checking Loop in Action

The AI now uses the same ADRs, code-maps, and READMEs that guided code generation to validate that code against your standards.

Here's what OutcomeOps catches automatically on every PR:

OutcomeOps Analysis Started

Automated Code Review

Running 5 checks:

  • ARCHITECTURAL DUPLICATION
  • TEST COVERAGE
  • README FRESHNESS
  • ADR COMPLIANCE
  • LICENSE COMPLIANCE

ARCHITECTURAL DUPLICATION

No related functionality identified

Details:

  • This functionality is unique across the platform
  • No related implementations were identified in other repositories

TEST COVERAGE

1 handler(s) missing required tests (ADR-003 violation)

Details:

lambda/list-recent-docs/handler.py: Missing required tests

  • ADR-003 requires: Tests must be written before a story is considered DONE
  • Add test_list-recent-docs.py in lambda/tests/unit/ directory
  • Follow testing standards: pytest, moto for AWS mocking, 70%+ coverage
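
For reference, a skeleton of what such a test could look like under those standards. The handler's event shape, the table it reads, and the import path are assumptions, and moto 5's mock_aws decorator is assumed for AWS mocking.

```python
# Skeleton for the test file the report asks for (handler behavior is assumed)
import boto3
from moto import mock_aws


@mock_aws
def test_handler_returns_200():
    # Create the (assumed) table the handler reads from
    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    dynamodb.create_table(
        TableName="docs",
        KeySchema=[{"AttributeName": "PK", "KeyType": "HASH"}],
        AttributeDefinitions=[{"AttributeName": "PK", "AttributeType": "S"}],
        BillingMode="PAY_PER_REQUEST",
    )

    # Assumes lambda/list-recent-docs is on sys.path (e.g. via conftest.py);
    # imported inside the mock so boto3 clients are created against moto
    from handler import lambda_handler

    result = lambda_handler({"requestContext": {"http": {"method": "GET"}}}, None)
    assert result["statusCode"] == 200
```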

README FRESHNESS

README.md not updated

Details:

  • Infrastructure files changed (1 file(s)) but README.md was not updated
  • New Lambda handlers added (1 file(s)) but README.md was not updated
  • Consider documenting new resources, handlers, and their purposes

ADR COMPLIANCE

Found compliance issues in Lambda handlers and Terraform files

Details:

lambda/list-recent-docs/handler.py

Handler is missing CORS headers in API Gateway responses (ADR-004 violation)

Suggestions:

  • Add CORS headers (Access-Control-Allow-Origin, Access-Control-Allow-Headers, Access-Control-Allow-Methods) to all response objects
  • Consider creating a _response() helper function to standardize responses with CORS headers as shown in ADR-004

terraform/lambda.tf

Module version downgraded from 8.1.2 to 7.14.0, violating version pinning best practices (ADR-001)

Suggestions:

  • Change version from '7.14.0' to '8.1.2' to maintain the current module version
  • If downgrade is intentional, document the reason in PR description

LICENSE COMPLIANCE

Prohibited licenses detected in 1 file(s)

Details:

File path contains 'gpl' indicator and is missing required copyright header - potential GPL code violation per ADR-008

  • lambda/test-gpl-code/auth_validator.py

New file missing required copyright header per ADR-008

  • docs/lambda-list-recent-docs.md
  • lambda/list-recent-docs/handler.py
  • lambda/list-recent-docs/schemas.py

All checks reference specific ADRs. Every suggestion is grounded in YOUR documented standards.

What's Being Caught Automatically

  • Architectural drift before it reaches production (module version downgrades, missing CORS)
  • ADR violations with specific remediation steps (not just "add tests", but which ADR and where)
  • Documentation gaps when infrastructure changes (README not updated for new Lambdas)
  • Code duplication by checking for similar functionality across repos
  • License violations detecting GPL/copyleft code and missing copyright headers before legal issues arise

Want to See More Examples?

The open-source OutcomeOps framework includes more code-maps, ADR examples, and the tools to generate them for your own codebase.