A Platform Built for the Buyers Who Ask Hard Questions
OutcomeOps AI Assist runs inside the customer’s own AWS account. That deployment model rewrites the requirements list. The buyer’s finance team wants to know what each workspace can spend before it spends it. The buyer’s InfoSec team wants every action piped into their existing SIEM in a format their analysts already read. The buyer’s platform team wants new content sources added on the cadence of their backlog, not ours.
Below are four answers from the last six weeks — each one shipped in days, not months.
Recent Feature Delivery
- 2026-04-16 → 04-17 · ~2 days · Analytics + Workspace Budgets
- 2026-05-03 → 05-08 · ~5 days · Audit UI + OCSF SIEM export
- 2026-05-11 · ~1 day each · OneDrive + OneNote
Workspace Budgets
/analytics/budgets · Shipped in 2 days · Apr 16-17

Org admins set a monthly cost cap on each workspace. The cap covers everything that touches a billing line: embedding calls, retrieval, generation tokens, rerank passes. The UI shows month-to-date spend per workspace and the cap next to it. When MTD crosses a threshold, an alert fires. Once. Per workspace. Per month.
Daily Threshold Check
An EventBridge cron fires analytics-cost-alert once a day. The Lambda reads month-to-date spend from the analytics aggregates table and compares it to each workspace’s budget. No streaming, no polling — one check a day is enough when the alert is a CFO conversation, not a kill switch.
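The comparison itself is deliberately simple. A minimal sketch, assuming the aggregates store Decimal values and that the threshold is expressed as a fraction of the cap (the fraction is an illustrative parameter, not a documented platform setting):

```python
from decimal import Decimal

def check_workspace(mtd_spend: Decimal, budget: Decimal,
                    alert_pct: Decimal = Decimal("1.00")) -> bool:
    """The once-a-day comparison: has month-to-date spend crossed
    the budget threshold? A cap of zero means no budget is set."""
    return budget > 0 and mtd_spend >= budget * alert_pct
```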
Exactly-Once Alerts via Conditional Writes
Alert de-dup is a DynamoDB row keyed PK=ALERT, SK=MTD#{YYYY-MM}#WS#{id}. The conditional write fails if the alert already fired this month. The Lambda can run a hundred times — the SNS notification leaves the platform once.
From the handler comment
Triggered once per day by EventBridge. Reads month-to-date spend from the analytics aggregates table and publishes an SNS notification the first time the org-wide or a per-workspace budget threshold is crossed in the current calendar month.
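A sketch of that exactly-once mechanism, using the key shape from above; the function and parameter names are illustrative, and `table`/`sns` stand in for boto3 resources the real handler would construct:

```python
from datetime import datetime

def alert_key(workspace_id: str, now: datetime) -> dict:
    """Per-month dedup key: PK=ALERT, SK=MTD#{YYYY-MM}#WS#{id}."""
    return {"PK": "ALERT", "SK": f"MTD#{now.strftime('%Y-%m')}#WS#{workspace_id}"}

def fire_once(table, sns, topic_arn: str, workspace_id: str, now: datetime) -> bool:
    """Publish at most one budget alert per workspace per calendar month.

    `table` is a boto3 DynamoDB Table resource. The conditional write
    raises ConditionalCheckFailedException when the row already exists,
    so a hundred runs of the Lambda still produce exactly one publish.
    """
    try:
        table.put_item(
            Item=alert_key(workspace_id, now),
            ConditionExpression="attribute_not_exists(PK)",
        )
    except table.meta.client.exceptions.ConditionalCheckFailedException:
        return False  # already alerted this month
    sns.publish(TopicArn=topic_arn,
                Message=f"Budget threshold crossed for workspace {workspace_id}")
    return True
```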
Analytics for Cost
/analytics · Shipped alongside Budgets · Apr 16-17

Budgets only matter if the spend they govern is legible. Every chat, every retrieval, every rerank emits a cost row tagged with workspace, user, model, and call type. A nightly aggregator collapses those rows into MTD totals you can drill into per workspace.
Per-Call Cost Rows
Every Bedrock invocation logs tokens-in, tokens-out, model, and computed cost. The chat path, the rerank path, and the backfill scripts all share the same recorder.
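The shape of one such cost row can be sketched as follows; the model name and per-1K-token prices are illustrative placeholders, not real Bedrock pricing:

```python
from decimal import Decimal

# Illustrative per-1K-token prices; the real recorder reads these from config.
PRICES = {
    "example-model": {"in": Decimal("0.00025"), "out": Decimal("0.00125")},
}

def cost_row(workspace_id: str, user_id: str, model: str,
             call_type: str, tokens_in: int, tokens_out: int) -> dict:
    """Build one per-call cost row; Decimal throughout, never float."""
    price = PRICES[model]
    cost = (Decimal(tokens_in) * price["in"]
            + Decimal(tokens_out) * price["out"]) / 1000
    return {
        "workspace_id": workspace_id,
        "user_id": user_id,
        "model": model,
        "call_type": call_type,   # e.g. chat, retrieval, rerank, backfill
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "cost_usd": cost,
    }
```

Because every path shares one recorder, the finance view and the engineering view can never drift apart.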
Nightly Aggregator
analytics-aggregator rolls raw rows into MTD totals per workspace. Read-time queries are O(1) lookups, not table scans.
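A minimal sketch of that roll-up, assuming hypothetical row and key shapes (the real table layout is not shown in this post):

```python
from collections import defaultdict
from decimal import Decimal

def aggregate_mtd(raw_rows: list) -> list:
    """Collapse raw per-call cost rows into one MTD total per workspace.

    Doing this work nightly makes the read path a single key lookup
    instead of a table scan.
    """
    totals = defaultdict(Decimal)
    for row in raw_rows:
        totals[(row["workspace_id"], row["month"])] += row["cost_usd"]
    return [
        {"PK": f"WS#{ws}", "SK": f"MTD#{month}", "cost_usd": total}
        for (ws, month), total in totals.items()
    ]
```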
Drill-Down UI
WorkspaceAnalytics.tsx renders the breakdown by model and call type. Finance sees the same numbers the platform team optimizes against.
Audit UI & OCSF Stream Export
/audit · Shipped in 5 days · May 3-8

Every privileged action — workspace creation, membership change, system-prompt edit, integration connect/disconnect, chat refusal — writes an audit row. The UI shows the last N days with filters for critical actions and refusals, a date-range picker, and CSV/JSON export.
That’s the table-stakes layer. The differentiated layer is what happens after the row lands.
The SIEM Hook
A DynamoDB Streams consumer (audit-stream-publisher) re-emits every audit INSERT as an OCSF v1.3.0 envelope into a Kinesis Data Stream. Customers point their own consumer at that stream — Splunk, Datadog, Sumo Logic, AWS Security Lake, or a Firehose into S3 — and filter on consumption.
Every row gets a proper category_uid / class_uid / activity_id / severity_id. Actor and src_endpoint structures populate from user_email / source_ip / user_agent. The raw payload is preserved under unmapped.
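The mapping can be sketched as a pure function from audit row to envelope. The specific category/class/activity/severity IDs below are illustrative placeholders, not the platform's real mapping table:

```python
def to_ocsf(audit_row: dict) -> dict:
    """Wrap one audit INSERT in an OCSF-style envelope.

    UIDs here are illustrative; a real publisher would look them up
    per action type from a mapping table.
    """
    return {
        "metadata": {"version": "1.3.0",
                     "product": {"name": "OutcomeOps AI Assist"}},
        "category_uid": 6,      # Application Activity (illustrative)
        "class_uid": 6003,      # API Activity (illustrative)
        "activity_id": 1,
        "severity_id": 1,
        "time": audit_row["timestamp"],
        "actor": {"user": {"email_addr": audit_row.get("user_email")}},
        "src_endpoint": {"ip": audit_row.get("source_ip")},
        "http_request": {"user_agent": audit_row.get("user_agent")},
        "unmapped": audit_row,  # raw payload preserved for forensics
    }
```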
Why OCSF Matters
OCSF is the open security event format Splunk, AWS, Cisco, IBM, and CrowdStrike co-author. Emitting it natively means the buyer’s analysts don’t parse our logs — they query the events from the same dashboards they use for everything else.
We Publish; They Filter
The stream carries every event. Customer-side FilterCriteria on the consumer event source mapping decides what reaches their SIEM. The platform never has to ship a feature for "export only logins between 2am and 4am" — the SIEM already does that.
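As a sketch of what that looks like on the customer side: Lambda event source mappings accept FilterCriteria patterns matched against the decoded Kinesis record body. The helper below builds one for a minimum OCSF severity; the function name and the consumer wiring in the comment are illustrative:

```python
import json

def severity_filter(min_severity: int) -> dict:
    """Customer-side FilterCriteria for a Kinesis event source mapping.

    Only records whose decoded body has severity_id >= min_severity
    reach the consumer; everything else is dropped before invocation.
    """
    pattern = {"data": {"severity_id": [{"numeric": [">=", min_severity]}]}}
    return {"Filters": [{"Pattern": json.dumps(pattern)}]}

# Attached when the customer creates their consumer, e.g.:
# boto3.client("lambda").create_event_source_mapping(
#     EventSourceArn=stream_arn,
#     FunctionName="siem-forwarder",
#     FilterCriteria=severity_filter(3),
#     StartingPosition="LATEST",
# )
```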
Two Integrations in a Day
Shipped today · May 11

The fourth pillar is connector velocity. The first three sections describe platform features; this section is the proof those features hang together. On May 11, 2026, OneDrive shipped to production and OneNote landed in the same release. Ten commits between the two of them, all on the same day.
OneDrive
- Integration trio Lambda + audit_writer plumbing
- Terraform: Lambda trio + SQS + EventBridge cron
- 75 new unit tests for the trio
- file-ingestion dispatch updated for source=onedrive
- Orphan cleanup on every sync to propagate deletes from OneDrive into the index
OneNote
- Integration trio Lambda + audit_writer plumbing
- Terraform: Lambda trio + SQS + EventBridge cron
- UI surface: api.ts, Express proxy, React page
- file-ingestion + delete-worker handle source=onenote
- Unit-test coverage for the trio + file-ingestion path
The "Integration Trio" Pattern
Every connector follows the same three-Lambda shape: OAuth handshake, file enumeration, and a per-source processor inside the shared file-ingestion Lambda. The fourth piece — audit writing — is plumbed through automatically. Adding OneNote was effectively a port: copy the OneDrive shape, swap the API client, register the new source= string.
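The per-source dispatch inside the shared file-ingestion Lambda can be sketched as a registry; the decorator name and record shapes are illustrative, not the platform's actual code:

```python
PROCESSORS = {}

def processor(source: str):
    """Register a per-source processor under its source= string."""
    def register(fn):
        PROCESSORS[source] = fn
        return fn
    return register

@processor("onedrive")
def process_onedrive(record: dict) -> dict:
    # Real processor would call the Graph API client for OneDrive files.
    return {"source": "onedrive", "doc_id": record["id"]}

@processor("onenote")
def process_onenote(record: dict) -> dict:
    # Porting OneDrive -> OneNote meant swapping this body, little else.
    return {"source": "onenote", "doc_id": record["id"]}

def dispatch(record: dict) -> dict:
    """Route each queued file event to its connector's processor."""
    return PROCESSORS[record["source"]](record)
```

Under this shape, "register the new source= string" is literally one decorator line.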
The pattern isn’t a happy accident. It came from the same ADR library Claude Code queries on every product (we wrote about it in the RetrieveIT deep dive). The sixteenth RetrieveIT connector cost less than the first. The eighth OutcomeOps connector cost less than the seventh.
Built With the Same Tools We Sell
The platform that ships customer code is the platform we ship our own platform with. The ADRs that mandate terraform-aws-modules/lambda/aws v8.1.2 built every one of these 48 Lambda functions. The pytest layout from ADR-003 + autouse AWS mocks from ADR-012 made the 75 OneDrive tests cheap to write. The Decimal-over-float discipline from ADR-009 kept the cost analytics honest down to the fraction of a cent.
Context Engineering is the methodology. The ADR repository is the artifact. The OutcomeOps MCP server is how Claude Code reads it. Every product we ship — and every product we sell — runs on the same loop.
The Point
Enterprise RAG is easy to demo and hard to govern. Workspace budgets answer the finance question. OCSF audit answers the security question. Same-day OneDrive + OneNote answers the velocity question.
Every answer came out of the same pattern library.