AI Coding Tool That Deploys in Your AWS Account: Terraform-Based Enterprise AI (2026)

Brian Carpio

Most enterprise buyers asking for an “AI coding tool that deploys in our AWS account” have already lost a quarter to a SaaS vendor security review. They want a different deployment model. Not VPC peering. Not PrivateLink. Not a customer-managed-key promise. The actual ask: ship Terraform, we apply it to our account, the platform runs there, no data leaves. That model exists in 2026 — and it changes the math on compliance review, vendor risk, and IP exposure.

This post compares which AI coding tools genuinely deploy into the customer’s AWS account, what “deploys” actually means architecturally, and why the deployment model dictates everything downstream — from time-to-pilot to ongoing audit cost.

What “Deploys in Your AWS Account” Actually Means

The phrase gets used loosely. Vendors describe SaaS products with VPC peering as “running in your environment.” They are not. Three architectural questions separate marketing from reality:

1. Where does the model invocation execute?

A SaaS AI coding tool runs the model in the vendor’s cloud. Even with VPC peering, the prompt traverses the peering connection to the vendor, gets processed there, and returns to the customer. A truly customer-deployed tool invokes Bedrock (or another AWS-native model service) from a Lambda inside the customer’s VPC. Amazon Bedrock is a regional AWS service; the call never leaves the customer’s account boundary.
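In code, that in-account invocation is just a boto3 call against the regional Bedrock Runtime endpoint. This is a hedged sketch, not OutcomeOps internals: the model ID, payload shape, and helper names are illustrative assumptions.

```python
import json

# Hypothetical model ID for illustration; any Bedrock-hosted model works the same way.
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an Anthropic-messages-style request body for Bedrock."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def invoke_in_account(prompt: str) -> str:
    """Invoke Bedrock from a Lambda inside the customer VPC. With a Bedrock
    Runtime interface endpoint in place, this call resolves privately and
    never crosses the account boundary."""
    import boto3  # imported lazily so the pure helper above is testable offline
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_request(prompt)),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]
```

The signing credential is the Lambda’s execution role, so the customer’s existing IAM policies decide who can reach the model at all.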

2. Where does the knowledge base live?

AI coding tools that ground generation in customer code or documentation need to embed and store that data somewhere. SaaS tools store it in their own vector databases. Customer-deployed tools store embeddings in the customer’s OpenSearch or DynamoDB — encrypted with the customer’s KMS keys, queryable only via the customer’s IAM permissions.
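To make that concrete, here is a minimal sketch of what a customer-owned embedding record and a k-NN retrieval query might look like. Field names (`pk`, `chunk_text`, `embedding`) are assumptions for illustration; encryption with the customer’s KMS key is configured at the table or domain level, not per item.

```python
def embedding_item(doc_id: str, chunk: str, vector: list[float]) -> dict:
    """Shape of a hypothetical embedding record stored in a customer-owned
    DynamoDB table (SSE with a customer-managed KMS key is a table setting)."""
    return {"pk": f"DOC#{doc_id}", "chunk_text": chunk, "embedding": vector}

def knn_query(vector: list[float], k: int = 5) -> dict:
    """k-NN query body in the style of the OpenSearch k-NN plugin, run against
    a customer-owned domain. The field name 'embedding' is an assumption."""
    return {"size": k, "query": {"knn": {"embedding": {"vector": vector, "k": k}}}}
```

Either store is reachable only through the customer’s IAM permissions, which is the property that matters in review.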

3. Where does the audit log go?

Most SaaS tools log AI interactions in the vendor’s logging system. The customer can request reports. A customer-deployed tool writes every interaction — user, prompt, output, token count, cost — into the customer’s own DynamoDB tables, encrypted with customer-managed keys, retained per the customer’s policy. When an auditor asks for evidence of AI use, the customer produces it from their own infrastructure.
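A per-interaction audit row can be sketched as a plain DynamoDB item. The attribute names below are illustrative, not the platform’s actual schema; hashing the prompt and output keeps rows small and tamper-evident, though a customer’s retention policy might call for full text instead.

```python
import hashlib
import time

def audit_item(user: str, prompt: str, output: str,
               tokens_in: int, tokens_out: int, cost_usd: float) -> dict:
    """One row per AI interaction, destined for a customer-owned DynamoDB
    table encrypted with a customer-managed KMS key. Schema is hypothetical."""
    now = int(time.time())
    return {
        "pk": f"USER#{user}",
        "sk": f"TS#{now}",
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "tokens_in": tokens_in,
        "tokens_out": tokens_out,
        "cost_usd": str(cost_usd),  # DynamoDB numbers are typically sent as Decimal/string
    }
```

When an auditor asks for evidence, a query over this table answers the question from the customer’s own infrastructure.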

Comparison: AI Coding Tools and Customer-AWS Deployment

Qualified cells reflect partial support, claimed-but-not-verified availability, or capabilities that vary by tier. Verify on each vendor’s current public documentation before procurement.

| Tool | Deployment format | Model runs in customer AWS | Customer KMS keys | Audit log in customer infra |
| --- | --- | --- | --- | --- |
| OutcomeOps | Terraform (apply to customer AWS) | Yes — Bedrock from customer Lambda | Yes | Yes — customer DynamoDB |
| GitHub Copilot Business | Microsoft SaaS | No | No | No |
| Cursor | Cursor SaaS | No | No | No |
| Augment Code | SaaS (VPC option) | VPC tier only | CMEK claimed | Vendor-managed |
| Tabnine Enterprise | SaaS or on-prem container | On-prem only, not Bedrock-native | On-prem | On-prem |
| Amazon Q Developer | AWS-managed service | AWS-managed (vendor = AWS) | No | CloudTrail only |

Status as of May 2026. Verify on vendor docs before procurement.

Only OutcomeOps ships as Terraform that applies into the customer’s AWS account by default — no SaaS path, no upgrade tier required. Tabnine and Sourcegraph Cody offer self-hosted variants (typically Docker / Kubernetes), which is closer than SaaS but still not AWS-native and usually does not include customer-keyed audit logging out of the box.

Why Terraform-as-Product Beats VPC-Peered SaaS

The architectural details translate to procurement reality. When the entire platform is Terraform that applies to the customer’s AWS account, the procurement and security review path collapses to something most enterprise teams already know:

  • Security review = Terraform review. Infosec reads main.tf, sees an internal-only ALB, ECS Fargate, Lambda Function URLs, DynamoDB, S3, Bedrock, and a VPC endpoint list — all reachable only from the corporate network — and signs off. No 200-page vendor questionnaire.
  • Vendor risk = effectively zero. Post-deployment, OutcomeOps personnel have no access to the customer environment. The license server (non-Enterprise tiers) sees only repository and PR counts — no source code, no AI interaction data.
  • Compliance scope = inherited. The customer’s existing AWS posture (HIPAA-ready, SOC 2-scoped, FedRAMP-authorized) covers the deployment because the platform runs inside that posture.
  • Upgrades = customer-controlled. When a new version ships, the customer applies a new Terraform module. No surprise vendor-side upgrades to a production system.
  • Disaster recovery = customer-controlled. The customer’s existing AWS DR tooling (cross-region replication, AWS Backup, automated snapshots) covers the platform because it is running in their account.

Years before AI coding tools existed, we built a serverless platform at Comcast called SEED that effectively banned EC2 across the org — not by writing a memo, but by making the alternative paved-road and the EC2 path increasingly inconvenient. The platform was the guardrail. The architectural lesson generalized: the most defensible enterprise platforms are the ones that ship as code engineering teams already know how to read. Terraform-as-product is the same pattern applied to AI coding in 2026.

Why “Private by Design” Isn’t a Marketing Word

A few years before OutcomeOps existed, I led the AWS Control Tower landing-zone redesign for a Fortune 50 healthcare and life-sciences enterprise. We deployed sixty-plus Service Control Policies, turned on GuardDuty across the organization, stood up Macie for PHI and PII detection, rolled out Identity Center, and standardized permission sets so every new account inherited the same access model. As part of that program we also implemented TEAM — AWS’s Temporary Elevated Access Management solution for IAM Identity Center — so engineers could request just-in-time elevated access instead of carrying standing admin rights.

The security team made us file an exception. The reason: TEAM uses AWS Amplify, and Amplify “is public.”

The AWS Console is also public. So is IAM Identity Center. So is every AWS service the security team had logged into that morning. We were making the environment ten times more secure — and the conversation kept circling back to a TLS-protected, OIDC-gated Amplify domain that exposed nothing without authentication. That is the moment you learn that the word “public” carries more weight in a security review than what the architecture actually does.

The OutcomeOps UI is designed for that conversation. There is no public DNS, no public IP, and no internet-facing component anywhere in the deployment. The “is it public?” question has a one-word answer: no. Procurement never reaches that argument because there is nothing to argue about.

What Gets Deployed: The Architectural Bill of Materials

A customer-deployed AI coding platform is built from AWS-native services. Here is what OutcomeOps applies into a customer AWS account:

Edge and identity

  • Internal Application Load Balancer (ALB) — VPC-only, no public DNS, no public IP. Reachable only from the corporate network via Direct Connect plus Transit Gateway.
  • OIDC at the ALB — handshake against the customer’s IdP (Azure AD, Okta, IAM Identity Center). The ALB injects an AWS-signed x-amzn-oidc-data JWT on every authenticated request, so downstream services can verify identity without re-implementing auth.
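Downstream, that `x-amzn-oidc-data` header is a JWT whose middle segment carries the identity claims. The sketch below only decodes the payload; a production service must also verify the ES256 signature against the ALB’s regional public key (looked up by the `kid` in the header segment), a step omitted here for brevity.

```python
import base64
import json

def alb_claims(token: str) -> dict:
    """Extract identity claims from the x-amzn-oidc-data JWT an ALB injects.
    WARNING: this decodes only; it does NOT verify the ES256 signature, which
    a real service must do against the ALB's regional public-key endpoint."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore any stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Because the ALB signs the header itself, downstream Lambdas get a verified identity without ever talking to the IdP.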

Compute

  • ECS Fargate (UI) — Express plus a React build in a single task. The Express server is a thin SigV4-signing reverse proxy: validates the OIDC JWT, signs the downstream request with the task’s IAM role, and forwards to the platform Lambdas. No business logic. No data persistence.
  • Lambda Function URLs with AWS_IAM auth — workspace-management and chat-streaming. SigV4 is the only way in. No API Gateway, no public function URLs, no IAM-less endpoints. The UI proxy holds the only credential that can sign.
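What the proxy’s task role actually signs is defined by the SigV4 specification. In practice the proxy would use botocore’s `SigV4Auth` rather than hand-rolling anything; this stdlib-only sketch shows just the canonical-request step, to make the signing input concrete.

```python
import hashlib

def canonical_request(method: str, path: str, query: str,
                      headers: dict[str, str], payload: bytes) -> str:
    """Build the SigV4 canonical request (step 1 of the signing process):
    method, URI, query string, canonical headers, signed-header list, and the
    hex SHA-256 of the payload, newline-joined per the AWS spec."""
    lower = {k.lower().strip(): v.strip() for k, v in headers.items()}
    names = sorted(lower)
    canonical_headers = "".join(f"{n}:{lower[n]}\n" for n in names)
    signed_headers = ";".join(names)
    payload_hash = hashlib.sha256(payload).hexdigest()
    return "\n".join([method, path, query, canonical_headers,
                      signed_headers, payload_hash])
```

Because the Function URLs require AWS_IAM auth, any request that lacks a valid signature from an authorized role is rejected before it reaches the function.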

Data and AI

  • DynamoDB — audit logs (one row per AI interaction, customer-keyed), workspace metadata, generation state, and OAuth tokens. Reached via gateway VPC endpoint.
  • S3 — ingested code-maps, ADR markdown, and generated artifacts. Versioned, encrypted with customer KMS, reached via gateway VPC endpoint.
  • S3 Vectors / OpenSearch — vector store for the knowledge base. Customer choice depending on existing footprint.
  • Bedrock — Claude (Sonnet for planning, Haiku for validation), invoked through the Bedrock Runtime interface endpoint. The call never leaves the customer’s VPC.
  • Comprehend — PII detection on prompts and responses, via interface endpoint.
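The Comprehend step can be sketched as detect-then-redact. `detect_pii_entities` returns entities with `Type`, `BeginOffset`, and `EndOffset`; the pure redaction helper below mirrors that shape, while the placement of this step in the actual request pipeline is an assumption for illustration.

```python
def redact(text: str, entities: list[dict]) -> str:
    """Replace each detected PII span with its type tag, e.g. '[EMAIL]'.
    Entity dicts follow the shape Comprehend's detect_pii_entities returns."""
    out, cursor = [], 0
    for e in sorted(entities, key=lambda e: e["BeginOffset"]):
        out.append(text[cursor:e["BeginOffset"]])
        out.append(f"[{e['Type']}]")
        cursor = e["EndOffset"]
    out.append(text[cursor:])
    return "".join(out)

def detect_and_redact(text: str) -> str:
    """Call Comprehend through the VPC interface endpoint, then redact.
    boto3 is imported lazily so the pure redact() helper is testable offline."""
    import boto3
    ents = boto3.client("comprehend").detect_pii_entities(
        Text=text, LanguageCode="en")["Entities"]
    return redact(text, ents)
```

Running this inside the VPC means even the PII scan itself never leaves the customer’s network path.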

Operations and networking

  • KMS — customer-managed keys for all encryption at rest. Key policies, rotation, and access scoped by the customer’s existing IAM.
  • VPC interface endpoints — Lambda, Bedrock Runtime, Comprehend, SSM, KMS, ECR API and DKR, CloudWatch Logs, SQS, S3 Vectors, and STS. Every AWS-internal call rides PrivateLink.
  • Network reach — AWS-internal traffic stays inside the VPC via PrivateLink. User-bound traffic reaches the internal ALB over Direct Connect plus Transit Gateway from the corporate network. The only internet-bound traffic in the whole deployment is the IdP OIDC redirect itself, routed through the customer’s centralized egress.

None of these components are exotic. Any team running production AWS workloads recognizes the pattern. That recognition is the point: review effort scales with familiarity, and Terraform applying standard AWS services is something every enterprise infosec team already knows how to assess. The architecture diagram fits on one slide, and every box on it is a service the customer’s account is already paying for.

What This Means for the AI Coding Procurement Path

Engineering leaders evaluating AI coding tools in 2026 face a recurring procurement pattern: the SaaS-vendor path takes months because every new SaaS tool triggers a full vendor risk assessment, BAA negotiation if PHI is in scope, sub-processor disclosure review, and architectural review board sign-off. Many pilots never reach the engineering team because the compliance pre-work consumes the budget.

The customer-AWS deployment path collapses that timeline. There is no third-party data flow to document. There is no vendor environment to assess. The customer’s existing AWS posture — whatever it is — covers the deployment. Compliance teams review Terraform; engineering teams pilot the platform. Both happen in parallel.

For a deeper treatment of how this plays out in regulated industries, see AI Coding Tools for Regulated Industries. For the broader self-hosted question, see Self-Hosted AI Coding Platforms.

How to Evaluate

The free two-week proof of concept is structured for this evaluation:

  • Day 1–3: Apply the Terraform into a non-production AWS account. Verify the architectural bill of materials matches your existing patterns.
  • Week 1: Connect 20 representative repositories. Generate code against real internal patterns. Inspect the audit logs in your DynamoDB.
  • Week 2: Compliance review of the deployment model. Verify no data egress. Confirm the existing AWS posture covers the deployment without new vendor assessment.

Book an enterprise briefing to start the PoC, or run the five-minute Readiness Assessment to get a written report on where your organization sits before scheduling.

Related reading