
Architecture & Data Flow

Veris supports two execution modes that determine where your agent runs during simulations. Both modes use the same CLI, scenarios, graders, and evaluation pipeline — only the hosting boundary changes.

| Mode | Your agent runs on | Best for |
| --- | --- | --- |
| Veris-hosted (default) | Veris cloud (GKE) | Getting started quickly, no infra to manage |
| Customer-hosted | Your own K8s cluster | Data residency, compliance, air-gapped environments |

Architecture Overview


Mode 1: Veris-Hosted (Default)

Your agent code is pushed to the Veris container registry and runs in an isolated pod on Veris-managed GKE infrastructure.

Veris-Hosted Architecture

System Boundaries

Customer Infra — everything under your control:

  • CLI / CI pipeline — runs on your machine or CI runner, pushes code and sends commands
  • LLM provider — your agent’s LLM calls go directly to the provider using your own API keys

Customer / Veris Infra — your code running on Veris infrastructure:

  • Simulation pod — an isolated, ephemeral container on Veris GKE with gVisor sandboxing. Contains your agent, mock services, simulated user, simulation engine, eval & reporting, and log uploader

Veris Infra — Veris-managed services:

  • API & Orchestration — authentication, job dispatch, monitoring
  • Data Lake — simulation transcripts, evaluations, reports, encrypted secrets

What’s Inside a Simulation Pod

Each pod is a self-contained simulation environment:

| Component | Role |
| --- | --- |
| Your agent | Your code, started via the `entry_point` defined in `veris.yaml` |
| Mock services | LLM-powered mocks for CRM, Calendar, Stripe, Jira, etc. |
| Simulated user | An LLM-powered actor that drives the conversation with your agent |
| Simulation engine | Orchestrates the scenario, manages turns, and records the transcript |
| Eval & reporting | Grading, root-cause analysis, and actionable recommendations |
| Log uploader | Sidecar that streams logs and transcripts to the data lake |
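These components are wired together by `veris.yaml`. A minimal sketch follows — only `entry_point`, the mock services, and the actor are named in these docs, so the exact field names and structure below are illustrative, not the authoritative schema:

```yaml
# Illustrative veris.yaml -- field names beyond entry_point, services,
# and the actor section are assumptions, not the authoritative schema.
entry_point: "python main.py"   # command that launches your agent in the pod

services:                       # LLM-powered mocks started alongside the agent
  - name: crm
  - name: calendar

actor:                          # the simulated user that drives the conversation
  persona: "customer trying to reschedule an appointment"
```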

Data Flow

Egress (you send):

| Data | How It’s Sent | Where It’s Stored |
| --- | --- | --- |
| Docker image (agent code + dependencies) | `veris env push` | Veris Container Registry |
| `veris.yaml` (services, actor config, entry point) | Bundled in the Docker image | Veris Container Registry |
| Environment variables & secrets | `veris env set KEY=VALUE --secret` | Database, encrypted at rest (Fernet/AES) |
| Scenarios (test case definitions) | `veris scenarios generate` or manual upload | Database + Data Lake |

Ingress (you receive):

| Data | How It’s Retrieved |
| --- | --- |
| Simulation transcripts | CLI: `veris simulations get` / Console UI |
| Evaluation scores | CLI: `veris evaluation-runs status` / Console UI |
| Reports | CLI: `veris reports get -o report.html` / Console UI |
| Run status & metadata | CLI: `veris run status` / Console UI |

Security Model

| Concern | How It’s Handled |
| --- | --- |
| Code isolation | Each simulation runs in a gVisor-sandboxed container with no access to the host or other tenants |
| Secrets at rest | Encrypted with Fernet (AES) in the database; decrypted only at pod creation time |
| Secrets in transit | All API communication over HTTPS/TLS |
| Network isolation | Pods have no access to Veris infrastructure; only outbound to LLM providers via customer keys |
| Ephemeral containers | Pods are destroyed after the simulation completes; no persistent state |

Mode 2: Customer-Hosted

In customer-hosted mode, your agent code never leaves your infrastructure. The Veris backend connects to your Kubernetes cluster via the K8s API and creates simulation Jobs directly in a namespace you control.

Customer-Hosted Architecture

How It Works

  1. You register your K8s cluster with Veris (API server URL, CA cert, bearer token)
  2. You set your environment’s execution_mode to customer_hosted
  3. When you run simulations, the Veris backend creates Jobs, Secrets, and ConfigMaps on your cluster instead of Veris GKE
  4. The log uploader sidecar uses a short-lived GCS token (via service account impersonation) to upload transcripts back to Veris storage
  5. Evaluation, reporting, and all other pipeline steps work identically
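Because the Jobs, Secrets, and ConfigMaps from step 3 land in a namespace you control, you can observe a run with plain kubectl (the Job name placeholder below is illustrative — substitute the name of an actual Job created on your cluster):

```bash
# List the resources Veris created for the current run
kubectl get jobs,pods,secrets,configmaps -n veris

# Tail the simulation pod's logs (replace <job-name> with a real Job name)
kubectl logs -n veris job/<job-name> --all-containers -f
```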

System Boundaries

Customer Infra — everything runs on your side:

  • CLI / CI pipeline — pushes code and sends commands
  • K8s cluster — your own cluster where simulation pods run, in a dedicated veris namespace
  • Image registry — optionally push to your own container registry instead of Veris’s
  • LLM provider — agent LLM calls stay within your network perimeter

Veris Infra — only orchestration and storage:

  • API & Orchestration — dispatches jobs to your cluster via K8s API using a bearer token you provide
  • Data Lake — receives transcripts via short-lived upload tokens
  • Database — stores run metadata, scenarios, evaluations, encrypted secrets

Setup

1. Prepare your cluster

Create a namespace and service account with the required RBAC permissions:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: veris
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: veris-simulation
  namespace: veris
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: veris-simulation
  namespace: veris
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "delete", "watch"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    verbs: ["create", "get", "delete", "list"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
```
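Note that in standard Kubernetes RBAC a Role grants nothing until it is bound to a subject, so a RoleBinding linking the Role to the ServiceAccount is also needed (names below match the manifest above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: veris-simulation
  namespace: veris
subjects:
  - kind: ServiceAccount
    name: veris-simulation
    namespace: veris
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: veris-simulation
```

On Kubernetes 1.24+ you can then mint a bearer token for registration with `kubectl create token veris-simulation -n veris`.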

2. Register your cluster

```bash
veris clusters register \
  --name "my-production-cluster" \
  --provider gke \
  --api-server-url "https://<cluster-ip>" \
  --ca-certificate /path/to/ca.pem \
  --namespace veris \
  --token "<bearer-token>"
```

3. Test connectivity

```bash
veris clusters test <cluster-id>
```

4. Set your environment to customer-hosted

```bash
veris env set-execution-mode customer_hosted --cluster <cluster-id>
```

5. Push images to your own registry (optional)

```bash
veris env push --image us-docker.pkg.dev/my-project/repo/my-agent --tag v1
```

This builds locally using the Veris base image, tags the output with your registry URI, and pushes using your existing Docker auth.

Data Flow

What stays on your infrastructure:

  • Your agent source code and Docker image
  • All simulation execution (pods run on your cluster)
  • Agent LLM calls (never routed through Veris)

What goes to Veris:

  • Simulation transcripts and logs (uploaded to GCS via short-lived tokens)
  • Run metadata and status updates
  • Encrypted secrets (stored in Veris DB, injected into your cluster as K8s Secrets at runtime)

What comes from Veris to your cluster:

  • K8s Job definitions (created via K8s API)
  • K8s Secrets and ConfigMaps (scenario config, environment variables)
  • Short-lived GCS upload tokens for the log sidecar

In customer-hosted mode, simulation pods do not use gVisor since they run on your own infrastructure with your own security policies. You control the node pool, network policies, and resource limits.

Security Model

| Concern | How It’s Handled |
| --- | --- |
| Code residency | Agent code never leaves your infrastructure; images stay in your registry |
| Cluster access | Veris authenticates via a scoped bearer token with minimal RBAC permissions |
| Log upload | Short-lived GCS tokens via service account impersonation (no long-lived credentials on your cluster) |
| Secrets | Encrypted at rest in the Veris DB; created as K8s Secrets on your cluster at job creation time |
| Network | The only outbound connection is from the log sidecar to GCS; all other traffic stays in your cluster |

Comparison

| | Veris-Hosted | Customer-Hosted |
| --- | --- | --- |
| Agent runs on | Veris GKE (gVisor) | Your K8s cluster |
| Code leaves your infra | Yes (pushed to Veris registry) | No |
| Infrastructure to manage | None | K8s namespace + RBAC |
| Image registry | Veris Artifact Registry | Your own (optional) |
| Log upload | Workload Identity (automatic) | Short-lived GCS tokens |
| gVisor isolation | Yes | No (your cluster, your policies) |
| Setup time | Minutes | ~30 minutes |

How DNS Interception Works

Regardless of hosting mode, your agent’s API calls are intercepted transparently — no code changes needed:

  1. You declare dns_aliases in veris.yaml (e.g., www.googleapis.com for the Calendar service)
  2. At pod startup, /etc/hosts is modified to point those domains to 127.0.0.1
  3. nginx terminates TLS with auto-generated certificates and routes traffic to the correct mock service
  4. Your agent calls https://www.googleapis.com/calendar/v3/... as usual — the mock service responds with realistic, LLM-generated data
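As a concrete sketch of the steps above (the `veris.yaml` structure here is illustrative, not the authoritative schema — only `dns_aliases` and the Calendar example come from these docs):

```yaml
# veris.yaml: declare which domains the pod should intercept
services:
  - name: calendar
    dns_aliases:
      - www.googleapis.com

# At pod startup the engine appends to /etc/hosts:
#
#   127.0.0.1  www.googleapis.com
#
# so the agent's unchanged call to https://www.googleapis.com/calendar/v3/...
# resolves to the in-pod nginx, which terminates TLS with an auto-generated
# certificate and proxies the request to the calendar mock.
```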

LLM API Calls

In both modes, your agent’s LLM calls (e.g., to OpenAI or Anthropic) go directly to the LLM provider using your own API keys. Veris does not proxy, inspect, or modify these calls.

Veris uses its own LLM budget separately for:

  • Powering mock services (generating realistic API responses)
  • Driving the simulated user (actor)
  • Scenario generation
  • Grading and report generation

Your LLM costs during simulation are the same as in production — Veris does not add any overhead to your agent’s LLM calls.