Aurva

Control Plane on AWS EKS (Terraform)

Provision the Aurva Control Plane on Amazon EKS using Terraform.

Overview

The Aurva Control Plane stores telemetry from Data Planes, runs analysis pipelines, and serves the Aurva console. This guide covers a self-hosted Control Plane deployment on Amazon EKS using the Aurva-provided Terraform module plus Helm charts.

Infrastructure Components

The Terraform module provisions the following AWS resources:

EKS

| Component | Configuration |
| --- | --- |
| Architecture | x86_64 |
| Node OS | Amazon EKS-Optimized Linux |
| Instance size | c5a.xlarge (varies with scale) |
| Storage | 100 GB minimum |

RDS (PostgreSQL)

| Component | Configuration |
| --- | --- |
| Engine version | PostgreSQL 18 |
| Instance class | db.t4g.medium (varies with scale) |
| Storage | 128 GB minimum |

OpenSearch

| Component | Configuration |
| --- | --- |
| Engine version | OpenSearch 2.19 |
| Instance class | c7g.large.search (varies with scale) |
| Nodes | 3 minimum |
| Volume | Sized based on QPS |

Storage

| Bucket | Configuration |
| --- | --- |
| Alerts & Reports | Standard, lifetime retention |
| OpenSearch snapshots | Glacier (first 120 days), then Deep Archive |

All buckets use SSE-S3 encryption, block public access, and restrict access to the Control Plane IAM role via bucket policy.
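As an illustration, a bucket policy of the following shape enforces that restriction by denying every principal except the Control Plane role. The bucket name, account ID, and role ARN here are placeholders, not the names the Terraform module actually generates:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowControlPlaneRoleOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-aurva-alerts",
        "arn:aws:s3:::example-aurva-alerts/*"
      ],
      "Condition": {
        "ArnNotEquals": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/example-aurva-controlplane"
        }
      }
    }
  ]
}
```

An explicit `Deny` with an `ArnNotEquals` condition overrides any `Allow` granted elsewhere, which is why this pattern is preferred over an allow-only policy for locking a bucket to a single role.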

Networking & IAM

| Component | Notes |
| --- | --- |
| Load balancers | Aurva deploys 1 ALB and 1 NLB |
| IAM | Read/write/delete for S3 and OpenSearch (managed by the Terraform module) |
| Terraform backend | S3 bucket with state versioning |

Deployment Prerequisites

VPC

  • A VPC with at least 2 private subnets.
  • Each subnet must have at least 96 available IPv4 addresses.

Subnet capacity check
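The 96-address requirement can be sanity-checked arithmetically: AWS reserves 5 IPv4 addresses in every subnet, so the usable count is the subnet size minus 5. A minimal sketch:

```shell
# Usable IPv4 addresses for a given prefix length.
# AWS reserves 5 addresses per subnet (network, VPC router, DNS,
# future use, and broadcast), so subtract 5 from the subnet size.
usable_ips() {
  local prefix=$1
  echo $(( (1 << (32 - prefix)) - 5 ))
}

usable_ips 24   # a /24 leaves 251 usable addresses
usable_ips 25   # a /25 leaves 123 — still above the 96 required
usable_ips 26   # a /26 leaves 59 — too small
```

This only bounds the subnet size; on a live VPC, addresses already in use also count, so check the `AvailableIpAddressCount` field reported by `aws ec2 describe-subnets` for the actual remaining capacity.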

VPC Endpoints (air-gapped only)

For air-gapped environments without a NAT Gateway, the following VPC endpoints are required (the example uses ap-south-1):

```
com.amazonaws.ap-south-1.acm-pca
com.amazonaws.ap-south-1.ec2
com.amazonaws.ap-south-1.ec2messages
com.amazonaws.ap-south-1.ecr.api
com.amazonaws.ap-south-1.ecr.dkr
com.amazonaws.ap-south-1.eks
com.amazonaws.ap-south-1.eks-auth
com.amazonaws.ap-south-1.elasticloadbalancing
com.amazonaws.ap-south-1.kms
com.amazonaws.ap-south-1.monitoring
com.amazonaws.ap-south-1.s3
com.amazonaws.ap-south-1.sts
com.amazonaws.ap-south-1.wafv2
com.amazonaws.ap-south-1.ssm
com.amazonaws.ap-south-1.ssmmessages
```
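If the endpoints are created by hand rather than by the module, a short loop covers the list for any region. This sketch only prints the `aws ec2 create-vpc-endpoint` commands for review instead of executing them; `<VPC_ID>`, `<SUBNET_IDS>`, and `<ROUTE_TABLE_IDS>` are placeholders:

```shell
# Print (not execute) the endpoint-creation commands for review.
region=ap-south-1

# All services except S3 use Interface endpoints in the private subnets.
for svc in acm-pca ec2 ec2messages ecr.api ecr.dkr eks eks-auth elasticloadbalancing kms monitoring sts wafv2 ssm ssmmessages; do
  echo "aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface --service-name com.amazonaws.${region}.${svc} --vpc-id <VPC_ID> --subnet-ids <SUBNET_IDS> --private-dns-enabled"
done

# S3 uses a Gateway endpoint attached to the private route tables.
echo "aws ec2 create-vpc-endpoint --vpc-endpoint-type Gateway --service-name com.amazonaws.${region}.s3 --vpc-id <VPC_ID> --route-table-ids <ROUTE_TABLE_IDS>"
```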

ACM Certificate

An ACM certificate matching your company domain (e.g. *.aurva.com) must already exist in the AWS account. The Terraform module attaches it to the load balancers.

Jump Server

A Linux jump server inside the same VPC, with the following CLIs installed:

| CLI | Verify |
| --- | --- |
| Helm | `helm version` |
| kubectl | `kubectl version` |
| AWS CLI | `aws --version` |
| curl | `curl --version` |
| tar | `tar --version` |
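All five CLIs can be checked in one pass; this loop reports any binary missing from `PATH`:

```shell
# Report which of the required CLIs are installed on the jump server.
for cmd in helm kubectl aws curl tar; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: ok"
  else
    echo "$cmd: MISSING"
  fi
done
```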

Networking Prerequisites

| Source | Destination | Port | Purpose |
| --- | --- | --- | --- |
| VPC | resources.deployment.aurva.io | 443 | Download deployment scripts and resources |
| VPC | bifrost.aurva.io | 443 | License validation |
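Outbound HTTPS to both hosts can be confirmed from the jump server before starting the deployment, for example:

```shell
# Confirm outbound HTTPS (port 443) reachability to the Aurva endpoints.
for host in resources.deployment.aurva.io bifrost.aurva.io; do
  if curl -s --connect-timeout 5 -o /dev/null "https://${host}"; then
    echo "${host}: reachable"
  else
    echo "${host}: unreachable"
  fi
done
```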

Deployment Workflow

The deployment is split into two phases: infrastructure (Terraform) and application (Helm).

Infrastructure — Step 0: Configure CLIs and AWS credentials

```shell
# AWS credentials (pick one)
aws configure          # static IAM keys
aws sso login          # SSO
```

Infrastructure — Step 1: Download the bundle

```shell
mkdir -p /opt/aurva-controlplane
cd /opt/aurva-controlplane
curl -O https://resources.deployment.aurva.io/manifests/main/install-controlplane-aws-kube.tar.gz
tar -xzvf install-controlplane-aws-kube.tar.gz
```

After extraction:

```
install-controlplane-aws-kube/
├── infrastructure/
└── helm/
```

Infrastructure — Step 2: Configure Terraform variables

```shell
cd install-controlplane-aws-kube/infrastructure/terraform
vi terraform.tfvars.tpl
```

Mandatory variables:

| Variable | Description | Example |
| --- | --- | --- |
| `aws_region` | Target AWS region | ap-south-1 |
| `create_vpc` | Provision a new VPC (true) or use an existing one (false) | false |
| `air_gapped` | Air-gapped deployment with no internet access | false |
| `create_rds` | Provision a new RDS instance | true |
| `create_opensearch` | Provision a new OpenSearch cluster | true |
| `create_eks` | Provision a new EKS cluster (true) or reuse an existing one (false) | true |

Networking (when create_vpc = false):

| Variable | Example |
| --- | --- |
| `vpc_id` | vpc-0c1e176679c6f5778 |
| `public_subnet_ids` | ["subnet-03c901a039a89e31b", "subnet-0fcdac58aeef4329e"] |
| `private_subnet_ids` | ["subnet-02b70317d0fa1b5d7", "subnet-06aa8777e1dab9cb8"] |

Networking (when create_vpc = true):

| Variable | Example |
| --- | --- |
| `vpc_cidr` | 10.3.0.0/16 |
| `public_subnet_cidrs` | ["10.3.102.0/24", "10.3.101.0/24"] |
| `private_subnet_cidrs` | ["10.3.3.0/24", "10.3.1.0/24"] |

OpenSearch:

| Variable | Example |
| --- | --- |
| `os_instance_type` | t3.small.search |
| `number_of_nodes` | 3 (must be ≥ number of private subnets / AZs) |
| `ebs_volume_size` | 100 |

RDS:

| Variable | Example |
| --- | --- |
| `rds_instance_class` | db.t4g.medium |
| `rds_storage` | 256 |

EKS:

| Variable | Example |
| --- | --- |
| `node_group_instance_type` | ["t3a.medium"] |
| `cluster_name` | <EKS_CLUSTER_NAME> (only when create_eks = false) |
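Pulled together, a `terraform.tfvars` for an existing VPC might look like the sketch below. It is assembled from the example values in the tables above; every ID and size is illustrative and must be replaced with your own:

```hcl
# Illustrative terraform.tfvars — substitute your own IDs and sizes.
aws_region        = "ap-south-1"
create_vpc        = false
air_gapped        = false
create_rds        = true
create_opensearch = true
create_eks        = true

# Existing networking (create_vpc = false)
vpc_id             = "vpc-0c1e176679c6f5778"
public_subnet_ids  = ["subnet-03c901a039a89e31b", "subnet-0fcdac58aeef4329e"]
private_subnet_ids = ["subnet-02b70317d0fa1b5d7", "subnet-06aa8777e1dab9cb8"]

# OpenSearch
os_instance_type = "t3.small.search"
number_of_nodes  = 3
ebs_volume_size  = 100

# RDS
rds_instance_class = "db.t4g.medium"
rds_storage        = 256

# EKS
node_group_instance_type = ["t3a.medium"]
```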

Infrastructure — Step 3: Run the preflight script

This creates or validates the S3 bucket used as the Terraform backend.

```shell
chmod +x preflight.sh
./preflight.sh
```

Infrastructure — Step 4: Plan and apply

```shell
terraform plan  -var-file=tfvars/terraform.tfvars
terraform apply -var-file=tfvars/terraform.tfvars
```

Pod Specifications

Aurva deploys the following workloads into the EKS cluster:

| Pod | Type | Replicas | Memory (request / limit) | CPU (request / limit) |
| --- | --- | --- | --- | --- |
| aurva-alerts | Deployment | min 1, max 2 | 500 MiB / 1024 MiB | 500m / 1000m |
| aurva-anomaly-detection | Deployment | min 1, max 2 | 2 GiB / 4 GiB | 1000m / 2000m |
| aurva-command | Deployment | min 1, max 2 | 1 GiB / 1 GiB | 1000m / 1000m |
| aurva-gateway | Deployment | min 1, max 2 | 500 MiB / 500 MiB | 200m / 500m |
| aurva-internal-gateway | Deployment | min 1, max 2 | 300 MiB / 600 MiB | 300m / 600m |
| aurva-log-ingestion | Deployment | min 1, max 2 | 300 MiB / 600 MiB | 300m / 600m |
| aurva-queryprocessor | Deployment | min 1, max 2 | 300 MiB / 600 MiB | 300m / 600m |
| aurva-redis | StatefulSet | 1 | 300 MiB / 600 MiB | 300m / 600m |
| aurva-riskscore | Deployment | min 1, max 2 | 300 MiB / 600 MiB | 300m / 600m |
| aurva-system-health | Deployment | min 1, max 2 | 300 MiB / 600 MiB | 300m / 600m |
| aurva-webapp | Deployment | min 1, max 2 | 300 MiB / 600 MiB | 300m / 600m |

Application — Step 1: Export Helm values

```shell
terraform output -raw helm_values_snippet > ../helm/env/production.yaml
```

Application — Step 2: Set the Kubernetes context

```shell
aws eks update-kubeconfig \
  --name $(terraform output -raw cluster_name) \
  --region $(terraform output -raw aws_region)
```

Application — Step 3: Install the Helm chart

```shell
cd ../helm
helm upgrade --install aurva-controlplane . \
  -f values.yaml \
  -f env/production.yaml \
  -n aurva-controlplane \
  --create-namespace
```

Verification

```shell
kubectl -n aurva-controlplane get pods
```

All pods should reach Running. Once the load balancers are healthy, the Aurva console becomes reachable at the configured domain.