Aurva

Data Plane on GCP GKE

Deploy the Aurva Data Plane on Google Kubernetes Engine.

Overview

The Aurva Data Plane collects discovery, classification, query activity, and posture telemetry from your GCP environment and forwards it to the Aurva Control Plane for analysis and visualization.

This guide covers Data Plane deployment on an existing GKE cluster, including infrastructure prerequisites, networking, and the Terraform + Helm deployment workflow.

High-Level Architecture

Data Plane on GCP GKE architecture

Infrastructure Prerequisites

The customer must provision the following before installing the Data Plane:

Compute

| Component | Requirement |
| --- | --- |
| Node pool | A dedicated node pool is recommended (an existing one can be reused) |
| Architecture | x86_64 |
| Node OS | Container-Optimized OS |
| Instance size | n2d-standard-4 (4 vCPU, 16 GB RAM) minimum |
| Storage | 50 GB minimum |
| Node count | Production: minimum 2; scale out as needed. PoC: 1 is acceptable. |
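If a dedicated node pool does not exist yet, it can be created to match the sizing above. The sketch below only previews the command; the pool name aurva-dataplane and the cluster/region values are placeholders, not names from this guide.

```shell
# Sketch: create a dedicated node pool matching the sizing table above.
# CLUSTER_NAME, REGION, and the pool name "aurva-dataplane" are placeholders.
CLUSTER_NAME="my-gke-cluster"
REGION="asia-south1"

CMD="gcloud container node-pools create aurva-dataplane \
  --cluster ${CLUSTER_NAME} --region ${REGION} \
  --machine-type n2d-standard-4 --disk-size 50 \
  --num-nodes 2 --image-type COS_CONTAINERD"

# Preview only; run the command itself once the values are correct.
echo "$CMD"
```

COS_CONTAINERD selects Container-Optimized OS with containerd, matching the Node OS requirement above.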

Storage & IAM

| Resource | Configuration |
| --- | --- |
| Cloud Storage bucket | Lifetime retention; public access blocked; bucket policy restricts access to the Data Plane service account |
| IAM | Read-only permissions provisioned by the Aurva-supplied Terraform module |
| Terraform backend | GCS bucket for state |
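For reference, the bucket settings in the table map to gcloud flags as sketched below. This is illustrative only; the Aurva-supplied Terraform module provisions the actual bucket and IAM bindings, and the project and bucket names are placeholders.

```shell
# Sketch: a GCS bucket with public access blocked and uniform access,
# as the table above requires. PROJECT_ID and BUCKET are placeholders.
PROJECT_ID="my-gcp-project"
BUCKET="aurva-dataplane-bucket"

CMD="gcloud storage buckets create gs://${BUCKET} \
  --project ${PROJECT_ID} \
  --public-access-prevention \
  --uniform-bucket-level-access"

echo "$CMD"   # preview only; drop the echo to execute
```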

Pod Specifications

Aurva deploys the following workloads into the GKE cluster (production sizing):

| Pod | Type | Replicas | Memory | CPU | Product |
| --- | --- | --- | --- | --- | --- |
| controller | Deployment | min 2, max 3 | req 500 MiB / lim 1024 MiB | req 500 m / lim 1000 m | All (DAM, Data Flow, DSPM) |
| pii-analyser | Deployment | min 1, max 3 | req 2 GiB / lim 4 GiB | req 1000 m / lim 2000 m | All |
| ocr | Deployment | min 1, max 1 | req 1 GiB / lim 1 GiB | req 1000 m / lim 1000 m | DSPM |
| postgresql | StatefulSet | 1 | req 500 MiB / lim 500 MiB | req 200 m / lim 500 m | All |
| ebpf-agent | DaemonSet | One per node | req 300 MiB / lim 600 MiB | req 300 m / lim 600 m | Data Flow |

Networking Prerequisites

The following outbound connectivity must be permitted from the GKE node VPC:

| Destination | Port | Purpose |
| --- | --- | --- |
| Control Plane URL (command.aurva.io for production, command.uat.aurva.io for PoC) | 443 | Data Plane → Control Plane communication |
| registry.aurva.io | 443 | Pull Aurva container images |
| bifrost.aurva.io | 443 | License validation |
| resources.deployment.aurva.io | 443 | Download deployment scripts and resources |
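Outbound reachability can be spot-checked from a host inside the VPC. A rough sketch, assuming curl is available; any HTTP response (even an error status) counts as reachable, and only a connection failure is reported as FAIL.

```shell
# Rough reachability check for the required endpoints (production URLs).
# Any HTTP response counts as reachable; a connect failure prints FAIL.
RESULTS=""
for host in command.aurva.io registry.aurva.io \
            bifrost.aurva.io resources.deployment.aurva.io; do
  if curl -sS -o /dev/null --connect-timeout 5 "https://${host}" 2>/dev/null; then
    RESULTS="${RESULTS}OK   ${host}:443
"
  else
    RESULTS="${RESULTS}FAIL ${host}:443
"
  fi
done
printf '%s' "$RESULTS"
```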

Prerequisites

| Tool | Reference |
| --- | --- |
| Terraform CLI | developer.hashicorp.com/terraform/install |
| Helm CLI | helm.sh/docs/intro/install |
| gcloud configured for the target project | Run gcloud auth login and select the right project |
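A quick preflight can confirm the tooling is in place; kubectl is included here because the verification steps later in this guide use it.

```shell
# Check that the CLIs used in this guide are on PATH.
# kubectl is included because the verification steps rely on it.
MISSING=""
for tool in terraform helm gcloud kubectl; do
  command -v "$tool" >/dev/null 2>&1 || MISSING="${MISSING} ${tool}"
done
if [ -n "$MISSING" ]; then
  echo "Missing tools:${MISSING}"
else
  echo "All required tools are installed"
fi
```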

Deployment Workflow

The deployment is split into two phases: infrastructure (Terraform) and application (Helm).

Infrastructure — Step 1: Download the bundle

mkdir -p /opt/aurva-dataplane
cd /opt/aurva-dataplane
curl -O https://resources.deployment.aurva.io/manifests/main/install-dataplane-gcp-kube.tar.gz
tar -xzvf install-dataplane-gcp-kube.tar.gz

After extraction:

install-dataplane-gcp-kube/
├── infrastructure/
└── helm/

Infrastructure — Step 2: Configure Terraform variables

cd install-dataplane-gcp-kube/infrastructure
cp terraform.tfvars.tpl tfvars/terraform.tfvars
vi tfvars/terraform.tfvars

Mandatory variables:

| Variable | Description |
| --- | --- |
| project_id | Target GCP project ID |
| region | Region hosting the GKE cluster (e.g., asia-south1) |
| cluster_name | Name of the target GKE cluster |
| network_name | VPC network (and subnet) where the cluster resides |
| products | dspm, data_flow, or both |
| company_id | Aurva tenant ID; find it in the Aurva console |
| dataplane_name | Friendly name for this Data Plane (commonly the cluster name) |
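A filled-in file might look like the sketch below. Every value is a placeholder, and the exact shape of products (single string vs. list) should follow whatever the shipped template uses.

```shell
# Write an example tfvars file (placeholder values only) to a temp path.
# The "products" list form is an assumption; follow the shipped template.
cat > /tmp/terraform.tfvars.example <<'EOF'
project_id     = "my-gcp-project"
region         = "asia-south1"
cluster_name   = "my-gke-cluster"
network_name   = "my-vpc"
products       = ["dspm", "data_flow"]
company_id     = "your-company-id"
dataplane_name = "my-gke-cluster"
EOF
echo "wrote /tmp/terraform.tfvars.example"
```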

Infrastructure — Step 3: Run the preflight script

This creates or validates the GCS bucket used as the Terraform backend.

chmod +x preflight.sh
./preflight.sh

Infrastructure — Step 4: Plan and apply

terraform plan  -var-file=tfvars/terraform.tfvars
terraform apply -var-file=tfvars/terraform.tfvars

Infrastructure — Step 5: Capture the namespace

export KUBE_NAMESPACE=$(terraform output -raw kubernetes_namespace)

Application — Step 1: Export Helm values

terraform output -raw helm_values_snippet > ../helm/env/production.yaml

Application — Step 2: Set the Kubernetes context

gcloud container clusters get-credentials \
  $(terraform output -raw cluster_name) \
  --region $(terraform output -raw region)

Application — Step 3: Install the Helm chart

cd ../helm
helm upgrade --install aurva-dataplane . \
  -f values.yaml \
  -f env/production.yaml \
  -n $KUBE_NAMESPACE \
  --create-namespace

Verification

kubectl -n $KUBE_NAMESPACE get pods
kubectl -n $KUBE_NAMESPACE logs deployment/controller -f
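To block until the controller is fully rolled out before checking the console, a rollout wait can help. A sketch that only previews the command; the deployment name comes from the pod table above, the fallback namespace is a placeholder, and the 300-second timeout is an arbitrary example value.

```shell
# Sketch: wait for the controller rollout to finish before checking
# the Aurva console. The fallback namespace and timeout are examples.
NS="${KUBE_NAMESPACE:-aurva-dataplane}"
CMD="kubectl -n ${NS} rollout status deployment/controller --timeout=300s"
echo "$CMD"   # preview only; drop the echo to run it against the cluster
```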

In the Aurva console, navigate to Settings → Monitoring Configuration. The new Data Plane should appear and be marked Healthy within a few minutes.

Next Steps