Aurva

OpenSearch for On-Premises

Deploy a self-managed OpenSearch cluster for Aurva in on-premises environments.

Overview

OpenSearch is part of Aurva's PaaS Control Plane infrastructure. It stores DAM logs collected from various sources, including:

  • eBPF — kernel-level observability and event data
  • Audit logs — system and application audit trails
  • Aurva log exporter — logs exported from applications and services

This installer sets up a self-managed OpenSearch cluster (single-node or multi-node) so the Control Plane can ingest and query these logs.

Infrastructure Prerequisites

Production deployments requiring multi-node high availability need at least 3 VMs. PoC environments can use 1–2 VMs.

Component        | Requirement                                     | Remarks
CPU              | Minimum 2 vCPU (production)                     | Final sizing decided with the Aurva team based on QPS
Memory           | Minimum 8 GB RAM (production)                   | Final sizing decided with the Aurva team based on QPS
Operating System | RHEL 9 or Oracle Linux 9                        |
Disk             | Data mount /data, cold-storage mount /snapshots | Final sizing decided with the Aurva team based on QPS and retention
Privileges       | Root or sudo access                             | Required for installation

Networking Prerequisites

The following inbound access is required on the OpenSearch servers:

Source → Destination                                     | Port | Purpose
Aurva Control Plane application VMs → all OpenSearch VMs | 9200 | OpenSearch HTTP API
All OpenSearch VMs ↔ each other                          | 9300 | Node-to-node transport

The following outbound access is required:

Destination                   | Port | Purpose
resources.deployment.aurva.io | 443  | Download installer packages

Installation Steps

Multi-node note: for DEPLOYMENT_MODE="multi", run these steps on every OpenSearch VM using the same config.env file. The script auto-detects the local node by IP/hostname.

Step 1 / 6 — Become root

sudo su

Step 2 / 6 — Download the installer

mkdir -p /opt/aurva
cd /opt/aurva
curl -O https://resources.deployment.aurva.io/manifests/main/install-opensearch-onprem-vms.tar.gz
tar -xzvf install-opensearch-onprem-vms.tar.gz
rm install-opensearch-onprem-vms.tar.gz

Step 3 / 6 — Navigate to the installer directory

cd /opt/aurva/install-opensearch-onprem-vms

The directory structure should look like:

install-opensearch-onprem-vms/
├── README.md
├── config.env.example
├── main.sh
├── prerequisites.sh
├── utils.sh
├── opensearch-patch-assist.sh
├── rollback.sh
└── uninstall.sh

Step 4 / 6 — Create and edit the configuration

cp config.env.example config.env
vim config.env

All configuration options are self-documented inside config.env.example with examples.
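As an illustration only, a multi-node configuration might take a shape like the sketch below. Apart from DEPLOYMENT_MODE (mentioned in the multi-node note above), every variable name here is hypothetical; config.env.example is the authoritative reference for the actual keys.

```shell
# Illustrative sketch only; the real keys are documented in config.env.example.
DEPLOYMENT_MODE="multi"                     # "single" or "multi"
# Hypothetical names below; confirm against config.env.example:
NODE_IPS="10.0.1.10,10.0.1.11,10.0.1.12"    # all OpenSearch VMs, identical on every node
DATA_PATH="/data"                           # data mount from the disk prerequisites
SNAPSHOT_PATH="/snapshots"                  # cold-storage mount
ADMIN_PASSWORD="change-me"
```

Because the same config.env is reused on every VM in multi-node mode, any node list must contain all cluster members; the installer then matches the local machine against it by IP/hostname.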

Step 5 / 6 — Run prerequisites and installation

sudo bash prerequisites.sh
sudo bash main.sh

Installation typically takes 3–4 minutes.

Step 6 / 6 — Verify

For multi-node clusters, verify only after installation has completed on every VM; the cluster becomes active once all nodes have joined.

curl -k -u admin:"YOUR_PASSWORD" https://localhost:9200/_cluster/health?pretty

Expected output: cluster status green.
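For scripted verification, a minimal sketch along these lines can poll health until the cluster reports green. The password, retry count, and sleep interval are placeholders; the JSON is parsed with grep/sed to avoid assuming jq is installed.

```shell
# Pull the "status" field out of _cluster/health JSON.
extract_status() {
  echo "$1" | grep -o '"status" *: *"[a-z]*"' | head -n1 | sed 's/.*"\([a-z]*\)"$/\1/'
}

# Poll until the cluster is green (placeholder password and limits).
wait_for_green() {
  for i in $(seq 1 60); do
    health=$(curl -sk -u admin:"YOUR_PASSWORD" https://localhost:9200/_cluster/health)
    status=$(extract_status "$health")
    echo "attempt $i: cluster status=$status"
    [ "$status" = "green" ] && return 0
    sleep 5
  done
  return 1
}
```

On a multi-node install this loop naturally waits out the window where nodes are still joining.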

Troubleshooting

Check the service status, follow its logs, and restart it if needed:

sudo systemctl status opensearch
sudo journalctl -u opensearch -f
sudo systemctl restart opensearch

If the installer fails, share the install log with the Aurva team:

tail -f /var/log/opensearch-install.log

Password Reset

sudo bash main.sh --reset-password

For multi-node clusters, run this on every OpenSearch VM.

Patching OpenSearch VMs

Prerequisites:

  • At least 3 nodes in the cluster (required for quorum; taking one node offline in a smaller cluster can make the whole cluster unavailable).
  • Patch one node at a time and wait for the cluster to return to green before moving to the next node.

Get the patching assist script

If you installed OpenSearch using the steps above, the script is already present:

cd /opt/aurva/install-opensearch-onprem-vms

Otherwise, fetch it directly:

sudo sh -c 'mkdir -p /opt/aurva && curl -sL https://resources.deployment.aurva.io/manifests/main/install-opensearch-onprem-vms.tar.gz | tar -xzf - -C /tmp install-opensearch-onprem-vms/opensearch-patch-assist.sh && mv /tmp/install-opensearch-onprem-vms/opensearch-patch-assist.sh /opt/aurva/ && rm -rf /tmp/install-opensearch-onprem-vms'

1. Dry-run — validate readiness (no changes made)

Validates connectivity, cluster health, node count (≥ 3), unassigned shards, and other prerequisites. Makes no system changes.

sudo OS_HOST=https://localhost:9200 OS_USER=admin OS_PASS='yourpassword' \
  ./opensearch-patch-assist.sh --dry-run

What to look for:

  • [PASS] — root access, tools, OpenSearch reachable, cluster green, node count ≥ 3, unassigned shards 0, post-reboot service check.
  • [WARN] — review (e.g. yellow status, mid-rebalance).
  • [FAIL] — must fix before proceeding (e.g. red, node count < 3, unassigned shards).

If the dry-run fails, fix the reported issues and re-run until it passes (or only non-blocking warnings remain).
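The key health metrics the dry-run inspects can also be spot-checked by hand. A minimal sketch (credentials are placeholders; integer fields are extracted with grep rather than jq to avoid extra tooling):

```shell
# Extract an integer field (e.g. number_of_nodes, unassigned_shards)
# from _cluster/health JSON passed as "$2".
json_int() {
  echo "$2" | grep -o "\"$1\" *: *[0-9][0-9]*" | grep -o '[0-9][0-9]*$'
}

# Usage against a live cluster (placeholder password):
# health=$(curl -sk -u admin:'yourpassword' https://localhost:9200/_cluster/health)
# nodes=$(json_int number_of_nodes "$health")
# unassigned=$(json_int unassigned_shards "$health")
# [ "$nodes" -ge 3 ]      || echo "FAIL: node count $nodes < 3"
# [ "$unassigned" -eq 0 ] || echo "FAIL: $unassigned unassigned shards"
```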

2. Pre-reboot — prep this node (stop OpenSearch, install post-reboot service)

Runs health checks, disables shard allocation, flushes, stops OpenSearch on this node, and installs the systemd service that will run --post-reboot after the next boot. Does not patch the kernel or reboot.

sudo OS_HOST=https://localhost:9200 OS_USER=admin OS_PASS='yourpassword' \
  ./opensearch-patch-assist.sh --pre-reboot

After this step the node's OpenSearch is stopped. Proceed to step 3.
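Under the hood this follows the standard OpenSearch rolling-restart procedure. The sketch below shows the documented cluster APIs involved, not the script's exact implementation; credentials are placeholders.

```shell
# Build the _cluster/settings body that sets shard allocation to "$1"
# ("primaries" before stopping a node, "all" to re-enable afterwards).
allocation_payload() {
  printf '{"persistent":{"cluster.routing.allocation.enable":"%s"}}' "$1"
}

# Before stopping the node: restrict allocation, then flush to speed recovery.
# curl -sk -u admin:'yourpassword' -X PUT https://localhost:9200/_cluster/settings \
#   -H 'Content-Type: application/json' -d "$(allocation_payload primaries)"
# curl -sk -u admin:'yourpassword' -X POST https://localhost:9200/_flush

# After the node is back (post-reboot phase): re-enable allocation.
# curl -sk -u admin:'yourpassword' -X PUT https://localhost:9200/_cluster/settings \
#   -H 'Content-Type: application/json' -d "$(allocation_payload all)"
```

Restricting allocation to primaries prevents the cluster from rebalancing replicas onto other nodes while this node is briefly offline.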

3. Kernel patching

Patch and reboot the node using your standard procedure.

4. Post-reboot — check logs, run manually if needed

After reboot, the post-reboot steps may run automatically via opensearch-post-patch.service. Check logs first:

sudo journalctl -u opensearch-post-patch.service -n 200 --no-pager

Look for: "PHASE 3: Post-reboot", "Patch complete on this node", or any errors. If the service completed successfully you should see allocation re-enabled and the cluster reaching green.
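The exact unit the installer ships is not reproduced here, but a oneshot service of roughly this shape is a reasonable mental model; the ExecStart path and credential handling are assumptions, and the real unit can be inspected with systemctl cat opensearch-post-patch.service.

```ini
# Hypothetical sketch of opensearch-post-patch.service; the installed unit may differ.
[Unit]
Description=Run OpenSearch post-patch phase after reboot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Assumed path; the real unit presumably supplies OS_HOST/OS_USER/OS_PASS
# via Environment= or an EnvironmentFile= directive.
ExecStart=/opt/aurva/install-opensearch-onprem-vms/opensearch-patch-assist.sh --post-reboot

[Install]
WantedBy=multi-user.target
```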

If the service failed or does not exist, run the post-reboot phase manually:

sudo OS_HOST=https://localhost:9200 OS_USER=admin OS_PASS='yourpassword' \
  ./opensearch-patch-assist.sh --post-reboot

Wait for log lines indicating allocation re-enabled and the cluster green. If the cluster does not reach green, contact Aurva support.

5. Repeat on remaining nodes

Run the same steps on each remaining OpenSearch node, one at a time.

Uninstall

Standard — keeps data on the data disk and only removes the OpenSearch service:

sudo bash uninstall.sh

Full clean — removes all data including snapshots and the OpenSearch service:

sudo bash uninstall.sh --clear-data