Shadow AI Detection
Detect unauthorised AI tools and LLMs accessing sensitive data.
Modern engineering teams quickly adopt LLM APIs, vector databases, and copilots — often without security review. Aurva surfaces this "shadow AI" usage by correlating data flow telemetry with known AI/LLM endpoints.
What Aurva detects
- Outbound traffic from your applications and databases to third-party LLM APIs (OpenAI, Anthropic, Google, etc.)
- Vector database creation and embedding workloads against sensitive tables
- Sensitive payload patterns (PII, secrets, source code) leaving the environment toward AI providers
- Internal services that have recently started calling LLM endpoints
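The core of the first detection above can be sketched as a domain match over flow telemetry. This is an illustrative example only, not Aurva's implementation: the endpoint list, the flow-record fields, and the classification labels are all assumptions made for the sketch.

```python
# Sketch: flag data flows whose destination is a known AI/LLM provider
# endpoint. Domain list and flow-record shape are illustrative assumptions.
from dataclasses import dataclass

LLM_ENDPOINTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

@dataclass(frozen=True)
class Flow:
    source_service: str
    dest_host: str
    classifications: frozenset  # e.g. frozenset({"PII", "SECRET"})

def is_shadow_ai(flow: Flow) -> bool:
    """True when the flow targets a known LLM endpoint (exact or subdomain)."""
    return any(
        flow.dest_host == d or flow.dest_host.endswith("." + d)
        for d in LLM_ENDPOINTS
    )

def sensitive_ai_flows(flows):
    """Flows that both hit an LLM endpoint and carry classified data."""
    return [f for f in flows if is_shadow_ai(f) and f.classifications]
```

In practice the endpoint list would be maintained and updated centrally; the suffix match matters because providers serve traffic from subdomains as well as the apex API host.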
How to enable
- Ensure DAM and Data Flow Monitoring are active for the relevant assets — see Monitoring Configuration.
- Create a Detect & Alert policy with Data Flow Monitoring scope and the condition destination domain in (api.openai.com, api.anthropic.com, ...). See Creating a Custom Policy.
- Route findings to Slack or Jira via Alert Routes.
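To make the policy condition above concrete, here is a minimal sketch of how such a rule could be represented and evaluated. The field names and policy schema are hypothetical, chosen for illustration; they are not Aurva's actual policy format.

```python
# Hypothetical policy object mirroring the "destination domain in (...)"
# condition; every field name here is an assumption, not Aurva's schema.
policy = {
    "name": "shadow-ai-egress",
    "type": "detect_and_alert",
    "scope": "data_flow_monitoring",
    "condition": {
        "field": "destination_domain",
        "operator": "in",
        "values": ["api.openai.com", "api.anthropic.com"],
    },
    "alert_routes": ["slack", "jira"],
}

def matches(policy: dict, event: dict) -> bool:
    """Evaluate the policy's single 'in' condition against one flow event."""
    cond = policy["condition"]
    return event.get(cond["field"]) in cond["values"]
```

An event with `destination_domain` set to `api.openai.com` would match and be routed to the configured alert routes; any other destination would not.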
Investigation
When an alert fires, open the matched query in the Audit Trail to see the originating service, the data classification touched, and the destination. Use the Overview Dashboard Third-Party tab to review aggregate exposure to AI endpoints.
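The aggregate review step can be approximated offline as well. The helper below groups audit-trail events by destination to produce a per-endpoint exposure summary similar in spirit to the Third-Party tab; the event field names are assumptions for the sketch, not the Audit Trail's actual export format.

```python
# Illustrative triage helper: aggregate audit events per AI destination.
# The "destination" and "service" keys are assumed field names.
from collections import Counter, defaultdict

def exposure_summary(events):
    """Map each destination to its event count and originating services."""
    counts = Counter()
    sources = defaultdict(set)
    for e in events:
        counts[e["destination"]] += 1
        sources[e["destination"]].add(e["service"])
    return {
        dest: {"events": n, "services": sorted(sources[dest])}
        for dest, n in counts.items()
    }
```

Sorting the result by event count surfaces the endpoints with the heaviest exposure first, which is usually where an investigation should start.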