This guide explains how to integrate CrowdStrike AIDR guardrails with TrueFoundry AI Gateway using the Custom Guardrails route.
This page replaces the older native CrowdStrike/Pangea integration flow. The recommended approach is now to use Custom Guardrails with a thin adapter service that calls CrowdStrike AIDR.

CrowdStrike AIDR endpoint used

The integration uses CrowdStrike's guard_chat_completions endpoint, which analyzes prompt and response content and returns:
  • result.blocked - whether the content should be blocked
  • result.transformed - whether content was transformed/redacted
  • result.guard_output - transformed structured output (when available)
  • result.detectors - detector-level findings

Integration architecture

TrueFoundry Custom Guardrails use TrueFoundry's request/response schemas, while CrowdStrike expects guard_input payloads. The recommended pattern is therefore:
  1. TrueFoundry AI Gateway calls your Custom Guardrail adapter
  2. Adapter maps payload to CrowdStrike guard_chat_completions
  3. Adapter maps CrowdStrike verdict back to TrueFoundry allow/block/mutate behavior

Prerequisites

Before you begin, ensure you have:
  1. A CrowdStrike account with AIDR access
  2. A CrowdStrike bearer token with permission to call AIDR APIs
  3. TrueFoundry AI Gateway access with permission to configure guardrails
  4. A deployed adapter service endpoint reachable by the Gateway

Quick start

Step 1: Build and deploy the adapter service

Create two endpoints in your adapter:
  • POST /crowdstrike/input for LLM input guardrails
  • POST /crowdstrike/output for LLM output guardrails
Both endpoints should:
  1. Read the incoming TrueFoundry payload
  2. Build the CrowdStrike payload under guard_input (especially messages)
  3. Call POST https://api.crowdstrike.com/aidr/aiguard/v1/guard_chat_completions
  4. Return block/mutate/pass behavior to TrueFoundry
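For illustration, the request body built in steps 2–3 might look like the following (the message content is a placeholder, and the exact guard_input schema is defined by CrowdStrike's AIDR API reference):

```json
{
  "guard_input": {
    "messages": [
      {"role": "user", "content": "Summarize this document."}
    ]
  },
  "event_type": "input"
}
```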
Example adapter call to CrowdStrike:
import requests

def call_crowdstrike(guard_input: dict, token: str, event_type: str = "input") -> dict:
    # Send guard_input to CrowdStrike AIDR and return the parsed verdict.
    url = "https://api.crowdstrike.com/aidr/aiguard/v1/guard_chat_completions"
    payload = {
        "guard_input": guard_input,
        "event_type": event_type
    }
    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json"
        },
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
Keep the CrowdStrike token in your adapter environment (not in client code).
In TrueFoundry, authenticate only to your adapter endpoint.
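Both endpoints can share one handler. Here is a minimal, framework-free sketch: call_aidr stands in for call_crowdstrike with the token bound, and the requestBody.messages path is an assumption to adapt to the actual Custom Guardrail request schema:

```python
from typing import Callable

def build_guard_input(tf_payload: dict) -> dict:
    # Extract OpenAI-style chat messages from the TrueFoundry payload.
    # The requestBody.messages path is an assumption; adjust it to the
    # Custom Guardrail request schema you actually receive.
    return {"messages": tf_payload.get("requestBody", {}).get("messages", [])}

def handle(tf_payload: dict, call_aidr: Callable[[dict, str], dict],
           event_type: str) -> dict:
    # event_type is "input" for /crowdstrike/input and
    # "output" for /crowdstrike/output.
    return call_aidr(build_guard_input(tf_payload), event_type)
```

Injecting call_aidr as a parameter keeps the handler testable without network access; in production, bind the token once (e.g. with functools.partial) when wiring the routes.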
Step 2: Register Custom Guardrail integrations in TrueFoundry

Navigate to AI Gateway → Controls → Guardrails and create a guardrail group. Add two custom integrations:
| Integration | URL | Target | Operation | Enforcing Strategy |
| --- | --- | --- | --- | --- |
| Input | https://<your-adapter>/crowdstrike/input | request | validate (or mutate) | enforce |
| Output | https://<your-adapter>/crowdstrike/output | response | validate (or mutate) | enforce |
For details on the Custom Guardrail request/response schema and authentication options, see the Custom Guardrails guide.
Step 3: Create guardrail rules

Bind the integrations to models through Guardrail Rules:
name: crowdstrike-guardrails
type: gateway-guardrails-config
rules:
  - id: crowdstrike-input
    when:
      target:
        operator: or
        conditions:
          model:
            values:
              - openai-main/gpt-4o-mini
            condition: in
    llm_input_guardrails:
      - crowdstrike/crowdstrike-input
    llm_output_guardrails: []
  - id: crowdstrike-output
    when:
      target:
        operator: or
        conditions:
          model:
            values:
              - openai-main/gpt-4o-mini
            condition: in
    llm_input_guardrails: []
    llm_output_guardrails:
      - crowdstrike/crowdstrike-output

Validation logic

Your adapter should map the CrowdStrike response to TrueFoundry behavior:
  • If result.blocked == true -> return HTTP 400 (request blocked)
  • If result.transformed == true and operation is mutate -> return transformed payload
  • Otherwise -> return pass (no change)
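The bullets above can be expressed as one small mapping function. The (status, body) return pair is an illustrative convention; return whatever your adapter framework expects:

```python
def map_verdict(cs_response: dict, operation: str, original_payload: dict):
    """Map a CrowdStrike AIDR verdict to TrueFoundry guardrail behavior."""
    result = cs_response.get("result", {})
    if result.get("blocked"):
        # Blocked: surface HTTP 400 so the Gateway rejects the request.
        return 400, {"error": "blocked by CrowdStrike AIDR",
                     "detectors": result.get("detectors", {})}
    if result.get("transformed") and operation == "mutate":
        # Transformed (e.g. redacted): return the transformed payload.
        return 200, result.get("guard_output", original_payload)
    # Pass: no change.
    return 200, original_payload
```

Note that transformed content is only returned when the integration is configured with operation: mutate; under validate, a transformed verdict falls through to pass.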

Example CrowdStrike response shape

{
  "result": {
    "blocked": true,
    "transformed": false,
    "policy": "default",
    "detectors": {
      "malicious_prompt": {
        "detected": true,
        "data": {
          "action": "blocked"
        }
      }
    }
  }
}

Testing checklist

  1. A safe prompt should pass unchanged.
  2. A prompt-injection attempt should return a blocked response.
  3. A prompt containing sensitive data should be blocked or redacted, depending on adapter behavior.
  4. Verify guardrail traces in TrueFoundry request logs.

Troubleshooting

  • Guardrail timeouts: ensure your adapter's timeout is below the TrueFoundry guardrail timeout, and keep the payload sent to CrowdStrike as small as possible.
  • Output guardrail not firing: confirm the output integration is configured with target: response and is bound under llm_output_guardrails.
  • Content not redacted: use operation: mutate and return the transformed payload from your adapter.

Resources