# Audit Stream → Generic HTTP Webhook
Most SIEMs and log collectors accept authenticated HTTPS POST as a first-class input. The exact URL and auth header differ; the configuration shape in Aegis is the same. This page covers Sumo Logic, Elastic, Cribl, Logstash/Fluentd/Vector, New Relic, and any other HTTP-ingest destination.
## Sumo Logic — HTTP Source

| Field | Value |
|---|---|
| URL | The collector URL Sumo generates when you create the HTTP Source — auth is embedded in the URL |
| Auth header | None (Sumo doesn't use a separate auth header for HTTP Sources) |

- Sumo Logic UI → Manage Data → Collection → Add Source → HTTP Source
- Set Source Category to `aegis/audit` (used in Sumo searches)
- Copy the Collector URL Sumo gives you
Configure in Aegis:

```bash
curl -X POST https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations \
  -H "Authorization: Bearer ${AEGIS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://endpoint1.collection.us2.sumologic.com/receiver/v1/http/ZaVnC4dhaV3..."
  }'
```
Search in Sumo:
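A minimal query, assuming the `aegis/audit` Source Category set above (the `json auto` operator is illustrative — use whatever parsing your Sumo setup prefers):

```
_sourceCategory=aegis/audit
| json auto
```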
## Elastic Cloud / self-hosted — Ingest API

| Field | Value |
|---|---|
| URL | `https://<your-elastic-host>:9243/_bulk` or `/<index>/_doc/` |
| Auth header | `Authorization` (Basic) or `ApiKey` |

For an API key (recommended):

- In Kibana: Stack Management → API Keys → Create
- Permissions: `index` privilege on your `aegis-audit-*` index pattern
- Copy the `encoded` value
Configure:

```bash
curl -X POST https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations \
  -H "Authorization: Bearer ${AEGIS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-elastic-host:9243/aegis-audit/_doc",
    "auth_header_name": "Authorization",
    "auth_header_value": "ApiKey VnVhQ2ZHY0JDZGJrUW0tZTVhT3g6dWk2bEd5..."
  }'
```
Elastic Common Schema (ECS) field mapping comes for free — our event payload uses ECS field names (`@timestamp`, `event.id`, `event.kind`, `event.category`, `organization.id`).
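As an illustrative sketch of that shape (the field values and the `event.kind` / `event.category` choices here are hypothetical; only the field names listed above, plus `event.dataset` and `aegis.decision` used elsewhere on this page, come from the real payload):

```json
{
  "@timestamp": "2024-01-01T12:00:00Z",
  "event": {
    "id": "evt_0123abc",
    "kind": "event",
    "category": ["configuration"],
    "dataset": "aegis.preflight"
  },
  "organization": { "id": "org_0123abc" },
  "aegis": { "decision": "allow" }
}
```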
## Cribl Stream / Cribl Edge — HTTP-in Source

| Field | Value |
|---|---|
| URL | The endpoint URL Cribl exposes for the HTTP-in Source |
| Auth header | Configurable in Cribl — set the same name+value here |
- Cribl → Sources → HTTP → Add Source
- Enable Authentication and set a token Aegis will send
- Copy the Endpoint URL
Configure:

```bash
curl -X POST https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations \
  -H "Authorization: Bearer ${AEGIS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://cribl.example.com:10080/aegis",
    "auth_header_name": "Authorization",
    "auth_header_value": "Bearer your-cribl-token"
  }'
```
From Cribl you can route to multiple downstream destinations (Splunk, Datadog, S3, Snowflake, etc.) — useful if you fan out to several SIEMs.
## Logstash / Fluentd / Vector — HTTP input
These pipelines all accept HTTPS POST with JSON. Example configs:
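As one sketch, a minimal Fluentd `in_http` source (the port and match pattern are assumptions; `in_http` takes the event tag from the URL path, so point Aegis at e.g. `https://your-pipeline-host:8080/aegis.audit`, and validate the shared-secret header at your TLS-terminating proxy or in a filter, since `in_http` doesn't check custom headers itself):

```
<source>
  @type http
  port 8080
  <parse>
    @type json
  </parse>
</source>

<match aegis.**>
  # Replace stdout with your real output plugin (elasticsearch, s3, kafka, ...)
  @type stdout
</match>
```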
Then configure the Aegis destination:

```bash
curl -X POST https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations \
  -H "Authorization: Bearer ${AEGIS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-pipeline-host:8080/aegis",
    "auth_header_name": "X-Shared-Secret",
    "auth_header_value": "your-shared-secret"
  }'
```
## New Relic Logs

```bash
curl -X POST https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations \
  -H "Authorization: Bearer ${AEGIS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://log-api.newrelic.com/log/v1",
    "auth_header_name": "Api-Key",
    "auth_header_value": "YOUR-NEW-RELIC-LICENSE-KEY"
  }'
```

(EU customers: `https://log-api.eu.newrelic.com/log/v1`)
## Build-your-own / arbitrary HTTPS endpoint

Any service that accepts an HTTPS POST with a JSON body works. The Aegis worker will:

- POST the event payload as `application/json`
- Add the auth header you configured (single header, single value)
- Treat `2xx` as success
- Treat `4xx` / `5xx` / connect errors as retryable (up to 3 attempts with exponential backoff, then dead-letter)
If you need additional headers, signing, or transformation, the right place to add it is between Aegis and your final destination — point Aegis at a Cribl / Logstash / Vector pipeline (above) and do the transform there.
## Starter receiver: HMAC-verifying webhook
If you're standing up your own receiver (e.g. in front of Cribl, Vector, or Fluentd), here's a minimal Python/Flask template that:
- verifies a shared secret as a bearer token,
- verifies an optional HMAC-SHA256 body signature,
- replies `202 Accepted` on success.
Wire Aegis to it with `auth_header_name = "Authorization"` and `auth_header_value = "Bearer <your-shared-secret>"`. If you also want body signing, generate the HMAC in your reverse proxy (Aegis itself sends one auth header — anything beyond that lives in your pipeline).
```python
# aegis_audit_receiver.py
# Run behind a TLS-terminating reverse proxy (nginx, Caddy, an ALB).
# Tested with Python 3.11 + Flask 3.
import hashlib
import hmac
import os

from flask import Flask, abort, jsonify, request

SHARED_SECRET = os.environ["AEGIS_SHARED_SECRET"]  # what Aegis sends
SIGNING_KEY = os.environ.get("AEGIS_SIGNING_KEY")  # optional, from proxy

app = Flask(__name__)


@app.post("/aegis/audit")
def ingest():
    # 1. Bearer-token auth (Aegis sends "Authorization: Bearer <secret>")
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        abort(401, "missing bearer token")
    if not hmac.compare_digest(auth.removeprefix("Bearer "), SHARED_SECRET):
        abort(401, "invalid bearer token")

    # 2. Optional HMAC-SHA256 body signature (added by your reverse proxy).
    #    Skip this block if you're not signing.
    if SIGNING_KEY:
        signature = request.headers.get("X-Aegis-Signature", "")
        expected = hmac.new(
            SIGNING_KEY.encode(),
            request.get_data(),
            hashlib.sha256,
        ).hexdigest()
        if not hmac.compare_digest(signature, expected):
            abort(401, "invalid signature")

    # 3. Parse + persist. Replace this block with your forwarding logic
    #    (Cribl HTTP-in, S3 PUT, Kafka produce, etc.).
    event = request.get_json(force=True)
    print(f"event.id={event.get('event', {}).get('id')} "
          f"decision={event.get('aegis', {}).get('decision')}")

    return jsonify({"accepted": True}), 202


if __name__ == "__main__":
    # Production: serve with gunicorn / uvicorn behind TLS.
    app.run(host="0.0.0.0", port=8080)
```
Run with:

```bash
export AEGIS_SHARED_SECRET="$(openssl rand -hex 32)"
# Optional, only if you sign requests at the proxy:
# export AEGIS_SIGNING_KEY="$(openssl rand -hex 32)"
gunicorn -w 2 -b 0.0.0.0:8080 aegis_audit_receiver:app
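If you do sign at the proxy, the hex-digest scheme the receiver checks can be produced like this (a sketch; the key and test body are placeholders — only the `X-Aegis-Signature` header name and HMAC-SHA256-over-raw-body computation match the receiver above):

```python
import hashlib
import hmac
import json

signing_key = b"replace-with-AEGIS_SIGNING_KEY"
body = json.dumps({"event": {"id": "evt_test"}}).encode()

# Same computation the receiver performs: HMAC-SHA256 over the raw
# request body, hex-encoded.
signature = hmac.new(signing_key, body, hashlib.sha256).hexdigest()

headers = {
    "Authorization": "Bearer replace-with-AEGIS_SHARED_SECRET",
    "X-Aegis-Signature": signature,
    "Content-Type": "application/json",
}
print(len(signature))  # 64 — a SHA-256 hex digest
```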
Then in Aegis:

```bash
curl -X POST https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations \
  -H "Authorization: Bearer ${AEGIS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d "{
    \"url\": \"https://your-host/aegis/audit\",
    \"auth_header_name\": \"Authorization\",
    \"auth_header_value\": \"Bearer ${AEGIS_SHARED_SECRET}\"
  }"
```
**What the template intentionally does NOT do**

- No retries / no queue. Aegis already retries with exponential backoff and dead-letters after 3 attempts. Your receiver just needs to return `2xx` fast or non-`2xx` to ask for a retry.
- No body parsing beyond `get_json()`. Aegis events are ECS-shaped JSON — `event.id` is the dedupe key, and `event.dataset` selects between `aegis.preflight` (real events) and `aegis.audit_stream.test` (synthetic test events from the Test button).
- No PII handling. Event payloads carry finding counts, not raw values, so the receiver has no special storage burden.
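If your receiver does persist events, those two fields are enough for a minimal filter (a sketch; a real deployment would back `seen` with durable storage rather than an in-process set):

```python
seen: set[str] = set()


def should_process(event: dict) -> bool:
    ev = event.get("event", {})
    if ev.get("dataset") == "aegis.audit_stream.test":
        return False        # drop synthetic Test-button events
    event_id = ev.get("id")
    if event_id in seen:
        return False        # duplicate delivery — event.id is the dedupe key
    seen.add(event_id)
    return True


print(should_process({"event": {"id": "e1", "dataset": "aegis.preflight"}}))          # True
print(should_process({"event": {"id": "e1", "dataset": "aegis.preflight"}}))          # False
print(should_process({"event": {"id": "e2", "dataset": "aegis.audit_stream.test"}}))  # False
```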
## Verifying delivery
After creating any destination, the synchronous test endpoint is the fastest sanity check:
```bash
curl -X POST \
  https://api.aegispreflight.com/api/orgs/${ORG_ID}/forwarding-destinations/${DEST_ID}/test \
  -H "Authorization: Bearer ${AEGIS_API_KEY}"
```

A successful response (`{"ok": true, "status_code": 2xx}`) confirms:
- The destination URL is reachable from the Aegis cloud
- The auth header is accepted
- The destination returned a 2xx within 10 seconds
If you can't see the test event in your SIEM, double-check the SIEM-side ingest configuration (index permissions, source category, parser) — at that point Aegis has done its job.