
Logs

A log is a discrete event with a timestamp, level, message, and arbitrary structured metadata. SiteQwality stores logs in ClickHouse and queries them through a fast combined full-text and structured search layer.

Use logs:

  • For discrete events with rich context — what happened, who triggered it, what state was the system in.
  • For the audit trail of “what happened at 14:32:17.345?”
  • When you need to grep through error context to find a pattern.

Reach for other signals instead:

  • For “how many?” — emit a metric. Counting logs is slow; counting metrics is the point.
  • For “why was this request slow?” — use traces. Logs alone don’t show causality.
Terminal window
curl -X POST https://logs.siteqwality.com/v1/ingest \
  -H "Authorization: Bearer $SITEQWALITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "timestamp": "2026-05-03T14:32:17.345Z",
    "level": "error",
    "message": "Failed to charge card",
    "service": "billing",
    "env": "prod",
    "host": "api-7d4f9-xkz3l",
    "tags": { "endpoint": "/api/charge" },
    "metadata": {
      "user_id": "usr_abc123",
      "amount_cents": 9900,
      "stripe_error": "card_declined"
    }
  }'

Three accepted body shapes:

  • Single JSON object — one log entry.
  • JSON array — multiple log entries in one request.
  • NDJSON (Content-Type: application/x-ndjson) — one JSON entry per line. Best for shipping pipelines.
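As a sketch of the NDJSON shape, a batch can be serialized one JSON object per line. The helper below is illustrative, not an official client; field names follow the example request above:

```python
import json

def to_ndjson(entries):
    """Serialize a list of log dicts to NDJSON: one JSON object per line."""
    return "\n".join(json.dumps(e, separators=(",", ":")) for e in entries) + "\n"

batch = to_ndjson([
    {"level": "info", "message": "charge ok", "service": "billing"},
    {"level": "error", "message": "card declined", "service": "billing"},
])
# POST `batch` with Content-Type: application/x-ndjson
```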

Optional headers:

  • X-Log-Source: <name> — sets the source field on every log in the request.
  • X-Log-Parser: <parser-id> — applies a specific parser to every log in the request.

A 202 Accepted means the logs were queued.
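A minimal sketch of building such a request with Python's standard library, assuming the endpoint and optional headers above. The request is constructed but deliberately not sent here, so the header handling is visible on its own:

```python
import json
import urllib.request

def build_ingest_request(api_key, entries, source=None, parser=None):
    """Construct (but do not send) an NDJSON ingest POST with optional headers."""
    body = "\n".join(json.dumps(e) for e in entries).encode()
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/x-ndjson",
    }
    if source:
        headers["X-Log-Source"] = source  # stamps `source` on every entry
    if parser:
        headers["X-Log-Parser"] = parser  # applies this parser to every entry
    return urllib.request.Request(
        "https://logs.siteqwality.com/v1/ingest",
        data=body, headers=headers, method="POST",
    )

req = build_ingest_request(
    "sk_test", [{"level": "info", "message": "hi"}], source="billing-api"
)
```

To actually send it, pass `req` to `urllib.request.urlopen` and treat a 202 status as "queued".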

  • timestamp — RFC 3339. Defaults to ingest time if omitted.
  • level — trace, debug, info, warn, error, fatal. Free-form string, but stick to the standard set for filtering.
  • message — Human-readable summary. Searched as full-text.
  • source — Logical source name (billing-api, worker-fleet). Indexed; avoid high-cardinality values.
  • host — Pod/container/VM identity.
  • env — prod, staging, dev.
  • version — App version. Useful for “did the latest deploy regress?”
  • device — For client logs (browser logs from the SDK).
  • tags — string → string map for filtering.
  • metadata — Arbitrary structured object. Searchable via the metadata_search query param.
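To make two of the field rules concrete, here is a client-side sketch mirroring the ingest-time timestamp default and the standard level set. Coercing out-of-set levels to info is this sketch's own choice, not server behavior; the API accepts free-form level strings:

```python
from datetime import datetime, timezone

LEVELS = {"trace", "debug", "info", "warn", "error", "fatal"}

def normalize_entry(entry):
    """Fill in an RFC 3339 timestamp and keep levels within the standard set."""
    e = dict(entry)
    e.setdefault(
        "timestamp",
        datetime.now(timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z"),
    )
    if e.get("level") not in LEVELS:
        e["level"] = "info"  # out-of-set levels still ingest, but filter poorly
    return e
```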

Logs are queried via GET /logs/query with these params:

  • start_time, end_time — RFC 3339 window (defaults to last 1h).
  • level — comma-separated levels (error,warn).
  • source, env, host — exact filters.
  • search — full-text search of message.
  • metadata_search — full-text search of metadata field values.
  • tags — JSON-encoded {key: value} filter.
  • excluded — JSON-encoded same-shape exclusion.
  • sort_field (timestamp), sort_order (asc|desc), limit, offset.
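A sketch of composing these params in Python. The helper name and defaults are illustrative; the one subtlety worth showing is that tags is JSON-encoded before being URL-encoded:

```python
import json
from urllib.parse import urlencode

def build_query(start_time=None, levels=None, search=None, tags=None):
    """Compose a /logs/query URL; `tags` is JSON-encoded per the API above."""
    params = {}
    if start_time:
        params["start_time"] = start_time
    if levels:
        params["level"] = ",".join(levels)  # comma-separated levels
    if search:
        params["search"] = search
    if tags:
        params["tags"] = json.dumps(tags, separators=(",", ":"))
    return "/logs/query?" + urlencode(params)

url = build_query(levels=["error", "warn"], search="card declined",
                  tags={"endpoint": "/api/charge"})
```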

The dashboard’s Logs Explorer is the friendliest interface; it composes these params for you. For programmatic access (CI, scripts), call the API directly.

For follow mode (tail -f style):

Terminal window
GET /logs/tail/poll?since=<cursor>&...

Returns logs since the cursor, plus a new cursor to use on the next poll. Poll every few seconds.
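The poll loop can be sketched as follows. Here fetch stands in for the HTTP call to /logs/tail/poll and is injected as a function, so the cursor handling is visible (and testable) on its own; the function and parameter names are illustrative:

```python
import time

def tail(fetch, handle, interval=3.0, cursor=None, max_polls=None):
    """tail -f loop: fetch(cursor) -> (logs, next_cursor).

    Calls handle() on each log, then polls again with the new cursor.
    max_polls bounds the loop for testing; None means poll forever.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        logs, cursor = fetch(cursor)
        for log in logs:
            handle(log)
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(interval)  # "every few seconds" per the docs
    return cursor
```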

GET /logs/insights returns ML-derived alerts:

  • Error spike — sudden increase in error level vs baseline.
  • New error — a never-seen-before error message.
  • Silent source — a normally-chatty source has gone quiet.
  • Top errors — most frequent error messages in the window.

Each insight comes with a severity (info, medium, high) and a pre-built filter you can open in the Logs Explorer to investigate.
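If you consume insights programmatically, a small triage helper might sort them most-severe-first. The response shape assumed here (a list of objects with a severity key) is an illustration, not a documented schema:

```python
SEVERITY_RANK = {"high": 0, "medium": 1, "info": 2}

def triage(insights):
    """Sort insights most-severe-first; unknown severities sink to the bottom."""
    return sorted(insights, key=lambda i: SEVERITY_RANK.get(i.get("severity"), 3))
```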

Logs are billed per GB ingested plus retention. The dashboard’s Settings → Billing → Usage shows the current month-to-date usage.