Metrics Streaming

Tigrister can stream live load-test metrics to any OpenTelemetry-compatible backend over OTLP HTTP/JSON. Turn it on, point it at your collector, and every snapshot the Rust runner emits during the test is pushed to your observability stack in near real time — Grafana, Prometheus + OTel Collector, Datadog, Honeycomb, Elastic, whatever you already run.

Enabling OTLP HTTP Export

At the bottom of the Dashboard panel there's a card labelled Live Metrics Streaming. It has two fields:

  • OTLP HTTP Export toggle — flips otelEnabled on the active test type's config
  • Collector URL input — where the runner will POST OTLP payloads. Placeholder: http://localhost:4318

Per-test-type setting: like the rest of the configuration form, the streaming toggle is stored per test type. You can enable streaming for Stress runs and keep it off for Load runs on the same spec.

What Gets Streamed

Every second during the run, the Rust sender builds an OTLP resourceMetrics payload and POSTs it to {collectorUrl}/v1/metrics. The scope is tigrister-load-test v1.0.0.
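The payload follows the standard OTLP/JSON metrics encoding. A minimal sketch of what one snapshot might look like, built in Python — the metric value and the service name are illustrative placeholders, not taken from the actual runner:

```python
import json
import time

def build_snapshot(throughput: float) -> dict:
    """Build a minimal OTLP/JSON resourceMetrics payload for one 1 Hz snapshot."""
    now_ns = str(time.time_ns())  # OTLP encodes nanosecond timestamps as strings
    return {
        "resourceMetrics": [{
            "resource": {
                "attributes": [
                    # "{spec.name} - {testType}" — hypothetical example values
                    {"key": "service.name",
                     "value": {"stringValue": "checkout-spec - stress"}},
                ],
            },
            "scopeMetrics": [{
                "scope": {"name": "tigrister-load-test", "version": "1.0.0"},
                "metrics": [{
                    "name": "http.client.request.throughput",
                    "unit": "req/s",
                    "gauge": {"dataPoints": [
                        {"asDouble": throughput, "timeUnixNano": now_ns},
                    ]},
                }],
            }],
        }],
    }

body = json.dumps(build_snapshot(812.4))  # POSTed to {collectorUrl}/v1/metrics
```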

Metric                              OTel Type        Unit
------                              ---------        ----
http.client.request.duration.avg    Gauge            ms
http.client.request.duration.p50    Gauge            ms
http.client.request.duration.p90    Gauge            ms
http.client.request.duration.p95    Gauge            ms
http.client.request.duration.p99    Gauge            ms
http.client.request.throughput      Gauge            req/s
http.client.request.error_rate      Gauge            1
load_test.virtual_users.active      Gauge            1
http.client.requests                Cumulative Sum   1
http.client.errors                  Cumulative Sum   1

On top of the aggregate metrics above, a per-step metric family is appended for every step in the spec so you can break dashboards down by individual endpoint.

Resource Attributes

Every OTLP payload carries a resource block so the metrics can be filtered and grouped in your backend:

  • service.name — "{spec.name} - {testType}"
  • load_test.test_run_id — unique ID for this run
  • load_test.spec_name — the spec's name
  • load_test.test_type — load / stress / spike / soak / custom
  • load_test.status — always running during the live stream
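OTLP encodes resource attributes as a list of key/value pairs rather than a flat object. A sketch of that encoding, with hypothetical values (the run ID format is invented for illustration):

```python
def to_otlp_attributes(attrs: dict) -> list:
    """Encode a flat string->string dict as OTLP key/value attribute pairs."""
    return [{"key": k, "value": {"stringValue": v}} for k, v in attrs.items()]

# Hypothetical values for one stress run of a spec named "checkout-spec"
resource = {"attributes": to_otlp_attributes({
    "service.name": "checkout-spec - stress",
    "load_test.test_run_id": "run-1234",
    "load_test.spec_name": "checkout-spec",
    "load_test.test_type": "stress",
    "load_test.status": "running",
})}
```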

Fire-and-Forget Semantics

The sender is designed to never affect test performance:

  • Async spawn — every send is dispatched to a background task; the runner loop never waits for the collector to respond
  • 3-second timeout — a slow or dead collector can't back up the sender
  • Drop on overlap — if the previous send is still in flight when a new snapshot is ready, the new one is dropped rather than queued. Metrics are already downsampled to 1 Hz, so dropped frames are rare
  • Silent errors — send failures are ignored; the run continues without disruption

Typical Targets

Any OTLP/HTTP endpoint works. The most common setups:

OpenTelemetry Collector

Run the Collector locally or as a sidecar. Point Tigrister at http://localhost:4318 and fan out from the Collector to Prometheus, Loki, Tempo, or anything else you already export to.

Grafana Cloud / Grafana Alloy

Alloy accepts OTLP HTTP natively. Build live dashboards keyed on the load_test.test_run_id resource attribute to overlay multiple concurrent tests.

Datadog / Honeycomb / Elastic

All accept OTLP/HTTP on their ingestion endpoints. Use vendor-specific URLs and, when required, headers — Tigrister sends plain JSON so reverse proxies that inject auth headers work transparently.

Prometheus (via Collector)

Prometheus itself doesn't accept OTLP pushes, but the OTel Collector's prometheusremotewrite exporter bridges the gap. Tigrister → Collector → Prometheus.
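A minimal Collector config for that pipeline might look like the sketch below — the remote-write endpoint is an assumption (adjust to your Prometheus address, and note that Prometheus must be started with its remote-write receiver enabled). The prometheusremotewrite exporter ships in the Collector's contrib distribution:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # the Collector URL you give Tigrister

exporters:
  prometheusremotewrite:
    endpoint: http://localhost:9090/api/v1/write

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```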

Streaming vs. Exported Reports

OTLP streaming and the JSON/HTML/PDF report exports (described in the Results section) are independent features that can be combined:

  • OTLP streaming — live metrics, sent during the run, no final summary
  • Report exports — post-run snapshots with the full time-series, per-step breakdown, and threshold results baked in

A common pattern: stream live metrics to a Grafana dashboard so the team can watch in real time, then fire an After All script that uploads the HTML or PDF report to your team chat once the run finishes.