Results

The Results view is what you see while a test is running and after it finishes. It aggregates every metric the Rust runner emits into a single scrollable page: status, headline metric cards, four time-series charts, a per-step breakdown, error analysis, and threshold pass/fail rows.

Header Row

At the very top of the Results view, the active test type is shown as an uppercase label on the left (e.g. LOAD TEST, STRESS TEST) and an Export dropdown sits on the right offering three formats:

| Format | File | Use case |
| --- | --- | --- |
| Export JSON | *.json | Full machine-readable snapshot — config, summary, time-series, per-step, thresholds |
| Export HTML | *.html | Self-contained report you can email, embed, or attach to a PR |
| Export PDF | *.pdf | Printable report for status meetings and compliance records |

The filename defaults to {testType}-test-{shortRunId}.{ext} — e.g. stress-test-4f2a9b.pdf.
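The default name reduces to a simple template. A minimal sketch, assuming the short run id is the first six characters of the full id (helper name is hypothetical):

```typescript
// Sketch of the default export filename. The six-character
// truncation of the run id is an assumption for illustration.
function defaultExportName(testType: string, runId: string, ext: string): string {
  const shortRunId = runId.slice(0, 6);
  return `${testType}-test-${shortRunId}.${ext}`;
}

// defaultExportName("stress", "4f2a9b7c1d", "pdf") → "stress-test-4f2a9b.pdf"
```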

Metric Cards

Directly below the header, five headline cards summarize the run at a glance:

| Card | Primary value | Sub-line |
| --- | --- | --- |
| Requests | Total request count (K/M formatting over 1K) | Failed count — highlighted red when > 0 |
| Success Rate | Percentage of successful requests — green ≥ 95%, red below | Raw success/total counts |
| Avg Response | Mean response time (amber when > 1000 ms) | P95 response time |
| Throughput | Successful requests per second | req/s unit label |
| Active VUs | Currently active virtual users | Target VU count for the active test type |
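The K/M count formatting on the Requests card could look like the sketch below; `formatCount` is a hypothetical name, and the one-decimal rendering is an assumption:

```typescript
// Illustrative count formatter: values of 1,000 and up get a K
// suffix, 1,000,000 and up get an M suffix, one decimal place.
function formatCount(n: number): string {
  if (n >= 1_000_000) return `${(n / 1_000_000).toFixed(1)}M`;
  if (n >= 1_000) return `${(n / 1_000).toFixed(1)}K`;
  return String(n);
}
```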

Response Time Chart

The first time-series chart plots response-time percentiles against elapsed seconds. Five lines are available — Avg, P50, P90, P95, P99 — and a pill row beneath the chart lets you switch between them:

  • All pill — shows every percentile line simultaneously
  • Individual pills — click any percentile to see only that line (exclusive select)
  • Three-dot menu — opens a checkbox popover for arbitrary multi-select combinations (e.g. only P95 + P99)
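All three modes can be modeled as one piece of state: the set of visible series. The shape below is illustrative, not the app's actual state:

```typescript
type Series = "avg" | "p50" | "p90" | "p95" | "p99";
const ALL: Series[] = ["avg", "p50", "p90", "p95", "p99"];

// "All" pill: every percentile line visible at once.
const showAll = (): Set<Series> => new Set(ALL);

// Individual pill: exclusive select, only the clicked line.
const showOnly = (s: Series): Set<Series> => new Set([s]);

// Three-dot popover: toggle one line for arbitrary combinations.
function toggle(visible: Set<Series>, s: Series): Set<Series> {
  const next = new Set(visible);
  if (next.has(s)) next.delete(s);
  else next.add(s);
  return next;
}
```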

The chart axis is formatted with the same helper the metric cards use: values above 1000 ms switch to seconds, and values at or below stay in milliseconds.
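That shared helper might behave like this; the `formatDuration` name and the two-decimal seconds rendering are assumptions:

```typescript
// Hypothetical duration formatter shared by cards and chart axes:
// above 1000 ms switch to seconds, otherwise stay in milliseconds.
function formatDuration(ms: number): string {
  return ms > 1000 ? `${(ms / 1000).toFixed(2)}s` : `${Math.round(ms)}ms`;
}
```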

Throughput, Error Rate, Active VUs

Three additional time-series charts sit below Response Time. The throughput chart spans full width; error rate and active VUs share a two-column row.

| Chart | Y unit | Shows |
| --- | --- | --- |
| Throughput | req/s | Aggregate successful requests per second across all VUs |
| Error Rate | % | Ratio of failed to total requests, shown as a percentage with one decimal |
| Active VUs | VUs | How many virtual users were actively running at each point in time |
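The error-rate value is simply failed over total, rendered with one decimal. A sketch (the zero-total guard is an assumption):

```typescript
// Error rate as a one-decimal percentage, guarding against
// division by zero before any requests have completed.
function errorRate(failed: number, total: number): string {
  if (total === 0) return "0.0%";
  return `${((failed / total) * 100).toFixed(1)}%`;
}
```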

Step Breakdown

Below the charts, the Step Breakdown table shows one row per step in the spec. This is where you pinpoint which request is dragging the run down — if the overall P95 is 3 s but one specific step's P95 is 12 s, you immediately know where to focus your optimization.

| Column | Meaning |
| --- | --- |
| Step | HTTP method badge + step name |
| Requests | Total count of requests this step executed |
| Avg | Mean response time for this step only |
| P50 / P90 / P95 / P99 | Percentile response times per step |
| Error % | Per-step error rate — highlighted red when non-zero |
| req/s | Throughput attributable to this step |
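Per-step percentiles are computed over only that step's samples. A common nearest-rank sketch (the runner's actual interpolation method isn't specified here):

```typescript
// Nearest-rank percentile over one step's response times (ms).
// The real runner may interpolate differently; this is a sketch.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) return 0;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```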

Error Breakdown & Analysis

When at least one request failed, two additional tables appear.

Error Breakdown

Aggregate error counts grouped by category, sorted descending:

  • timeout — requests that exceeded the configured timeout
  • connection_refused — TCP connection was refused (server down, port closed)
  • assertion — a per-step assertion failed
  • http_NNN — dynamic category per HTTP status code (e.g. HTTP 500, HTTP 429)
  • other — anything else
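One way to bucket a failed request into those categories, written in TypeScript to match the other sketches; the field names are assumptions, not the Rust runner's actual types:

```typescript
// Map a failed request into an aggregate error category.
// All field names here are illustrative assumptions.
interface FailedRequest {
  timedOut: boolean;
  connectionRefused: boolean;
  assertionFailed: boolean;
  status?: number; // HTTP status, if a response arrived
}

function categorize(err: FailedRequest): string {
  if (err.timedOut) return "timeout";
  if (err.connectionRefused) return "connection_refused";
  if (err.assertionFailed) return "assertion";
  if (err.status !== undefined) return `http_${err.status}`;
  return "other";
}
```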

Each row shows the count, the % of all errors, and the % of total requests so you can quickly judge whether an error category represents an isolated glitch or a systemic problem.

Error Analysis

A pivot table: steps-with-errors on the left axis, error categories on the top axis. Cells show the per-step count for each category, with zeros rendered as muted dashes. Use it to answer questions like "is the 500 coming from all steps or just one?" at a glance.
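Building that pivot is a group-by over (step, category) pairs. A sketch with assumed field names:

```typescript
// Count errors per (step, category) cell for the pivot table.
// The `step` and `category` field names are assumptions.
interface ErrorRecord { step: string; category: string; }

function pivot(errors: ErrorRecord[]): Map<string, Map<string, number>> {
  const table = new Map<string, Map<string, number>>();
  for (const e of errors) {
    const row = table.get(e.step) ?? new Map<string, number>();
    row.set(e.category, (row.get(e.category) ?? 0) + 1);
    table.set(e.step, row);
  }
  return table;
}
```

A cell absent from the resulting map corresponds to a zero, which the UI renders as a muted dash.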

Threshold Results

The last section of the Results view shows one row per threshold, with the rule on the left and the actual value on the right.

Passed rows have a green check and a muted green background; failed rows have a red cross and a red background. The value column shows the metric's actual aggregate value formatted with the right unit — ms for response-time metrics, % for error rate, req/s for throughput.

Disabled thresholds don't appear here — they're skipped entirely during evaluation.

Reading the row: a row that says P95 Response Time < 2000ms → 1843ms with a green check means the threshold was set to P95 under 2 s and the actual P95 came in at 1843 ms — passed.
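Evaluating such a row reduces to comparing the aggregate value against the bound. A minimal sketch, assuming a simple threshold shape (the field names are illustrative):

```typescript
// Minimal threshold check: pass when the actual aggregate value
// satisfies the comparison. The shape is an assumption.
interface Threshold {
  metric: string;           // e.g. "p95_response_time"
  op: "<" | "<=" | ">" | ">=";
  bound: number;
  enabled: boolean;
}

function evaluate(t: Threshold, actual: number): boolean | null {
  if (!t.enabled) return null; // disabled thresholds are skipped entirely
  switch (t.op) {
    case "<":  return actual < t.bound;
    case "<=": return actual <= t.bound;
    case ">":  return actual > t.bound;
    case ">=": return actual >= t.bound;
  }
}
```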

Run History

When the secondary sidebar is in Results mode, it displays the full run history for the active spec — each card shows the run's test type, status, timestamp, and summary metrics. Clicking a card loads that run into the Results view, so you can scroll back through every past execution. This is also the picker the Compare view uses to select runs.