# Module 06: Cloud Integration (Labs 08-09)

JSON Output and k6 Cloud Run
Navigate: [All Slides](../index.html) | [Prev: Local Observability](../05_Local_Observability/index.html) | [Next: Synthetic Basics](../07_Synthetic_Monitoring_Basics/index.html)
## What You'll Learn

- How to write k6 metric data to JSON files with `--out json`
- The structure of k6 JSON output records
- How to extract specific metrics using `jq`
- How to use `handleSummary` for custom report generation
- The difference between `k6 run` (local) and `k6 cloud run`
- How to navigate the k6 Cloud results UI
- What cloud execution gives you over local runs
## Part 1: JSON Output

Structured Data for Post-Processing
## Run k6 with JSON Output

```bash
k6 run --out json=results.json scripts/solutions/lab-02-solution.js
```

This writes every metric data point to `results.json` while the test runs.

Check the file size:

```bash
wc -l results.json
# Output: 4823 results.json
```

Even a short test can produce thousands of lines.
## JSON Record Structure

```json
{
  "type": "Point",
  "metric": "http_req_duration",
  "data": {
    "time": "2024-01-15T10:23:45.123456789Z",
    "value": 142.35,
    "tags": {
      "method": "GET",
      "name": "http://localhost:3000/",
      "status": "200",
      "url": "http://localhost:3000/"
    }
  }
}
```

Each line of the output file is one complete metric sample like this.
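Because each line is a standalone JSON object, the output can be parsed line by line in any language, not just with `jq`. A minimal Node.js sketch, with one sample record inlined (taken from the slide above) so it runs standalone — in practice you would stream lines from `results.json`:

```javascript
// Parse one line of k6's NDJSON output and pull out the interesting fields.
// A single sample record is inlined here so the sketch is self-contained;
// a real script would read results.json line by line (e.g. with readline).
const line =
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2024-01-15T10:23:45.123456789Z","value":142.35,"tags":{"method":"GET","status":"200","url":"http://localhost:3000/"}}}';

const record = JSON.parse(line);

// Only Point records carry metric samples; filter on the metric name.
if (record.type === 'Point' && record.metric === 'http_req_duration') {
  console.log(
    `${record.data.time}  ${record.data.value} ms  (status ${record.data.tags.status})`
  );
}
```

The same `type`/`metric`/`data` access pattern applies to every metric k6 emits, not just `http_req_duration`.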
## Key JSON Fields

| Field | Description |
|-------|-------------|
| `type` | Always `"Point"` for metric data |
| `metric` | k6 metric name (e.g., `http_req_duration`) |
| `data.time` | ISO 8601 timestamp of the sample |
| `data.value` | Numeric value (milliseconds for durations) |
| `data.tags` | Labels attached to this sample |
## Extract Metrics with jq

Filter for `http_req_duration` only:

```bash
cat results.json | jq 'select(.metric=="http_req_duration")'
```

Extract values and timestamps:

```bash
cat results.json | jq -c 'select(.metric=="http_req_duration") | {t: .data.time, ms: .data.value}'
```

Find the slowest request:

```bash
cat results.json | jq 'select(.metric=="http_req_duration") | .data.value' | sort -n | tail -1
```
## Calculate Average Response Time

```bash
cat results.json | jq -s '
  [.[] | select(.metric=="http_req_duration") | .data.value] | add/length
'
```

This:

1. Slurps all lines into an array (`-s`)
2. Filters for `http_req_duration` metrics
3. Extracts all values
4. Calculates the average with `add/length`
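The same aggregation works without `jq`. A minimal Node.js sketch of the filter-then-average pipeline, using three fabricated sample lines in place of a real `results.json`:

```javascript
// Aggregate http_req_duration samples from k6 NDJSON lines.
// The fabricated lines below stand in for the contents of results.json.
const lines = [
  '{"type":"Point","metric":"http_req_duration","data":{"value":100.0}}',
  '{"type":"Point","metric":"http_req_duration","data":{"value":200.0}}',
  '{"type":"Point","metric":"http_reqs","data":{"value":1}}', // different metric: ignored
];

const durations = lines
  .map((l) => JSON.parse(l))
  .filter((r) => r.type === 'Point' && r.metric === 'http_req_duration')
  .map((r) => r.data.value);

const avg = durations.reduce((a, b) => a + b, 0) / durations.length;
const max = Math.max(...durations);

console.log(`avg: ${avg} ms, max: ${max} ms`); // avg: 150 ms, max: 200 ms
```

Filter, extract, reduce: the steps map one-to-one onto the `jq` pipeline above.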
## handleSummary: Custom Reports

```javascript
export function handleSummary(data) {
  const p95 = data.metrics['http_req_duration'].values['p(95)'];
  const rps = data.metrics['http_reqs'].values['rate'];
  const errorRate = data.metrics['http_req_failed'].values['rate'];

  const md =
    `# Test Summary\n\n` +
    `- **p95 latency:** ${p95.toFixed(2)} ms\n` +
    `- **Requests/s:** ${rps.toFixed(2)}\n` +
    `- **Error rate:** ${(errorRate * 100).toFixed(2)}%\n`;

  return {
    stdout: md,
    'summary.md': md,
  };
}
```
## handleSummary Output Destinations

```javascript
return {
  stdout: md,           // Print to terminal
  stderr: errors,       // Print to stderr
  'summary.md': md,     // Write to file
  'summary.json': json, // Write to file
};
```

Use cases:

- CI/CD artifacts (save `summary.json`)
- Slack webhooks (POST the summary to an endpoint)
- Custom dashboards (transform to HTML)
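The report-building logic can be exercised outside k6 by feeding it a summary-shaped object. A sketch using a hand-built mock of the relevant slice of the `data` argument — the metric names and `values` keys match the `handleSummary` example above, but the numbers are invented for illustration:

```javascript
// Build the Markdown report from a summary-shaped object.
// mockData imitates the slice of k6's handleSummary `data` argument used
// in the example above; the numeric values are invented.
function buildMarkdown(data) {
  const p95 = data.metrics['http_req_duration'].values['p(95)'];
  const rps = data.metrics['http_reqs'].values['rate'];
  const errorRate = data.metrics['http_req_failed'].values['rate'];
  return (
    `# Test Summary\n\n` +
    `- **p95 latency:** ${p95.toFixed(2)} ms\n` +
    `- **Requests/s:** ${rps.toFixed(2)}\n` +
    `- **Error rate:** ${(errorRate * 100).toFixed(2)}%\n`
  );
}

const mockData = {
  metrics: {
    http_req_duration: { values: { 'p(95)': 340.5 } },
    http_reqs: { values: { rate: 39.8 } },
    http_req_failed: { values: { rate: 0.025 } },
  },
};

console.log(buildMarkdown(mockData));
```

Keeping the formatting in a plain function like this makes the report easy to unit-test before wiring it into `handleSummary`.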
## Part 2: k6 Cloud Run

Persistent, Shareable Test Results
## Local vs. Cloud Execution

**k6 run (local):**

- Executes on your workstation
- Results in terminal only
- Data lost when the session ends
- No shareable URL

**k6 cloud run:**

- Executes in Grafana Cloud
- Results stored persistently
- Shareable URL for team collaboration
- Automated performance insights
## Verify Your Cloud Token

```bash
echo $K6_CLOUD_TOKEN
```

You should see a long alphanumeric string. If it's empty, retrieve the token from:

**grafana.com → Your Stack → k6 → Settings → API Token**

Then set it:

```bash
export K6_CLOUD_TOKEN=<your-token>
```
## Run Locally First

```bash
k6 run scripts/starters/lab-09-starter.js
```

Note the characteristics:

- Results appear only in this terminal
- The summary disappears when you close the window
- No URL to share
- Persisting the data requires your own infrastructure
## Run in the Cloud

```bash
k6 cloud run scripts/starters/lab-09-starter.js
```

The output shows:

```
  execution: cloud
     script: scripts/starters/lab-09-starter.js
     output: https://app.grafana.com/a/k6-app/runs/12345678

View your test results here: https://app.grafana.com/a/k6-app/runs/12345678
```

A **results URL** is printed immediately.
## Cloud Results UI: Run Overview

The top of the page shows:

- Test status badge (Finished / Running / Failed)
- Duration, VUs, total requests, and error rate at a glance
- Threshold results with pass/fail status

Green badge = all thresholds passed. Red badge = one or more thresholds failed.
## Performance Insights Tab

Grafana's automated analysis of your run:

- Flags high error rates
- Detects latency spikes
- Identifies throughput drops
- Suggests potential issues

Think of this as a "first reading" before you dig into the raw metrics.
## HTTP Tab: Endpoint Breakdown

| URL | Requests | Avg | p95 | Max | Failures |
|-----|----------|-----|-----|-----|----------|
| GET / | 120 | 4ms | 12ms | 45ms | 0.00% |
| GET /api/products | 120 | 6ms | 18ms | 67ms | 0.00% |
| POST /login | 120 | 145ms | 340ms | 890ms | 2.50% |

Click any row to see a full time-series chart for that endpoint.
## Checks Tab

Shows pass/fail counts for every `check()` call in your script:

```
✓ status is 200             120 passed, 0 failed
✓ response time < 500ms     118 passed, 2 failed
✓ body contains products    120 passed, 0 failed
```

This tab is empty if the script has no checks.
## Script Tab

The exact script that was run, stored alongside the results. Invaluable six months later when you can't remember:

- What URL you tested
- What thresholds you set
- What user flow you simulated
## Run the Solution for a Richer Cloud Run

```bash
k6 cloud run scripts/solutions/lab-09-solution.js
```

The solution adds:

- A multi-stage load profile (ramp-up / sustain / ramp-down)
- Checks for response validation
- Thresholds for pass/fail criteria

Open the results URL and watch the VU count change in real time.
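A multi-stage profile with thresholds looks roughly like this. This is a sketch of a k6 `options` object, not the actual contents of `lab-09-solution.js` — the durations, VU targets, and threshold values are illustrative:

```javascript
// A k6-style options object: ramp-up, sustain, ramp-down, plus thresholds.
// Durations and targets are illustrative, not copied from the lab solution.
// In an actual k6 script, declare this as: export const options = { ... };
const options = {
  stages: [
    { duration: '30s', target: 10 }, // ramp up to 10 VUs
    { duration: '1m', target: 10 },  // sustain 10 VUs
    { duration: '30s', target: 0 },  // ramp down to 0
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // fail the run if p95 latency >= 500 ms
    http_req_failed: ['rate<0.01'],   // fail if more than 1% of requests fail
  },
};

console.log(
  `${options.stages.length} stages, ${Object.keys(options.thresholds).length} thresholds`
);
```

The thresholds are what drive the pass/fail badge on the cloud run overview page.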
## What Cloud Gives You Over Local

| Capability | Local (`k6 run`) | Cloud (`k6 cloud run`) |
|---|---|---|
| Results storage | Terminal only | Persistent in Grafana Cloud |
| Shareable URL | No | Yes |
| Performance insights | No | Yes |
| Multi-location execution | No | Yes (requires plan) |
| Results dashboard | DIY (InfluxDB + Grafana) | Built-in |
| Historical comparison | Manual | Built-in trend view |
## Key Takeaways

- `--out json=<file>` captures every raw metric sample as line-delimited JSON
- `jq` is a powerful tool for slicing and aggregating JSON output
- `handleSummary` produces custom reports (Markdown, JSON, HTML) from summary data
- `k6 cloud run` is a drop-in replacement for `k6 run` with persistent, shareable results
- The cloud results UI provides endpoint breakdowns, automated insights, and historical trends
- Use local execution for development, and the cloud for team collaboration and CI/CD integration
# Track 2 Complete!

You've mastered k6 load testing
Navigate: [All Slides](../index.html) | [Prev: Local Observability](../05_Local_Observability/index.html) | [Next: Synthetic Basics](../07_Synthetic_Monitoring_Basics/index.html)