---
sidebar_position: 7
---

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.lium.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Grafana Dashboards

Lium runs a Grafana instance at **[grafana.lium.io](https://grafana.lium.io/dashboards/f/cfh8umv50c9oga/subnet-51)** that exposes the validator-side telemetry behind subnet 51 — the same data that drives your score, your emission, and your rental activity. Use it for live debugging when the [Provider Portal](./portal/overview.md) doesn't go deep enough.

## Access

- **Direct:** open [grafana.lium.io](https://grafana.lium.io/dashboards/f/cfh8umv50c9oga/subnet-51) and sign in. All dashboards covered here live in the **Subnet 51** folder.
- **Embedded:** each node's detail page in the [Provider Portal](./portal/monitoring.md) has a **Grafana** tab that surfaces the relevant panels for that one node — no separate login.

Most per-provider views accept `miner_hotkey` and `executor_id` template variables in the top toolbar — set them once and every panel filters down to your own machines.
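
For deep links, Grafana also reads template variables from `var-<name>` query parameters, so a pre-filtered view can be bookmarked or scripted. A minimal sketch (the hotkey and executor values are placeholders; the dashboard UID is the Job Logs one from the table below):

```python
import webbrowser
from urllib.parse import urlencode

# Grafana maps `var-<name>` query parameters onto template variables.
# The hotkey and executor ID here are placeholders -- substitute your own.
params = urlencode({
    "var-miner_hotkey": "5F...yourHotkey",
    "var-executor_id": "your-executor-id",
})
webbrowser.open(f"https://grafana.lium.io/d/aejriu31349hcb/job-logs?{params}")
```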

## Dashboards

| Dashboard | What it answers |
|---|---|
| [**Dashboard**](https://grafana.lium.io/d/bejrh6ldw5lhcc/dashboard) | Subnet-wide health: total GPUs, revenue ($/hr), rented-vs-idle ratio, GPUs per provider / validator / node. Use to compare the network against your own footprint. |
| [**GPU Demand Analytics**](https://grafana.lium.io/d/gpu-demand-analytics-v1/gpu-demand-analytics) | Supply, demand, and utilization broken down by GPU model. Use before adding hardware to see which models are oversaturated. |
| [**GPU model rate**](https://grafana.lium.io/d/eenyghq29eqrke/gpu-model-rate) | Each GPU model's share of total subnet score over time. Use to see which models validators are currently rewarding. |
| [**Job Logs**](https://grafana.lium.io/d/aejriu31349hcb/job-logs) | Raw validator scoring logs per node. Use to find out *why* a specific node scored what it scored. |
| [**Penalty Events**](https://grafana.lium.io/d/b7b972f3-a083-415f-886d-ae5b12e2371b/penalty-events) | Every penalty applied to a provider: reason code, action, amount withheld, and the deploy/undeploy failure that triggered it. |
| [**Weights**](https://grafana.lium.io/d/cem1ol3fw0740e/weights) | Per-validator weight bars to each provider. Use to confirm a specific validator is actually weighting your hotkey. |

## Panels in each dashboard

### Dashboard

Network-wide overview. No template variables.

- **Total GPUs**, **Revenue ($/hr)**, **R / I** (rented/idle), **Celium Revenue ($/hr)** — top-line stats and trends.
- **GPUs per Type** — count per GPU model over time.
- **GPUs / Executors per Provider**, **per Validator** — distribution across operators.
- **GPUs Per Executor (GPU not changed / changed)** — flags executors whose GPU set has been swapped between runs.
- **Executors without GPU** — table of executors reporting zero GPUs.

### GPU Demand Analytics

Supply/demand by GPU model. Template variable: `gpu_type` (multi-select).

- **Total GPUs in Network**, **Currently Rented GPUs**, **Available GPUs**, **Overall Utilization** — current snapshot.
- **Rented GPUs**, **GPU Utilization Rate**, **Supply / Demand** — per-model time series.
- **Current Utilization by GPU Type** — table sortable by demand.
- **Current Demand** — bar chart of active demand per model.

### GPU model rate

- **Score Portion** — time series of each GPU model's share of total subnet score. Useful for spotting which model the validators are currently rewarding most.

### Job Logs

Three Loki-style log feeds from `prod_executors`, all keyed by provider hotkey and executor ID:

- **Job Logs** — every scoring run, including the score and uptime.
- **Job Error Logs (Score = 0 AND GPU > 0)** — runs where the node was alive but failed scoring (e.g., synthetic-job error, sysbox missing).
- **Machine scrape Error Logs (Score = 0 AND GPU = 0)** — runs where the validator couldn't even read the machine spec (unreachable, auth failure, no GPU detected).

If your node is earning nothing, look here first: the second panel tells you the node is up but failing tests; the third tells you it isn't reachable at all.
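
If you'd rather triage offline, the same two-way split can be applied to an export of the log rows. A minimal sketch, assuming a JSON-lines export with hypothetical `score`, `gpu_count`, `executor_id`, and `message` fields (adjust the names to whatever the actual export uses):

```python
import json

def classify(path: str) -> None:
    """Split score-zero runs the same way the two error panels do."""
    for line in open(path):
        row = json.loads(line)
        if row["score"] > 0:
            continue  # scored normally, nothing to triage
        if row["gpu_count"] > 0:
            # Second panel: node reachable but failing tests --
            # look for synthetic-job errors or a missing sysbox runtime.
            print("up but failing:", row["executor_id"], row.get("message"))
        else:
            # Third panel: validator couldn't read the machine spec --
            # check reachability, auth, and GPU visibility.
            print("unreachable:", row["executor_id"], row.get("message"))

classify("job-logs.jsonl")  # hypothetical export path
```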

### Penalty Events

Template variables: `miner_hotkey`, `executor_id`. Single table with one row per penalty:

| Column | Meaning |
|---|---|
| `event_time` | When the penalty fired |
| `executor_id`, `miner_hotkey` | Who got penalized |
| `reason_code` | Machine-readable reason |
| `action` | What was done (e.g., withhold) |
| `dry_run` | `true` = recorded but not enforced |
| `amount_withheld` | Reward withheld for this event |
| `failure_type` | Linked `deploy_failed` / `undeploy_failed` event from the past 24 hours, if any |
| `comment` | Free-text comment + extracted error message |
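
To total up what a run of penalties actually cost, one option is to export the table (panel menu → Inspect → Data → Download CSV) and aggregate it locally. A sketch assuming the column names above survive the export unchanged:

```python
import pandas as pd

df = pd.read_csv("penalty-events.csv", parse_dates=["event_time"])

# Dry-run rows are recorded but not enforced; CSV exports may
# serialize booleans as text, so compare as lowercase strings.
enforced = df[df["dry_run"].astype(str).str.lower() != "true"]

print("total withheld:", enforced["amount_withheld"].sum())
print(enforced.groupby("reason_code")["amount_withheld"]
              .sum().sort_values(ascending=False))
```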

### Weights

Template variable: `validator`. Single bar chart of the chosen validator's weight vector across all provider hotkeys. Use it to verify a given validator is actually weighting your hotkey, and at what magnitude relative to peers.
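
To cross-check the chart against the chain itself, one option is the `bittensor` Python SDK: sync the subnet 51 metagraph with weights included and read the validator's row. A sketch with placeholder hotkeys:

```python
import bittensor as bt

# lite=False also syncs the weight matrix, which the default lite
# metagraph omits.
mg = bt.metagraph(netuid=51, lite=False)

# Placeholder hotkeys -- substitute the validator's and your own.
vali_uid = mg.hotkeys.index("5F...validatorHotkey")
my_uid = mg.hotkeys.index("5F...yourHotkey")

# Row `vali_uid` of the weight matrix is that validator's weight
# vector across all UIDs, matching the bars in this dashboard.
print("weight on my hotkey:", float(mg.W[vali_uid][my_uid]))
```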

## Common questions → which dashboard

| Question | Open |
|---|---|
| "Why is my node scoring zero?" | **Job Logs** — filter by your hotkey, check the two `Score = 0` panels |
| "Were any rewards withheld from me?" | **Penalty Events** — filter by your hotkey |
| "Is this validator giving me weight?" | **Weights** — pick the validator |
| "Should I add more H100s or 4090s?" | **GPU Demand Analytics** + **GPU model rate** |
| "How does my fleet compare to the network?" | **Dashboard** — see GPUs/Executors per provider |

For the resulting TAO emission (which is downstream of these score signals), the [TaoMarketCap subnet 51 page](https://taomarketcap.com/subnets/51/miners) is the canonical source. The Grafana dashboards above are the *inputs* that produce that emission.
