---
sidebar_position: 3
---

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.lium.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Architecture

How the pieces of a Lium provider setup fit together. Read this once to build a mental model — then proceed to [Quickstart](./quickstart.md), [Self-hosted provider setup](./self-hosted-provider.md), or [Node Quickstart](./nodes/quickstart.md) depending on the path you've chosen.

## The two-component model

A Lium provider setup has two distinct components:

1. **Provider** — a lightweight CPU coordinator (no GPU). It signs with your registered subnet 51 hotkey, talks to validators, and pushes/pulls node configuration.
2. **Nodes** — one or more GPU machines that perform actual rental workloads. Validators probe each node and renters connect to it.

```mermaid
graph TD
    A[Provider<br/>CPU coordinator] --> B[Node 1<br/>GPU]
    A --> C[Node 2<br/>GPU]
    A --> D[Node N<br/>GPU]

    V[Validators] -.scoring.-> B
    V -.scoring.-> C
    V -.scoring.-> D

    P[Provider Portal<br/>provider.lium.io] --> A
    R[Renters] -.rentals.-> B
    R -.rentals.-> C
    R -.rentals.-> D

    style A fill:#1A75FF,stroke:#0066FF,stroke-width:2px,color:#fff
    style B fill:#161B22,stroke:#30363D,stroke-width:2px,color:#E6EDF3
    style C fill:#161B22,stroke:#30363D,stroke-width:2px,color:#E6EDF3
    style D fill:#161B22,stroke:#30363D,stroke-width:2px,color:#E6EDF3
```

A provider with no nodes earns nothing. Each node you bring online adds to your scoring surface and rental revenue.

## Where the provider runs — your choice

The CPU coordinator can run in two places:

| Option | Who runs it | Setup |
|---|---|---|
| **Self-hosted provider** | You, on a Linux box you provide | [Self-hosted provider setup](./self-hosted-provider.md) |
| **Lium.io Central Provider Server** | Lium operates it for you | Toggle on in your [Provider Portal profile](https://provider.lium.io/profile) |

The choice is a per-account toggle in the [Provider Portal profile page](https://provider.lium.io/profile). You can flip back and forth — see [Provider Configuration](./provider-configuration.md) for the trade-offs and the toggle flow.

You always run the nodes yourself, regardless of which path you pick for the coordinator.

## How validators interact with your nodes

Validators on subnet 51 do two things continuously:

- **Probe each node** for the required `sysbox-runc` runtime, hardware specs, and synthetic workload performance. Nodes missing Sysbox are rejected: they earn no emission and cannot be rented. Setup is in [Sysbox](./nodes/sysbox.md); a quick local check is sketched after this list.
- **Score each provider** based on those probes plus rental activity. Scores translate into TAO emission via Bittensor's standard subnet mechanism.
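
Assuming Docker and Sysbox are already installed per the [Sysbox](./nodes/sysbox.md) guide, a minimal local check that the runtime is registered before validators probe the node:

```bash
# Confirm Docker has the sysbox-runc runtime registered on a node host.
docker info --format '{{json .Runtimes}}' | grep -q sysbox-runc \
  && echo "sysbox-runc registered" \
  || echo "sysbox-runc missing: node will be rejected by validators"
```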

Rental income is separate from emission — renters pay you (in the platform's billing system) for actual GPU-hour usage. See [Provider Portal → Payments](./portal/payments.md) for the payout view.

<details>
<summary>**What goes on each component**</summary>

**On the provider host (CPU coordinator)**

- Bittensor `btcli` for hotkey operations
- The provider Docker container (from `lium-io/neurons/miners/`)
- The registered subnet 51 hotkey (only the hotkey — never the coldkey)
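
A quick sanity sketch for the hotkey-only layout, using the default Bittensor wallet paths; the wallet name `default` is illustrative, substitute your own:

```bash
# The provider host should hold the hotkey file only, never the coldkey.
ls ~/.bittensor/wallets/default/hotkeys/        # expect your registered hotkey
test -f ~/.bittensor/wallets/default/coldkey \
  && echo "WARNING: coldkey found on this host" \
  || echo "ok: no coldkey on this host"
```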

**On each node host (GPU)**

- NVIDIA driver + Container Toolkit
- [Sysbox runtime](./nodes/sysbox.md) (required)
- The node Docker container (from `lium-io/neurons/executor/`)
- Optionally: [XFS-backed Docker storage](./nodes/docker-storage.md) for [GPU splitting](./nodes/gpu-splitting.md), or [TDX-isolated CVM](./nodes/cvm.md) for confidential workloads
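
A minimal pre-flight for the GPU stack on a node host; the CUDA image tag is only an example:

```bash
# Driver visible on the host, and the Container Toolkit passing GPUs through.
nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```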

**Off-host (managed for you)**

- The [Provider Portal](./portal/overview.md) — node registration, pricing, monitoring, payouts
- The Bittensor network — subnet 51 emission, hotkey state
- Optionally, the Lium.io Central Provider Server if you opted in

</details>

## Supported GPUs

### What GPU models can I bring to this subnet?

The validator scores against a fixed allow-list of NVIDIA models; anything outside the list is rejected during the machine-spec check. Within the list, only some models are admitted to the **unrented pool** and earn the unrented-emission share; the rest are *operationally supported* (you can register and rent them out) but carry a weight of `0` in the unrented-pool reward.

The live list — with each model's reference rental price — is available from the API:

```bash
curl https://lium.io/api/machines
```

Listing prices are bounded relative to the reference price returned by this endpoint: the portal enforces a **0.5× floor** and a **3× ceiling**, and rejects anything outside that band with HTTP 400. For example, a model with a reference price of 2.00 can be listed anywhere from 1.00 to 6.00. The static list below is a snapshot; the API is authoritative. The validator source also documents each model's emission weight: see [`neurons/validators/src/services/const.py` → `GPU_MODEL_RATES`](https://github.com/Datura-ai/lium-io/blob/main/neurons/validators/src/services/const.py).
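
A small sketch of that bound check, with illustrative numbers rather than real reference prices:

```bash
# Check a candidate listing price against the portal's 0.5x-3x band.
# REF comes from https://lium.io/api/machines; PRICE is your listing.
REF=2.00; PRICE=6.50
awk -v ref="$REF" -v p="$PRICE" 'BEGIN {
  lo = ref * 0.5; hi = ref * 3
  if (p < lo || p > hi)
    printf "rejected (HTTP 400): %.2f outside [%.2f, %.2f]\n", p, lo, hi
  else
    printf "accepted: %.2f within [%.2f, %.2f]\n", p, lo, hi
}'
```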

#### Datacenter — Hopper / Blackwell

`B300 SXM6 AC`, `B200`, `H200`, `H200 NVL`, `H100 80GB HBM3`, `H100 NVL`, `H100 PCIe`, `H800 80GB HBM3`, `H800 NVL`, `H800 PCIe`.

#### Datacenter — Ampere & inference

`A100 80GB PCIe`, `A100-SXM4-80GB`, `A10 Tensor Core`, `T4 Tensor Core`, `Tesla V100 Tensor Core`, `Tesla P100`, `Tesla P40`, `Tesla M40`.

#### Workstation — Ada, Ampere, Blackwell

`RTX 6000 Ada`, `RTX 5880 Ada`, `RTX 5000 Ada`, `RTX PRO 6000 Blackwell` (Server / Workstation Editions), `RTX PRO 5000 Blackwell`, `RTX PRO 4000 Blackwell`, `RTX A6000`, `RTX A5000`, `RTX A4500`, `RTX A4000`, `RTX A2000`, `L40S`, `L40`, `L4`.

#### Workstation — older Quadro / TITAN

`Quadro RTX 8000`, `Quadro RTX 6000`, `Quadro RTX 5000`, `Quadro P4000`, `TITAN RTX`, `TITAN V`, `TITAN Xp`.

#### Consumer — RTX 50-series

`RTX 5090`, `RTX 5080`, `RTX 5070 Ti`, `RTX 5070`, `RTX 5060 Ti`, `RTX 5060`.

#### Consumer — RTX 40-series

`RTX 4090`, `RTX 4090 D`, `RTX 4080 SUPER`, `RTX 4080`, `RTX 4070 Ti SUPER`, `RTX 4070 Ti`, `RTX 4070 SUPER`, `RTX 4070`, `RTX 4060 Ti`, `RTX 4060`.

#### Consumer — RTX 30-series

`RTX 3090 Ti`, `RTX 3090`, `RTX 3080 Ti`, `RTX 3080`, `RTX 3070 Ti`, `RTX 3070`, `RTX 3060 Ti`, `RTX 3060`, `RTX 3060 Laptop`, `RTX 3050`.

#### Consumer — RTX 20-series & GTX 10/16-series

`RTX 2080 Ti`, `RTX 2080 SUPER`, `RTX 2070 SUPER`, `RTX 2060 SUPER`, `RTX 2060`, `GTX 1660 Ti`, `GTX 1660 SUPER`, `GTX 1660`, `GTX 1080 Ti`, `GTX 1080`, `GTX 1070 Ti`, `GTX 1070`, `GTX 1060`.

:::info Rented vs unrented pool eligibility
Every model on the list is eligible for **rental revenue** (the customer-paid stream). Only a subset earns the **unrented-emission** share: concretely, the models with a non-zero entry in `GPU_MODEL_RATES`. Models marked `0.0` in that dict (e.g. the RTX 5080, RTX 4080, A10, T4, RTX A2000, and every model older than the RTX 30-series) still mine and rent normally; they just accrue no unrented-pool weight while idle. For live per-model reference prices, query `https://lium.io/api/machines`.
:::
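
To inspect the rate table directly, one rough approach is to pull the file from GitHub; the raw URL below is derived from the link above and may drift if the repo layout changes:

```bash
# Print the GPU_MODEL_RATES dict straight from the validator source.
curl -s https://raw.githubusercontent.com/Datura-ai/lium-io/main/neurons/validators/src/services/const.py \
  | sed -n '/GPU_MODEL_RATES/,/^}/p'
```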

:::tip CVM-eligible GPUs
For [Confidential VM (CVM)](./nodes/cvm.md) nodes, only Hopper (H100, H200) and Blackwell (B200, GB200) are supported — consumer and workstation cards do not implement NVIDIA Confidential Computing.
:::

## Where to go next

- New provider: [Quickstart](./quickstart.md) (5 min, recommended) or [Self-hosted provider setup](./self-hosted-provider.md) for the do-it-yourself path
- Adding GPU nodes: [Node Quickstart](./nodes/quickstart.md)
- Day-2 operations: [Provider Portal](./portal/overview.md)
