---
sidebar_position: 2
---

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.lium.io/llms.txt
> Use this file to discover all available pages before exploring further.

# Node Quickstart

## Get started in 5 min

Three commands and a portal click bring a GPU machine online as a Lium node.

### Requirements

- **OS**: Ubuntu 22.04 (recommended)
- **GPU**: NVIDIA driver installed (`nvidia-smi` works)
- **Docker**: installed and running
- **Network**: 50 Mbps download, public IP, ports open (default `8080` HTTP and `2200` SSH)
- **Hardware**: ≥ 8 GB RAM, ≥ 100 GB free storage
- **Provider**: already registered on Subnet 51 — see [Provider Quickstart](../quickstart); you'll need its hotkey SS58 address.
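
The checklist above can be sanity-checked before installing anything. A minimal preflight sketch (the thresholds mirror the list; only standard Linux tools are used):

```shell
# Preflight sketch: checks the requirements listed above.
for cmd in nvidia-smi docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK       $cmd found"
  else
    echo "MISSING  $cmd"
  fi
done

# RAM (need >= 8 GB) and free disk on / (need >= 100 GB)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "RAM: ${ram_gb} GB (need >= 8)"
echo "Free disk on /: ${disk_gb} GB (need >= 100)"
```

Network speed and port reachability still need checking from outside the host, so this script only covers the local half.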

### Optional (strongly recommended)

- **[Docker Storage Setup](./docker-storage.md)** — provision XFS-backed Docker storage with project quotas so each rented pod is capped at the disk size advertised in the portal.

  :::danger Why this matters
  Without XFS + `pquota`, a renter can write across the **entire host disk** instead of the slice they paid for. The validator detects this as a misbehaving node and your **score drops to 0**, which means **no emission and no rentals** until you reprovision storage.
  :::
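
To spot-check whether a host is already provisioned correctly, inspect the filesystem backing Docker's data root (a sketch; seeing `xfs` plus `prjquota` in the mount options is what project quotas require):

```shell
# Resolve Docker's data root, then show its backing filesystem and mount options.
data_root=$(docker info --format '{{.DockerRootDir}}' 2>/dev/null) || data_root=""
data_root=${data_root:-/var/lib/docker}
[ -d "$data_root" ] || data_root=/   # fall back so findmnt has a real path
findmnt -nT "$data_root" -o FSTYPE,OPTIONS
# Want: xfs  rw,...,prjquota  -- anything else means pods are not disk-capped.
```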

### Step 1 — Install Sysbox (required)

[Sysbox](./sysbox.md) is **required**. Validators reject any node missing the `sysbox-runc` runtime — without it the node earns no emission and cannot be rented. Install it before anything else. The `lium-io` repo ships an installer that also handles the NVIDIA Container Toolkit:

```bash
curl -fsSL https://raw.githubusercontent.com/Datura-ai/lium-io/main/neurons/executor/nvidia_docker_sysbox_setup.sh | sudo bash
```

Verify with the same command our validator uses:

```bash
docker run --rm --runtime=sysbox-runc --gpus all daturaai/compute-subnet-executor:latest nvidia-smi
```

If `nvidia-smi` output appears — you're good. Running Docker ≥ 29.2.0, or hitting a `permission denied` error? See the [Sysbox page](./sysbox.md#docker--2920-cdi-compatibility-fix) for the CDI fix.
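
Separately from the GPU test, you can confirm Docker has the runtime registered at all (a quick sketch, not the validator's full probe):

```shell
# List Docker's registered runtimes; sysbox-runc must be among them.
runtimes=$(docker info --format '{{json .Runtimes}}' 2>/dev/null || echo '{}')
case "$runtimes" in
  *sysbox-runc*) echo "sysbox-runc registered" ;;
  *)             echo "sysbox-runc missing -- rerun the installer above" ;;
esac
```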

### Step 2 — Run the node in one script

```bash
curl -fsSL https://lium.io/mine.sh | bash -s -- -k <your_provider_hotkey_ss58>
```

This installs the [`lium`](https://github.com/Datura-ai/lium) CLI, then runs `lium mine`, which:

| Step | Action |
|------|--------|
| 1 | Clone or update the `lium-io` repo into `./compute-subnet` |
| 2 | Run `scripts/install_executor_on_ubuntu.sh` (Docker + NVIDIA Container Toolkit checks) |
| 3 | Render `neurons/executor/.env` from the template (ports + hotkey) |
| 4 | `docker compose up -d` and wait for the container to report `healthy` |
| 5 | Run the validator's own check via `daturaai/lium-validator:latest` |
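
Step 4's health wait can be reproduced by hand when debugging, e.g. with a small polling loop (a sketch; the container name is an assumption — check `docker compose ps` for the real one):

```shell
# Poll a container's health status until it reports "healthy" or we give up.
wait_healthy() {
  local name=$1 tries=${2:-30} status
  for _ in $(seq "$tries"); do
    status=$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null) || status=absent
    [ "$status" = healthy ] && { echo "$name is healthy"; return 0; }
    sleep 2
  done
  echo "$name never became healthy"
  return 1
}
# Usage: wait_healthy <container-name>   (hypothetical name below)
# wait_healthy executor
```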

You'll be prompted for ports interactively. Defaults work for most providers:

- **Service port** — node HTTP API (default `8080`)
- **Node SSH port** — used by validators to SSH into the container (default `2200`)
- **Public SSH port** — only if behind NAT and forwarding a different port
- **Renting port range** — optional, e.g. `2000-2005`
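
Your answers are rendered into `neurons/executor/.env` (step 3 of the table above). A hypothetical rendering — the variable names here are illustrative assumptions, not the template's actual keys:

```ini
# neurons/executor/.env -- illustrative only; variable names are assumptions
SERVICE_PORT=8080            # node HTTP API
SSH_PORT=2200                # validators SSH into the container here
PUBLIC_SSH_PORT=2200         # only differs when NAT-forwarding another port
RENTING_PORT_RANGE=2000-2005 # optional
MINER_HOTKEY_SS58_ADDRESS=<your_provider_hotkey_ss58>
```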

To skip prompts and accept all defaults, append `--auto`:

```bash
curl -fsSL https://lium.io/mine.sh | bash -s -- -k <your_provider_hotkey_ss58> --auto
```

When the script finishes, note the printed **GPU type, IP address, port, and GPU count** — you'll paste them into the portal next.
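
Before heading to the portal, it's worth confirming the service port answers from outside the host. A minimal reachability probe (a sketch; substitute the IP and port the script printed):

```shell
NODE_IP=${NODE_IP:-127.0.0.1}   # replace with the IP printed in Step 2
PORT=${PORT:-8080}
# -sf: silent, fail on HTTP errors; --max-time keeps the probe snappy.
if curl -sf --max-time 5 "http://${NODE_IP}:${PORT}/" >/dev/null 2>&1; then
  echo "port ${PORT} reachable"
else
  echo "port ${PORT} not reachable -- check firewall / NAT forwarding"
fi
```

Run it from a machine outside your network, not from the node itself, or you may only be testing the loopback path.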

### Step 3 — Register the node in the Provider Portal

1. Sign in at [provider.lium.io](https://provider.lium.io) — if you haven't yet, follow [Provider Quickstart → Step 2](../quickstart#step-2--sign-in-to-the-provider-portal).
2. In the sidebar, click **Add Node**.
3. Fill the form with the values printed at the end of Step 2:
   - **GPU Type** (e.g. `RTX 4090`)
   - **GPU Count**
   - **IP Address**
   - **Port** (default `8080`)
   - **Price per GPU (USD/hr)**
4. Click **Add Node** to submit.

After submitting, the node will appear in your portal — confirm it shows as active. Detailed walk-through: [Managing Nodes → Adding New Nodes](../portal/managing-nodes#adding-new-nodes).

## What's next

1. [Docker Storage Setup](./docker-storage.md) — provision XFS-backed Docker storage for stable rentals.
2. [GPU splitting](./gpu-splitting.md) — let multiple customers rent partial GPUs from one node.
3. [CVM (Confidential VM)](./cvm.md) — TDX-isolated nodes for sensitive workloads.
4. [Manage prices and notice periods](../portal/managing-nodes.md) in the Provider Portal.

## Other useful `lium` commands

```bash
lium gpu-splitting check    # inspect host before enabling GPU splitting
lium gpu-splitting setup    # provision XFS-backed Docker storage non-interactively
lium gpu-splitting verify   # confirm the host meets the splitting requirements
```

See [GPU Splitting](./gpu-splitting.md) for the full workflow.
