---
sidebar_position: 5
---

> ## Documentation Index
> Fetch the complete documentation index at: https://docs.lium.io/llms.txt
> Use this file to discover all available pages before exploring further.

# GPU Splitting

GPU splitting lets a single node serve multiple customers at once — each renting an integer subset of GPUs. It increases utilization and lets you price flexibly.

## Host prerequisite: Docker storage

GPU splitting requires Docker on `overlay2` + XFS with `pquota` and `ftype=1`. Run the CLI workflow in [Docker Storage Setup](./docker-storage.md) before enabling the feature in the portal — the `Edit` button only unlocks once preflight passes. (This same storage layout will be required for every node in the near future, so the work is not GPU-splitting specific.)
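As a rough illustration, the preflight boils down to checks like the following. This is a minimal sketch, not the portal's actual implementation: the function and parameter names are ours, and the real preflight may inspect more than these three conditions.

```python
def storage_preflight(driver: str, mount_opts: str, xfs_params: str) -> bool:
    """Illustrative storage preflight (assumption: the real portal
    check may be stricter than these three conditions).

    driver      -- output of `docker info --format '{{.Driver}}'`
    mount_opts  -- mount options of Docker's data-root filesystem,
                   e.g. from `findmnt -no OPTIONS /var/lib/docker`
    xfs_params  -- output of `xfs_info` for that filesystem
    """
    opts = mount_opts.split(",")
    return (
        # Docker must use the overlay2 storage driver
        driver == "overlay2"
        # XFS project quotas show up under either spelling
        and ("pquota" in opts or "prjquota" in opts)
        # overlay2 on XFS requires the filesystem to be made with ftype=1
        and "ftype=1" in xfs_params
    )

# Values as they might appear on a correctly prepared host
print(storage_preflight(
    "overlay2",
    "rw,relatime,attr2,inode64,prjquota",
    "naming   =version 2   bsize=4096   ascii-ci=0, ftype=1",
))  # True
```

If any of the three conditions fails, redo the corresponding step of the Docker Storage Setup guide before retrying in the portal.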

## Enable GPU splitting in the portal

After the host meets the prerequisites:

1. Open the node's details page in the [Provider Portal](https://provider.lium.io).
2. Find the **GPU Splitting** panel — an **Edit** button appears once preflight passes.
3. Set the minimum GPU count per rental (must be at least `1`).

Customers can then rent any integer GPU count between your minimum and the node's total. To disable splitting, clear the minimum; this is only allowed while no pod holds a partial allocation on the node.
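To make the rental rule concrete, here is a small sketch of which GPU counts a single customer can request. The function name is illustrative, not part of any Lium API; `None` models splitting being disabled.

```python
from typing import Optional

def allowed_rental_sizes(total_gpus: int, min_per_rental: Optional[int]) -> list[int]:
    """GPU counts one customer can rent from a node (illustrative helper,
    not a Lium API). min_per_rental=None models splitting disabled."""
    if min_per_rental is None:
        # Splitting disabled: only the whole node can be rented.
        return [total_gpus]
    # Splitting enabled: any integer count from the minimum up to the total.
    return list(range(min_per_rental, total_gpus + 1))

print(allowed_rental_sizes(8, None))  # [8]
print(allowed_rental_sizes(8, 2))     # [2, 3, 4, 5, 6, 7, 8]
```

Note that the node itself stays whole: each rental gets an integer number of physical GPUs, so the counts above always sum to at most the node's total.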

## When splitting actually helps

GPU splitting does **not** unlock any extra validator score, emission, or platform-side incentive — payouts are unchanged whether splitting is enabled or not. The upside is purely demand-side: a meaningful share of customers want a single GPU rather than the full 8× node, and an 8-GPU node with no splitting is invisible to that segment. Enabling splitting widens the renter pool that can match your node, which typically raises utilization on multi-GPU hosts. On single-GPU nodes the feature has no effect.
