Quick Start Guide

This guide walks you through creating, managing, and removing your first pod with the Lium CLI.

Prerequisites

Before starting, make sure you have installed and configured the Lium CLI. See the Installation Guide for detailed instructions.

Step 1: Browse Available GPUs

List available GPU executors:

lium ls

This shows a table with:

  • Executor index numbers
  • GPU types (H100, A100, RTX 4090, etc.)
  • Pricing per hour
  • Available memory and storage
  • Pareto-optimal choices marked with ★

Filter by GPU type:

lium ls H100  # Show only H100 GPUs
lium ls A100  # Show only A100 GPUs

Step 2: Create Your First Pod

Create a pod using an executor from the list:

lium up 1  # Uses executor #1 from the list

You'll be prompted to select a template from the available options. You can filter the templates during selection.

Step 3: Check Pod Status

View your active pods:

lium ps

This displays:

  • Pod names and IDs
  • Status (running, stopped, etc.)
  • Uptime and costs
  • SSH connection details

Step 4: Connect to Your Pod

SSH Access

Connect via SSH:

lium ssh my-pod

Or use the pod number from lium ps:

lium ssh 1

Execute Commands

Run commands without SSH:

lium exec my-pod "nvidia-smi"
lium exec my-pod "python --version"
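
The same command can also be repeated across several pods with a small wrapper script. The sketch below is illustrative, not part of the Lium CLI: the run_on_pods helper and the DRY_RUN switch are assumptions, and only lium exec itself comes from this guide.

```shell
# run_on_pods CMD POD... : run CMD on each pod via `lium exec`.
# Set DRY_RUN=1 to print the commands instead of executing them.
run_on_pods() {
  cmd=$1; shift
  for pod in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "lium exec $pod \"$cmd\""
    else
      lium exec "$pod" "$cmd"
    fi
  done
}

# Dry run: show what would be executed on two (hypothetical) pods.
DRY_RUN=1 run_on_pods "nvidia-smi" ml-training data-prep
# → lium exec ml-training "nvidia-smi"
# → lium exec data-prep "nvidia-smi"
```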

Step 5: Transfer Files

Copy Files to Pod

Copy a single file:

lium scp my-pod ./script.py

Copy to specific location:

lium scp my-pod ./data.csv /root/datasets/

Copy to multiple pods:

lium scp 1,2,3 ./model.py
lium scp all ./config.json  # All pods
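
Copying and executing combine naturally: push a script to a pod, then run it there. A minimal sketch, assuming lium scp places the file in the directory where lium exec commands run (the helper name is illustrative):

```shell
# deploy_and_run POD SCRIPT : copy SCRIPT to POD, then run it with python.
# Assumes `lium scp` drops the file where `lium exec` commands execute.
deploy_and_run() {
  pod=$1; script=$2
  lium scp "$pod" "$script"
  lium exec "$pod" "python $(basename "$script")"
}

# Example: deploy_and_run my-pod ./train.py
```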

Sync Directories

Synchronize entire directories:

lium rsync my-pod ./project
lium rsync my-pod ./data /root/datasets/

Step 6: Stop Your Pod

Remove a pod when done:

lium rm my-pod

Remove multiple pods:

lium rm pod1 pod2 pod3
lium rm all  # Remove all pods

Complete Example Workflow

Here's a complete machine learning training workflow:

# 1. List available GPUs and choose one
lium ls A100

# 2. Create a pod (you'll be prompted to select a template)
lium up 1 --name ml-training

# 3. Copy your training code
lium scp ml-training ./train.py
lium rsync ml-training ./data /root/datasets/

# 4. Install dependencies
lium exec ml-training "pip install -r requirements.txt"

# 5. Start training
lium exec ml-training "python train.py --epochs 100"

# 6. Monitor progress
lium ssh ml-training
# Inside pod: tail -f training.log

# 7. Copy results back (from local machine)
scp root@<pod-ip>:/root/models/best_model.pt ./

# 8. Clean up
lium rm ml-training
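
The workflow above can also be wrapped in a single script. The sketch below is illustrative (the pod name, executor index, and file paths are assumptions); the shell trap ensures the pod is removed, and billing stops, even if a step fails partway through:

```shell
# run_training POD : create a pod, push code, train, and always clean up.
# Executor #1 and the file paths are placeholders from the example above.
run_training() {
  pod=$1
  trap 'lium rm "$pod"' EXIT            # remove the pod even on failure
  lium up 1 --name "$pod"               # still prompts for a template
  lium scp "$pod" ./train.py
  lium exec "$pod" "pip install -r requirements.txt"
  lium exec "$pod" "python train.py --epochs 100"
}

# Example: run_training ml-training
```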

Using Templates

List available templates:

lium templates

Search for specific templates:

lium templates pytorch
lium templates tensorflow

Create pod and select template:

lium up 1
# You'll be prompted to select from available templates

Cost Management

Monitor Spending

Check current costs:

lium ps  # Shows hourly rate and total spent

Fund Your Account

Add funds from Bittensor wallet:

lium fund  # Interactive mode
lium fund -w my-wallet -a 10.0  # Fund 10 TAO

Tips and Best Practices

1. Use Pareto-Optimal Executors

Look for the ★ marker in lium ls output; these executors offer the best available price/performance trade-off.

2. Name Your Pods

Use descriptive names for easier management:

lium up 1 --name experiment-bert-v2

3. Batch Operations

Copy files to multiple pods efficiently:

lium scp all ./updated_config.json

4. Monitor Resources

Check GPU utilization:

lium exec my-pod "nvidia-smi"
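
For longer runs it can help to keep a local log of GPU utilization over time. A small sketch (the helper name, timestamp format, and log path are assumptions; only lium exec comes from this guide):

```shell
# gpu_snapshot POD LOGFILE : append a timestamped nvidia-smi snapshot.
gpu_snapshot() {
  pod=$1; log=$2
  echo "=== $(date -u '+%Y-%m-%dT%H:%M:%SZ') ===" >> "$log"
  lium exec "$pod" "nvidia-smi" >> "$log"
}

# Example: gpu_snapshot my-pod gpu.log   (run periodically, e.g. with watch or cron)
```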

5. Clean Up

Always remove pods when done to avoid charges:

lium rm all

Next Steps