# Lium SDK
The lium.io package ships both the CLI and a Python SDK for managing GPU pods programmatically. Install it once and use whichever interface fits the job.
## Full SDK Reference

The complete API reference, guides, and advanced examples are hosted on Read the Docs.
## Installation

```shell
pip install lium.io
```
## Two Entry Points
The SDK exposes two ways to run work on Lium GPUs:
- `@lium.machine` decorator — annotate a Python function and offload it to a GPU pod. Best for quickly running isolated workloads.
- `Lium()` client — a direct client for long-lived orchestration code that manages pod lifecycles.
### `@lium.machine` decorator
Annotate a function with the machine type and dependencies, then call it like a normal Python function. The SDK handles provisioning, code upload, execution, and teardown.
```python
import lium

@lium.machine(machine="A100", requirements=["torch", "transformers", "accelerate"])
def infer(prompt: str) -> str:
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
    model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2", device_map="cuda")
    tokens = tokenizer(prompt, return_tensors="pt").to("cuda")
    out = model.generate(**tokens, max_new_tokens=50)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(infer("Who discovered penicillin?"))
```
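To make the provisioning → upload → execution → teardown lifecycle concrete, here is a toy, local-only decorator that mimics its shape. It is purely illustrative — `toy_machine` is not part of the SDK, and the real decorator runs your function on a remote pod rather than in-process:

```python
import functools

def toy_machine(machine, requirements):
    """Illustrative stand-in for @lium.machine: runs the function locally
    while recording the lifecycle steps the real decorator would manage."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            steps = wrapper.steps = []
            steps.append(f"provision {machine} with {requirements}")
            try:
                result = fn(*args, **kwargs)   # the real SDK executes this on the pod
                steps.append("execute")
                return result
            finally:
                steps.append("teardown")       # pod is torn down even if fn raises
        return wrapper
    return decorator

@toy_machine(machine="A100", requirements=["torch"])
def echo(x):
    return x

print(echo("hi"))     # → hi
print(echo.steps)     # provision, execute, teardown — in that order
```

The key property this sketch shares with the real decorator is that teardown always runs, so a failing workload does not leave a pod behind.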
### `Lium()` client
The client mirrors the CLI's pod lifecycle — list executors, bring a pod up, wait until it's ready, execute commands, and tear it down.
```python
from lium.sdk import Lium

lium = Lium()
executor = lium.ls(gpu_type="A100")[0]
pod = lium.up(executor=executor.id, name="demo")
ready = lium.wait_ready(pod, timeout=600)
print(lium.exec(ready, command="nvidia-smi")["stdout"])
lium.down(ready)
```
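In orchestration code you usually want teardown guaranteed even when a command fails. A minimal sketch of that pattern, using only the `up`, `wait_ready`, and `down` calls shown above — `pod_session` itself is a hypothetical helper, not part of the SDK:

```python
from contextlib import contextmanager

@contextmanager
def pod_session(client, executor_id, name, timeout=600):
    """Hypothetical helper: bring a pod up, yield it ready, and
    guarantee client.down() runs even if the body raises."""
    pod = client.up(executor=executor_id, name=name)
    ready = client.wait_ready(pod, timeout=timeout)
    try:
        yield ready
    finally:
        client.down(ready)

# Usage against the real client would look like:
#   from lium.sdk import Lium
#   client = Lium()
#   executor = client.ls(gpu_type="A100")[0]
#   with pod_session(client, executor.id, "demo") as pod:
#       print(client.exec(pod, command="nvidia-smi")["stdout"])
```

The `try`/`finally` mirrors what you would otherwise write by hand around `lium.down(ready)`.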
## Next Steps
Ready to go deeper? The Lium SDK Documentation covers the full API reference, authentication, volumes, backups, and advanced usage patterns.
## Related

- CLI Installation — install the `lium.io` package
- CLI Commands — command-line reference
- CLI Quickstart — get started with the CLI