AI Sparks

How to Build a Stable and Efficient QLoRA Tuning Pipeline Using Unsloth for Large Language Models

In this tutorial, we show how to fine-tune a large language model using Unsloth and QLoRA. We focus on building a stable, well-maintained pipeline that handles common Colab problems such as GPU detection failures, runtime crashes, and library incompatibilities. By carefully controlling the environment, model configuration, and training loop, we demonstrate how to reliably train an instruction-tuned model with limited resources while maintaining robust performance and fast iteration speed.

import os, sys, subprocess, gc, locale


locale.getpreferredencoding = lambda: "UTF-8"


def run(cmd):
   print("\n$ " + cmd, flush=True)
   p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
   for line in p.stdout:
       print(line, end="", flush=True)
   rc = p.wait()
   if rc != 0:
       raise RuntimeError(f"Command failed ({rc}): {cmd}")


print("Installing packages (this may take 2–3 minutes)...", flush=True)


run("pip install -U pip")
run("pip uninstall -y torch torchvision torchaudio")
run(
   "pip install --no-cache-dir "
   "torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 "
   # The cu121 index matches the CUDA 12.1 runtime on current Colab GPUs;
   # adjust if your runtime reports a different CUDA version.
   "--index-url https://download.pytorch.org/whl/cu121"
)
run(
   "pip install -U "
   "transformers==4.45.2 "
   "accelerate==0.34.2 "
   "datasets==2.21.0 "
   "trl==0.11.4 "
   "sentencepiece safetensors evaluate"
)
run("pip install -U unsloth")


import torch
try:
   import unsloth
   restarted = False
except Exception:
   restarted = True


if restarted:
   print("\nRuntime needs restart. After restart, run this SAME cell again.", flush=True)
   os._exit(0)

We set up a controlled, compatible environment by reinstalling PyTorch and all required libraries with pinned versions. We ensure that Unsloth and its dependencies match the CUDA runtime available on Google Colab. We also handle the restart logic so that the environment is clean and stable before training starts.
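The restart check in the cell above follows a simple import-probe pattern: try to import the freshly installed package, and restart the runtime if the import fails. As a minimal, generic sketch (the helper name needs_restart is ours, not part of Unsloth):

```python
import importlib

def needs_restart(module_name: str) -> bool:
    # If the module cannot be imported in the current process, the
    # freshly installed packages are not visible yet and the runtime
    # must be restarted before training can begin.
    try:
        importlib.import_module(module_name)
        return False
    except ImportError:
        return True
```

On Colab, calling os._exit(0) after a failed probe forces the restart; rerunning the same cell then succeeds because the new packages are already on disk.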

import torch, gc


assert torch.cuda.is_available()
print("Torch:", torch.__version__)
print("GPU:", torch.cuda.get_device_name(0))
print("VRAM(GB):", round(torch.cuda.get_device_properties(0).total_memory / 1e9, 2))


torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True


def clean():
   gc.collect()
   torch.cuda.empty_cache()


import unsloth
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TextStreamer
from trl import SFTTrainer, SFTConfig

We verify GPU availability and enable TF32 matrix math so that PyTorch computes efficiently. We import Unsloth before the other training libraries so that its performance patches are applied correctly. We also define a utility function for freeing GPU memory during training.

max_seq_length = 768
model_name = "unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit"


model, tokenizer = FastLanguageModel.from_pretrained(
   model_name=model_name,
   max_seq_length=max_seq_length,
   dtype=None,
   load_in_4bit=True,
)


model = FastLanguageModel.get_peft_model(
   model,
   r=8,
   target_modules=["q_proj", "k_proj"],
   lora_alpha=16,
   lora_dropout=0.0,
   bias="none",
   use_gradient_checkpointing="unsloth",
   random_state=42,
   max_seq_length=max_seq_length,
)

We load the 4-bit, instruction-tuned model using Unsloth's fast-loading utilities. We then attach LoRA adapters so that only a small set of parameters is fine-tuned. We keep the LoRA configuration small to balance memory efficiency against learning capacity.
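To see why LoRA is memory-efficient, we can count the extra parameters it adds. A rank-r adapter on a d_in x d_out linear layer stores two small matrices, A (d_in x r) and B (r x d_out). A quick sketch (the 1536 x 1536 projection size is illustrative, not read from the model config):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    # LoRA replaces a full d_in x d_out weight update with A @ B,
    # where A is d_in x r and B is r x d_out.
    return d_in * r + r * d_out

full = 1536 * 1536                            # one full projection: 2,359,296 params
adapter = lora_param_count(1536, 1536, r=8)   # 24,576 extra trainable params
print(adapter / full)                         # ~1% of the full matrix
```

With r=8, each adapted projection trains roughly 1% of the parameters of the full weight matrix, which is what lets QLoRA fit on a single Colab GPU.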

ds = load_dataset("trl-lib/Capybara", split="train").shuffle(seed=42).select(range(1200))


def to_text(example):
   example["text"] = tokenizer.apply_chat_template(
       example["messages"],
       tokenize=False,
       add_generation_prompt=False,
   )
   return example


ds = ds.map(to_text, remove_columns=[c for c in ds.column_names if c != "messages"])
ds = ds.remove_columns(["messages"])
split = ds.train_test_split(test_size=0.02, seed=42)
train_ds, eval_ds = split["train"], split["test"]


cfg = SFTConfig(
   output_dir="unsloth_sft_out",
   dataset_text_field="text",
   max_seq_length=max_seq_length,
   packing=False,
   per_device_train_batch_size=1,
   gradient_accumulation_steps=8,
   max_steps=150,
   learning_rate=2e-4,
   warmup_ratio=0.03,
   lr_scheduler_type="cosine",
   logging_steps=10,
   eval_strategy="no",
   save_steps=0,
   fp16=True,
   optim="adamw_8bit",
   report_to="none",
   seed=42,
)


trainer = SFTTrainer(
   model=model,
   tokenizer=tokenizer,
   train_dataset=train_ds,
   eval_dataset=eval_ds,
   args=cfg,
)

We prepare the training dataset by converting the multi-turn conversations into a single text field suitable for supervised fine-tuning. We split off a small evaluation set to keep the training signal honest. We also define the training configuration, which controls the effective batch size, learning rate, and training duration.
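The configuration above trades a tiny per-device batch for gradient accumulation. A quick check of how much data the run actually sees, with the values copied from the SFTConfig above:

```python
per_device_batch = 1
grad_accum = 8
max_steps = 150

effective_batch = per_device_batch * grad_accum   # 8 examples per optimizer step
examples_seen = effective_batch * max_steps       # 1200 examples in total
train_size = int(1200 * (1 - 0.02))               # 1176 examples after the 2% eval split
print(examples_seen / train_size)                 # ~1.02, i.e. roughly one epoch
```

So 150 steps at an effective batch of 8 covers the 1176-example training split almost exactly once, which is why the run stays short and stable.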

clean()
trainer.train()


FastLanguageModel.for_inference(model)


def chat(prompt, max_new_tokens=160):
   messages = [{"role":"user","content":prompt}]
   text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
   inputs = tokenizer([text], return_tensors="pt").to("cuda")
   streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
   with torch.inference_mode():
       model.generate(
           **inputs,
           max_new_tokens=max_new_tokens,
           temperature=0.7,
           top_p=0.9,
           do_sample=True,
           streamer=streamer,
       )


chat("Give a concise checklist for validating a machine learning model before deployment.")


save_dir = "unsloth_lora_adapters"
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)

We run the training loop and monitor the fine-tuning process on the GPU. We then switch the model to inference mode and verify its behavior with a sample prompt. Finally, we save the trained LoRA adapters so that we can reuse or publish the fine-tuned model later.

In conclusion, we fine-tuned an instruction-following language model using the Unsloth training stack and a lightweight QLoRA setup. We showed that by limiting the sequence length, dataset size, and training steps, we can achieve stable training on Colab GPUs without runtime degradation. The resulting LoRA adapters are a functional, reusable artifact that we can extend further, making this workflow a solid foundation for future experiments and more advanced alignment techniques.

