This project demonstrates how to fine-tune a large pre-trained model (DistilBERT) on the IMDB movie reviews dataset using LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning (PEFT) technique. With LoRA, we train only about 0.5% of the model's parameters, making the process fast, cost-effective, and beginner-friendly.
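To see where the parameter savings come from, here is a back-of-the-envelope sketch. Instead of updating a full weight matrix `W` (shape `d_out x d_in`), LoRA learns a low-rank pair `A` (`r x d_in`) and `B` (`d_out x r`) and adds `B @ A` to the frozen `W`. The dimensions below assume DistilBERT's 768-dimensional hidden size; the rank `r = 8` is a typical choice used for illustration, not necessarily the exact value this project uses:

```python
# Parameter count for adapting one attention projection matrix with LoRA.
d_in, d_out = 768, 768   # DistilBERT hidden size (assumption: square projection)
r = 8                    # LoRA rank (hypothetical, typical small value)

full = d_in * d_out          # params if we fine-tuned the full matrix W
lora = r * (d_in + d_out)    # params in the low-rank pair: A (r x d_in) + B (d_out x r)

print(f"full matrix: {full:,} params")        # 589,824
print(f"LoRA adapter: {lora:,} params")       # 12,288
print(f"per-matrix ratio: {lora / full:.2%}") # ~2.08%
```

The per-matrix ratio above is about 2%; the whole-model fraction ends up far smaller (on the order of the 0.5% cited above) because adapters are attached to only a few projection matrices per layer while everything else, embeddings included, stays frozen.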