Chapter 8. Using Community-Shared LoRAs

To meet specific needs and generate higher-fidelity images, we may need to fine-tune a pretrained Stable Diffusion model, but fine-tuning is extremely slow without powerful GPUs. Even if you have the hardware and resources on hand, the resulting fine-tuned model is large, usually the same size as the original model file.

Fortunately, researchers from the neighboring Large Language Model (LLM) community developed an efficient fine-tuning method called Low-Rank Adaptation (LoRA; the lowercase “o” comes from the word “Low”) [1]. With LoRA, the original checkpoint is frozen and left unmodified, and the tuned weight changes are stored in an independent file, which we usually call the LoRA file. Additionally, there are countless community-shared LoRA files available that you can download and apply to your own pipelines.
