Chapter 5. Fine-tuning LLMs
In the previous chapter, we discussed the factors to consider when choosing the right LLM for your needs, including how to evaluate LLMs so that you can make an informed choice. Next, let us put these LLMs to work on our tasks.
In this chapter, we will explore how to adapt an LLM to your task of interest using fine-tuning. We will walk through a complete fine-tuning example, covering the important decisions you need to make along the way. We will also discuss the art and science of creating fine-tuning datasets. Open your Google Colab or Jupyter notebook environment and let us get started!
The need for fine-tuning
Why do we need to fine-tune LLMs? Why doesn’t a pre-trained LLM with few-shot prompts suffice for our needs? Let us look at a couple of examples to drive the point home.
Use Case 1: Consider you are working on the rather whimsical ...