Chapter 12. Retrieval-Augmented Generation
In Chapter 10, we demonstrated how to vastly expand the capabilities of LLMs by interfacing them with external data and software. In Chapter 11, we introduced the concept of embedding-based retrieval, a foundational technique for retrieving relevant data from data stores in response to queries. Armed with this knowledge, let’s explore the application paradigm of augmenting LLMs with external data, called retrieval-augmented generation (RAG), in a holistic fashion.
In this chapter, we will take a comprehensive view of the RAG pipeline, diving deep into each of the steps that make up a typical workflow of a RAG application. We will explore the various decisions involved in operationalizing RAG, including what kind of data we can retrieve, how to retrieve it, and when to retrieve it. We will highlight how RAG can help not only during model inference but also during model training and fine-tuning. We will also compare RAG with other paradigms, discussing scenarios where RAG outperforms the alternatives and scenarios where the alternatives are preferable.
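Before we dive into each step in detail, here is a minimal sketch of the canonical RAG workflow: embed the documents, retrieve the ones most relevant to a query, and augment the prompt with them before generation. The embedding used here is a toy bag-of-words counter rather than the learned embeddings of Chapter 11, and `call_llm` stands in for a real model API; both are simplifications for illustration.

```python
# Minimal RAG sketch: retrieve relevant documents, then augment the prompt.
# The bag-of-words "embedding" and `call_llm` stub are illustrative
# placeholders, not a production design.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    # Augment the user's question with the retrieved context.
    context = "\n".join(f"- {d}" for d in context_docs)
    return (f"Answer the question using only the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM call.
    return "<model response>"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Refunds are issued to the original payment method.",
]
prompt = build_prompt("What is the refund policy?",
                      retrieve("refund policy", docs))
answer = call_llm(prompt)
```

The chapters ahead examine how real systems replace each of these pieces: learned embeddings and vector stores for `embed` and `retrieve`, and more sophisticated strategies for deciding what, how, and when to retrieve.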
The Need for RAG
As introduced in Chapter 10, RAG is an umbrella term used to describe a variety of techniques for using external data sources to augment the capabilities of an LLM. Here are some reasons we might want to use RAG:
- We need the LLM to access our private/proprietary data, or data that was not part of its pre-training dataset. Using RAG is a much more lightweight option than pre-training an LLM ...