Chapter 2. RAG Part I: Indexing Your Data
In the previous chapter, you learned about the important building blocks used to create an LLM application using LangChain. You also built a simple AI chatbot consisting of a prompt sent to the model and the output generated by the model. But there are major limitations to this simple chatbot.
What if your use case requires knowledge that the model wasn’t trained on? For example, let’s say you want to use AI to ask questions about a company, but the information is contained in a private PDF or other type of document. While we’ve seen model providers enriching their training datasets to include more and more of the world’s public information (no matter what format it is stored in), two major limitations continue to exist in LLMs’ knowledge corpus:
- Private data: Information that isn’t publicly available is, by definition, not included in the training data of LLMs.
- Current events: Training an LLM is a costly and time-consuming process that can span multiple years, with data gathering being one of the first steps. This results in what is called the knowledge cutoff: a date beyond which the LLM has no knowledge of real-world events, usually the date the training set was finalized. Depending on the model, this can be anywhere from a few months to a few years in the past.
In either case, the model will most likely hallucinate (generate plausible-sounding but false information) and respond with inaccurate information. Adapting ...