Chapter 11: Handling LLM Hallucinations

This part of the book deals with the biggest issue associated with LLMs: hallucinations. Once a niche piece of jargon, the term gained such prominence with the rise of GenAI that ‘hallucinate’ was named the Cambridge Dictionary’s Word of the Year for 2023. This chapter covers:

What are hallucinations?

Why do LLMs hallucinate?

How to deal with hallucinations using LangChain?

11.1 What are Hallucinations?

Hallucinations, in the context of artificial intelligence, refer to situations where a model generates content that is inaccurate or seemingly made up. They can involve the model "imagining" details that are not present in the input data, or generating responses that are not grounded in reality.


For example: Suppose you ...
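This behaviour can also be observed directly in code. The following is a minimal sketch, not taken from the book, that assumes the langchain-openai and langchain-core packages are installed and an OPENAI_API_KEY is available in the environment; the model name and the invented "Flumbore's theorem" question are illustrative placeholders.

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Chat model; the model name is a placeholder and any chat model can be swapped in.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])

chain = prompt | llm | StrOutputParser()

# "Flumbore's theorem" is invented for this demo; it does not exist.
# A model prone to hallucination may still return a confident,
# fabricated explanation instead of admitting it does not know.
answer = chain.invoke({"question": "Explain Flumbore's theorem of 1872 in two sentences."})
print(answer)

Running a probe like this makes the problem concrete: the response may read fluently and authoritatively even though nothing in it is grounded in real information.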
