Understanding Hallucinations and Bias
Off-the-shelf foundation models have limitations that restrict their direct use in production. At their core, large language models (LLMs) learn from vast amounts of data collected from the Internet (e.g., Wikipedia), as well as papers, books, and articles. While this data is rich and informative, it is also riddled with inaccuracies and societal biases. Since LLMs are trained to predict text without any built-in mechanism for fact-checking, they can produce hallucinations and reproduce biases.
Hallucinations in LLMs occur when a model generates text that is incorrect and not grounded in reality. This phenomenon involves the model confidently producing responses with no basis in its training data (e.g., creating nonsensical or non-factual content).
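As a minimal sketch of how this shows up in practice, the snippet below asks a model about a fabricated paper title and prints the reply; an ungrounded model will often answer confidently rather than admit it does not know. It assumes the OpenAI Python SDK (v1+) with an API key set in the environment, and the model name and the fictitious paper title are placeholders chosen for illustration.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

# Deliberately ask about a paper that does not exist to probe for hallucination.
prompt = "Summarize the main findings of the 2021 paper 'Quantum Gravity Effects in Transformer Attention'."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

# Without grounding or fact-checking, the reply may be a confident but fabricated summary.
print(response.choices[0].message.content)
```

A reliable pipeline would not take such an answer at face value; later chapters cover grounding techniques such as retrieval-augmented generation that constrain the model to verifiable sources.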