Prompt Engineering for LLMs

Book description

Large language models (LLMs) promise unprecedented benefits. Well versed in common topics of human discourse, LLMs can make useful contributions to a large variety of tasks, especially now that the barrier for interacting with them has been greatly reduced. Potentially, any developer can harness the power of LLMs to tackle large classes of problems previously beyond the reach of automation.

This book provides a solid foundation of LLM principles and explains how to apply them in practice. When first integrating LLMs into their workflows, most developers struggle to coax useful results from them, because communicating with an AI model is different from communicating with humans. This guide shows you how to present your problem in a model-friendly way, a practice known as prompt engineering.

With this book, you'll:

  • Examine the user-program-AI model-user interaction loop
  • Understand the influence of LLM architecture and learn how to best interact with it
  • Design a complete prompt-crafting strategy that fits your application's context
  • Gather and triage context elements to make an efficient prompt
  • Formulate those elements so that the model processes them in the way that's desired
  • Master specific prompt-crafting techniques, including few-shot learning and chain-of-thought prompting
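As a taste of the last two techniques, here is a minimal sketch of assembling a few-shot, chain-of-thought prompt as a plain string. The task, examples, and formatting below are hypothetical illustrations, not the book's own code:

```python
# Minimal sketch: a few-shot, chain-of-thought prompt.
# The examples and Q/Reasoning/A format here are hypothetical.

FEW_SHOT_EXAMPLES = [
    {
        "question": "A shelf holds 3 boxes of 12 apples. How many apples?",
        "reasoning": "Each box holds 12 apples and there are 3 boxes, so 3 * 12 = 36.",
        "answer": "36",
    },
    {
        "question": "Tickets cost $5 each. What do 4 tickets cost?",
        "reasoning": "4 tickets at $5 each cost 4 * 5 = 20 dollars.",
        "answer": "$20",
    },
]

def build_prompt(question: str) -> str:
    """Format each example as a question/reasoning/answer triple, then
    append the user's question so the model continues the pattern."""
    parts = ["Answer each question, showing your reasoning first.\n"]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {ex['question']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"A: {ex['answer']}\n")
    parts.append(f"Q: {question}")
    parts.append("Reasoning:")  # end mid-pattern to invite chain-of-thought
    return "\n".join(parts)

prompt = build_prompt("A train travels 60 mph for 2 hours. How far does it go?")
print(prompt)
```

The few-shot examples establish a repeating pattern, and ending the prompt at `Reasoning:` nudges the model to reason step by step before committing to an answer.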

Table of contents

  1. Brief Table of Contents (Not Yet Final)
  2. 1. Introduction to Prompt Engineering
    1. Large Language Models Are Magic 
    2. A Brief History of Language Models
      1. In the Beginning Was Symbolic NLP
      2. Then There Was Statistical NLP
      3. The Advent of Neural NLP
      4. State-of-the-Art NLP
      5. To Infinity and Beyond
    3. Prompt Engineering
    4. Becoming a Prompt Engineer
      1. How to Read This Book
    5. Conclusion
  3. 2. Understanding LLMs
    1. 2.1 What Are LLMs?
      1. Completing a Document
      2. Human Thought Versus LLM Processing
      3. Hallucinations
    2. 2.2 How LLMs See the World
      1. Counting Tokens
    3. 2.3 One Token at a Time
      1. Patterns and Repetitions
    4. 2.4 Temperature and Probabilities
    5. 2.5 The Transformer Architecture
    6. 2.6 Conclusion
  4. 3. From Document Completion to Personal Assistant
    1. Reinforcement Learning from Human Feedback
      1. The Process of Building a RLHF Model
      2. But Why Go to All That Trouble?
    2. Moving from InstructGPT to ChatGPT
      1. InstructGPT
      2. ChatGPT
    3. The Changing API
      1. Chat Completion API
      2. Moving Away from Chat
      3. Moving Beyond Chat to Functions
    4. Prompt Engineering as Play Writing
    5. Conclusion
  5. 4. Designing LLM Applications
    1. The Anatomy of the Loop
      1. The User’s Problem
      2. Converting the User’s Problem to the Model Domain
      3. Using the LLM to Complete the Prompt
      4. Transforming Back to User Space
    2. Zooming in to the Feedforward Pass
      1. Building the Basic Feedforward Pass
      2. Exploring the Complexity of the Loop
    3. Evaluating LLM Application Quality
      1. Online Evaluation
      2. Offline Evaluation
    4. Conclusion
  6. 5. What Goes into the Prompt
    1. 5.1 Sources of Content
    2. 5.2 Clarifying Your Question
    3. 5.3 Few-Shot Prompting
      1. #1: Few-Shotting Scales Badly with Context
      2. #2: Few-Shotting Biases the Model Toward the Examples
      3. #3: Few-Shotting Can Suggest Spurious Patterns
    4. 5.4 How Many Shots?
    5. 5.5 Conclusion
  7. 6. A Sea of Context
    1. 6.1 Finding Context
    2. 6.2 Retrieval
      1. Syntactic Retrieval
      2. Neural Retrieval
    3. 6.3 Summarization
      1. Summary Length
      2. General and Specific Summaries
    4. 6.4 Conclusion
  8. 7. Assembling the Pseudo-Document
    1. 7.1 Anatomy of the Ideal Prompt
    2. 7.2 What Kind of Document?
    3. 7.3 Snippetization
    4. 7.4 Elastic Snippets
    5. 7.5 Relationships between Prompt Elements
    6. 7.6 Putting It All Together
    7. 7.7 Conclusion
  9. 8. Taming the Model
    1. Introduction
    2. Anatomy of the Ideal Completion
      1. Preamble Type: Structural Boilerplate
      2. Preamble Type: Reasoning
      3. Preamble Type: Fluff
      4. Recognizable Start and End
    3. Beyond the Text: Logprobs
      1. How Good Is the Completion?
      2. LLMs for Classification
      3. Critical Points in the Prompt
    4. Choosing the Model
      1. Making Your Own
    5. Conclusion
  10. About the Authors

Product information

  • Title: Prompt Engineering for LLMs
  • Author(s): John Berryman, Albert Ziegler
  • Release date: December 2024
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781098156152