Chapter 7. Agents II
Chapter 6 introduced the agent architecture, the most powerful of the LLM architectures we have seen so far. It is hard to overstate the potential of this combination of chain-of-thought prompting, tool use, and looping.
This chapter discusses two extensions to the agent architecture that improve performance for some use cases:
- Reflection: Taking another page out of the repertoire of human thought patterns, this is about giving your LLM app the opportunity to analyze its past output and choices, together with the ability to remember reflections from past iterations.
- Multi-agent: Much the same way as a team can accomplish more than a single person, there are problems that are best tackled by teams of LLM agents.
Let’s start with reflection.
Reflection
One prompting technique we haven’t covered yet is reflection (also known as self-critique). Reflection creates a loop between a creator prompt and a reviser prompt. This mirrors the creation process for many human-made artifacts, such as the chapter you’re reading now, which is the result of a back-and-forth between the authors, reviewers, and editor until all are happy with the final product.
As with many of the prompting techniques we have seen so far, reflection can be combined with other techniques, such as chain-of-thought and tool calling. In this section, we’ll look at reflection in isolation.
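To make the creator–reviser loop concrete, here is a minimal sketch. The `call_llm` function is a hypothetical stand-in for whatever chat-completion API you use; it is stubbed out below so the example runs offline, and a real implementation would replace it with an actual model call.

```python
def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call (assumption: any
    # chat-completion endpoint would slot in here).
    if prompt.startswith("Critique"):
        return "LGTM" if "revised" in prompt else "Too vague; add a concrete example."
    return "revised draft" if "Too vague" in prompt else "first draft"

def reflect(task: str, max_rounds: int = 3) -> str:
    """Alternate between a creator prompt and a reviser prompt."""
    draft = call_llm(f"Write: {task}")  # creator produces an initial draft
    for _ in range(max_rounds):
        # Reviser critiques the current draft.
        critique = call_llm(f"Critique this draft: {draft}")
        if critique.strip() == "LGTM":  # reviser is satisfied; stop looping
            break
        # Creator revises the draft using the reviser's feedback.
        draft = call_llm(f"Rewrite the draft. Feedback: {critique}\nDraft: {draft}")
    return draft

print(reflect("a paragraph on reflection"))
```

The cap on `max_rounds` matters in practice: without it, a reviser that never signals satisfaction would loop (and spend tokens) indefinitely.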
A parallel can be drawn to the modes of human thinking that Daniel Kahneman calls System 1 (reactive, fast, and intuitive) and System 2 (deliberate and analytical): a single LLM call resembles System 1, while reflection adds a System 2-style review step.