Chapter 12. Agents and LLM Workflows
LLM workflows and agents are easy to spot: they are AI-powered services that expose a natural language API. The first LLM-powered chatbots were thin wrappers around LLMs, so they couldn’t answer questions about anything that happened after their training cutoff date. They rapidly evolved into complex multistep engines that can answer questions even about today’s events, querying vector indexes, search engines, feature stores, and other data sources to add context to prompts.
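The context-enrichment step above can be sketched in a few lines. This is a minimal, illustrative example: `retrieve` is a hypothetical stand-in for a vector index or search engine lookup (here just naive keyword overlap), and `build_prompt` shows how retrieved passages are prepended to the user’s question before the prompt is sent to an LLM.

```python
def retrieve(question: str, store: dict[str, str], k: int = 2) -> list[str]:
    """Naive keyword retrieval: rank stored passages by word overlap."""
    q_words = set(question.lower().split())
    scored = sorted(
        store.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(question: str, store: dict[str, str]) -> str:
    """Assemble an augmented prompt from the top-k retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(question, store))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"


docs = {
    "d1": "The 2024 summit was held in Paris.",
    "d2": "LLMs have a fixed training cutoff date.",
}
prompt = build_prompt("Where was the 2024 summit held?", docs)
```

A production system would replace `retrieve` with an embedding-based similarity search, but the shape of the workflow, retrieve then assemble then generate, stays the same.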
With the help of tools and new protocols, LLM workflows have transmogrified into agents with a degree of autonomy in planning and executing the tasks needed to achieve their goals. Agents are more than LLM wrappers: they can use external tools, they have memory, and they can plan strategies toward a goal. Most agents are interactive services, but there are also background agents that execute tasks autonomously, automating routine work such as workflow execution, process optimization, and proactive maintenance.
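The tool use, memory, and planning loop described above can be sketched as follows. Everything here is a toy stand-in: `decide` plays the role of the LLM planner (a real agent would call a model), the `tools` dict maps tool names to callables, and the `observations` list acts as short-term memory.

```python
def decide(goal: str, observations: list[str]) -> dict:
    """Toy planner: call the calculator once, then produce a final answer.
    A real agent would prompt an LLM with the goal and observations."""
    if not observations:
        return {"action": "tool", "name": "calculator", "input": "6*7"}
    return {"action": "answer", "text": f"The result is {observations[-1]}"}


def run_agent(goal: str, tools: dict, max_steps: int = 5) -> str:
    observations: list[str] = []  # the agent's short-term memory
    for _ in range(max_steps):
        step = decide(goal, observations)
        if step["action"] == "answer":
            return step["text"]
        result = tools[step["name"]](step["input"])  # execute the chosen tool
        observations.append(result)  # remember the result for the next step
    return "gave up"


# Toy tool; never use eval() on untrusted input in production.
tools = {"calculator": lambda expr: str(eval(expr))}
answer = run_agent("What is 6*7?", tools)  # → "The result is 42"
```

The loop of plan, act, observe, and plan again is the core pattern; real agents differ mainly in how `decide` is implemented and in how much of the observation history is kept in memory.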
In this chapter, we will go down the rabbit hole of building LLM workflows and agents. We will learn the art of context engineering: providing as much context and prior knowledge as possible with every interaction with an LLM. To do this, you may need to query diverse data sources (vector indexes, search engines, feature stores, etc.), call external APIs, and even delegate to other agents. We will also introduce two protocols—Model Context Protocol (MCP) and Agent-to-Agent ...