In Chapter 2, you learned about RAG, memory, retrieval, and embeddings. You combined these concepts to build a command-line chatbot that answered your questions and remembered the rest of the conversation, letting the LLM become "smarter" by drawing context from its history. Your chatbot also had access to up-to-date, personal information via a vector database, so it could answer questions beyond what it was trained on, which also helped prevent hallucinations. ...
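As a refresher, the pattern from Chapter 2 can be sketched in a few lines. This is a minimal, self-contained illustration, not the book's actual code: it fakes embeddings with bag-of-words counts and cosine similarity instead of a real embedding model, and it assembles the prompt rather than calling an LLM. The class and method names (`RagChatbot`, `retrieve`, `ask`) are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": word counts. A real app would call an
    # embedding model and store dense vectors in a vector database.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RagChatbot:
    def __init__(self, documents):
        # Stand-in for a vector database: documents with precomputed embeddings.
        self.store = [(doc, embed(doc)) for doc in documents]
        self.history = []  # conversation memory

    def retrieve(self, query, k=1):
        # Rank stored documents by similarity to the query.
        q = embed(query)
        ranked = sorted(self.store, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    def ask(self, question):
        # A real chatbot would send history + retrieved context + question
        # to an LLM; here we only assemble the prompt to show the flow.
        context = self.retrieve(question)
        prompt = (
            "History: " + " | ".join(self.history) + "\n"
            "Context: " + " | ".join(context) + "\n"
            "Question: " + question
        )
        self.history.append(question)
        return prompt

bot = RagChatbot(["My cat is named Mochi.", "I live in Lisbon."])
print(bot.ask("What is my cat's name?"))
```

The key idea carried into this chapter: retrieval grounds the model in facts it was never trained on, and memory grounds it in the conversation so far. Chains, tools, and agents build on both.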
3. Chains, Tools and Agents