6.1 Beyond intents: The role of search in conversational AI
6.1.1 Using search in conversational AI
6.1.2 Benefits of traditional search
6.1.3 Drawbacks of traditional search
6.2 Beyond search: Generating answers with RAG
6.2.1 Using RAG in conversational AI
6.2.2 Benefits of RAG
6.2.3 Combining RAG with other generative AI use cases
6.2.4 Comparing intents, search, and RAG approaches
6.3 How is RAG implemented?
6.3.1 High-level implementation
6.3.2 Preparing your document repository for RAG
6.4 Additional considerations of RAG implementations
6.4.1 Can’t we just use an LLM directly?
6.4.2 Keeping answers current and relevant with RAG
6.4.3 How easy is it to set up the ingestion pipeline?
6.4.4 Handling latency
6.4.5 When to use a fallback mechanism and when to search
6.5 Evaluating and analyzing RAG performance
6.5.1 Indexing metrics
6.5.2 Retrieval metrics
6.5.3 Generation metrics
6.5.4 Comparing efficiency of indexing and embedding solutions for RAG