Chapter 9. Deployment: Launching Your AI Application into Production
So far, we’ve explored the key concepts, ideas, and tools for building the core functionality of your AI application. You’ve learned how to use LangChain and LangGraph to generate LLM outputs, index and retrieve data, and give your application memory and agency.
But the application still runs only in your local environment, so external users can’t access its features yet.
In this chapter, you’ll learn the best practices for deploying your AI application into production. We’ll also explore various tools to debug, collaborate, test, and monitor your LLM applications.
Let’s get started.
Prerequisites
To deploy your AI application effectively, you need several supporting services: one to host the application, one to store and retrieve its data, and one to monitor it in production. The deployment example in this chapter incorporates the following services:
- Vector store: Supabase
- Monitoring and debugging: LangSmith
- Backend API: LangGraph Platform
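To give a sense of how the first of these pieces fits in, here is a minimal sketch of wiring Supabase up as a LangChain vector store. The `documents` table and `match_documents` query names are the defaults from LangChain’s Supabase integration, and the OpenAI embedding model is an assumption; your template’s schema and provider may differ.

```python
import os

from langchain_community.vectorstores import SupabaseVectorStore
from langchain_openai import OpenAIEmbeddings
from supabase import create_client

# Connect to your Supabase project (URL and service key come from the dashboard).
supabase = create_client(
    os.environ["SUPABASE_URL"],
    os.environ["SUPABASE_SERVICE_KEY"],
)

# Wrap a Postgres table (with the pgvector extension enabled) as a LangChain
# vector store. "documents" and "match_documents" are the integration's
# default table and RPC function names; adjust them to match your schema.
vector_store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)

# Expose the store as a retriever for use inside the graph.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})
```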
We will dive deeper into each of these components and services and see how to adapt them to your use case. But first, let’s install the necessary dependencies and set up the environment variables.
If you’d like to follow the example, fork this LangChain template to your GitHub account. The repository contains the full logic of a retrieval-agent-based AI application.
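The template is built to be served through LangGraph Platform, which reads its deployment configuration from a `langgraph.json` file at the repository root. The template ships its own version of this file; the sketch below uses placeholder paths to show the general shape, where each entry under `graphs` maps a deployment name to a `path/to/file.py:variable` reference for a compiled graph.

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/retrieval_graph/graph.py:graph"
  },
  "env": ".env"
}
```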
Install Dependencies
First, follow the instructions in the README.md file to install the project dependencies.
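Beyond the Python packages, the example needs credentials for each service. The exact variable names to use are listed in the template’s README, but a typical setup for the three services above looks roughly like this; the Supabase and OpenAI variable names here are assumptions based on common conventions, while the two LangSmith variables are the standard ones LangChain checks to enable tracing.

```python
import getpass
import os

# LangSmith tracing: with these two variables set, LangChain automatically
# sends run traces to your LangSmith project.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("LangSmith API key: ")

# Supabase credentials (from your project's dashboard) for the vector store.
os.environ["SUPABASE_URL"] = getpass.getpass("Supabase URL: ")
os.environ["SUPABASE_SERVICE_KEY"] = getpass.getpass("Supabase service key: ")

# Key for whichever model provider the template uses; OpenAI is assumed here.
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key: ")
```

In production you would set these values through your hosting provider’s secret manager rather than in code; prompting with `getpass` is only convenient for local experimentation.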
If you’re ...