In this section, we will learn about the architecture for deploying TensorFlow with AWS Lambda. One of the critical deployment questions is where to keep the retrained model that will be used within AWS Lambda.
There are three possible options:
- Keep the model within the deployment package alongside the code and libraries
- Keep the model in an S3 bucket and download it into AWS Lambda during execution (see the sketch after this list)
- Keep the model on an FTP or HTTP server and download it into AWS Lambda during execution
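
To make the second option concrete, here is a minimal sketch of a Lambda handler that fetches the model from S3 on a cold start and caches it for subsequent warm invocations. The bucket name, object key, and model file format are assumptions for illustration only, not values from this section:

```python
import boto3
import tensorflow as tf

s3 = boto3.client('s3')

MODEL_BUCKET = 'my-model-bucket'      # hypothetical bucket name
MODEL_KEY = 'models/retrained.keras'  # hypothetical object key
LOCAL_PATH = '/tmp/retrained.keras'   # /tmp is Lambda's writable scratch space

model = None  # cached across warm invocations of the same container


def handler(event, context):
    global model
    if model is None:
        # Cold start: download the model from S3 and load it once
        s3.download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_PATH)
        model = tf.keras.models.load_model(LOCAL_PATH)
    # ... run inference with the loaded model here ...
    return {'statusCode': 200}
```

Caching the loaded model in a module-level variable means the S3 download and deserialization cost is paid only on cold starts, which matters because Lambda reuses containers between invocations.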