This will be the 9th chapter of the final book.
Imagine this: you just built your world-class Dog-Cat classifier. You have a solid business plan, and you cannot wait to pitch your magic classifier to that venture capital firm next week. You know they will question you about your cloud strategy, and you need to show a solid demo before they will even consider giving you the money. How would you do this? This chapter covers some of the ways you can get your model running in the cloud, each with its pros and cons.
Creating a model is half the battle; serving it is the next challenge.
There are many approaches to building a web-based prediction service. You could run your model behind a generic web framework like Flask. You could host a dedicated deep learning inference engine such as TensorFlow Serving or Clipper. Or you could use a cloud-based managed serving service like Google Cloud ML Engine or Algorithmia. Some of these are easy to use but limited in functionality. Others offer granular control and high performance, but take more effort to set up. And some are super easy but not exactly cheap, as we saw in the last chapter. We’ll take a stab at building your customer-facing prediction service, illustrating ...
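To make the Flask option concrete, here is a minimal sketch of what a prediction endpoint could look like. The `predict_dog_vs_cat` function is a placeholder of our invention, not code from this book; a real service would decode the uploaded image, preprocess it, and run it through the trained classifier.

```python
import io

from flask import Flask, jsonify, request

app = Flask(__name__)


def predict_dog_vs_cat(image_bytes):
    # Placeholder for illustration only: a real implementation would
    # load the trained model once at startup, decode and preprocess
    # the image, and return the model's actual prediction.
    return {"label": "dog", "confidence": 0.97}


@app.route("/predict", methods=["POST"])
def predict():
    # Expect the client to upload the image as multipart form data
    # under the field name "image".
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    result = predict_dog_vs_cat(request.files["image"].read())
    return jsonify(result)


if __name__ == "__main__":
    # For demos only; use a production WSGI server (e.g. gunicorn)
    # behind a real deployment.
    app.run(host="0.0.0.0", port=5000)
```

A client could then hit the endpoint with something like `curl -F "image=@dog.jpg" http://localhost:5000/predict` and get a JSON response back. This simplicity is exactly the appeal of the Flask route; the trade-offs (no request batching, no model versioning) are what the dedicated serving systems above address.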