Building a serverless API for an ML model

Getting public access to data in 10 lines of code is useful. But let's now do something more complex than that: serving an actual ML model.

Let's create one more app, 311predictions. As before, we need to call chalice new-project and type in our new project's name, as shown below.
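A quick terminal sketch of that step (Chalice scaffolds app.py and requirements.txt for us; the exact file listing may differ slightly by Chalice version):

```bash
$ chalice new-project 311predictions
$ cd 311predictions
$ ls
app.py  requirements.txt
```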

Now, the previous application didn't need any dependencies; to serve the ML model we built in the previous chapter, however, we need pandas and sklearn. The problem is that neither of them fits within the 50 MB limit on Lambda deployment packages. In fact, until recently there was no easy way to fit either of them there: a normal pip install requires all of the source code to be downloaded and compiled on the machine. Luckily, now a pre-compiled ...
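To make the goal concrete, here is a minimal sketch of what the prediction endpoint could look like once pandas and sklearn are packaged successfully. The model file name, the feature columns, and the route are placeholders chosen for illustration, not the book's actual code, and the sketch assumes the serialized model is a full sklearn pipeline that handles its own preprocessing:

```python
import os

import joblib
import pandas as pd
from chalice import Chalice

app = Chalice(app_name="311predictions")

# Hypothetical artifact: a pre-trained sklearn pipeline shipped alongside app.py.
MODEL_PATH = os.path.join(os.path.dirname(__file__), "model.joblib")
model = joblib.load(MODEL_PATH)

# Assumed input columns; the real feature set comes from the previous chapter's model.
FEATURES = ["complaint_type", "borough", "hour"]

@app.route("/predict", methods=["POST"])
def predict():
    # Chalice exposes the parsed JSON payload on the current request object.
    payload = app.current_request.json_body
    X = pd.DataFrame([payload], columns=FEATURES)
    prediction = model.predict(X)
    return {"prediction": prediction.tolist()}
```

During development, chalice local serves this app on localhost so the endpoint can be exercised before worrying about how to squeeze the dependencies into the deployed bundle.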
