Chapter 7. Deploying Models for Inference at Scale
You have trained your VLM, and now you want people to actually use it. This is where inference deployment comes in, and it is where many of us discover that “the model works in my setup” is far from “the model serves 1000 users reliably”.
Deployment bundles two challenges together. First, there is the optimization challenge: how do you make inference fast and cheap enough to be practical? Second, there is the packaging challenge: getting your model to run wherever it needs to, whether that’s a cloud cluster, a browser, an edge device, or specialized hardware.
In this chapter, we tackle both problems: inference optimization, serving frameworks like vLLM, and making models portable across runtimes so you can deploy them. A small note: some of the libraries mentioned here are Swiss Army knives; they export a model to their own format, quantize it, and handle serving.
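Before we dive in, here is a minimal sketch of what the serving side looks like in practice: querying a VLM behind vLLM’s OpenAI-compatible server (started with, for example, vllm serve llava-hf/llava-1.5-7b-hf). The checkpoint name, image URL, and default port are illustrative assumptions rather than a recipe; we walk through the real options later in the chapter.

from openai import OpenAI

# vLLM's OpenAI-compatible server listens on localhost:8000 by default;
# the API key is unused by vLLM but required by the client (assumed defaults).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",  # must match the served checkpoint
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            # Hypothetical image URL, for illustration only.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(response.choices[0].message.content)

Because the server speaks the OpenAI chat API, any existing OpenAI client code can be pointed at it by changing only the base URL, which is a large part of vLLM’s appeal for deployment.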
Inference optimization for VLMs
You have trained your VLM and confirmed its accuracy through your tests. ...