September 2024
Intermediate to advanced
240 pages
6h 24m
English
This chapter covers
Imagine you are on your way to an AI conference halfway around the world. You are on a plane, cruising at 35,000 feet, and you want to prototype a new feature for your application. The airplane’s Wi-Fi is prohibitively slow and expensive. What if, instead of paying all that money for a broken and borderline unusable GPT, you had one running right there on your laptop, offline? This chapter reviews developers’ options for running a large language model (LLM) locally.
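To make the scenario concrete, here is a minimal sketch of talking to a locally running model over HTTP, assuming a local runtime such as Ollama listening on its default port (11434) with a model already pulled; the model name `llama3` and the prompt are illustrative placeholders, not requirements of this chapter.

```python
import json
import urllib.request

def build_request(prompt, model="llama3", host="http://localhost:11434"):
    """Build an HTTP request for a local LLM runtime's generate endpoint.

    Assumes an Ollama-style API; adjust the path and payload for other
    local runtimes (llama.cpp server, LM Studio, etc.).
    """
    payload = json.dumps({
        "model": model,       # example model; use whatever you have locally
        "prompt": prompt,
        "stream": False,      # one JSON response instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_request("Summarize retrieval-augmented generation in one sentence.")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            print(json.loads(resp.read())["response"])
    except OSError:
        # No local server running — everything here works offline except this call.
        print("No local LLM server reachable; start one first (e.g. `ollama serve`).")
```

Because the model runs on your own machine, this call works with no internet connection at all, which is exactly the situation the airplane scenario describes.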
The introductory scenario ...