LLM Engineer's Handbook

by Paul Iusztin, Maxime Labonne
October 2024
Intermediate to advanced
522 pages
12h 55m
English
Packt Publishing
Content preview from LLM Engineer's Handbook

Chapter 8: Inference Optimization

Deploying LLMs is challenging due to their significant computational and memory requirements. Running these models efficiently requires specialized accelerators, such as GPUs or TPUs, which parallelize operations and achieve higher throughput. While some tasks, like document generation, can be processed in batches overnight, others, such as code completion, demand low latency and fast generation. As a result, optimizing the inference process – how these models make predictions based on input data – is critical for many practical applications. This includes reducing the time it takes to generate the first token (latency), increasing the number of tokens generated per second (throughput), and minimizing ...
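
To make these two metrics concrete, here is a minimal Python sketch, not taken from the book, that measures time to first token and tokens per second for any token stream; the stream_tokens iterable and the fake_stream simulator are hypothetical stand-ins for a real model's streaming API.

import time

def measure_inference(stream_tokens):
    # stream_tokens: any iterable yielding generated tokens one at a time
    # (hypothetical stand-in for a real streaming LLM API).
    start = time.perf_counter()
    time_to_first_token = None
    n_tokens = 0
    for _ in stream_tokens:
        if time_to_first_token is None:
            # Latency: delay before the first token appears.
            time_to_first_token = time.perf_counter() - start
        n_tokens += 1
    total = time.perf_counter() - start
    # Throughput: tokens generated per second over the whole request.
    throughput = n_tokens / total if total > 0 else 0.0
    return time_to_first_token, throughput

# Example with a simulated stream that emits 20 tokens, 10 ms apart:
def fake_stream(n=20, delay=0.01):
    for i in range(n):
        time.sleep(delay)
        yield f"token_{i}"

latency, tps = measure_inference(fake_stream())
print(f"time to first token: {latency:.3f} s, throughput: {tps:.1f} tokens/s")

The same timing loop applies to any server that streams tokens. Note that the two metrics often trade off against each other: batching requests raises aggregate throughput but tends to increase the time to first token for each individual request.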

Publisher Resources

ISBN: 9781836200079