Designing Large Language Model Applications

by Suhas Pai
O'Reilly Media, Inc., March 2025

Chapter 9. Inference Optimization

In the past few chapters, we learned several techniques for adapting LLMs and utilizing them to solve specific tasks. In this chapter, we will learn how to perform inference on them efficiently for real-world usage. The large size of LLMs makes deployment and inference particularly challenging, as it places significant pressure on compute, memory, and energy budgets. This is especially true on edge devices like mobile phones.

For the rest of the chapter, we will focus on the field of inference optimization, first discussing the factors that influence LLM inference time. We will then showcase a variety of optimization techniques, including caching, knowledge distillation, early exiting, quantization, parallel and speculative decoding, and more.

LLM Inference Challenges

What are the bottlenecks affecting LLM inference? As we all know, the gargantuan size of these models necessitates vast compute and memory resources. Apart from that, two additional factors exacerbate the situation:

  • As seen in Chapter 4, contemporary LLMs are based largely on decoder-only models that operate autoregressively: each token is generated conditioned on all previously generated tokens, which imposes a sequential limitation (see the first sketch after this list). Later in this chapter, we will discuss parallel and speculative decoding techniques that aim to speed up the decoding process.

  • As the input sequence length increases, the compute needed for self-attention grows quadratically, since every token attends to every other token (see the second sketch after this list). Later in this chapter, we will discuss techniques ...
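
To make the sequential limitation concrete, here is a minimal sketch of a greedy autoregressive decoding loop. It assumes a hypothetical `model` callable that maps a sequence of token IDs to next-token logits (a plain Python list); it is not tied to any particular library or to the book's own code.

```python
def greedy_generate(model, prompt_ids, max_new_tokens, eos_id):
    # `model` is a hypothetical callable: token IDs -> next-token logits (list of floats).
    tokens = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(tokens)          # one full forward pass per generated token
        next_id = max(range(len(logits)), key=logits.__getitem__)  # greedy argmax
        tokens.append(next_id)          # the next iteration depends on this output
        if next_id == eos_id:           # stop at end-of-sequence
            break
    return tokens
```

Because each iteration consumes the token produced by the previous one, the loop cannot be parallelized naively; this dependency is exactly what parallel and speculative decoding try to work around.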
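
As a back-of-the-envelope illustration of the quadratic growth, the sketch below counts only the multiply-accumulates needed to form the attention score matrix Q·Kᵀ for a single head; the sequence lengths and head dimension are illustrative, not taken from any particular model.

```python
def attention_score_macs(seq_len, head_dim):
    # Forming the (seq_len x seq_len) score matrix Q @ K^T costs roughly
    # seq_len * seq_len * head_dim multiply-accumulates per head.
    return seq_len * seq_len * head_dim

for n in (1_000, 2_000, 4_000):
    print(f"seq_len={n}: ~{attention_score_macs(n, head_dim=128):.2e} MACs")
# Doubling the sequence length quadruples the cost:
# seq_len=1000: ~1.28e+08 MACs
# seq_len=2000: ~5.12e+08 MACs
# seq_len=4000: ~2.05e+09 MACs
```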
