Build a Large Language Model (From Scratch)
by Sebastian Raschka
Manning Publications, September 2024
368 pages

Content preview from Build a Large Language Model (From Scratch)

2 Working with text data

This chapter covers

  • Preparing text for large language model training
  • Splitting text into word and subword tokens
  • Byte pair encoding as a more advanced way of tokenizing text (a brief code sketch follows this list)
  • Sampling training examples with a sliding window approach
  • Converting tokens into vectors that feed into a large language model
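
To make the byte pair encoding and token-to-vector steps concrete, here is a minimal sketch, not the book's own listing, that tokenizes a sample sentence with the open-source tiktoken library (which implements the GPT-2 BPE tokenizer) and then maps the resulting token IDs to vectors with a PyTorch embedding layer; the sample sentence, the random seed, and the tiny embedding dimension are illustrative assumptions:

    # Byte pair encoding with tiktoken (pip install tiktoken), followed by a
    # token-to-vector lookup with a small PyTorch embedding layer.
    import tiktoken
    import torch

    tokenizer = tiktoken.get_encoding("gpt2")   # GPT-2's BPE tokenizer

    text = "Hello, do you like tea?"            # illustrative sample sentence
    token_ids = tokenizer.encode(text)          # text -> integer token IDs
    print(token_ids)                            # e.g. [15496, 11, 466, 345, 588, 8887, 30]
    print(tokenizer.decode(token_ids))          # IDs -> "Hello, do you like tea?"

    # An embedding layer is a trainable lookup table that maps each token ID
    # to a continuous vector; 50257 is the GPT-2 vocabulary size, and the
    # embedding dimension of 4 is kept tiny for illustration.
    torch.manual_seed(123)
    embedding_layer = torch.nn.Embedding(num_embeddings=50257, embedding_dim=4)
    token_vectors = embedding_layer(torch.tensor(token_ids))
    print(token_vectors.shape)                  # torch.Size([7, 4])

Subword tokenizers like BPE keep the vocabulary at a manageable size while still being able to encode arbitrary words by falling back to smaller pieces, which is why they are preferred over plain word-level tokenization.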

So far, we’ve covered the general structure of large language models (LLMs) and learned that they are pretrained on vast amounts of text. Specifically, our focus was on decoder-only LLMs based on the transformer architecture, which underlies the models used in ChatGPT and other popular GPT-like LLMs.

During the pretraining stage, LLMs process text one word at a time. Training LLMs with millions to billions of parameters ...
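
Although the preview breaks off here, the one-word-at-a-time idea from the sliding window bullet point can be illustrated with a small sketch: given a tokenized ID sequence (the variable names and context length below are assumptions, not the book's code), a window slid across the sequence yields input-target pairs in which the target is simply the next token.

    # A sliding-window sketch that turns a token-ID sequence into
    # input-target pairs for next-token prediction.
    token_ids = [15496, 11, 466, 345, 588, 8887, 30]  # e.g. output of a BPE tokenizer

    context_size = 4  # assumed context length for this illustration
    for i in range(len(token_ids) - context_size):
        inputs = token_ids[i : i + context_size]   # the tokens the model sees
        target = token_ids[i + context_size]       # the next token it must predict
        print(inputs, "->", target)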


Publisher Resources

ISBN: 9781633437166