Chapter 4. Training Data and Preprocessing for VLMs
A striking difference between VLMs and traditional machine-learning-based computer vision is the sheer scale and diversity of the data needed to train modern vision-language models.
If you examine SmolLM3’s pretraining recipe in Figure 4-1 and add up the token budgets of its three training phases, you can see that it was trained on 11.1 trillion tokens. To put this in perspective: assuming 1 token is roughly 4 characters, 11.1 trillion tokens equals about 44 trillion characters of text. That’s on the same order of magnitude as the entire publicly readable Internet.
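As a quick sanity check, the conversion takes only a couple of lines of Python. Note that the 4-characters-per-token ratio is a common rule of thumb for English text, not an exact property of SmolLM3’s tokenizer:

    # Back-of-envelope: how much raw text do 11.1 trillion tokens represent?
    total_tokens = 11.1e12     # tokens across SmolLM3's three pretraining phases
    chars_per_token = 4        # rough rule of thumb for English BPE tokenizers

    total_chars = total_tokens * chars_per_token
    print(f"~{total_chars:.1e} characters")  # ~4.4e+13, i.e. about 44 trillion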
Figure 4-1. SmolLM3 data mixture and training stage
When moving on to training VLMs, the sheer variability of the data becomes an added complication. While even early work on LLMs recognized the importance of data mixtures (e.g., what percentage of the data is code, math, etc.), the variability ...
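To make the idea of a data mixture concrete, the sketch below shows one common way such a mixture is implemented: each data source is assigned a sampling weight, and training examples are drawn in proportion to those weights. The source names and percentages here are purely illustrative, not SmolLM3’s actual recipe:

    import random

    # Hypothetical mixture weights (fraction of training examples drawn
    # from each source); illustrative only, not SmolLM3's actual recipe.
    MIXTURE = {
        "web_text": 0.70,
        "code":     0.15,
        "math":     0.10,
        "papers":   0.05,
    }

    def sample_source(rng: random.Random) -> str:
        """Pick a data source in proportion to its mixture weight."""
        sources, weights = zip(*MIXTURE.items())
        return rng.choices(sources, weights=weights, k=1)[0]

    rng = random.Random(0)
    print([sample_source(rng) for _ in range(8)])  # mostly "web_text"

Sampling by weight at the example level, rather than concatenating whole datasets, makes it easy to adjust the mixture between training phases without rebuilding the corpus.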