Chapter 11. Advanced Topics and Cutting-Edge Research
We began this journey by reflecting on our nature as social and visual beings: creatures who evolved to perceive the world through sight and coordinate through language.
Perception and communication alone were never enough for survival; our ancestors had to act on what they saw and discussed: hunting, building, navigating, and collaborating to shape their environment.
As vision-language models mature beyond passive understanding, we face a parallel challenge: enabling AI systems to not just see and speak, but to do. This chapter explores that challenge in two domains that share the same core idea. In the digital world, agents use VLMs to reason, plan, and execute actions through tools and graphical interfaces: clicking buttons, filling forms, writing code. In the physical world, vision-language-action (VLA) models take the same see-and-understand capabilities and wire them to robot motors: picking objects, folding laundry, pouring coffee. ...
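To make the parallel concrete, here is a minimal sketch of the loop both kinds of systems share: observe the world, let a VLM reason about the next step, and execute it. Every name in this sketch (capture_observation, vlm_plan_action, execute_action, Action) is a hypothetical placeholder rather than any particular framework's API; the stubs exist only so the loop structure is runnable.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str   # e.g. "click", "type", "move_gripper", or "done"
    args: dict  # action parameters, e.g. screen coordinates or joint targets

# Hypothetical stand-ins for the environment and the model. A digital agent
# would capture a screenshot and dispatch mouse/keyboard events; a VLA system
# would read a camera frame and emit motor commands instead.
def capture_observation() -> bytes:
    return b""                  # placeholder: screenshot or camera frame

def vlm_plan_action(goal: str, observation: bytes) -> Action:
    return Action("done", {})   # placeholder: one VLM forward pass

def execute_action(action: Action) -> None:
    pass                        # placeholder: GUI event or robot motor command

def run_agent(goal: str, max_steps: int = 10) -> None:
    """Loop perceive -> reason -> act until the model reports completion."""
    for _ in range(max_steps):
        observation = capture_observation()
        action = vlm_plan_action(goal, observation)
        if action.name == "done":
            break
        execute_action(action)

run_agent("archive all unread emails")
```

The only difference between the two domains, at this level of abstraction, is what fills the placeholders: pixels from a screen versus pixels from a camera, and synthetic input events versus motor torques.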