Chapter 10. The Future of LLMs and LLMOps
In the next decade, the futures of LLMOps, LLMs, NLP, and knowledge graphs will converge in ways we can barely imagine today. Picture AI systems no longer as distant tools but as deeply integrated into every facet of our lives. Even the most popular LLMs today are somewhat clunky iterations, but in the near future, I believe they will be refined to the point where their understanding of language rivals human intuition, driven largely by the emergent capabilities that appear as these models grow in scale.
Currently, the main way users interact with LLMs is through text-based chat, but in the coming years, LLMs won't just answer questions; they'll engage in complex problem-solving, offer insights, and push the boundaries of creativity itself. For example, in September 2024, OpenAI released Advanced Voice Mode for its ChatGPT application, which can detect voice tone, including sarcasm. Much of this progress depends on impending innovations across the infrastructure stack. Meta recently wrote about the challenges, beyond algorithms and architecture, that arise when training these models at scale (see Figure 10-1).
LLMOps will be the backbone supporting these systems as they mature into a seamless, self-sustaining infrastructure. Instead of relying on manual intervention, pipelines for training, fine-tuning, and deploying these models will be fully automated, accelerating progress across the field. LLMOps engineers will spend less time debugging code and more time refining high-level system strategies, ...