LLMs from Prototypes to Production
Published by Pearson
Integrating LLMs into workflows, deployment options, and model evaluation
- Apply best practices for transitioning LLM prototypes to production
- Integrate LLMs with different workflows and systems
- Fine-tune LLMs like GPT and FLAN-T5
- Learn model evaluation and improvement techniques
Once you understand how large language models (LLMs) work, have defined your prompts, and have trained your models, it’s time to move your LLM prototypes to production and fine-tune them for optimal performance. We will cover best practices for integrating LLMs into various workflows, deployment options, and model evaluation. This course will empower you to confidently transition from LLM prototypes to fully realized applications and to optimize their performance.
This course is the third in a three-part series by Sinan Ozdemir designed for machine learning engineers and software developers who want to expand their skillset and learn how to work with large language models (LLMs) like ChatGPT and FLAN-T5. The courses provide practical instruction on prompt engineering, language modeling, moving LLM prototypes to production, and fine-tuning GPT models. The three live courses in the series are:
- LLMs, GPT, and Prompt Engineering for Developers
- Using Open- and Closed-Source LLMs in Real-World Applications
- LLMs from Prototypes to Production
The book Quick Start Guide to LLMs by Sinan Ozdemir is recommended as companion material for post-class reference.
What you’ll learn and how you can apply it
- How to effectively move LLM prototypes to production environments
- The various deployment options and considerations for LLMs
- Techniques for fine-tuning LLMs like GPT and FLAN-T5
- How to evaluate and improve LLM performance
And you’ll be able to:
- Seamlessly transition LLM prototypes into production systems
- Integrate LLMs into various workflows and applications
- Fine-tune LLMs for optimal performance
- Evaluate and enhance LLMs based on specific use cases
This live event is for you because...
- You're interested in moving LLM prototypes to production environments
- You want to learn how to integrate LLMs into various workflows and systems
- You seek to optimize the performance of LLMs through fine-tuning
- You're eager to learn model evaluation techniques for LLMs
Prerequisites
- Attendees should have prior experience with machine learning and be proficient in Python programming.
- Familiarity with natural language processing concepts and techniques is helpful but not required.
- Attendees should have a willingness to engage in hands-on exercises and apply the concepts learned in the course to real-world applications.
Course Set-up:
- Jupyter notebooks can be run alongside the instructor, but students who prefer not to code can follow along using the pre-run notebooks in the GitHub repository.
- Visit the GitHub repository for PDFs of the slides, plus code and links in a Jupyter notebook.
Recommended Preparation:
- Attend: LLMs, GPT, and Prompt Engineering for Developers by Sinan Ozdemir
- Attend: Using Open- and Closed-Source LLMs in Real-World Applications by Sinan Ozdemir
- Read: Quick Start Guide to Large Language Models: Strategies and Best Practices for using ChatGPT and Other LLMs by Sinan Ozdemir
- Watch: Introduction to Transformer Models for NLP by Sinan Ozdemir
Recommended Follow-up:
- Watch: Quick Guide to ChatGPT, Embeddings, and Other Large Language Models (LLMs) by Sinan Ozdemir
- Attend: Optimizing Large Language Models by Shaan Khosla
- Explore: Getting Started with Data, LLMs and ChatGPT by Sinan Ozdemir
Schedule
The time frames are only estimates and may vary according to how the class is progressing.
Session 1: Moving LLM Prototypes to Production (60 minutes)
- Best practices for moving LLM prototypes to production
- Integration with different workflows and systems
- Deployment options and considerations
- Q&A (5 minutes)
- Break (10 minutes)
Session 2: Fine-Tuning GPT and FLAN-T5 (60 minutes)
- Understanding how to fine-tune LLMs
- Identifying areas for improvement
- Techniques for improving performance
- Q&A (5 minutes)
- Break (10 minutes)
Session 3: Model Evaluation (60 minutes)
- Understanding how to evaluate LLMs
- Identifying areas for improvement
- Techniques for fine-tuning models
Wrap-up and Final Q&A (30 minutes)
- Review of key takeaways
- Final Q&A
- Conclusion and feedback
Your Instructor
Sinan Ozdemir
Sinan Ozdemir is the founder of Crucible, an AI factory platform that helps teams convert existing workflows into custom models. He is a Y Combinator alum, AI & LLM Advisor at Tola Capital, and the author of multiple books on data science and machine learning including Building Agentic AI, Quick Start Guide to LLMs, and Principles of Data Science. Sinan is a former lecturer of data science at Johns Hopkins University and the founder of Kylie.ai, an enterprise-grade conversational AI platform (acquired 2014). He holds a master's degree in pure mathematics from Johns Hopkins University and is based in San Francisco, California.