
Deploying GPT and Large Language Models

Published by Pearson

Intermediate content level

Leveraging LLMOps for production-ready AI systems

  • Use GPT-4, Cohere, embeddings, and other LLMs to build AI applications at scale
  • See how novel LLMs like ChatGPT are changing how people think about and build products using AI
  • See practical use cases plus code for repeatable use after the session is over

ChatGPT and OpenAI's GPT models are among the most talked-about advances in natural language processing (NLP), but they are not the only LLMs in town. Both open- and closed-source LLMs can be applied to your business: leverage the power of AI to automate customer service tasks using agents, build smarter chatbots with retrieval-augmented generation (RAG), and unlock new insights from your data.
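To make the RAG idea concrete, here is a minimal, self-contained sketch. The document store, the word-overlap retriever, and the prompt template are all illustrative stand-ins: a production system would use embedding-based similarity search and pass the assembled prompt to an actual LLM, both of which are omitted here.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant document by word overlap, then "stuff" it into the prompt as
# context. Real systems use embedding similarity and a real LLM call.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]

def retrieve(query, docs):
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_rag_prompt(query, docs):
    """Assemble a context-stuffed prompt for an LLM to answer from."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_rag_prompt("When are support hours?", DOCS))
```

The same retrieve-then-prompt structure carries over unchanged when the keyword retriever is swapped for a vector database.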

In this training, you will learn how to use the latest GPT models, OpenAI embeddings, and several other large language models to build applications for both experimentation and production. We cover the fundamentals of LLMs and their applications, and explore alternative generative models as well as encoding models (for embedding and classification). You will gain practical experience building a variety of applications with these models, including text generation, summarization, question answering, and more, and you will learn how to leverage prompt engineering, quantization, and few-shot learning to get the most out of GPT-like models.
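As a taste of few-shot learning, the snippet below builds a prompt that teaches the task (sentiment classification) through labeled examples rather than fine-tuning. The example reviews and the template are made up for illustration; the call to an actual model is omitted, since any chat or completion API could consume this string.

```python
# Minimal few-shot prompting sketch: labeled examples followed by the new
# query, so the LLM infers the task from the pattern alone.
EXAMPLES = [
    ("The movie was fantastic!", "positive"),
    ("I'd never watch that again.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Format (text, label) pairs, then the unlabeled query, leaving the
    final 'Sentiment:' slot for the model to complete."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(EXAMPLES, "A thoroughly enjoyable evening.")
print(prompt)
```

Adding two or three well-chosen examples like these is often enough to steer a capable model without any training step.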

The focus will then shift to deploying these models in production, including best practices and debugging techniques using the latest Ops frameworks such as Kubeflow. By the end of the training, you will have a working knowledge of GPT and other large language models, as well as the skills to start building your own applications with them.

What you’ll learn and how you can apply it

  • Learn the fundamentals of GPT and alternative generative models
  • Understand how to leverage prompt engineering, context stuffing, and few-shot learning to get the most out of GPT-like models
  • Build applications such as text generation, summarization, and question answering
  • Deploy GPT and other language models in production
  • Develop best practices and debugging techniques for using GPT and other large language models in applications

This live event is for you because...

  • You are a software engineer, data scientist, or machine learning engineer who is interested in using GPT and other large language models to build applications.
  • You want to learn how to get the most out of GPT-like models with prompt engineering, context stuffing, and few-shot learning.
  • You are looking to develop best practices and debugging techniques for deploying GPT and other language models in production.

Prerequisites

  • Python 3 proficiency with some familiarity with working in interactive Python environments including Notebooks (Jupyter/Google Colab/Kaggle Kernels)
  • Familiarity with machine learning concepts
  • Understanding of natural language processing (NLP)

Course Set-up

  • Instructor's GitHub repository with the slides, code, and links
  • Attendees will need access to the notebooks in the GitHub repository


Schedule

The time frames are only estimates and may vary according to how the class is progressing.

Segment 1: Introduction to GPT and Large Language Models (30 min)

  • Overview of Claude, ChatGPT, and other large language models
  • Types of applications GPT and other language models can be used for
  • How GPT and other LLMs work

Segment 2: Building Applications with GPT (50 min)

  • Leveraging prompt engineering to get the most out of proprietary LLMs
  • Building applications such as text generation, summarization, and question answering using open-source models

Break / Q&A (10 min)

Segment 3: Deploying GPT and Other Language Models in Production (30 min)

  • Best practices for deploying GPT and other language models in production
  • Debugging techniques for LLMs

Segment 4: Advanced Applications with GPT and Other Large Language Models (50 min)

  • Building conversational agents and retrieval augmented chatbots
  • Evaluating LLMs in production

Break / Q&A (10 min)

Segment 5: OpenAI Embeddings and Alternatives to GPT (30 min)

  • Overview of embeddings and how to produce and use them
  • Comparing generative models using benchmarks
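For a sense of how embeddings are used once produced, the snippet below computes cosine similarity between vectors. The three-dimensional vectors are invented for illustration; real embedding models (OpenAI's included) return vectors with hundreds or thousands of dimensions, but the similarity math is identical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical
    direction, values near 0 mean unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up toy embeddings: semantically close words should point in
# similar directions, unrelated words should not.
cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.4]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # prints True
```

This comparison is the core operation behind semantic search, clustering, and the retrieval step of RAG pipelines.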

Segment 6: Course Wrap-Up and Next Steps (20 min)

  • Recap of the lesson
  • Resources for further learning and exploring GPT and other large language models

Final Q&A (10 min)

Your Instructor

  • Sinan Ozdemir

    Sinan Ozdemir is the founder of Crucible, an AI factory platform that helps teams convert existing workflows into custom models. He is a Y Combinator alum, an AI & LLM Advisor at Tola Capital, and the author of multiple books on data science and machine learning, including Building Agentic AI, Quick Start Guide to LLMs, and Principles of Data Science. Sinan is a former lecturer of data science at Johns Hopkins University and the founder of Kylie.ai, an enterprise-grade conversational AI platform (acquired 2014). He holds a master's degree in pure mathematics from Johns Hopkins University and is based in San Francisco, California.


Skill covered

GPT