Quick Start Guide to Large Language Models: Strategies and Best Practices for ChatGPT, Embeddings, Fine-Tuning, and Multimodal AI, 2nd Edition

by Sinan Ozdemir
October 2024
Intermediate to advanced content level
384 pages
13h 7m
English
Addison-Wesley Professional

9. Moving Beyond Foundation Models

Introduction

In previous chapters, we have focused on using or fine-tuning pre-trained models such as GPT, BERT, and Claude to tackle a variety of natural language processing and computer vision tasks. While these models have demonstrated state-of-the-art performance on a wide range of benchmarks, they may not be sufficient for solving more complex or domain-specific tasks that require a deeper understanding of the problem.

In this chapter, we explore the concept of constructing novel LLM architectures by combining existing models. By combining different models, we can leverage their strengths to create a hybrid architecture that either performs better than the individual models or performs a task that wasn’t ...
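As a minimal sketch of what combining models can look like in practice, consider pairing a frozen pre-trained encoder with a small task-specific head. The choice of BERT as the encoder and a linear classification head below are illustrative assumptions, not a specific recipe from this chapter.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class HybridClassifier(nn.Module):
    """A pre-trained encoder reused as a frozen feature extractor, with a new trainable head."""

    def __init__(self, encoder_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Freeze the pre-trained encoder; only the new head's weights are updated during training.
        for param in self.encoder.parameters():
            param.requires_grad = False
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Use the [CLS] token's final hidden state as a pooled representation of the input.
        cls_embedding = outputs.last_hidden_state[:, 0, :]
        return self.head(cls_embedding)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = HybridClassifier()
batch = tokenizer(["The service was surprisingly fast."], return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])

In this sketch, the frozen encoder supplies general-purpose language representations while the lightweight head adapts them to a new task, which is one simple way to leverage an existing model's strengths inside a larger architecture.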




ISBN: 9780135346570