7

Customizing Embeddings and Model Architectures

Introduction

Two full chapters of prompt engineering equipped us with the knowledge to interact effectively with (prompt) LLMs, acknowledging their immense potential as well as their limitations and biases. We have also fine-tuned models, both open and closed source, building on an LLM’s pre-training to better solve our own specific tasks. We have even walked through a full case study of how semantic search and embedding spaces can help us retrieve relevant information from a dataset with speed and ease.

To further broaden our horizons, we will apply lessons learned in earlier chapters and dive into the world of fine-tuning embedding models and customizing pre-trained LLM architectures to unlock ...
