October 2023
Intermediate to advanced
288 pages
8h 46m
English
Two full chapters on prompt engineering taught us how to interact effectively with (prompt) LLMs, acknowledging their immense potential as well as their limitations and biases. We have also fine-tuned models, both open and closed source, to expand on an LLM's pre-training and better solve our own specific tasks. We have even seen a full case study of how semantic search and embedding spaces can help us retrieve relevant information from a dataset with speed and ease.
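The semantic-search idea mentioned above can be sketched in a few lines: embed documents and a query into the same vector space, then rank documents by cosine similarity. The tiny 3-dimensional vectors below are hypothetical stand-ins for real embedding-model outputs, used only to illustrate the ranking step.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dim "embeddings" standing in for real model outputs.
docs = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.1, 0.8, 0.1]),
}
query = np.array([0.85, 0.15, 0.0])

# Retrieve the document whose embedding is closest to the query's.
best = max(docs, key=lambda name: cosine_sim(query, docs[name]))
print(best)  # doc_a lies closest to the query in embedding space
```

In practice the vectors would come from an embedding model and the ranking would run over an indexed corpus, but the core retrieval step is exactly this nearest-neighbor comparison.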
To broaden our horizons further, we will build on lessons from earlier chapters and dive into the world of fine-tuning embedding models and customizing pre-trained LLM architectures to unlock ...