Chapter 3

Engineering advanced learning prompts

You can achieve a lot by manipulating the prompts and hyperparameters of large language models (LLMs). You can modify the tone, style, and accuracy of the generated content. With smarter techniques such as chain-of-thought, tree-of-thought, and their variations, you can even instill, to some extent, the ability to self-correct and improve.
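As a concrete illustration, the snippet below is a minimal sketch of a chain-of-thought prompt sent to an Azure OpenAI chat deployment, assuming the openai Python package (v1 or later); the endpoint, API key, and deployment name are placeholders, and the system message is just one possible way to elicit step-by-step reasoning.

```python
# A minimal chain-of-thought sketch, assuming the openai Python package (v1+)
# and an existing Azure OpenAI chat deployment; endpoint, key, and deployment
# name below are placeholders, not values from this book.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                    # placeholder
    api_version="2024-02-01",
)

# Ask the model to reason step by step before answering (chain-of-thought),
# with a low temperature to reduce randomness in the generated reasoning.
response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment name
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": "Think through the problem step by step, "
                       "then state the final answer on the last line.",
        },
        {
            "role": "user",
            "content": "A train leaves at 9:40 and the trip takes "
                       "2 hours and 35 minutes. When does it arrive?",
        },
    ],
)

print(response.choices[0].message.content)
```

Keeping the temperature low here is a deliberate choice: the point of the technique is the intermediate reasoning, and reducing sampling randomness makes that reasoning, and therefore the final answer, more repeatable.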

In the examples seen so far, we have mostly sent a single prompt to the chosen LLM and presented its response, filtered or not, to the user. This approach works, but it leaves a lot of (perhaps too much) room for the arbitrariness and randomness of these models, which we cannot fully control.
