Chapter 5. Prompt Content

Imagine you’re building a new LLM-driven book recommendation app. Competition is tough because countless book recommendation applications already exist. Their recommendations typically rely on highly mathematical approaches such as collaborative filtering, which infers recommendations for a user by comparing that user’s patterns of usage with the usage patterns of all other users.
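To make the contrast concrete, here is a minimal sketch of user-based collaborative filtering. The book titles, users, and ratings matrix are all hypothetical, and real systems use far larger matrices and more sophisticated similarity measures; this just illustrates the "compare usage patterns" idea.

```python
# A toy user-based collaborative filter. Rows are users, columns are books;
# 1 means the user liked the book, 0 means no interaction. All data is
# illustrative, not from any real system.
import math

books = ["Moby Dick", "Huckleberry Finn", "To Kill a Mockingbird", "Dune"]
ratings = {
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user):
    """Score each unseen book by similarity-weighted votes from other users."""
    seen = ratings[user]
    scores = [0.0] * len(books)
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, their)
        for i, liked in enumerate(their):
            if liked and not seen[i]:
                scores[i] += sim
    return books[max(range(len(books)), key=lambda i: scores[i])]

print(recommend("alice"))  # → To Kill a Mockingbird
```

Note that the algorithm sees only the 0/1 interaction matrix; it has no access to the kind of free-form textual context about a user that an LLM can exploit.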

But LLMs might have something new to offer in this space. Unlike the rigid, purely computational recommendation algorithms typically used, an LLM can read textual data about a user and apply almost humanlike common sense to make recommendations, much like a human who happens to have thoroughly read every book review available on the public internet.

Let’s see this in action. Figure 5-1 shows two example book recommendations from ChatGPT. In the first, we include only information about the last books I read—Moby Dick and Huckleberry Finn. This type of information—previous books read—is analogous to the information that more traditional recommendation systems would use. And as we see, the resulting recommendation of To Kill a Mockingbird is not unreasonable.

But now, it’s time to let the power of LLMs shine. On the right side of the figure, we additionally include information about my demographics, my preferences outside of books, and my recent experiences—lots of messy textual data—and the LLM is able to assimilate this information and use common sense to make ...
