Chapter 15. Bias in Recommendation Systems

We’ve spent much time in this book dissecting how to improve our recommendations, making them more personalized and relevant to an individual user. Along the way, you’ve learned that latent relationships between users and user personas encode important information about shared preferences. Unfortunately, all of this has a serious downside: bias.

For the purposes of our discussion, we’ll talk about the two most important kinds of bias for recommendation systems:

  • Overly redundant or self-similar sets of recommendations

  • Stereotypes learned by AI systems

First, we’ll delve into diversity in recommendation outputs. As critical as it is for a recommendation system to offer relevant choices to users, ensuring variety among those choices is just as essential. Diversity not only safeguards against overspecialization but also promotes novel and serendipitous discoveries, enriching the overall user experience.
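One way to make this first kind of bias measurable is to score how self-similar a recommended slate is. The sketch below is a minimal illustration, assuming you already have item embeddings on hand; the names intra_list_diversity and item_embeddings are ours for illustration, not from the book. It computes the average pairwise cosine distance within a slate of at least two items, so a redundant slate scores near zero and a varied one scores higher.

```python
import jax.numpy as jnp

def intra_list_diversity(item_embeddings: jnp.ndarray) -> jnp.ndarray:
    """Average pairwise cosine distance among a slate of recommended items.

    item_embeddings: (k, d) array, one row per recommended item (k >= 2).
    Returns a scalar; higher means a more diverse slate.
    """
    # Normalize rows so dot products become cosine similarities.
    normed = item_embeddings / jnp.linalg.norm(item_embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                      # (k, k) cosine similarities
    k = sims.shape[0]
    # Average similarity over the k * (k - 1) off-diagonal item pairs.
    avg_sim = (jnp.sum(sims) - jnp.trace(sims)) / (k * (k - 1))
    return 1.0 - avg_sim                          # cosine distance = 1 - similarity
```

Tracking a score like this alongside relevance metrics makes it easy to see when a model has drifted toward overly redundant slates.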

The balance between relevance and diversity is delicate. Striking it challenges the algorithm to go beyond merely echoing users’ past behavior and to explore new territory, ideally providing a more holistically positive experience with the content.

This kind of bias is primarily a technical challenge: how do we satisfy the competing objectives of making recommendations both diverse and highly relevant? One common approach is greedy re-ranking, sketched below.
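The following sketch shows one standard way to trade off the two objectives at serving time: maximal marginal relevance (MMR) re-ranking. It is a minimal illustration under assumed inputs, a vector of relevance scores from an upstream ranker and the same item embeddings as before; the function and parameter names are ours, not the book's, and lam controls how much relevance we are willing to give up for variety.

```python
import jax.numpy as jnp

def mmr_rerank(relevance: jnp.ndarray,
               item_embeddings: jnp.ndarray,
               k: int,
               lam: float = 0.7) -> list[int]:
    """Greedy maximal-marginal-relevance re-ranking.

    relevance: (n,) relevance scores from the upstream ranker.
    item_embeddings: (n, d) item vectors used to measure redundancy.
    lam: trade-off weight; 1.0 = pure relevance, 0.0 = pure diversity.
    Returns the indices of the k selected items, in selection order.
    """
    normed = item_embeddings / jnp.linalg.norm(item_embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T                     # (n, n) cosine similarities
    selected: list[int] = []
    candidates = set(range(relevance.shape[0]))
    while len(selected) < k and candidates:
        best, best_score = None, -float("inf")
        for i in candidates:
            # Penalize similarity to the most similar already-selected item.
            redundancy = max(float(sims[i, j]) for j in selected) if selected else 0.0
            score = lam * float(relevance[i]) - (1.0 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

Setting lam near 1.0 reproduces the original ranking; lowering it trades relevance for variety, which is exactly the tension described above.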

We’ll consider the intrinsic and extrinsic biases in recommendation systems as an often ...
