Chapter 3. Mathematical Considerations
Most of this book focuses on implementation and on the practical considerations necessary to get recommendation systems working. This chapter contains the most abstract and theoretical concepts in the book. Its purpose is to cover a few of the essential ideas that undergird the field. These ideas are important to understand because they explain pathological behavior in recommendation systems and motivate many architectural decisions.
We’ll start by discussing the shape of data you often see in recommendation systems, and why that shape requires careful thought. Next we’ll discuss similarity, the mathematical idea that drives most modern recommendation systems. We’ll briefly cover a different way of thinking about what a recommender does, for those with a more statistical inclination. Finally, we’ll use analogies to natural language processing (NLP) to formulate a popular approach.
Zipf’s Laws in RecSys and the Matthew Effect
In a great many applications of ML, a caveat is given early: the distribution of observations of unique items from a large corpus is modeled by Zipf’s law—an item’s frequency of occurrence is roughly inversely proportional to its popularity rank, so frequency falls off as a power law. In recommendation systems, this skew shows up as the Matthew effect in click rates on popular items and in feedback rates from highly engaged users. For example, popular items accumulate dramatically more clicks than the average item, and the most engaged users give far more ratings than the average user.
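As a minimal sketch of this skew, the following simulation (the item count, interaction count, and exponent are illustrative assumptions, not figures from the book) draws interactions from a Zipfian distribution with frequency proportional to 1/rank and shows how heavily the head dominates:

```python
import numpy as np

# Hypothetical illustration: item popularity following Zipf's law with
# exponent 1, i.e., the item at rank r occurs with probability ~ 1/r.
rng = np.random.default_rng(0)
n_items = 1_000
ranks = np.arange(1, n_items + 1)
zipf_probs = (1.0 / ranks) / np.sum(1.0 / ranks)

# Simulate one million user-item interactions from this distribution.
interactions = rng.choice(n_items, size=1_000_000, p=zipf_probs)
counts = np.bincount(interactions, minlength=n_items)

# The head dominates: the 10 most popular items (1% of the catalog)
# capture a disproportionate share of all interactions.
top_share = np.sort(counts)[::-1][:10].sum() / counts.sum()
print(f"Top 10 items' share of all interactions: {top_share:.1%}")
```

With exponent 1 and 1,000 items, the top 10 items alone capture roughly 40% of all interactions, which is the long-tail imbalance that the Matthew effect then amplifies over time.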
The Matthew Effect
The Matthew effect—or popularity ...