Chapter 9. Feature-Based and Counting-Based Recommendations

Consider this oversimplified problem: given a bunch of new users, predict which will like our new mega-ultra-fancy-fun-item-of-novelty, or MUFFIN for short. You may start by asking which old users like MUFFIN; do those users have any aspects in common? If so, you could build a model that predicts MUFFIN affinity from those correlated user features.
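The idea of predicting MUFFIN affinity from correlated user features can be sketched with a toy linear model. Everything here is hypothetical: the feature vectors, the labels, and the use of a plain least-squares fit as a stand-in for whatever classifier you might actually choose.

```python
import numpy as np

# Hypothetical historical data: each row is an old user's feature vector,
# and the label says whether that user liked MUFFIN.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]])
y = np.array([1, 1, 0, 0])

# Fit a least-squares linear model (with a bias column) as a stand-in
# for any feature-based classifier.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def predicted_affinity(user_features):
    # Score a new user's MUFFIN affinity from their features.
    return np.r_[user_features, 1.0] @ w
```

A new user whose features resemble the MUFFIN-likers would then score above 0.5, and one resembling the non-likers below it.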

Alternatively, you could ask, “What other items do people buy with MUFFIN?” If you find that buyers frequently also ask for JAM (just-awesome-merch), then MUFFIN may be a good suggestion for those who already have JAM. This would be using the co-occurrence of MUFFIN and JAM as a predictor. Similarly, suppose a friend comes along with tastes similar to yours: you both like SCONE, JAM, BISCUIT, and TEA, but your friend hasn’t yet tried MUFFIN. If you like MUFFIN, it’s probably a good choice for your friend too. This is using the co-occurrence of items between you and your friend.
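Item–item co-occurrence counting can be sketched in a few lines. The baskets below are made-up data using the chapter's item names; the counting logic is the general technique.

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets.
baskets = [
    {"MUFFIN", "JAM", "TEA"},
    {"JAM", "SCONE", "TEA"},
    {"MUFFIN", "JAM"},
    {"BISCUIT", "TEA"},
]

# Count how often each unordered pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

# Items that co-occur most often with JAM become candidate
# recommendations for users who already have JAM.
jam_partners = {
    (a if b == "JAM" else b): n
    for (a, b), n in co_counts.items()
    if "JAM" in (a, b)
}
```

In this toy data, MUFFIN co-occurs with JAM in two baskets, so a JAM owner would see MUFFIN near the top of the suggestions.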

These item relationship features will form our first ranking methods in this chapter, so grab a tasty snack and let’s dig in.

Bilinear Factor Models (Metric Learning)

As per the usual idioms about running in front of horses and walking after the cart, let’s start our journey into ranking systems with what can be considered naive ML approaches. Through these approaches, we will start to get a sense of where the rub lies in building recommendation systems and why some of the forthcoming efforts are necessary ...
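A minimal sketch of the bilinear scoring the section title refers to: a (user, item) pair is scored as x_uᵀ A y_i, where A is factored as U Vᵀ of rank k to keep it low-dimensional. The dimensions and the random placeholder parameters are illustrative assumptions; in practice U and V would be learned from interaction data.

```python
import numpy as np

rng = np.random.default_rng(0)

d_user, d_item, k = 5, 4, 3  # illustrative feature dims and factor rank

# Factored bilinear form: score(u, i) = x_u @ (U @ V.T) @ y_i.
# U and V are random stand-ins for learned parameters.
U = rng.normal(size=(d_user, k))
V = rng.normal(size=(d_item, k))

def score(x_u, y_i):
    # x_u: user feature vector; y_i: item feature vector.
    # Projecting both sides into the shared rank-k space and taking
    # a dot product gives a scalar affinity score.
    return (x_u @ U) @ (y_i @ V)

x_u = rng.normal(size=d_user)
y_i = rng.normal(size=d_item)
s = score(x_u, y_i)  # a scalar affinity
```

Ranking then amounts to scoring all candidate items for a user and sorting by this scalar.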
