Chapter 4. Co-occurrence and Recommendation
Once you’ve captured user histories as part of the input data, you’re ready to build the recommendation model using co-occurrence. So the next question is: how does co-occurrence work in recommendations? Let’s take a look at the theory behind the machine-learning model that uses co-occurrence (but without the scary math).
Think about three people: Alice, Charles, and Bob. We’ve got some user-history data about what they want (inferentially, anyway) based on what they bought (see Figure 4-1).

In this toy example, we would predict that Bob would like a puppy: Alice likes apples and puppies, and because we know Bob likes apples, we predict that he wants a puppy, too. That's why we began this book by suggesting that observations as simple as "I want a pony" are key to making a recommendation model work. Of course, real recommendations depend on user-behavior histories for huge numbers of users, not this tiny sample, but our toy example should give you an idea of how a recommender model works.
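To make that intuition concrete, here is a minimal sketch in Python of co-occurrence-based recommendation over toy data like this. The purchase histories below are invented for illustration (Charles's item in particular is not taken from Figure 4-1), and a real recommender would compute these counts over huge numbers of users, but the core move is the same: recommend items that co-occur in other people's histories with items this user already has.

    from collections import Counter
    from itertools import permutations

    # Toy purchase histories in the spirit of Figure 4-1.
    # Charles's item is invented here purely for illustration.
    histories = {
        "Alice":   {"apple", "puppy"},
        "Charles": {"bicycle"},
        "Bob":     {"apple"},
    }

    # Count how often each ordered pair of items shows up in the same history.
    cooccurrence = Counter()
    for items in histories.values():
        for a, b in permutations(items, 2):
            cooccurrence[(a, b)] += 1

    def recommend(user, top_n=3):
        """Score items that co-occur with things the user already has."""
        owned = histories[user]
        scores = Counter()
        for item in owned:
            for (a, b), count in cooccurrence.items():
                if a == item and b not in owned:
                    scores[b] += count
        return scores.most_common(top_n)

    print(recommend("Bob"))   # [('puppy', 1)] -- puppies co-occur with apples

Running the sketch recommends a puppy for Bob because puppies co-occur with apples in Alice's history; with millions of histories, those counts become meaningful statistics rather than single coincidences.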
So, back to Bob. As it turns out, Bob did want a puppy, and he also wants a pony. So do Alice, Charles, and a new user in the crowd, Amelia. They all want a pony (we do, too). Where does that leave us?