We use matrix factorization methods to produce a personalized ranking of items for each user. However, to solve this problem we use a binary classification optimization criterion: the log loss. This loss works fine, and optimizing it often produces good ranking models. But what if we could instead use a loss specifically designed for training a ranking function?
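To make the pointwise setup concrete, here is a minimal sketch of one stochastic gradient step of matrix factorization trained with the log loss. The factor matrices `P` and `Q`, the dimensionality `k`, and the `sgd_step` helper are hypothetical names chosen for illustration; the implicit feedback is assumed to have been converted into (user, item, label) pairs, with label 1 for observed interactions and 0 for sampled negatives.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 1000, 500, 32          # hypothetical sizes
P = 0.1 * rng.standard_normal((n_users, k))  # user factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item factors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(u, i, label, lr=0.05, reg=0.01):
    """One log-loss SGD update for a single (user, item, label) example."""
    p = sigmoid(P[u] @ Q[i])   # predicted probability of an interaction
    g = p - label              # gradient of the log loss w.r.t. the score
    pu = P[u].copy()
    P[u] -= lr * (g * Q[i] + reg * P[u])
    Q[i] -= lr * (g * pu + reg * Q[i])

# One hypothetical update: user 3 interacted with item 7 (a positive example).
sgd_step(u=3, i=7, label=1)
```

Note that each update looks at a single user-item pair on its own: the model is asked "did this interaction happen?" rather than "should this item be ranked above that one?"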
Of course, it is possible to use an objective that directly optimizes for ranking. In the paper "BPR: Bayesian Personalized Ranking from Implicit Feedback" by Rendle et al. (2012), the authors propose such an optimization criterion, which they call BPR-Opt.
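In the paper's notation, BPR-Opt works on triples (u, i, j), where user u has interacted with item i but not with item j, and maximizes the sum of ln σ(x̂_ui − x̂_uj) over such triples minus an L2 penalty on the model parameters. Below is a minimal sketch of one SGD step on this pairwise loss, using the same hypothetical factor matrices `P` and `Q` as above (redefined here so the snippet runs on its own); it is an illustration of the loss, not the paper's full LearnBPR procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 1000, 500, 32          # hypothetical sizes
P = 0.1 * rng.standard_normal((n_users, k))  # user factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item factors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One SGD update on the pairwise BPR loss for a triple (u, i, j)."""
    x_uij = P[u] @ (Q[i] - Q[j])   # score difference between items i and j
    g = -sigmoid(-x_uij)           # gradient of -ln(sigmoid(x_uij)) w.r.t. x_uij
    pu = P[u].copy()
    P[u] -= lr * (g * (Q[i] - Q[j]) + reg * P[u])
    Q[i] -= lr * (g * pu + reg * Q[i])
    Q[j] -= lr * (-g * pu + reg * Q[j])

# One hypothetical update: user 3 interacted with item 7 but not with item 42,
# so the step nudges the model to rank item 7 above item 42 for this user.
bpr_step(u=3, i=7, j=42)
```

The only change relative to the pointwise sketch is that the loss is applied to the difference between two item scores for the same user, which is exactly what turns the objective into a ranking objective.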
Previously, we looked at each item in isolation from the other items. That is, we tried to predict the ...