Once we have built our recommendation engine using collaborative filtering, we don't want to wait until the model is deployed in production to learn about its performance. We want to produce the best-performing recommendation engine. Therefore, during the development process, we can split our data into a training set and a test set. We can build our algorithm on the training set and test it against the test set to validate and estimate the performance of our collaborative filtering method.
The recommenderlab package provides the necessary infrastructure to achieve this. The evaluationScheme function is provided to help us create a train/test strategy. It takes a ratingMatrix as input and supports several schemes, including a simple train/test split, bootstrap sampling, and k-fold cross-validation.
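As a minimal sketch of this workflow, the following R code uses recommenderlab's bundled MovieLense dataset (an assumption for illustration; any ratingMatrix works) to create a simple 80/20 split scheme, train a user-based collaborative filtering model on the training portion, and measure prediction accuracy on the held-out ratings:

```r
library(recommenderlab)

# MovieLense ships with recommenderlab as a realRatingMatrix
data(MovieLense)

# 80% of users go to the training set; for each test user, 10 ratings
# are "given" to the model and the rest are withheld for evaluation.
# Ratings of 4 or above are treated as "good".
scheme <- evaluationScheme(MovieLense, method = "split",
                           train = 0.8, given = 10, goodRating = 4)

# Train a user-based collaborative filtering recommender
rec <- Recommender(getData(scheme, "train"), method = "UBCF")

# Predict ratings for the test users from their "given" ratings,
# then compare against the withheld ("unknown") ratings
pred <- predict(rec, getData(scheme, "known"), type = "ratings")
calcPredictionAccuracy(pred, getData(scheme, "unknown"))
```

The `given` parameter controls how much of each test user's profile the model sees at prediction time; `calcPredictionAccuracy` reports error measures such as RMSE, MSE, and MAE on the withheld ratings.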