I suggest you follow http://www.diva-portal.org/smash/get/diva2:927356/FULLTEXT01.pdf and https://medium.com/fnplus/evaluating-recommender-systems-with-python-code-ae0c370c90be
Hello Alex, the simplest method here is k-fold cross-validation (leave-one-out cross-validation is the special case where k equals the number of observations): you split your input matrix (real_ratings in this case) into k parts, then run k validation iterations, using exactly one of the k parts for validation and the remaining k-1 parts for training in each iteration. The challenge is to define the parts so that each part ideally includes every book and every user. If, for example, you split real_ratings into k parts by simply splitting the vector of users into k parts, your trained model would not be able to predict recommendations for the left-out users. I therefore recommend splitting real_ratings into k parts by randomly drawing the elements of each part from those matrix elements that actually hold a rating. The following R code demonstrates this with the function createFolds from the caret package for a 10-fold cross-validation (k = 10):
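Here is a minimal sketch of that fold construction. The toy real_ratings matrix below is a hypothetical stand-in for your data (0 marks "no rating"); the recommender fit itself is left as a placeholder comment, since it depends on which model you use:

```r
library(caret)  # for createFolds

set.seed(42)

# Hypothetical stand-in for your user-by-book matrix real_ratings,
# where 0 means "no rating given".
real_ratings <- matrix(sample(c(0, 1:5), 200, replace = TRUE,
                              prob = c(0.6, rep(0.08, 5))),
                       nrow = 20, ncol = 10)

# Linear indices of the matrix elements that actually hold a rating.
rated_idx <- which(real_ratings != 0)

# Randomly assign those rated elements to 10 folds.
folds <- createFolds(rated_idx, k = 10)

for (i in seq_along(folds)) {
  test_idx <- rated_idx[folds[[i]]]  # held-out ratings for this iteration

  train <- real_ratings
  train[test_idx] <- 0               # blank out the held-out ratings

  # ... fit your recommender on `train`, predict the entries at
  # `test_idx`, and compare the predictions against
  # real_ratings[test_idx] (e.g. via RMSE or MAE) ...
}
```

Because the folds are drawn over the rated elements rather than over whole users, every training set still contains ratings from (almost) all users and books.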
You create an empty matrix (filled with zeros or NAs) and fill in those elements that are included in 9 of the 10 folds for model training. The elements in the 10th fold are then predicted and used in your accuracy computation.