Latent Factor Models with Additive Hierarchically-smoothed User Preferences
Venue
Proceedings of The 6th ACM International Conference on Web Search and Data Mining (WSDM) (2013)
Publication Year
2013
Authors
Amr Ahmed, Bhargav Kanagal, Sandeep Pandey, Vanja Josifovski, Lluis Garcia-Pueyo
Abstract
Items in recommender systems are usually associated with annotated attributes,
such as brand and price for products, or agency for news articles. These attributes
are highly informative and should be exploited for accurate recommendation. While
learning a user preference model over these attributes can result in an
interpretable recommender system and can handle the cold-start problem, it suffers
from two major drawbacks: data sparsity and the inability to model random effects.
On the other hand, latent-factor collaborative filtering models have shown great
promise in recommender systems; however, their performance on rare items is poor. In
this paper we propose a novel model, LFUM, which provides the advantages of both of
the above models. We learn user preferences (over the attributes) using a
personalized Bayesian hierarchical model that uses a combination (additive model)
of a globally learned preference model along with user-specific preferences. To
combat data sparsity, we smooth these preferences over the item taxonomy using an
efficient forward-filtering and backward-smoothing algorithm. Our inference
algorithms can handle both discrete attributes (e.g., item brands) and continuous
attributes (e.g., prices). We combine the user preferences with the latent-factor
models and train the resulting collaborative filtering system end-to-end using the
successful BPR ranking algorithm. In our experimental analysis, we show that our
proposed model outperforms several commonly used baselines, and we carry out an
ablation study showing the benefits of each component of our model.
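
To make the additive-preference-plus-BPR idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' implementation): an item is scored as a latent-factor term plus an additive attribute preference (global weights plus user-specific deviations), and the parameters are updated with one stochastic BPR step per sampled (user, positive item, negative item) triple. All names, shapes, and hyperparameters are illustrative assumptions, and the hierarchical smoothing over the item taxonomy is omitted for brevity.

```python
# Hypothetical sketch of additive attribute preferences combined with latent
# factors under BPR. Not the paper's code; all names are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_attrs, k = 100, 500, 20, 10

U = 0.01 * rng.standard_normal((n_users, k))      # user latent factors
V = 0.01 * rng.standard_normal((n_items, k))      # item latent factors
global_pref = np.zeros(n_attrs)                   # globally shared attribute weights
user_pref = np.zeros((n_users, n_attrs))          # user-specific deviations (additive)
item_attrs = rng.integers(0, 2, (n_items, n_attrs)).astype(float)  # toy binary attributes

def score(u, i):
    """Latent-factor term plus additive (global + user-specific) attribute preference."""
    pref = global_pref + user_pref[u]
    return U[u] @ V[i] + pref @ item_attrs[i]

def bpr_step(u, i, j, lr=0.05, reg=0.01):
    """One stochastic BPR update: rank observed item i above unobserved item j."""
    x_uij = score(u, i) - score(u, j)
    g = 1.0 / (1.0 + np.exp(x_uij))                # sigma(-x_uij), gradient scale
    u_old = U[u].copy()
    # Latent-factor gradients (ascent on ln sigma(x_uij) with L2 regularization).
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * u_old - reg * V[i])
    V[j] += lr * (-g * u_old - reg * V[j])
    # Additive preference gradients: shared weights and user-specific deviations.
    d_attr = item_attrs[i] - item_attrs[j]
    global_pref += lr * (g * d_attr - reg * global_pref)
    user_pref[u] += lr * (g * d_attr - reg * user_pref[u])

# Toy usage: sample (user, positive, negative) triples and take SGD steps.
for _ in range(1000):
    u = rng.integers(n_users)
    i, j = rng.integers(n_items, size=2)
    bpr_step(u, i, j)
```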
