Efficient Learning of Sparse Ranking Functions

Mark Stevens
Samy Bengio
Yoram Singer
Empirical Inference, Springer (2013)

Abstract

Algorithms for learning to rank can be inefficient when they employ risk functions that use structural information. We describe and analyze a learning algorithm that efficiently learns a ranking function using a domination loss. This loss is designed for problems in which we need to rank a small number of positive examples over a vast number of negative examples. In that context, we propose an efficient coordinate descent approach that scales linearly with the number of examples. We then present an extension that incorporates regularization, thus extending Vapnik's notion of regularized empirical risk minimization to ranking learning. We also discuss an extension to the case of multi-valued feedback. Experiments performed on several benchmark datasets and a large-scale Google internal dataset demonstrate the effectiveness of the learning algorithm in constructing compact models while retaining empirical accuracy.
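
The abstract describes the approach only at a high level. As a rough illustration of the general idea, not the paper's actual algorithm, the Python sketch below combines three ingredients the abstract names: a domination-style surrogate loss that asks each positive example to outscore the hardest negative, a coordinate descent sweep over weights, and an L1 penalty applied via soft-thresholding to produce compact (sparse) models. All function names, the hinge surrogate, the numerical gradient, and the hyperparameters (l1, step, margin) are illustrative assumptions; the paper's exact domination loss and update rules differ.

import numpy as np

def domination_style_loss(w, X_pos, X_neg, margin=1.0):
    """Hinge surrogate: each positive should outscore every negative by `margin`.
    This is a stand-in for the paper's domination loss, not its exact form."""
    s_pos = X_pos @ w          # scores of the (few) positive examples
    s_neg = X_neg @ w          # scores of the (many) negative examples
    worst_neg = s_neg.max()    # a positive "dominates" if it beats the hardest negative
    return np.maximum(0.0, margin - (s_pos - worst_neg)).sum()

def coordinate_descent(X_pos, X_neg, l1=0.1, n_epochs=50, step=0.01):
    """Sweep coordinates one at a time; soft-threshold each update for sparsity."""
    d = X_pos.shape[1]
    w = np.zeros(d)
    for _ in range(n_epochs):
        for j in range(d):
            # Crude numerical partial derivative of the surrogate w.r.t. w[j];
            # a real implementation would use a closed-form coordinate update.
            eps = 1e-6
            e = np.zeros(d)
            e[j] = eps
            g = (domination_style_loss(w + e, X_pos, X_neg)
                 - domination_style_loss(w - e, X_pos, X_neg)) / (2 * eps)
            w[j] -= step * g
            # Soft-thresholding for the L1 penalty: this is what drives
            # coordinates to exactly zero and yields a compact model.
            w[j] = np.sign(w[j]) * max(abs(w[j]) - step * l1, 0.0)
    return w

# Hypothetical usage on synthetic data: a handful of positives, many negatives.
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, size=(5, 20))
X_neg = rng.normal(0.0, 1.0, size=(200, 20))
w = coordinate_descent(X_pos, X_neg)
print("nonzero weights:", np.count_nonzero(w), "of", w.size)

Note that each loss evaluation here is a single pass over the examples, so the cost per coordinate update grows linearly with the number of examples, consistent with the scaling the abstract claims for the authors' method.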