Efficient Learning of Sparse Ranking Functions
Venue: Empirical Inference, Springer (2013)
Publication Year: 2013
Authors: Mark Stevens, Samy Bengio, Yoram Singer
Abstract
Algorithms for learning to rank can be inefficient when they employ risk functions
that use structural information. We describe and analyze a learning algorithm that
efficiently learns a ranking function using a domination loss. This loss is
designed for problems in which a small number of positive examples must be ranked
above a vast number of negative examples. In that context, we propose an efficient
coordinate descent approach that scales linearly with the number of examples. We
then present an extension that incorporates regularization, thus extending Vapnik's
notion of regularized empirical risk minimization to ranking. We also discuss an
extension to the case of multi-valued feedback. Experiments performed on several
benchmark datasets and a large-scale Google internal dataset demonstrate the
effectiveness of the learning algorithm in constructing compact models while
retaining empirical accuracy.
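To make the flavor of the approach concrete, here is a minimal illustrative sketch of coordinate descent for a pairwise ranking objective with L1 regularization, which yields sparse (compact) ranking weights. This is an assumption-laden toy, not the paper's exact domination-loss algorithm: it uses a squared-hinge pairwise loss over all positive/negative pairs, and the function name, step size, and data are invented for illustration.

```python
import numpy as np

def train_sparse_ranker(X_pos, X_neg, lam=0.05, epochs=100, step=0.005):
    """Illustrative proximal coordinate descent for a pairwise ranking
    objective (squared hinge) with L1 regularization.

    NOTE: a hypothetical sketch, not the paper's domination-loss method.
    """
    # Difference vectors: each positive example should score above
    # each negative example, i.e. w . (x_pos - x_neg) >= 1.
    D = (X_pos[:, None, :] - X_neg[None, :, :]).reshape(-1, X_pos.shape[1])
    w = np.zeros(D.shape[1])
    for _ in range(epochs):
        for j in range(len(w)):           # cyclic pass over coordinates
            margins = D @ w
            active = margins < 1.0        # pairs violating the margin
            # Gradient of the squared-hinge part w.r.t. coordinate j.
            g = -2.0 * ((1.0 - margins[active]) * D[active, j]).sum()
            # Gradient step followed by soft-thresholding (proximal L1 step),
            # which drives irrelevant coordinates to exactly zero.
            z = w[j] - step * g
            w[j] = np.sign(z) * max(abs(z) - step * lam, 0.0)
    return w
```

In this toy formulation each coordinate pass touches every pair, so it does not achieve the linear-in-examples scaling claimed for the paper's algorithm; the sketch only shows how coordinate-wise updates plus an L1 proximal step produce a sparse ranking function.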
