Robust Large-Scale Machine Learning in the Cloud
Venue
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, San Francisco, CA, USA (2016) (to appear)
Publication Year
2016
Authors
Steffen Rendle, Dennis Fetterly, Eugene J. Shekita, Bor-yiing Su
Abstract
The convergence behavior of many distributed machine learning (ML) algorithms can
be sensitive to the number of machines being used or to changes in the computing
environment. As a result, scaling to a large number of machines can be challenging.
In this paper, we describe a new scalable coordinate descent (SCD) algorithm for
generalized linear models whose convergence behavior is always the same, regardless
of how much SCD is scaled out and regardless of the computing environment. This
makes SCD highly robust and enables it to scale to massive datasets on low-cost
commodity servers. Experimental results on a real advertising dataset in Google are
used to demonstrate SCD's cost effectiveness and scalability. Using Google's
internal cloud, we show that SCD can provide near linear scaling using thousands of
cores for 1 trillion training examples on a petabyte of compressed data. This
represents 10,000x more training examples than the "large-scale" Netflix prize
dataset. We also show that SCD can learn a model for 20 billion training examples
in two hours for about $10.
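The abstract does not spell out the distributed SCD algorithm itself, but its starting point, coordinate descent for a generalized linear model, can be illustrated with a minimal single-machine sketch. The sketch below solves ordinary least squares (a GLM with identity link) by updating one coordinate at a time; the function name and loop structure are illustrative assumptions, not the paper's method, which adds the partitioning and scheduling needed for scale-invariant convergence.

```python
import numpy as np

def coordinate_descent_ols(X, y, n_iters=100):
    """Plain single-machine coordinate descent for least squares.

    Minimizes ||X w - y||^2 by optimizing one coordinate at a time
    while holding the others fixed. Illustrative only -- not the
    distributed SCD algorithm described in the paper.
    """
    n, d = X.shape
    w = np.zeros(d)
    residual = y - X @ w            # kept up to date incrementally
    col_sq = (X ** 2).sum(axis=0)   # per-feature squared norms, precomputed
    for _ in range(n_iters):
        for j in range(d):
            if col_sq[j] == 0:
                continue
            # closed-form optimal update for coordinate j
            delta = X[:, j] @ residual / col_sq[j]
            w[j] += delta
            residual -= delta * X[:, j]
    return w
```

Because each coordinate update touches only one feature column and the shared residual, the work decomposes naturally across features, which is the property a distributed variant like SCD exploits.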
