On the Impact of Kernel Approximation on Learning Accuracy
Venue
Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010)
Publication Year
2010
Authors
Corinna Cortes, Mehryar Mohri, Ameet Talwalkar
Abstract
Kernel approximation is commonly used to scale kernel-based algorithms to
applications containing as many as several million instances. This paper analyzes
the effect of such approximations in the kernel matrix on the hypothesis generated
by several widely used learning algorithms. We give stability bounds based on the
norm of the kernel approximation for these algorithms, including support vector
machines (SVMs), kernel ridge regression (KRR), and graph Laplacian-based
regularization algorithms. These bounds help determine the
degree of approximation that can be tolerated in the estimation of the kernel
matrix. Our analysis is general and applies to arbitrary approximations of the
kernel matrix. However, we also give a specific analysis of the Nyström low-rank
approximation in this context and report the results of experiments evaluating the
quality of the Nyström low-rank kernel approximation when used with ridge
regression.
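
The abstract names two concrete ingredients: the Nyström low-rank approximation of a kernel matrix and its use with ridge regression. Below is a minimal Python sketch of that pipeline, assuming a Gaussian kernel and uniform column sampling; the parameter values (m, gamma, lam) are illustrative choices, not details taken from the paper.

```python
# Minimal sketch: Nystrom low-rank kernel approximation + kernel ridge
# regression on the approximate kernel. The Gaussian kernel, uniform
# column sampling, and all parameter values here are assumptions for
# illustration, not the paper's experimental setup.
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom(K, m, rng):
    """Rank-m Nystrom approximation of an n x n kernel matrix K.

    Samples m columns uniformly at random and returns
    K_tilde = C W^+ C^T, where C holds the sampled columns and W is
    their m x m intersection block.
    """
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = K[:, idx]                 # n x m block of sampled columns
    W = K[np.ix_(idx, idx)]       # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
n, m, lam = 500, 50, 1e-2        # illustrative sizes and ridge parameter
X = rng.normal(size=(n, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

K = gaussian_kernel(X, X)
K_approx = nystrom(K, m, rng)

# Spectral-norm approximation error ||K - K_tilde||_2: a norm of the
# kernel perturbation, the kind of quantity the abstract's stability
# bounds are stated in terms of.
err = np.linalg.norm(K - K_approx, 2)

# KRR trained on the approximate kernel: alpha = (K_tilde + lam I)^{-1} y.
alpha = np.linalg.solve(K_approx + lam * np.eye(n), y)
train_pred = K_approx @ alpha
print(f"||K - K_tilde||_2 = {err:.4f}, "
      f"train MSE = {np.mean((train_pred - y) ** 2):.4f}")
```

Increasing m tightens the approximation (err shrinks toward zero at m = n), which is the trade-off the bounds quantify: how much kernel perturbation can be tolerated before the learned hypothesis degrades.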
