Stability Bounds for Stationary $\phi$-mixing and $\beta$-mixing Processes
Venue: Journal of Machine Learning Research (JMLR), vol. 11 (2010), pp. 798-814
Publication Year: 2010
Authors: Mehryar Mohri, Afshin Rostamizadeh
Abstract
Most generalization bounds in learning theory are based on some measure of the
complexity of the hypothesis class used, independently of any algorithm. In
contrast, the notion of algorithmic stability can be used to derive tight
generalization bounds that are tailored to specific learning algorithms by
exploiting their particular properties. However, as in much of learning theory,
existing stability analyses and bounds apply only in the scenario where the samples
are independently and identically distributed (i.i.d.). In many machine learning
applications this assumption does not hold: the observations received by the
learning algorithm often have some inherent temporal dependence.
