Why does Unsupervised Pre-training Help Deep Learning?
Venue
Journal of Machine Learning Research (2010), pp. 625-660
Publication Year
2010
Authors
Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, Samy Bengio
BibTeX
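@article{erhan2010why,
  title   = {Why Does Unsupervised Pre-training Help Deep Learning?},
  author  = {Erhan, Dumitru and Bengio, Yoshua and Courville, Aaron and Manzagol, Pierre-Antoine and Vincent, Pascal and Bengio, Samy},
  journal = {Journal of Machine Learning Research},
  volume  = {11},
  pages   = {625--660},
  year    = {2010}
}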
Abstract
Much recent research has been devoted to learning algorithms for deep architectures
such as Deep Belief Networks and stacks of auto-encoder variants, with impressive
results obtained in several areas, mostly on vision and language data sets. The
best results obtained on supervised learning tasks involve an unsupervised learning
component, usually in an unsupervised pre-training phase. Even though these new
algorithms have enabled training deep models, many questions remain as to the
nature of this difficult learning problem. The main question investigated here is
the following: how does unsupervised pre-training work? Answering this question is
important if learning in deep architectures is to be further improved. We propose
several explanatory hypotheses and test them through extensive simulations. We
empirically show the influence of pre-training with respect to architecture depth,
model capacity, and number of training examples. The experiments confirm and
clarify the advantage of unsupervised pre-training. The results suggest that
unsupervised pre-training guides the learning towards basins of attraction of
minima that support better generalization from the training data set; the evidence
from these results supports a regularization explanation for the effect of
pre-training.
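
The procedure studied in the paper, greedy layer-wise unsupervised pre-training followed by supervised fine-tuning, can be sketched as follows. This is a minimal illustration rather than the paper's experimental setup: the layer sizes, corruption noise, optimizer, and synthetic data are assumptions chosen for brevity, and PyTorch is used only as convenient notation for an auto-encoder variant (here a denoising autoencoder).

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(512, 64)             # unlabeled inputs (synthetic, illustrative)
y = (X.sum(dim=1) > 32).long()      # labels used only during fine-tuning

sizes = [64, 32, 16]                # input dimension followed by two hidden layers
encoders = []

# Unsupervised pre-training: train each layer as a denoising autoencoder
# on the representation produced by the layers below it.
h = X
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc = nn.Linear(d_in, d_out)
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):
        noisy = h + 0.1 * torch.randn_like(h)      # corrupt the input
        recon = dec(torch.sigmoid(enc(noisy)))     # reconstruct the clean input
        loss = nn.functional.mse_loss(recon, h)
        opt.zero_grad(); loss.backward(); opt.step()
    encoders.append(enc)
    h = torch.sigmoid(enc(h)).detach()             # representation fed to the next layer

# Supervised fine-tuning: the pre-trained encoders initialize a deep classifier,
# and all parameters are then trained on the labeled data.
layers = []
for enc in encoders:
    layers += [enc, nn.Sigmoid()]
model = nn.Sequential(*layers, nn.Linear(sizes[-1], 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()

The contrast drawn in the paper is between this initialization and training the same architecture from random initialization; the abstract's regularization explanation concerns where in parameter space the pre-trained starting point places the subsequent supervised optimization.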
