Adversarial Autoencoders
Venue
International Conference on Learning Representations (2016) (to appear)
Publication Year
2016
Authors
Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow
Abstract
In this paper, we propose a new method for regularizing autoencoders by imposing an
arbitrary prior on the latent representation of the autoencoder. Our method, named the
"adversarial autoencoder", uses the recently proposed generative adversarial
networks (GAN) to match the aggregated posterior of the autoencoder's hidden code
vector with an arbitrary prior. Matching the aggregated posterior to the prior
ensures that there are no "holes" in the prior, so that generating from any part of
the prior space results in meaningful samples. As a result, the decoder of the
adversarial autoencoder learns a deep generative model that maps the imposed prior
to the data distribution. We show how adversarial autoencoders can be used to
disentangle the style and content of images and to achieve competitive generative
performance on the MNIST, Street View House Numbers, and Toronto Face datasets.
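
To make the training procedure described in the abstract concrete, here is a minimal PyTorch-style sketch (not the authors' code; the network sizes, optimizers, and hyperparameters are illustrative assumptions). It alternates a reconstruction phase with an adversarial regularization phase in which a discriminator on the code vector pushes the aggregated posterior q(z) toward an imposed prior p(z), here a standard Gaussian.

```python
# Illustrative sketch of an adversarial autoencoder training step.
# Assumes 28x28 images flattened to 784 dimensions and a Gaussian prior
# p(z) = N(0, I) on an 8-dimensional code; all names are hypothetical.
import torch
import torch.nn as nn

latent_dim = 8

encoder = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                        nn.Linear(512, 784), nn.Sigmoid())
# The discriminator sees code vectors and predicts whether they were drawn
# from the prior p(z) or produced by the encoder.
discriminator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(encoder.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    # 1) Reconstruction phase: ordinary autoencoder update.
    z = encoder(x)
    recon_loss = nn.functional.mse_loss(decoder(z), x)
    opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

    # 2) Regularization phase, discriminator update: tell prior samples
    #    apart from encoder codes.
    z_fake = encoder(x).detach()
    z_real = torch.randn(x.size(0), latent_dim)  # sample from the imposed prior p(z)
    d_loss = (bce(discriminator(z_real), torch.ones(x.size(0), 1)) +
              bce(discriminator(z_fake), torch.zeros(x.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 3) Regularization phase, generator update: train the encoder to fool the
    #    discriminator, pushing the aggregated posterior q(z) toward p(z).
    g_loss = bce(discriminator(encoder(x)), torch.ones(x.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return recon_loss.item(), d_loss.item(), g_loss.item()
```

Because the regularization phase only needs samples from the prior, not its density, the Gaussian in this sketch could be swapped for any distribution that can be sampled, which is what allows the imposed prior to be arbitrary.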
