Intriguing properties of neural networks
Venue
International Conference on Learning Representations (2014)
Publication Year
2014
Authors
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus
BibTeX
@inproceedings{szegedy2014intriguing,
  title={Intriguing properties of neural networks},
  author={Szegedy, Christian and Zaremba, Wojciech and Sutskever, Ilya and Bruna, Joan and Erhan, Dumitru and Goodfellow, Ian and Fergus, Rob},
  booktitle={International Conference on Learning Representations},
  year={2014}
}
Abstract
Deep neural networks are highly expressive models that have recently achieved
state-of-the-art performance on speech and visual recognition tasks. While their
expressiveness is the reason they succeed, it also causes them to learn
uninterpretable solutions that could have counter-intuitive properties. In this
paper we report two such properties. First, we find that there is no distinction
between individual high-level units and random linear combinations of high-level
units, according to various methods of unit analysis. This suggests that it is the
space, rather than the individual units, that contains the semantic information
in the high layers of neural networks. Second, we find that deep neural networks
learn input-output mappings that are discontinuous to a significant extent.
We can cause the network to misclassify an image by applying a certain
imperceptible perturbation, which is found by maximizing the network's prediction
error. In addition, the specific nature of these perturbations is not a random
artifact of learning: the same perturbation can cause a different network,
trained on a different subset of the dataset, to misclassify the same input.
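
To make the first property concrete, here is a minimal sketch of the
unit-analysis comparison, assuming a PyTorch feature extractor feature_fn that
maps a batch of inputs to an (n, d) matrix of high-level activations. The names
feature_fn, inputs, and the width d are hypothetical stand-ins, not from the
paper's code: the point is only to retrieve the inputs whose activations
project most strongly onto either a natural-basis unit or a random direction.

import torch

def top_activating_inputs(feature_fn, inputs, direction, k=8):
    # Score every input by the projection of its high-level
    # activations onto `direction` (a unit vector in activation space).
    with torch.no_grad():
        acts = feature_fn(inputs)             # shape (n, d)
        scores = acts @ direction             # shape (n,)
    return torch.topk(scores, k).indices      # indices of the top-k inputs

d = 512                                       # assumed activation width
e_i = torch.zeros(d); e_i[7] = 1.0            # a single "high-level unit"
v = torch.randn(d); v /= v.norm()             # a random direction, unit norm

# The paper's observation: the inputs retrieved for the random direction v
# look as semantically coherent as those retrieved for the basis unit e_i:
#   top_activating_inputs(feature_fn, inputs, e_i)
#   top_activating_inputs(feature_fn, inputs, v)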
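
The second property can be sketched the same way: search for a small
perturbation r by maximizing the network's prediction error on the true label.
The paper formulates this as a box-constrained minimization solved with L-BFGS;
the version below substitutes simple projected gradient ascent to keep the
sketch short, and model is an assumed PyTorch classifier returning logits for
images scaled to [0, 1].

import torch
import torch.nn.functional as F

def find_perturbation(model, x, label, eps=0.01, steps=50, lr=0.005):
    # Search for a small r such that model(x + r) is misclassified,
    # by repeatedly increasing the loss on the true label.
    r = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + r), label)   # prediction error
        loss.backward()
        with torch.no_grad():
            r += lr * r.grad.sign()                   # ascend the loss
            r.clamp_(-eps, eps)                       # keep r imperceptible
            r.copy_((x + r).clamp(0, 1) - x)          # keep x + r a valid image
        r.grad.zero_()
    return r.detach()

# x: a (1, C, H, W) image in [0, 1]; y: its true label as a LongTensor.
#   r = find_perturbation(model, x, y)
# model(x + r).argmax(1) often differs from y even though x + r looks
# unchanged; per the abstract, the same r can also fool a second network
# trained on a different subset of the data.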
