Domain Separation Networks
Venue
NIPS 2016 (to appear)
Publication Year
2016
Authors
Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, Dumitru Erhan
Abstract
The cost of large scale data collection and annotation often makes the application
of machine learning algorithms to new tasks or datasets prohibitively expensive.
One approach circumventing this cost is training models on synthetic data where
annotations are provided automatically. Despite their appeal, such models often fail
to generalize from synthetic to real images, necessitating domain adaptation
algorithms to manipulate these models before they can
be successfully applied. Existing approaches focus either on mapping
representations from one domain to the other, or on learning to extract features
that are invariant to the domain. Inspired by work on private-shared component
analysis, we explicitly learn to extract image representations that are partitioned
into two subspaces: one component which is private to each domain and one which is
shared across domains. Our model is trained not only to perform the task we care about in the source
domain, but also to use the partitioned representation to reconstruct the images
from both domains. Our novel architecture results in a model that outperforms the
state-of-the-art on a range of unsupervised domain adaptation scenarios and
additionally produces visualizations of the private and shared representations
enabling interpretation of the domain adaptation process.
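The partitioned representation described in the abstract can be sketched in a few lines. This is a minimal, purely illustrative NumPy sketch, not the paper's actual architecture: the linear "encoders" and "decoder", the dimensions, the loss weights, and the soft orthogonality penalty used to keep the private and shared components apart are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper).
d_in, d_priv, d_shared, n_classes = 8, 3, 3, 4

# Linear "encoders": one private encoder per domain, one shared encoder.
Ep_src = rng.standard_normal((d_priv, d_in)) * 0.1
Ep_tgt = rng.standard_normal((d_priv, d_in)) * 0.1
Es = rng.standard_normal((d_shared, d_in)) * 0.1

# Shared "decoder" reconstructs an input from [private; shared].
D = rng.standard_normal((d_in, d_priv + d_shared)) * 0.1

# Task classifier operates on the shared component only.
C = rng.standard_normal((n_classes, d_shared)) * 0.1

def partition(x, Ep):
    """Split x into a domain-private and a domain-shared representation."""
    return Ep @ x, Es @ x

def losses(x_src, y_src, x_tgt):
    p_s, s_s = partition(x_src, Ep_src)
    p_t, s_t = partition(x_tgt, Ep_tgt)

    # Task loss: cross-entropy on source labels (target is unlabeled).
    logits = C @ s_s
    logp = logits - np.log(np.exp(logits).sum())
    task = -logp[y_src]

    # Reconstruction loss: decode both domains from the full partition.
    recon = (np.mean((D @ np.concatenate([p_s, s_s]) - x_src) ** 2)
             + np.mean((D @ np.concatenate([p_t, s_t]) - x_tgt) ** 2))

    # Soft orthogonality between private and shared parts -- one possible
    # way to enforce the partition; the paper's exact penalty may differ.
    diff = float(np.dot(p_s, s_s) ** 2 + np.dot(p_t, s_t) ** 2)

    return task, recon, diff

x_src, x_tgt = rng.standard_normal(d_in), rng.standard_normal(d_in)
task, recon, diff = losses(x_src, 2, x_tgt)
total = task + 0.1 * recon + 0.1 * diff  # weighted combination (weights assumed)
```

The key point the sketch illustrates is that only the shared component feeds the classifier, while reconstruction uses both components from both domains, so domain-specific detail has somewhere to live without contaminating the task features.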
