12/07/2020

Understanding and Estimating the Adaptability of Domain-Invariant Representations

Ching-Yao Chuang, Antonio Torralba, Stefanie Jegelka

Keywords: Transfer, Multitask and Meta-learning

Abstract: Learning domain-invariant representations is a popular approach to unsupervised domain adaptation, i.e., generalizing from a source domain with labels to an unlabeled target domain. In this work, we aim to better understand and estimate the effect of domain-invariant representations on generalization to the target. In particular, we study the effect of the complexity of the latent, domain-invariant representation, and find that it has a significant influence on the target risk. Based on these findings, we propose a general approach for addressing this complexity tradeoff in neural networks. We also propose a method for estimating how well a model based on domain-invariant representations will perform on the target domain, without having seen any target labels. Applications of our results include model selection, deciding early stopping, and predicting the adaptability of a model between domains.
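The abstract centers on domain-invariant representations: features whose distribution is similar across source and target domains. One common way to quantify (and penalize) the gap between two feature distributions is the maximum mean discrepancy (MMD). The following is a minimal illustrative sketch of an RBF-kernel MMD, not the authors' method or estimator; the feature arrays and bandwidth are hypothetical placeholders.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared MMD between sample sets X and Y under an RBF kernel.

    A small value suggests the two feature distributions are close,
    i.e., the representation is approximately domain-invariant.
    """
    def k(A, B):
        # pairwise squared Euclidean distances, then RBF kernel
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 8))   # stand-in "source" features
tgt = rng.normal(1.5, 1.0, size=(200, 8))   # shifted "target" features

print(rbf_mmd2(src, src))  # identical distributions: (near) zero
print(rbf_mmd2(src, tgt))  # shifted distributions: clearly positive
```

In adversarial or kernel-based adaptation methods, a penalty like this is added to the source classification loss so the encoder learns features on which source and target look alike; the paper's point is that how aggressively one enforces this invariance (and how complex the latent representation is) materially affects target risk.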

The talk and the respective paper were published at the ICML 2020 virtual conference.

