Dissertation Proposal
In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
Dainis Boumber
will defend his dissertation proposal
Multi-Domain Adaptation and Generalization using Deep Adversarial Models
Abstract
In Machine Learning, a good model generalizes from training data to accurately classify instances from new, unseen data belonging to the same domain. Traditionally, the training and testing sets are assumed to lie within the same domain and therefore to follow the same distribution. In many real-world scenarios, however, such an assumption leads to very poor results, because the data may come from similar but not identical domains. One way to address this problem is through domain adaptation and domain generalization techniques, which typically perform some form of domain shift to mitigate the difference between the distributions of the source and target data. The primary goal of this research is to create several novel methods that can learn from multiple source domains and extract a domain-agnostic model applicable to one or more target domains. A variety of scenarios and tasks are explored. First, the classical domain generalization problem is studied, where the algorithm has no access to target data, labeled or not. Second, we investigate a special case where unlabeled target data is available --- a scenario commonly encountered in real-world applications but not well understood, as it does not clearly fall under any specific type of machine learning problem. In this case, our algorithm acts in a semi-supervised fashion insofar as the target is concerned. To the best of our knowledge, our research is among the first to address this problem. Third, we explore a supervised learning scenario where a small number of labeled target samples are available for training. We show that, with minor modifications, the proposed semi-supervised and supervised domain generalization algorithms are applicable to unsupervised and semi-supervised domain adaptation problems, respectively. To validate our results, we perform an extensive set of experiments on standard image-based domain adaptation and generalization datasets.
Furthermore, we extend this research to Natural Language Processing, where domain adaptation and generalization tasks often prove even more challenging. We address the task of Authorship Verification and experiment with two standard semantic datasets, as well as a custom dataset we created. These goals are achieved by mapping the source or target (and at times both) domains into a domain-invariant feature space. Adversarial learning is used to align previously unseen samples and domains with this domain-invariant feature space. To this end, several adversarial architectures are proposed that learn an embedding subspace that is discriminative and in which the mapped domains are semantically aligned yet maximally separated. To achieve stable training and greater accuracy, we introduce a few modifications to existing loss functions. For semantic data, we designed several custom network architectures to serve as base models for the adversarial networks, some of which surpass the state of the art on various tasks in our experiments. Our secondary objective is to achieve reasonable results in a few-shot manner. Once the source data resides in a domain-invariant subspace, we address this by replacing the datasets forming the source domains with simplified surrogates. We propose several candidate techniques and compare them to the simpler, more commonly used technique of computing mean distances between samples in the source domains. The experimental results so far are very promising, matching or exceeding the state of the art in some scenarios.
Date: Monday, April 30, 2018
Time: 12:00 PM
Place: MREB 205B
Advisor: Dr. Ricardo Vilalta
Faculty, students, and the general public are invited.