Unsupervised domain adaptation (UDA) aims to reduce the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain. Previous UDA methods assume that the source and target domains share an identical label space, which is often unrealistic in practice since the label information of the target domain is unknown. This article focuses on a more realistic UDA scenario, i.e., partial domain adaptation (PDA), where the target label space is a subset of the source label space. In the PDA scenario, source outlier classes that are absent from the target domain may be wrongly matched to the target domain (a phenomenon known as negative transfer), leading to performance degradation of UDA methods. This article proposes a novel target-domain-specific classifier learning-based domain adaptation (TSCDA) method.
C.-X. Ren, P. Ge, P. Yang and S. Yan, "Learning Target-Domain-Specific Classifier for Partial Domain Adaptation," in IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 5, pp. 1989-2001, May 2021, doi: 10.1109/TNNLS.2020.2995648.
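To illustrate how PDA methods typically suppress negative transfer from source outlier classes, the sketch below shows a generic class-level re-weighting scheme: classes that the target data rarely activates receive low weights, so their source samples contribute less during alignment. This is only a minimal illustration of the general idea, not the TSCDA algorithm itself; the function names `estimate_class_weights` and `weight_source_samples` are hypothetical.

```python
import numpy as np

def estimate_class_weights(target_probs):
    """Average predicted class probabilities over target samples.

    Source-only (outlier) classes receive low weight because the
    target data rarely activates them. `target_probs` has shape
    (num_target_samples, num_source_classes).
    """
    w = target_probs.mean(axis=0)      # per-class average activation
    return w / w.max()                 # normalize to [0, 1]

def weight_source_samples(source_labels, class_weights):
    """Look up per-sample weights that down-weight source outlier classes."""
    return class_weights[source_labels]

# Toy example (illustrative only): 5 source classes, target data
# effectively covers only classes 0-2.
rng = np.random.default_rng(0)
target_probs = rng.dirichlet(alpha=[5, 5, 5, 0.1, 0.1], size=200)
class_w = estimate_class_weights(target_probs)
sample_w = weight_source_samples(np.array([0, 1, 3, 4, 2]), class_w)
print(class_w.round(2))    # classes 3 and 4 get small weights
print(sample_w.round(2))   # source samples from those classes are suppressed
```

In practice such weights would be re-estimated as target predictions improve during training and used to scale the source classification or alignment loss; the specific weighting and alignment objectives used by TSCDA are described in the cited article.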