Unified Deep Supervised Domain Adaptation and Generalization
Citations
Cites background or methods from "Unified Deep Supervised Domain Adap..."
...We randomly split the data of each domain into a training set (70%) and a test set (30%), adopt the leave-one-domain-out strategy as suggested in [11, 25], and report the average results over 20 trials....
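The split-and-average protocol described in the excerpt above can be sketched as follows. This is a minimal illustration, not the cited authors' code: the domain names and sample counts are placeholders, and the training/evaluation step is left as a comment.

```python
import random

def split_domain(samples, train_frac=0.7, seed=0):
    """Randomly split one domain's samples into a 70% train / 30% test set."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def leave_one_domain_out(domains):
    """Yield (held-out domain name, remaining source domains) pairs."""
    for name in domains:
        sources = {k: v for k, v in domains.items() if k != name}
        yield name, sources

# Toy example: three hypothetical domains of 10 samples each.
domains = {d: list(range(10)) for d in ("A", "B", "C")}
for trial in range(20):  # the excerpt averages results over 20 trials
    for held_out, sources in leave_one_domain_out(domains):
        train, test = split_domain(domains[held_out], seed=trial)
        # train on `sources` (+ optionally `train`), evaluate on `test` here
```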
[...]
...[25] proposed to minimize the semantic alignment loss as well as the separation loss based on deep learning models....
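The two penalties mentioned in the excerpt above can be sketched in NumPy as they are commonly formulated for contrastive semantic alignment: a squared-distance pull on same-class cross-domain pairs, and a margin hinge push on different-class pairs. The exact weighting and pairing scheme in [25] may differ.

```python
import numpy as np

def semantic_alignment_loss(f_s, f_t):
    """Pull together embeddings of source/target pairs that share a class
    (mean squared Euclidean distance over the batch of pairs)."""
    return 0.5 * np.sum((f_s - f_t) ** 2, axis=-1).mean()

def separation_loss(f_s, f_t, margin=1.0):
    """Push apart embeddings of source/target pairs with different classes,
    penalizing only pairs closer than the margin (hinge on the distance)."""
    d = np.linalg.norm(f_s - f_t, axis=-1)
    return 0.5 * (np.maximum(0.0, margin - d) ** 2).mean()
```

In practice these terms are added to the usual classification loss, so the embedding is both discriminative and domain-aligned.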
[...]
...• CCSA [25]: We consider the network proposed in [25] as another baseline....
[...]
...The network setting is the same as [25], with two fully connected layers of output sizes 1,024 and 128, respectively, and another fully connected layer with softmax activation for classification....
[...]
...This was formalized with the use of deep learning autoencoders in [20, 27], while [33] proposed to learn an embedding space where images of the same classes but different sources are projected nearby....
[...]
...CCSA [33] learns an embedding subspace where mapped visual domains are semantically aligned and yet maximally separated....
[...]
References
"Unified Deep Supervised Domain Adap..." refers methods in this paper
...First, we considered the raw images of the MNIST and USPS datasets and plotted a 2D visualization of them using t-SNE [41]....
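The visualization step described in the excerpt above can be sketched with scikit-learn's t-SNE. Random arrays stand in for the flattened MNIST/USPS pixel vectors here, purely to show the call; the perplexity value and array sizes are placeholders.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for raw pixel vectors: 30 random 8x8 "images" per domain.
rng = np.random.default_rng(0)
mnist_like = rng.random((30, 64))
usps_like = rng.random((30, 64))

X = np.vstack([mnist_like, usps_like])  # flattened raw images, both domains
emb = TSNE(n_components=2, perplexity=10.0,
           init="random", random_state=0).fit_transform(X)
# `emb` holds one 2-D point per image; plot it with matplotlib,
# coloring points by domain to see how the two datasets separate.
```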
[...]
...In this section, we use images of 5 shared object categories (bird, car, chair, dog, and person) from the PASCAL VOC2007 (V) [16], LabelMe (L) [52], Caltech-101 (C) [18], and SUN09 (S) [10] datasets, which together are known as the VLCS dataset [17]....
[...]