PseudoSeg: Designing Pseudo Labels for Semantic Segmentation
Citations
Cites methods from "PseudoSeg: Designing Pseudo Labels ..."
...To address this issue, several existing works use the large-scale ImageNet classification dataset for pretraining their models, and also leverage additional weaker forms of supervision such as image-level labels [1, 26, 34, 42], bounding boxes [12, 31, 42, 47], scribbles [36, 50] and points [2], or unlabeled images [5, 19, 39, 40, 48, 67]....
[...]
References
"PseudoSeg: Designing Pseudo Labels ..." refers to methods in this paper
...To further verify the effectiveness of the proposed method, we also conduct experiments on the COCO dataset (Lin et al., 2014)....
[...]
"PseudoSeg: Designing Pseudo Labels ..." refers to methods in this paper
...In Figure 5 (f), we compare the performance of using ResNet-50, ResNet-101, and Xception-65 as backbone architectures, respectively....
[...]
...B IMPLEMENTATION DETAILS: We implement our method on top of the publicly available official DeepLab codebase. Unless specified, we adopt the DeepLabv3+ model with Xception-65 (Chollet, 2017) as the feature backbone, which is pre-trained on the ImageNet dataset (Russakovsky et al., 2015)....
[...]
"PseudoSeg: Designing Pseudo Labels ..." refers to background or methods in this paper
...Second, the distribution sharpening operation Sharpen(a, T)_i = a_i^{1/T} / Σ_{j=1}^{C} a_j^{1/T} adjusts the temperature scalar T of the categorical distribution (Berthelot et al., 2019b; Chen et al., 2020b)....
[...]
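The sharpening operation in the excerpt above can be sketched in a few lines of NumPy. The function name, the example distribution `probs`, and the temperature value are illustrative, not taken from the paper's code:

```python
import numpy as np

def sharpen(a, T):
    # Temperature sharpening of a categorical distribution:
    # Sharpen(a, T)_i = a_i^(1/T) / sum_{j=1}^{C} a_j^(1/T)
    # T < 1 makes the distribution peakier; T > 1 flattens it.
    p = np.power(a, 1.0 / T)
    return p / p.sum()

probs = np.array([0.5, 0.3, 0.2])
sharpened = sharpen(probs, T=0.5)
```

With T = 0.5 the mass of the largest entry grows while the result remains a valid probability distribution, which is the behavior a pseudo-labeling pipeline relies on.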
...The magnitude of jittering strength is controlled by a scalar (Chen et al., 2020b)....
[...]
...To push state-of-the-art performance further, iterative self-training approaches (Chen et al., 2020a; Zoph et al., 2020; Zhu et al., 2020) have been proposed....
[...]
...For strong data augmentation, we simply follow color jittering operations from SimCLR (Chen et al., 2020b) and remove all geometric transformations....
[...]
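A minimal NumPy sketch of the strong-augmentation idea described above: photometric (color) jitter controlled by a single strength scalar, with all geometric transformations omitted. The function name and the exact jitter ranges are assumptions for illustration; SimCLR's actual pipeline also jitters saturation and hue and is implemented with standard image-augmentation libraries:

```python
import numpy as np

def color_jitter(img, strength=0.5, rng=None):
    # Illustrative brightness/contrast jitter on a float image in [0, 1].
    # `strength` scales the sampling range of both factors, mirroring the
    # single-scalar control of jittering magnitude mentioned in the excerpt.
    rng = np.random.default_rng() if rng is None else rng
    b = 1.0 + rng.uniform(-strength, strength)  # brightness factor
    c = 1.0 + rng.uniform(-strength, strength)  # contrast factor
    out = img * b
    mean = out.mean()
    out = (out - mean) * c + mean               # scale around the mean
    return np.clip(out, 0.0, 1.0)

img = np.linspace(0.0, 1.0, 48).reshape(4, 4, 3)
jittered = color_jitter(img, strength=0.5, rng=np.random.default_rng(0))
```

Because only pixel values change, the pseudo label predicted on the weakly augmented view stays pixel-aligned with the strongly augmented view, which is why geometric transformations are removed.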
...Recent exploration demonstrates improvements in high-data-regime settings with large-scale data, including self-training (Chen et al., 2020a; Zoph et al., 2020) and backbone pretraining (Zhang et al., 2020a)....
[...]