Dong Liu

Researcher at University of Science and Technology of China

Publications -  257
Citations -  12525

Dong Liu is an academic researcher at the University of Science and Technology of China. His research focuses on topics including convolutional neural networks and image compression. He has an h-index of 37 and has co-authored 236 publications receiving 6433 citations. His previous affiliations include Microsoft and Nokia.

Papers
Proceedings Article

Deep High-Resolution Representation Learning for Human Pose Estimation

TL;DR: This paper proposes a network that maintains high-resolution representations through the whole process of human pose estimation, and empirically demonstrates its effectiveness through superior pose estimation results on two benchmark datasets: the COCO keypoint detection dataset and the MPII Human Pose dataset.
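As an illustration of how such high-resolution representations are typically turned into pose predictions, here is a minimal PyTorch sketch, not taken from the paper: a 1x1 convolutional head predicts one heatmap per keypoint, and each keypoint location is decoded as its heatmap's peak. The channel counts, feature-map size, and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

num_keypoints = 17                                # e.g. the COCO keypoint set
high_res_features = torch.randn(1, 32, 64, 48)    # hypothetical high-resolution feature map

# One heatmap per keypoint, predicted directly at high resolution.
head = nn.Conv2d(32, num_keypoints, kernel_size=1)
heatmaps = head(high_res_features)                # shape [1, 17, 64, 48]

# Decode each keypoint location as the peak of its heatmap.
n, k, h, w = heatmaps.shape
idx = heatmaps.flatten(2).argmax(dim=2)           # shape [1, 17]
ys = torch.div(idx, w, rounding_mode="floor")
xs = idx % w
print(heatmaps.shape, ys.shape, xs.shape)
```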
Posted Content

Deep High-Resolution Representation Learning for Visual Recognition

TL;DR: The proposed HRNet is shown to be superior in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that HRNet is a stronger backbone for computer vision problems.
Journal Article

Deep High-Resolution Representation Learning for Visual Recognition

TL;DR: The High-Resolution Network (HRNet) maintains high-resolution representations through the whole process by connecting the high-to-low resolution convolution streams in parallel and repeatedly exchanging information across resolutions.
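The following is a minimal PyTorch sketch, not the authors' implementation, of the mechanism described in this summary: two parallel streams, one high-resolution and one low-resolution, that fuse information after every block so the high-resolution path is maintained throughout. The module name, channel counts, and the use of a single convolution per stream block are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamExchange(nn.Module):
    """Keeps a high-resolution and a low-resolution stream in parallel and
    fuses them after every block, as in the HRNet description above."""

    def __init__(self, high_ch=32, low_ch=64):
        super().__init__()
        # Per-stream convolutions (stand-ins for full residual blocks).
        self.high_block = nn.Conv2d(high_ch, high_ch, 3, padding=1)
        self.low_block = nn.Conv2d(low_ch, low_ch, 3, padding=1)
        # Exchange units: high -> low (strided conv), low -> high (1x1 conv + upsample).
        self.high_to_low = nn.Conv2d(high_ch, low_ch, 3, stride=2, padding=1)
        self.low_to_high = nn.Conv2d(low_ch, high_ch, 1)

    def forward(self, x_high, x_low):
        h = F.relu(self.high_block(x_high))
        l = F.relu(self.low_block(x_low))
        # Repeated cross-resolution fusion: each stream receives the other,
        # resampled to its own resolution, so the high-resolution path is
        # maintained through the whole process instead of being recovered later.
        l_up = F.interpolate(self.low_to_high(l), size=h.shape[-2:],
                             mode="bilinear", align_corners=False)
        h_down = self.high_to_low(h)
        return h + l_up, l + h_down


if __name__ == "__main__":
    x_high = torch.randn(1, 32, 64, 64)   # high-resolution stream
    x_low = torch.randn(1, 64, 32, 32)    # half-resolution stream
    h, l = TwoStreamExchange()(x_high, x_low)
    print(h.shape, l.shape)
```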
Posted Content

High-Resolution Representations for Labeling Pixels and Regions

TL;DR: A simple modification is introduced to augment the high-resolution representation by aggregating the (upsampled) representations from all the parallel convolutions rather than only the representation from the high-resolution convolution, which leads to stronger representations, evidenced by superior results.
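A minimal PyTorch sketch of the aggregation step described above, under the assumption of three parallel branches with illustrative channel counts; it is not the paper's code. The outputs of all branches are upsampled to the highest resolution and concatenated, rather than keeping only the high-resolution branch.

```python
import torch
import torch.nn.functional as F


def aggregate_branches(branch_feats):
    """branch_feats: list of tensors [N, C_i, H_i, W_i] from parallel streams,
    ordered from highest to lowest resolution. Returns their concatenation at
    the highest resolution."""
    target_size = branch_feats[0].shape[-2:]
    upsampled = [branch_feats[0]] + [
        F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
        for f in branch_feats[1:]
    ]
    return torch.cat(upsampled, dim=1)


if __name__ == "__main__":
    feats = [torch.randn(1, 32, 64, 64),
             torch.randn(1, 64, 32, 32),
             torch.randn(1, 128, 16, 16)]
    out = aggregate_branches(feats)
    print(out.shape)  # torch.Size([1, 224, 64, 64])
```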
Proceedings Article

Fully Convolutional Adaptation Networks for Semantic Segmentation

TL;DR: FCAN, a novel deep architecture for semantic segmentation, is presented; it combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN): AAN learns a transformation from one domain to the other in the pixel space, while RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations.
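Below is a minimal PyTorch sketch, not the FCAN implementation, of the adversarial representation-adaptation idea in this summary: a feature extractor is alternately trained against a domain discriminator so that target-domain features become indistinguishable from source-domain features. Network sizes, learning rates, and variable names are placeholder assumptions, and the source-domain segmentation loss is omitted.

```python
import torch
import torch.nn as nn

# Placeholder feature extractor and domain discriminator.
feature_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())
discriminator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_f = torch.optim.Adam(feature_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

source = torch.randn(4, 3, 32, 32)   # stand-in for labeled source-domain images
target = torch.randn(4, 3, 32, 32)   # stand-in for unlabeled target-domain images

for step in range(3):
    # 1) Train the discriminator to separate source (label 1) from target (label 0).
    with torch.no_grad():
        f_s, f_t = feature_net(source), feature_net(target)
    d_loss = bce(discriminator(f_s), torch.ones(4, 1)) + \
             bce(discriminator(f_t), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the feature extractor so target features are classified as source,
    #    i.e. it maximally fools the discriminator (the segmentation loss on
    #    source data would be added here in the full method).
    adv_loss = bce(discriminator(feature_net(target)), torch.ones(4, 1))
    opt_f.zero_grad(); adv_loss.backward(); opt_f.step()

print("done")
```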