Houqiang Li
Researcher at University of Science and Technology of China
Publications - 612
Citations - 17591
Houqiang Li is an academic researcher from the University of Science and Technology of China. The author has contributed to research on topics including computer science and motion compensation. The author has an h-index of 57, co-authored 520 publications receiving 12325 citations. Previous affiliations of Houqiang Li include China University of Science and Technology and Nanjing Medical University.
Papers
Journal ArticleDOI
Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations
Huo Xinyue, Lingxi Xie, Longhui Wei, Xiaopeng Zhang, Chen Xin, Li Hao, Yang Zijie, Wengang Zhou, Houqiang Li, Qi Tian +9 more
TL;DR: In this article, the authors proposed heterogeneous contrastive learning (HCL), which adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
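The contrastive objective this entry refers to can be illustrated with a generic InfoNCE loss between two batches of embeddings (one per augmented view). This is a minimal NumPy sketch of standard contrastive learning for context, not the paper's HCL method; the embedding dimension, batch size, and temperature are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss: each row of z1 should match
    the same-index row of z2 (its positive pair) against all other
    rows (negatives). Illustrative sketch, not the HCL objective."""
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # pairwise similarities
    # Log-softmax over each row; positives sit on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_prob[idx, idx].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
# Identical views give a low loss; unrelated embeddings a high one.
print(info_nce_loss(z, z) < info_nce_loss(z, rng.normal(size=(8, 32))))
```

Strong data augmentation can push two views of the same image far apart in embedding space, which is the learning inconsistency the TL;DR says HCL addresses by also encoding spatial information.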
Proceedings ArticleDOI
Line-based distributed coding scheme for onboard lossless compression of high-resolution stereo images
Jinlei Zhang, Houqiang Li +1 more
TL;DR: Experimental results on high-resolution remote sensing stereo images demonstrate that the proposed scheme is comparable to JPEG2000 with respect to the compression performance, but with much lower encoding complexity and storage requirement.
Proceedings ArticleDOI
Single image super-resolution based on nonlocal similarity and sparse representation
TL;DR: This paper presents a super-resolution approach for a single image that combines the image observation model, image nonlocal similarity, and sparse representation of image patches; it shows clear visual improvement in preserving edges and structures while achieving overall objective quality comparable to state-of-the-art methods.
Book ChapterDOI
No-Reference Image Quality Assessment Based on Internal Generative Mechanism
TL;DR: Extensive experiments on standard databases validate that the proposed IQA method achieves performance highly competitive with state-of-the-art NR-IQA methods and demonstrates its effectiveness on multiply-distorted images.
Journal ArticleDOI
CLIP2GAN: Towards Bridging Text with the Latent Space of GANs
TL;DR: Wang et al. proposed a text-guided image generation framework that leverages the CLIP model and StyleGAN, bridging the output feature embedding space of CLIP and the input latent space of StyleGAN by introducing a mapping network.
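The bridging idea in the entry above can be sketched as a small mapping network that projects a CLIP-style text embedding into a GAN-style latent code. The dimensions, the two-layer MLP, and the random weights below are illustrative assumptions for the general technique, not CLIP2GAN's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed sizes: 512-d CLIP text embedding, 512-d GAN latent (W) space.
CLIP_DIM, HIDDEN, W_DIM = 512, 1024, 512
W1 = rng.normal(scale=0.02, size=(CLIP_DIM, HIDDEN))
W2 = rng.normal(scale=0.02, size=(HIDDEN, W_DIM))

def map_text_embedding(e):
    """Project a text embedding into the generator's latent space
    (hypothetical untrained mapping network, for shape illustration)."""
    h = np.maximum(e @ W1, 0.0)    # ReLU hidden layer
    return h @ W2                   # latent code to feed the generator

text_embedding = rng.normal(size=(1, CLIP_DIM))
w = map_text_embedding(text_embedding)
print(w.shape)  # (1, 512)
```

In practice such a network is trained so that the generator's output, re-encoded by CLIP, stays close to the conditioning text embedding; here only the data flow is shown.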