Chongyi Li
Researcher at Nanyang Technological University
Publications - 89
Citations - 6092
Chongyi Li is an academic researcher at Nanyang Technological University. The author has contributed to research in the topics of computer science and underwater imaging, has an h-index of 22, and has co-authored 59 publications receiving 2,062 citations. Previous affiliations of Chongyi Li include City University of Hong Kong and Tianjin University.
Papers
Journal Article
Nested Network With Two-Stream Pyramid for Salient Object Detection in Optical Remote Sensing Images
TL;DR: This paper proposes an end-to-end deep network called LV-Net, named after the shape of its network architecture, which detects salient objects in optical remote sensing images (RSIs) in a purely data-driven fashion.
Journal Article
Underwater Image Enhancement via Minimal Color Loss and Locally Adaptive Contrast Enhancement
TL;DR: This work proposes an efficient and robust underwater image enhancement method, called MLLE, which locally adjusts the color and details of an input image according to a minimum color loss principle and a maximum attenuation map-guided fusion strategy. The method outperforms the state-of-the-art methods.
Journal Article
Underwater image enhancement by dehazing and color correction
Chongyi Li, Jichang Guo +1 more
TL;DR: The enhanced images produced by the proposed approach are characterized by relatively genuine color, increased contrast and brightness, a reduced noise level, and better visibility.
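The paper's specific combination of dehazing and color correction is not detailed here. As a generic illustration of the color-correction half only (not the paper's algorithm), the sketch below applies a simple gray-world white balance to a synthetic image with an assumed blue-green underwater cast; the attenuation factors are made up for the example:

```python
import numpy as np

def gray_world_color_correction(img: np.ndarray) -> np.ndarray:
    """Scale each channel so its mean matches the global mean (gray-world assumption)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean intensity
    gains = channel_means.mean() / channel_means      # gain that equalizes the means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Synthetic "underwater" image: water attenuates red most, so red is scaled down hardest.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
img *= np.array([0.4, 0.8, 1.0])  # hypothetical attenuation per channel (R, G, B)

corrected = gray_world_color_correction(img.astype(np.uint8))
means = corrected.reshape(-1, 3).mean(axis=0)  # per-channel means are roughly equal now
```

Real underwater enhancement pipelines go well beyond this (dehazing via a transmission estimate, contrast stretching, denoising); this only shows why a cast-removal step restores "relatively genuine color."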
Posted Content
Deep Underwater Image Enhancement
TL;DR: A convolutional neural network based image enhancement model, UWCNN, is proposed, which is trained efficiently on a synthetic underwater image database and directly reconstructs the clear latent underwater image by leveraging an automatic, end-to-end, data-driven training mechanism.
Posted Content
RGB-D Salient Object Detection with Cross-Modality Modulation and Selection
TL;DR: This paper proposes a cross-modality feature modulation (cmFM) module that enhances feature representations by taking the depth features as a prior, modeling the complementary relations of RGB-D data.