Author

Chongyi Li

Bio: Chongyi Li is an academic researcher from Nanyang Technological University. The author has contributed to research on topics including computer science and underwater imaging. The author has an h-index of 22 and has co-authored 59 publications receiving 2062 citations. Previous affiliations of Chongyi Li include City University of Hong Kong and Tianjin University.

Papers published on a yearly basis

Papers
Journal Article (DOI)
TL;DR: This paper constructs an Underwater Image Enhancement Benchmark (UIEB) of 950 real-world underwater images, 890 of which have corresponding reference images, and proposes an underwater image enhancement network (Water-Net) trained on this benchmark as a baseline, demonstrating the generalization of the proposed UIEB for training convolutional neural networks (CNNs).
Abstract: Underwater image enhancement has been attracting much attention due to its significance in marine engineering and aquatic robotics. Numerous underwater image enhancement algorithms have been proposed in the last few years. However, these algorithms are mainly evaluated using either synthetic datasets or a few selected real-world images. It is thus unclear how these algorithms would perform on images acquired in the wild and how we could gauge the progress in the field. To bridge this gap, we present the first comprehensive perceptual study and analysis of underwater image enhancement using large-scale real-world images. In this paper, we construct an Underwater Image Enhancement Benchmark (UIEB) including 950 real-world underwater images, 890 of which have corresponding reference images. We treat the remaining 60 underwater images, for which satisfactory reference images could not be obtained, as challenging data. Using this dataset, we conduct a comprehensive qualitative and quantitative study of state-of-the-art underwater image enhancement algorithms. In addition, we propose an underwater image enhancement network (called Water-Net) trained on this benchmark as a baseline, which demonstrates the generalization of the proposed UIEB for training Convolutional Neural Networks (CNNs). The benchmark evaluations and the proposed Water-Net reveal the performance and limitations of state-of-the-art algorithms, shedding light on future research in underwater image enhancement. The dataset and code are available at https://li-chongyi.github.io/proj_benchmark.html .

697 citations
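To make the supervised setup behind the Water-Net baseline concrete, the sketch below trains a deliberately tiny CNN on (raw, reference) image pairs such as those in UIEB. The directory layout, file naming, network, and L1 loss are illustrative assumptions; this is not the actual Water-Net architecture.

```python
# Minimal sketch (not the actual Water-Net architecture): training a small
# CNN on paired raw/reference underwater images. The folder names and the
# matching-filename convention below are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
from PIL import Image
from pathlib import Path

class PairedUnderwaterDataset(Dataset):
    """Loads (raw, reference) image pairs; assumes matching filenames in two folders."""
    def __init__(self, raw_dir, ref_dir, size=256):
        self.raw_paths = sorted(Path(raw_dir).glob("*.png"))
        self.ref_dir = Path(ref_dir)
        self.tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.raw_paths)

    def __getitem__(self, idx):
        raw_path = self.raw_paths[idx]
        ref_path = self.ref_dir / raw_path.name  # assumed: same name in reference folder
        raw = self.tf(Image.open(raw_path).convert("RGB"))
        ref = self.tf(Image.open(ref_path).convert("RGB"))
        return raw, ref

# A deliberately tiny CNN standing in for the real baseline network.
enhancer = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

def train(raw_dir="UIEB/raw", ref_dir="UIEB/reference", epochs=10):
    loader = DataLoader(PairedUnderwaterDataset(raw_dir, ref_dir),
                        batch_size=8, shuffle=True)
    opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
    l1 = nn.L1Loss()
    for epoch in range(epochs):
        for raw, ref in loader:
            pred = enhancer(raw)
            loss = l1(pred, ref)   # supervised loss against the reference image
            opt.zero_grad()
            loss.backward()
            opt.step()
```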

Journal Article (DOI)
Chongyi Li, Jichang Guo, Runmin Cong, Yanwei Pang, Bo Wang
TL;DR: Extensive experiments demonstrate that the proposed method achieves better visual quality, more valuable information, and more accurate color restoration than several state-of-the-art methods, even for underwater images captured in challenging scenes.
Abstract: Images captured under water are usually degraded by the effects of absorption and scattering. Degraded underwater images show limitations when used for display and analysis. For example, underwater images with low contrast and color cast decrease the accuracy of underwater object detection and marine biology recognition. To overcome these limitations, a systematic underwater image enhancement method is proposed that includes an underwater image dehazing algorithm and a contrast enhancement algorithm. Built on a minimum information loss principle, an effective underwater image dehazing algorithm is proposed to restore the visibility, color, and natural appearance of underwater images. A simple yet effective contrast enhancement algorithm, based on a histogram distribution prior, increases the contrast and brightness of underwater images. The proposed method can yield two versions of enhanced output. One version, with relatively genuine color and a natural appearance, is suitable for display. The other version, with high contrast and brightness, can be used for extracting more valuable information and unveiling more details. Simulation experiments, qualitative and quantitative comparisons, as well as color accuracy and application tests are conducted to evaluate the performance of the proposed method. Extensive experiments demonstrate that the proposed method achieves better visual quality, more valuable information, and more accurate color restoration than several state-of-the-art methods, even for underwater images captured in challenging scenes.

459 citations
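As a rough illustration of histogram-based contrast enhancement in general, here is a minimal per-channel contrast stretch; it is not the paper's minimum-information-loss dehazing or its histogram-distribution-prior algorithm, and the percentile thresholds are placeholder choices.

```python
# Illustrative only: a simple per-channel contrast stretch in the spirit of
# histogram-based enhancement, NOT the paper's algorithm.
import numpy as np
from PIL import Image

def stretch_contrast(img, low_pct=1.0, high_pct=99.0):
    """Clip each channel to its [low_pct, high_pct] percentiles, then rescale to [0, 255]."""
    img = np.asarray(img, dtype=np.float32)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0
    return Image.fromarray(out.astype(np.uint8))

# Usage (hypothetical file name):
# enhanced = stretch_contrast(Image.open("underwater.jpg").convert("RGB"))
# enhanced.save("underwater_enhanced.jpg")
```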

Proceedings Article (DOI)
14 Jun 2020
TL;DR: A novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network and shows that it generalizes well to diverse lighting conditions.
Abstract: The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network. Our method is efficient as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our Zero-DCE to face detection in the dark are discussed.

447 citations
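The core of Zero-DCE is applying a pixel-wise quadratic curve, LE(x) = x + a·x·(1 − x), iteratively to the input image. The sketch below shows only that application step; the parameter maps are placeholders standing in for DCE-Net's output, so this is not the full method.

```python
# Sketch of Zero-DCE's iterative curve adjustment. In the real method the
# per-pixel curve parameter maps come from DCE-Net; random placeholder maps
# are used here, so this only illustrates how the curves are applied.
import torch

def apply_light_enhancement_curves(image, curve_maps):
    """image: (B, 3, H, W) in [0, 1]; curve_maps: list of (B, 3, H, W) maps in [-1, 1].
    Each step applies LE(x) = x + a * x * (1 - x) pixel-wise."""
    x = image
    for a in curve_maps:
        x = x + a * x * (1.0 - x)
    return x.clamp(0.0, 1.0)

# Toy usage with 8 placeholder parameter maps (8 curve iterations, as in the paper).
low_light = torch.rand(1, 3, 64, 64) * 0.3                      # dim synthetic input
maps = [torch.zeros(1, 3, 64, 64) + 0.4 for _ in range(8)]      # constant stand-in maps
enhanced = apply_light_enhancement_curves(low_light, maps)
```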

Journal Article (DOI)
TL;DR: The proposed UWCNN model directly reconstructs the clear latent underwater image, benefiting from an underwater scene prior that can also be used to synthesize underwater image training data, and it can easily be extended to underwater videos for frame-by-frame enhancement.

408 citations
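Since the TL;DR mentions using an underwater scene prior to synthesize training data, the sketch below degrades a clear image with the simplified underwater image formation model I_c = J_c·t_c + B_c·(1 − t_c), where t_c = exp(−β_c·d). The attenuation coefficients, background light, and depth map below are placeholder assumptions, not the paper's water types.

```python
# Illustrative synthesis of underwater-style training data from a clear image
# using the simplified formation model I_c = J_c * t_c + B_c * (1 - t_c),
# t_c = exp(-beta_c * d). Coefficients and depth map are placeholders.
import numpy as np

def synthesize_underwater(clear, depth, beta=(0.2, 0.1, 0.05), background=(0.1, 0.4, 0.5)):
    """clear: (H, W, 3) float image in [0, 1]; depth: (H, W) in meters."""
    beta = np.asarray(beta, dtype=np.float32)                 # attenuation per (R, G, B); red decays fastest
    background = np.asarray(background, dtype=np.float32)     # bluish-green background light
    t = np.exp(-depth[..., None] * beta[None, None, :])       # per-channel transmission
    return clear * t + background * (1.0 - t)

# Toy usage with a synthetic left-to-right depth gradient.
clear = np.random.rand(128, 128, 3).astype(np.float32)
depth = np.linspace(1.0, 10.0, 128, dtype=np.float32)[None, :].repeat(128, axis=0)
degraded = synthesize_underwater(clear, depth)
```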

Journal Article (DOI)
TL;DR: The authors propose a weakly supervised color transfer method to correct color distortion, which relaxes the need for paired underwater images for training and allows the underwater images to be taken in unknown locations.
Abstract: Underwater vision suffers from severe degradation due to selective attenuation and scattering when light propagates through water. Such degradation not only affects the quality of underwater images but also limits the performance of vision tasks. Different from existing methods that either ignore the wavelength dependence of the attenuation or assume a specific spectral profile, we tackle the color distortion problem of underwater images from a new perspective. In this letter, we propose a weakly supervised color transfer method to correct color distortion. The proposed method relaxes the need for paired underwater images for training and allows the underwater images to be taken in unknown locations. Inspired by cycle-consistent adversarial networks, we design a multi-term loss function including an adversarial loss, a cycle consistency loss, and a structural similarity index measure (SSIM) loss, which keeps the content and structure of the outputs the same as the inputs while making the color similar to images taken without water. Experiments on underwater images captured in diverse scenes show that our method produces visually pleasing results and even outperforms state-of-the-art methods. Besides, our method can improve the performance of vision tasks.

308 citations
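Below is a minimal sketch of the multi-term objective described above (adversarial + cycle consistency + SSIM) in a CycleGAN-style setup. The networks G, F_net, and D_air, the simplified uniform-window SSIM, the LSGAN-style adversarial term, and the loss weights are placeholders rather than the paper's exact configuration.

```python
# Sketch of a multi-term generator objective: adversarial + cycle-consistency
# + SSIM. Networks, weights, and the simplified SSIM are illustrative choices.
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, win=11):
    """Simplified SSIM with a uniform window; inputs are (B, C, H, W) in [0, 1]."""
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def generator_loss(G, F_net, D_air, underwater, lambda_cyc=10.0, lambda_ssim=1.0):
    """G: underwater -> 'air-style' image; F_net: inverse mapping; D_air: discriminator."""
    fake_air = G(underwater)
    pred = D_air(fake_air)
    adv = F.mse_loss(pred, torch.ones_like(pred))       # LSGAN-style adversarial term
    cyc = F.l1_loss(F_net(fake_air), underwater)        # cycle-consistency term
    struct = 1.0 - ssim(fake_air, underwater)           # structure-preservation term
    return adv + lambda_cyc * cyc + lambda_ssim * struct

# Toy shape check with identity stand-ins for the networks.
if __name__ == "__main__":
    underwater = torch.rand(2, 3, 64, 64)
    identity = lambda x: x
    patch_disc = lambda x: x.mean(dim=1, keepdim=True)  # fake 1-channel real/fake map
    print(generator_loss(identity, identity, patch_disc, underwater).item())
```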


Cited by
Posted Content
TL;DR: This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019), and provides an in-depth analysis of their challenges as well as recent technical improvements.
Abstract: Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development in the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetic under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of detection systems, speed-up techniques, and recent state-of-the-art detection methods. This paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and provides an in-depth analysis of their challenges as well as recent technical improvements.

802 citations
