scispace - formally typeset
Author

Delu Zeng

Bio: Delu Zeng is an academic researcher at the South China University of Technology. His research focuses on computer science and metric (mathematics). He has an h-index of 14 and has co-authored 57 publications receiving 1,674 citations. Previous affiliations of Delu Zeng include Xiamen University and the University of Waterloo.


Papers
Proceedings Article · DOI
01 Jul 2017
TL;DR: A deep detail network is proposed that directly reduces the mapping range from input to output, making the learning process easier; it significantly outperforms state-of-the-art methods on both synthetic and real-world images in both qualitative and quantitative measures.
Abstract: We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.
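The central idea above — learning on a high-frequency detail layer rather than on the full image — can be sketched as a base/detail decomposition. This is an illustrative sketch only: a box filter stands in for the low-pass filter used in the paper, and the CNN that maps the detail layer to a residual is omitted.

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter as a stand-in low-pass filter (an assumption; the paper
    uses a different smoothing operator to obtain the base layer)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def split_detail(rainy, k=5):
    """Base/detail decomposition: the deep detail network is trained only on
    the detail (high-frequency) layer, which isolates rain streaks and edges,
    and predicts a residual that is added back to the input image."""
    base = box_blur(rainy, k)
    detail = rainy - base   # high-frequency layer: rain structure + edges
    return base, detail
```

The decomposition is lossless (base + detail reconstructs the input exactly), which is what lets the network focus on rain structure without background interference.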

853 citations

Proceedings Article · DOI
Xueyang Fu, Delu Zeng, Yue Huang, Xiao-Ping Zhang, Xinghao Ding
01 Jun 2016
TL;DR: It is shown that, though it is widely adopted for ease of modeling, the log-transformed image for this task is not ideal and the proposed weighted variational model can suppress noise to some extent.
Abstract: We propose a weighted variational model to estimate both the reflectance and the illumination from an observed image. We show that, though it is widely adopted for ease of modeling, the log-transformed image for this task is not ideal. Based on the previous investigation of the logarithmic transformation, a new weighted variational model is proposed for better prior representation, which is imposed in the regularization terms. Different from conventional variational models, the proposed model can preserve the estimated reflectance with more details. Moreover, the proposed model can suppress noise to some extent. An alternating minimization scheme is adopted to solve the proposed model. Experimental results demonstrate the effectiveness of the proposed model with its algorithm. Compared with other variational methods, the proposed method yields comparable or better results on both subjective and objective assessments.

676 citations

Journal Article · DOI
Xueyang Fu, Delu Zeng, Yue Huang, Yinghao Liao, Xinghao Ding, John Paisley
TL;DR: A fusion-based method for enhancing various weakly illuminated images that requires only one input to obtain the enhanced image and represents a trade-off among detail enhancement, local contrast improvement, and preservation of the image's natural feel.

464 citations

Journal Article · DOI
TL;DR: It is shown that the linear domain model can better represent prior information for better estimation of reflectance and illumination than the logarithmic domain.
Abstract: In this paper, a new probabilistic method for image enhancement is presented based on a simultaneous estimation of illumination and reflectance in the linear domain. We show that the linear domain model can better represent prior information for better estimation of reflectance and illumination than the logarithmic domain. A maximum a posteriori (MAP) formulation is employed with priors of both illumination and reflectance. To estimate illumination and reflectance effectively, an alternating direction method of multipliers is adopted to solve the MAP problem. The experimental results show the satisfactory performance of the proposed method to obtain reflectance and illumination with visually pleasing enhanced results and a promising convergence rate. Compared with other testing methods, the proposed method yields comparable or better results on both subjective and objective assessments.

276 citations

Journal Article · DOI
TL;DR: The proposed method is an empirical approach that combines regularized histogram equalization (HE) with the discrete cosine transform (DCT) to enhance remote sensing images, yielding higher contrast and richer details without introducing saturation artifacts.
Abstract: In this letter, an effective enhancement method for remote sensing images is introduced to improve the global contrast and the local details. The proposed method constitutes an empirical approach by using the regularized-histogram equalization (HE) and the discrete cosine transform (DCT) to improve the image quality. First, a new global contrast enhancement method by regularizing the input histogram is introduced. More specifically, this technique uses the sigmoid function and the histogram to generate a distribution function for the input image. The distribution function is then used to produce a new image with improved global contrast by adopting the standard lookup table-based HE technique. Second, the DCT coefficients of the previous contrast improved image are automatically adjusted to further enhance the local details of the image. Compared with conventional methods, the proposed method can generate enhanced remote sensing images with higher contrast and richer details without introducing saturation artifacts.
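The first (global contrast) stage described above can be sketched as follows. The exact regularization in the paper is not reproduced here: the sigmoid range and the blend weight `alpha` are illustrative assumptions, and the second stage (automatic adjustment of DCT coefficients for local detail) is omitted.

```python
import numpy as np

def regularized_hist_equalize(img, alpha=0.5):
    """Global-contrast stage sketch: blend the image histogram with a
    sigmoid-derived reference distribution, then apply standard
    lookup-table histogram equalization. `alpha` and the sigmoid range
    are assumed values, not the paper's."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    x = np.linspace(-6, 6, 256)
    sigmoid = 1.0 / (1.0 + np.exp(-x))
    ref = np.diff(sigmoid, prepend=sigmoid[0])   # sigmoid-shaped histogram
    ref = ref / ref.sum() * hist.sum()           # match total pixel count
    regularized = alpha * hist + (1 - alpha) * ref
    cdf = np.cumsum(regularized)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img.astype(np.uint8)]             # lookup-table-based HE
```

Blending toward the sigmoid-derived distribution keeps the mapping smooth even when the input histogram is sharply peaked, which is what prevents the saturation artifacts of plain HE.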

106 citations


Cited by
Journal Article · DOI
TL;DR: Experiments on a number of challenging low-light images are presented to reveal the efficacy of the proposed LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
Abstract: When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degenerate the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in the R, G, and B channels. Furthermore, we refine the initial illumination map by imposing a structure prior on it, as the final illumination map. Having the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of our LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
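The per-pixel illumination estimate described above (the maximum over the R, G, and B channels) can be sketched as follows. Note the structure-prior refinement of the illumination map is replaced here by a simple gamma adjustment — an illustrative assumption, not the paper's refinement step.

```python
import numpy as np

def lime_enhance(img, gamma=0.8, eps=1e-3):
    """Sketch of LIME-style enhancement for img in [0, 1], shape (H, W, 3)."""
    t = img.max(axis=2)          # initial illumination: max over R, G, B
    # LIME refines t with a structure prior; a plain gamma curve stands
    # in for that refinement here (assumed, for illustration only).
    t = np.clip(t, eps, 1.0) ** gamma
    # Retinex-style enhancement: divide out the illumination map.
    return np.clip(img / t[..., None], 0.0, 1.0)
```

Dividing by the illumination map brightens dark regions much more than well-lit ones, since their estimated illumination is small.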

1,364 citations

Journal Article · DOI
TL;DR: This work attempts to leverage powerful generative modeling capabilities of the recently introduced conditional generative adversarial networks (CGAN) by enforcing an additional constraint that the de-rained image must be indistinguishable from its corresponding ground truth clean image.
Abstract: Severe weather conditions, such as rain and snow, adversely affect the visual quality of images captured under such conditions, thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to address the problem of single image de-raining. However, the inherent ill-posed nature of the problem presents several challenges. We attempt to leverage the powerful generative modeling capabilities of the recently introduced conditional generative adversarial networks (CGAN) by enforcing an additional constraint that the de-rained image must be indistinguishable from its corresponding ground truth clean image. The adversarial loss from the GAN provides additional regularization and helps to achieve superior results. In addition to presenting a new approach to de-rain images, we introduce a new refined loss function and architectural novelties in the generator-discriminator pair for achieving improved results. The loss function is aimed at reducing artifacts introduced by GANs and ensuring better visual quality. The generator sub-network is constructed using the recently introduced densely connected networks, whereas the discriminator is designed to leverage global and local information to decide if an image is real or fake. Based on this, we propose a novel single image de-raining method called image de-raining conditional generative adversarial network (ID-CGAN) that considers quantitative, visual, and discriminative performance in the objective function. The experiments evaluated on synthetic and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance. Furthermore, the experimental results evaluated on object detection datasets using Faster-RCNN also demonstrate the effectiveness of the proposed method in improving detection performance on images degraded by rain.

747 citations

Proceedings Article · DOI
04 Feb 2021
TL;DR: MPRNet proposes a multi-stage architecture that progressively learns restoration functions for degraded inputs, breaking the overall recovery process into more manageable steps, and introduces a per-pixel adaptive design that leverages in-situ supervised attention to reweight local features.
Abstract: Image restoration tasks demand a complex balance between spatial details and high-level contextualized information while recovering images. In this paper, we propose a novel synergistic design that can optimally balance these competing goals. Our main proposal is a multi-stage architecture, that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps. Specifically, our model first learns the contextualized features using encoder-decoder architectures and later combines them with a high-resolution branch that retains local information. At each stage, we introduce a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight the local features. A key ingredient in such a multi-stage architecture is the information exchange between different stages. To this end, we propose a two-faceted approach where the information is not only exchanged sequentially from early to late stages, but lateral connections between feature processing blocks also exist to avoid any loss of information. The resulting tightly interlinked multi-stage architecture, named as MPRNet, delivers strong performance gains on ten datasets across a range of tasks including image deraining, deblurring, and denoising. The source code and pre-trained models are available at https://github.com/swz30/MPRNet.

716 citations

Journal Article · DOI
01 Jun 2006
TL;DR: An apposite and eminently readable reference for all behavioral science research and development.
Abstract: An apposite and eminently readable reference for all behavioral science research and development.

649 citations

Journal Article · DOI
TL;DR: This paper proposes to use a convolutional neural network (CNN) to train a SICE enhancer, and builds a large-scale multi-exposure image data set containing 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images.
Abstract: Due to poor lighting conditions and the limited dynamic range of digital imaging devices, recorded images are often under-/over-exposed and have low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail in revealing image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this paper, we propose to use the convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate the contrast-enhanced images for each sequence, and subjective experiments are conducted to screen the best-quality one as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.

632 citations