Author

Chunmei Qing

Other affiliations: University of Lincoln
Bio: Chunmei Qing is an academic researcher from South China University of Technology. The author has contributed to research on topics including feature extraction and convolutional neural networks, has an h-index of 13, and has co-authored 57 publications receiving 2,557 citations. Previous affiliations of Chunmei Qing include the University of Lincoln.

Papers published on a yearly basis

Papers
Journal ArticleDOI
TL;DR: DehazeNet, as discussed by the authors, adopts a convolutional neural network-based deep architecture whose layers are specially designed to embody the established assumptions/priors in image dehazing.
Abstract: Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to obtain plausible dehazing solutions. The key to achieving haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet for medium transmission estimation. DehazeNet takes a hazy image as input and outputs its medium transmission map, which is subsequently used to recover a haze-free image via the atmospheric scattering model. DehazeNet adopts a convolutional neural network-based deep architecture whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called the bilateral rectified linear unit, which is able to improve the quality of the recovered haze-free image. We establish connections between the components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet remains efficient and easy to use.
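To make the two ingredients named above concrete, here is a minimal NumPy sketch (not the authors' implementation) of a BReLU-style activation and of recovering a haze-free image from an estimated transmission map via the atmospheric scattering model I = J·t + A·(1 − t); the thresholds, the lower bound on t, and all variable names are illustrative assumptions.

```python
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral ReLU: clip activations to [t_min, t_max] (thresholds assumed)."""
    return np.clip(x, t_min, t_max)

def recover_haze_free(hazy, transmission, atmospheric_light, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t) to get J."""
    t = np.maximum(transmission, t0)[..., None]   # avoid division by tiny t
    return (hazy - atmospheric_light) / t + atmospheric_light

# Toy usage: a 4x4 "hazy" RGB image with a uniform estimated transmission map.
hazy = np.random.rand(4, 4, 3)
t_map = np.full((4, 4), 0.6)      # would come from the transmission estimator in practice
A = np.array([0.9, 0.9, 0.9])     # estimated atmospheric light
print(recover_haze_free(hazy, t_map, A).shape)   # (4, 4, 3)
print(brelu(np.array([-0.5, 0.4, 1.7])))         # clipped to [0, 1]
```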

1,880 citations

Journal ArticleDOI
TL;DR: This paper proposes a trainable end-to-end system called DehazeNet for medium transmission estimation; it takes a hazy image as input and outputs its medium transmission map, which is subsequently used to recover a haze-free image via the atmospheric scattering model.
Abstract: Single image haze removal is a challenging ill-posed problem. Existing methods use various constraints/priors to obtain plausible dehazing solutions. The key to achieving haze removal is to estimate a medium transmission map for an input hazy image. In this paper, we propose a trainable end-to-end system called DehazeNet for medium transmission estimation. DehazeNet takes a hazy image as input and outputs its medium transmission map, which is subsequently used to recover a haze-free image via the atmospheric scattering model. DehazeNet adopts a Convolutional Neural Network (CNN) based deep architecture whose layers are specially designed to embody the established assumptions/priors in image dehazing. Specifically, layers of Maxout units are used for feature extraction, which can generate almost all haze-relevant features. We also propose a novel nonlinear activation function in DehazeNet, called the Bilateral Rectified Linear Unit (BReLU), which is able to improve the quality of the recovered haze-free image. We establish connections between components of the proposed DehazeNet and those used in existing methods. Experiments on benchmark images show that DehazeNet achieves superior performance over existing methods, yet remains efficient and easy to use.

837 citations

Journal ArticleDOI
TL;DR: A segmented SAE (S-SAE) is proposed by splitting the original features into smaller data segments, which are separately processed by different, smaller SAEs; this results in reduced complexity yet improved efficacy of data abstraction and accuracy of data classification.
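As a rough illustration of the segmentation idea in this summary, the sketch below splits a feature matrix column-wise into segments and reduces each segment independently before concatenating the codes; PCA is used only as a stand-in for the smaller stacked autoencoders, and the segment count and code size are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA  # stand-in for a small autoencoder per segment

def segmented_encode(X, n_segments=4, code_dim=8):
    """Split features column-wise into segments, reduce each independently,
    and concatenate the per-segment codes (PCA used here only as a placeholder
    for the smaller stacked autoencoders described in the summary)."""
    segments = np.array_split(X, n_segments, axis=1)
    codes = [PCA(n_components=code_dim).fit_transform(seg) for seg in segments]
    return np.hstack(codes)

# Toy hyperspectral-like data: 1000 pixels x 200 spectral bands (sizes assumed).
X = np.random.rand(1000, 200)
Z = segmented_encode(X)
print(Z.shape)  # (1000, 32) -- 4 segments x 8-dimensional codes
```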

308 citations

Journal ArticleDOI
TL;DR: The results support the point that emotions are progressively activated throughout the experiment, and the weighting coefficients based on the correlation coefficient and the entropy coefficient can effectively improve the EEG-based emotion recognition accuracy.
Abstract: Electroencephalogram (EEG) signal-based emotion recognition has attracted wide interest in recent years and has been broadly adopted in medical, affective computing, and other relevant fields. However, the majority of the research reported in this field tends to focus on classification accuracy whilst neglecting the interpretability of emotion progression. In this paper, we propose a new interpretable emotion recognition approach with an activation mechanism by using machine learning and EEG signals. This paper innovatively proposes the emotional activation curve to demonstrate the activation process of emotions. The algorithm first extracts features from EEG signals and classifies emotions using machine learning techniques, in which different parts of a trial are used to train the proposed model and assess their impact on emotion recognition results. Second, novel activation curves of emotions are constructed based on the classification results, and two emotion coefficients, i.e., the correlation coefficient and the entropy coefficient, are computed. The activation curve can not only classify emotions but also reveal, to a certain extent, the emotional activation mechanism. Finally, a weight coefficient is obtained from the two coefficients to improve the accuracy of emotion recognition. To validate the proposed method, experiments have been carried out on the DEAP and SEED datasets. The results support the point that emotions are progressively activated throughout the experiment, and the weighting coefficients based on the correlation coefficient and the entropy coefficient can effectively improve the EEG-based emotion recognition accuracy.
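The abstract does not give the exact definitions of the activation curve or the two coefficients, so the following Python sketch is only one plausible reading: the curve is taken as the running mean of per-segment target-emotion probabilities, the correlation coefficient measures whether activation grows over the trial, and the entropy coefficient measures how concentrated the predictions are; the equal weighting at the end is likewise an assumption.

```python
import numpy as np
from scipy.stats import entropy, pearsonr

def activation_curve(seg_probs):
    """Running mean of the per-segment predicted probability of the target emotion
    (one plausible reading of the abstract, not the authors' exact definition)."""
    return np.cumsum(seg_probs) / np.arange(1, len(seg_probs) + 1)

def weight_coefficient(seg_probs):
    """Combine a correlation coefficient (does activation grow over time?)
    with an entropy coefficient (how concentrated are the predictions?)."""
    curve = activation_curve(seg_probs)
    corr, _ = pearsonr(np.arange(len(curve)), curve)   # correlation with time
    p = seg_probs / seg_probs.sum()
    ent = 1.0 - entropy(p) / np.log(len(p))            # 1 = concentrated, 0 = uniform
    return 0.5 * (corr + ent)                          # equal weighting is an assumption

seg_probs = np.array([0.2, 0.3, 0.5, 0.6, 0.8, 0.9])   # per-segment classifier outputs
print(activation_curve(seg_probs))
print(weight_coefficient(seg_probs))
```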

99 citations

Journal ArticleDOI
TL;DR: Experiments show that the proposed hierarchical lifelong learning algorithm (HLLA) outperforms many other recent LML algorithms, especially when dealing with higher-dimensional, lower-correlation, and fewer-labeled-data problems.
Abstract: In lifelong machine learning (LML) systems, consecutive new tasks from changing circumstances are learned and added to the system. However, sufficiently labeled data are indispensable for extracting intertask relationships before transferring knowledge in classical supervised LML systems, and inadequate labels may deteriorate performance due to a poor initial approximation. In order to extend the typical LML system, we propose a novel hierarchical lifelong learning algorithm (HLLA) consisting of the following two layers: 1) a knowledge layer at the bottom, consisting of shared representations and an integrated knowledge basis, and 2) parameterized hypothesis functions with features at the top. Unlabeled data are leveraged in HLLA to pretrain the shared representations. We also introduce a selective inherited updating method to deal with intertask distribution shift. Experiments show that our HLLA method outperforms many other recent LML algorithms, especially when dealing with higher-dimensional, lower-correlation, and fewer-labeled-data problems.
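The two-layer structure described above can be loosely sketched as follows, with PCA standing in for the shared-representation/knowledge layer pretrained on unlabeled data and logistic regression standing in for the per-task parameterized hypotheses; none of the paper's actual machinery (integrated knowledge basis, selective inherited updating) is modeled, and all sizes and data are made up.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Bottom layer: a shared representation pretrained on pooled, possibly unlabeled data.
unlabeled = np.random.rand(2000, 50)
shared = PCA(n_components=10).fit(unlabeled)

# Top layer: one cheap parameterized hypothesis per task, trained on shared features.
task_models = {}
for task_id in range(3):
    X = np.random.rand(100, 50)              # a few labeled samples per task
    y = np.random.randint(0, 2, size=100)
    task_models[task_id] = LogisticRegression().fit(shared.transform(X), y)

print(task_models[0].predict(shared.transform(np.random.rand(5, 50))))
```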

81 citations


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with neural networks, kernel methods, graphical models, approximate inference, sampling methods, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: A general framework of DL for RS data is provided, and the state-of-the-art DL methods in RS are regarded as special cases of input-output data combined with various deep networks and tuning tricks.
Abstract: Deep-learning (DL) algorithms, which learn the representative and discriminative features in a hierarchical manner from the data, have recently become a hotspot in the machine-learning area and have been introduced into the geoscience and remote sensing (RS) community for RS big data analysis. Considering the low-level features (e.g., spectral and texture) as the bottom level, the output feature representation from the top level of the network can be directly fed into a subsequent classifier for pixel-based classification. As a matter of fact, by carefully addressing the practical demands in RS applications and designing the input-output levels of the whole network, we have found that DL is actually everywhere in RS data analysis: from the traditional topics of image preprocessing, pixel-based classification, and target recognition, to the recent challenging tasks of high-level semantic feature extraction and RS scene understanding.
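The pattern described here, feeding the top-level feature representation of a deep network into a subsequent classifier for pixel-based classification, can be sketched in a few lines; the random array below is only a placeholder for features taken from the last hidden layer of a trained network, and the class count and classifier choice are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for the top-level feature representation of a deep network: in practice
# these would come from the last hidden layer of a trained CNN/SAE, not random numbers.
deep_features = np.random.rand(500, 128)      # 500 pixels, 128-D deep features
labels = np.random.randint(0, 6, size=500)    # 6 land-cover classes (assumed)

clf = SVC(kernel="rbf").fit(deep_features, labels)
print(clf.predict(deep_features[:5]))
```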

1,625 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: An image dehazing model built with a convolutional neural network (CNN) based on a re-formulated atmospheric scattering model, called the All-in-One Dehazing Network (AOD-Net), demonstrates superior performance over the state-of-the-art in terms of PSNR, SSIM, and subjective visual quality.
Abstract: This paper proposes an image dehazing model built with a convolutional neural network (CNN), called the All-in-One Dehazing Network (AOD-Net). It is designed based on a re-formulated atmospheric scattering model. Instead of estimating the transmission matrix and the atmospheric light separately, as most previous models did, AOD-Net directly generates the clean image through a light-weight CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other deep models, e.g., Faster R-CNN, to improve high-level tasks on hazy images. Experimental results on both synthesized and natural hazy image datasets demonstrate superior performance over the state-of-the-art in terms of PSNR, SSIM, and subjective visual quality. Furthermore, when concatenating AOD-Net with Faster R-CNN, we observe a large improvement in object detection performance on hazy images.
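For reference, the re-formulation that lets AOD-Net avoid estimating transmission and atmospheric light separately folds both into a single map K(x), so the clean image is recovered as J(x) = K(x)·I(x) − K(x) + b. Below is a minimal sketch of just that final recovery step; in the paper K(x) is produced by the light-weight CNN, whereas here it is a constant placeholder and b is an assumed bias.

```python
import numpy as np

def aod_recover(hazy, K, b=1.0):
    """Recover the clean image with the unified variable K(x):
    J(x) = K(x) * I(x) - K(x) + b, where K folds together the transmission
    and atmospheric-light terms of the scattering model (b is a constant bias)."""
    return K * hazy - K + b

hazy = np.random.rand(4, 4, 3)
K = np.full((4, 4, 1), 1.1)   # placeholder; estimated per pixel by the CNN in AOD-Net
print(aod_recover(hazy, K).shape)  # (4, 4, 3)
```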

1,185 citations

Journal ArticleDOI
TL;DR: This review covers nearly every application and technology in the field of remote sensing, ranging from preprocessing to mapping, and presents a conclusion regarding the current state-of-the-art methods, a critical discussion of open challenges, and directions for future research.
Abstract: Deep learning (DL) algorithms have seen a massive rise in popularity for remote-sensing image analysis over the past few years. In this study, the major DL concepts pertinent to remote sensing are introduced, and more than 200 publications in this field, most of which were published during the last two years, are reviewed and analyzed. Initially, a meta-analysis was conducted to analyze the status of remote sensing DL studies in terms of the study targets, DL model(s) used, image spatial resolution(s), type of study area, and level of classification accuracy achieved. Subsequently, a detailed review is conducted to describe/discuss how DL has been applied for remote sensing image analysis tasks including image fusion, image registration, scene classification, object detection, land use and land cover (LULC) classification, segmentation, and object-based image analysis (OBIA). This review covers nearly every application and technology in the field of remote sensing, ranging from preprocessing to mapping. Finally, a conclusion regarding the current state-of-the-art methods, a critical discussion of open challenges, and directions for future research are presented.

1,181 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive study and evaluation of existing single image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called Realistic Single-Image DEhazing (RESIDE).
Abstract: We present a comprehensive study and evaluation of existing single-image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single-Image DEhazing (RESIDE). RESIDE highlights diverse data sources and image contents, and is divided into five subsets, each serving different training or evaluation purposes. We further provide a rich variety of criteria for dehazing algorithm evaluation, ranging from full-reference metrics to no-reference metrics and to subjective evaluation, and the novel task-driven evaluation. Experiments on RESIDE shed light on the comparisons and limitations of the state-of-the-art dehazing algorithms, and suggest promising future directions.
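As an example of the full-reference criteria mentioned here (PSNR and SSIM computed against the haze-free ground truth of synthetic images), below is a short sketch using scikit-image; the images are random placeholders and the noise level is arbitrary.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Full-reference evaluation as used on the synthetic part of such benchmarks:
# compare a dehazed result against its ground-truth haze-free image.
ground_truth = np.random.rand(64, 64)          # placeholder grayscale images
dehazed = np.clip(ground_truth + 0.05 * np.random.randn(64, 64), 0, 1)

print("PSNR:", peak_signal_noise_ratio(ground_truth, dehazed, data_range=1.0))
print("SSIM:", structural_similarity(ground_truth, dehazed, data_range=1.0))
```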

922 citations