Journal ArticleDOI

Image Enhancement Algorithm Based on Depth Difference and Illumination Adjustment

17 Jul 2021-Scientific Programming (Hindawi)-Vol. 2021, pp 1-10

TL;DR: In this paper, a traffic image enhancement model based on illumination adjustment and depth of field difference is proposed to improve the clarity and color fidelity of traffic images under the complex environment of haze and uneven illumination and promote road traffic safety monitoring.

Abstract: In order to improve the clarity and color fidelity of traffic images in the complex environment of haze and uneven illumination, and to promote road traffic safety monitoring, a traffic image enhancement model based on illumination adjustment and depth-of-field difference is proposed. The algorithm is grounded in Retinex theory: it uses the dark channel prior to obtain the image's depth of field, and a spectral clustering algorithm to cluster the depth values. After the image is divided into subimages, the local haze concentration is estimated from the depth of field, and the subimages are adaptively enhanced and fused. In addition, the illumination component is obtained by multiscale guided filtering to preserve the edge characteristics of the image, and the uneven-illumination problem is addressed by a curve-adjustment function. Experimental results show that the proposed model effectively enhances traffic-scene images captured under uneven illumination and hazy weather, with good visual quality. The generated images have rich detail, improve the quality of traffic imagery, and can meet the needs of practical traffic applications.
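The pipeline described above can be sketched in broad strokes: the dark channel prior yields a haze/depth proxy, the resulting transmission map drives dehazing, and a smoothed illumination estimate is remapped by a curve function. Below is a minimal NumPy sketch on a toy image; a plain mean filter stands in for the paper's multiscale guided filtering, a fixed gamma for its curve function, and the clustering/fusion steps are omitted, so this is an illustration of the ingredients rather than the authors' model.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over RGB, then a local min filter (dark channel prior)."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode='edge')
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, omega=0.95, atmos=1.0):
    """Haze transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / atmos)

def box_filter(x, r):
    """Mean over a (2r+1)^2 window; a crude stand-in for guided filtering."""
    p = np.pad(x, r, mode='edge')
    h, w = x.shape
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def illumination(gray, radii=(1, 2)):
    """Average of smoothings at several radii (multiscale illumination)."""
    return np.mean([box_filter(gray, r) for r in radii], axis=0)

def adjust_curve(l, gamma=0.6):
    """Gamma-style curve to brighten unevenly lit regions."""
    return np.power(np.clip(l, 0, 1), gamma)

# toy 8x8 "hazy" image in [0, 1]
rng = np.random.default_rng(0)
img = np.clip(rng.uniform(0.4, 0.9, (8, 8, 3)), 0, 1)
t = np.clip(transmission(img), 0.1, 1.0)
# dehaze with J = (I - A) / t + A, taking airlight A = 1
dehazed = np.clip((img - 1.0) / t[..., None] + 1.0, 0, 1)
gray = dehazed.mean(axis=2)
L = illumination(gray)
L_adj = adjust_curve(L)
# Retinex-style rescaling: brighten by the ratio of adjusted to raw illumination
enhanced = np.clip(dehazed * (L_adj / np.maximum(L, 1e-6))[..., None], 0, 1)
```

Since the gamma curve only raises illumination values in [0, 1], the adjusted image is never darker than the dehazed one, which mirrors the intent of the paper's curve-adjustment step.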




References
Journal ArticleDOI
04 Mar 2020-Nature
TL;DR: It is demonstrated that an image sensor can itself constitute an ANN, simultaneously sensing and processing optical images without latency; the sensor is trained to classify and encode images with high throughput.
Abstract: Machine vision technology has taken huge leaps in recent years, and is now becoming an integral part of various intelligent systems, including autonomous vehicles and robotics. Usually, visual information is captured by a frame-based camera, converted into a digital format and processed afterwards using a machine-learning algorithm such as an artificial neural network (ANN)1. The large amount of (mostly redundant) data passed through the entire signal chain, however, results in low frame rates and high power consumption. Various visual data preprocessing techniques have thus been developed2-7 to increase the efficiency of the subsequent signal processing in an ANN. Here we demonstrate that an image sensor can itself constitute an ANN that can simultaneously sense and process optical images without latency. Our device is based on a reconfigurable two-dimensional (2D) semiconductor8,9 photodiode10-12 array, and the synaptic weights of the network are stored in a continuously tunable photoresponsivity matrix. We demonstrate both supervised and unsupervised learning and train the sensor to classify and encode images that are optically projected onto the chip with a throughput of 20 million bins per second.
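Electrically, the inference step of such a sensor reduces to a matrix-vector product: each pixel contributes a photocurrent proportional to responsivity times optical power, and currents summed on a shared output line realize one dot product of an ANN layer. A toy NumPy sketch of that equivalence, with random illustrative weights rather than the paper's trained device:

```python
import numpy as np

# Hypothetical 3x3 pixel array and two output classes. Each row of W plays
# the role of one tunable photoresponsivity pattern stored in the array.
rng = np.random.default_rng(1)
n_pixels, n_classes = 9, 2
W = rng.normal(0, 1, (n_classes, n_pixels))   # responsivity matrix (weights)
x = rng.uniform(0, 1, n_pixels)               # optical power at each pixel

currents = W * x                              # per-pixel photocurrents
y = currents.sum(axis=1)                      # currents summed per output line
assert np.allclose(y, W @ x)                  # identical to a dense ANN layer
predicted = int(np.argmax(y))                 # class with the largest current
```

The point of the analogy is that the summation happens in the analog domain, so no image ever needs to be read out and digitized before classification.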

147 citations

Journal ArticleDOI
TL;DR: A new hand-crafted feature extraction method based on multiscale covariance maps (MCMs), specifically aimed at improving the classification of HSIs using CNNs; experiments demonstrate that the proposed method increases the robustness of the CNN model.
Abstract: The classification of hyperspectral images (HSIs) using convolutional neural networks (CNNs) has recently drawn significant attention. However, it is important to address the potential overfitting problems that CNN-based methods suffer when dealing with HSIs. Unlike common natural images, HSIs are essentially third-order tensors which contain two spatial dimensions and one spectral dimension. As a result, exploiting both spatial and spectral information is very important for HSI classification. This paper proposes a new hand-crafted feature extraction method, based on multiscale covariance maps (MCMs), that is specifically aimed at improving the classification of HSIs using CNNs. The proposed method has the following distinctive advantages. First, with the use of covariance maps, the spatial and spectral information of the HSI can be jointly exploited. Each entry in the covariance map is the covariance between two different spectral bands within a local spatial window, which can absorb and integrate the two kinds of information (spatial and spectral) in a natural way. Second, by means of our multiscale strategy, each sample can be enhanced with spatial information from different scales, increasing the information conveyed by training samples significantly. To verify the effectiveness of our proposed method, we conduct comprehensive experiments on three widely used hyperspectral data sets, using a classical 2-D CNN (2DCNN) model. Our experimental results demonstrate that the proposed method can indeed increase the robustness of the CNN model. Moreover, the proposed MCMs+2DCNN method exhibits better classification performance than other CNN-based classification strategies and several standard techniques for spectral-spatial classification of HSIs.
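The core of an MCM entry is simply a band-by-band covariance computed over a local spatial window. A small NumPy sketch with a synthetic cube and hypothetical window sizes (the data set, window sizes, and pixel location are all placeholders):

```python
import numpy as np

def covariance_map(cube, center, win=5):
    """Covariance between spectral bands over a local spatial window.

    cube: (H, W, B) hyperspectral cube. Returns a (B, B) covariance map
    whose (i, j) entry is the covariance of bands i and j inside the
    win x win window around `center`.
    """
    h, w, b = cube.shape
    r = win // 2
    y, x = center
    patch = cube[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1, :]
    pixels = patch.reshape(-1, b)          # rows = pixels, columns = bands
    return np.cov(pixels, rowvar=False)    # (B, B) band covariance

# toy cube: 16x16 pixels, 8 spectral bands
rng = np.random.default_rng(2)
cube = rng.normal(size=(16, 16, 8))
# the multiscale strategy: one covariance map per window size
mcm = [covariance_map(cube, (8, 8), win=s) for s in (3, 5, 7)]
```

Stacking the maps from several window sizes gives each training sample spatial context at multiple scales, which is the "multiscale" part of the method.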

107 citations

Proceedings ArticleDOI
Li Tao, Chuang Zhu, Guoqing Xiang, Yuan Li, Huizhu Jia, Xiaodong Xie
01 Dec 2017
TL;DR: A CNN-based method for low-light image enhancement with a special module that utilizes multiscale feature maps and also avoids the vanishing-gradient problem; results demonstrate that the method outperforms other contrast enhancement methods.
Abstract: In this paper, we propose a CNN-based method for low-light image enhancement. We design a special module that utilizes multiscale feature maps and also avoids the vanishing-gradient problem. To preserve image textures as much as possible, we train our model with an SSIM loss. The contrast of low-light images can be adaptively enhanced using our method, and results demonstrate that it outperforms other contrast enhancement methods.
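The SSIM loss used above compares luminance, contrast, and structure rather than per-pixel error, which is why it preserves texture better than an L2 loss. A simplified global-statistics version in NumPy (the commonly used windowed SSIM adds local Gaussian-weighted statistics, omitted here for brevity):

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM using global statistics, for images in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()      # cross-covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def ssim_loss(pred, target):
    """Loss = 1 - SSIM: minimizing it pushes predictions toward the target's structure."""
    return 1.0 - ssim(pred, target)

rng = np.random.default_rng(3)
target = rng.uniform(0, 1, (32, 32))
identical = ssim_loss(target, target)       # ~0: perfect structural match
noisy = ssim_loss(np.clip(target + 0.3 * rng.normal(size=(32, 32)), 0, 1), target)
```

In a training loop this scalar would be computed with a differentiable framework so gradients can flow back through the network; the arithmetic is the same.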

73 citations

Journal ArticleDOI
Seonhee Park, Soohwan Yu, Minseo Kim, Kwanwoo Park, Joonki Paik
TL;DR: A dual autoencoder network model based on Retinex theory that performs low-light enhancement and noise reduction by combining stacked and convolutional autoencoders.
Abstract: This paper presents a dual autoencoder network model based on Retinex theory that performs low-light enhancement and noise reduction by combining stacked and convolutional autoencoders. The proposed method first estimates a spatially smooth illumination component, brighter than the input low-light image, using a stacked autoencoder with a small number of hidden units. Next, a convolutional autoencoder, which handles 2-D image information, reduces the noise amplified during the brightness enhancement process. We analyze and compare the roles of the stacked and convolutional autoencoders against the constraint terms of the variational Retinex model. In experiments, we demonstrate the performance of the proposed algorithm by comparison with state-of-the-art low-light and contrast enhancement methods.
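One way to see why a stacked autoencoder with few hidden units yields a spatially smooth illumination estimate: a linear autoencoder with k hidden units spans the same subspace as rank-k PCA, so a narrow bottleneck discards high-frequency detail. A toy NumPy stand-in using SVD truncation in place of the paper's trained networks (purely illustrative, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(4)
rows = np.linspace(0, 1, 32)
smooth = np.outer(rows, rows)                       # smooth "illumination" (rank 1)
noisy = smooth + 0.05 * rng.normal(size=(32, 32))   # illumination + sensor noise

# Rank-k reconstruction = what a trained linear autoencoder with k hidden
# units would output: the bottleneck keeps only the dominant structure.
k = 2
u, s, vt = np.linalg.svd(noisy, full_matrices=False)
recon = (u[:, :k] * s[:k]) @ vt[:k]                 # "decode" through the bottleneck

err_noisy = np.abs(noisy - smooth).mean()           # error before the bottleneck
err_recon = np.abs(recon - smooth).mean()           # error after the bottleneck
```

The reconstruction is closer to the true smooth illumination than the noisy input, which is the denoising-by-bottleneck effect the paper exploits (its second, convolutional autoencoder then removes the noise amplified by brightening).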

71 citations

Journal ArticleDOI
TL;DR: A learning strategy that selects the optimal parameters of the nonlinear stretching by optimizing a novel image quality measure, the Modified Contrast-Naturalness-Colorfulness (MCNC) function, which employs a more effective objective criterion and agrees better with human visual perception.
Abstract: This paper presents a biologically inspired adaptive image enhancement method consisting of four stages: illumination estimation, reflection extraction, color restoration, and postprocessing. The illumination of the input image is estimated using a guided filter. We propose to use the smoothed Y channel in the YCbCr color space as the guidance image, since it better captures the illuminance of the real scene. The reflection of the input image is extracted using the Retinex algorithm and refined through color restoration. In order to further improve the quality of the extracted reflection, we explore a learning strategy that selects the optimal parameters of the nonlinear stretching by optimizing a novel image quality measure, named the Modified Contrast-Naturalness-Colorfulness (MCNC) function. Compared with the original CNC function, the proposed MCNC function employs a more effective objective criterion and agrees better with human visual perception. Both qualitative and quantitative experiments demonstrate that the proposed method is adaptive and robust to outdoor images and achieves favorable performance against state-of-the-art methods, especially for images captured under extremely hazy or low-light conditions.
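The first two stages, estimating illumination with a guided filter on the luminance channel and extracting the Retinex reflection as I / L, can be sketched as follows. This uses a scalar guided filter on a toy image, with a fixed gamma standing in for the learned MCNC-optimized stretching parameters:

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)^2 window with edge padding."""
    p = np.pad(x, r, mode='edge')
    h, w = x.shape
    return np.array([[p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
                      for j in range(w)] for i in range(h)])

def guided_filter(guide, src, r=2, eps=1e-3):
    """Scalar guided filter: edge-preserving smoothing of src steered by guide."""
    mg, ms = box(guide, r), box(src, r)
    cov = box(guide * src, r) - mg * ms
    var = box(guide * guide, r) - mg ** 2
    a = cov / (var + eps)                   # per-window linear coefficients
    b = ms - a * mg
    return box(a, r) * guide + box(b, r)    # averaged linear model

rng = np.random.default_rng(5)
img = np.clip(rng.uniform(0.1, 0.8, (12, 12, 3)), 0, 1)
# luminance (Y of YCbCr) as the guidance image
y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
L = np.clip(guided_filter(y, y), 1e-6, 1.0)        # illumination estimate
R = np.clip(img / L[..., None], 0, None)           # Retinex reflection I / L
# nonlinear stretch; the paper learns these parameters via MCNC, fixed here
stretched = np.power(np.clip(R / R.max(), 0, 1), 0.8)
```

Using the luminance channel as its own guide keeps strong edges in the illumination estimate, so the reflection image does not develop halos around them.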

52 citations