
Showing papers on "Image gradient published in 2014"


Journal ArticleDOI
TL;DR: This work proposes a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian response.
Abstract: Blind image quality assessment (BIQA) aims to evaluate the perceptual quality of a distorted image without information regarding its reference image. Existing BIQA models usually predict the image quality by analyzing the image statistics in some transformed domain, e.g., in the discrete cosine transform domain or wavelet domain. Though great progress has been made in recent years, BIQA is still a very challenging task due to the lack of a reference image. Considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we propose a novel BIQA model that utilizes the joint statistics of two types of commonly used local contrast features: 1) the gradient magnitude (GM) map and 2) the Laplacian of Gaussian (LOG) response. We employ an adaptive procedure to jointly normalize the GM and LOG features, and show that the joint statistics of normalized GM and LOG features have desirable properties for the BIQA task. The proposed model is extensively evaluated on three large-scale benchmark databases, and shown to deliver highly competitive performance with state-of-the-art BIQA models, as well as with some well-known full reference image quality assessment models.
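The two feature maps and their joint normalization can be sketched as follows, assuming NumPy/SciPy; the derivative scale, the pooling window, and the stabilizing constant are illustrative choices rather than the authors' exact settings, and the histogram/regression stage of the model is omitted.

```python
import numpy as np
from scipy import ndimage

def gm_log_features(img, sigma=0.5, eps=1e-3):
    """Gradient magnitude (GM) and Laplacian-of-Gaussian (LoG) maps with a
    joint adaptive normalization (illustrative sketch of the idea)."""
    img = img.astype(np.float64)
    gx = ndimage.gaussian_filter(img, sigma, order=[0, 1])  # horizontal derivative
    gy = ndimage.gaussian_filter(img, sigma, order=[1, 0])  # vertical derivative
    gm = np.hypot(gx, gy)                                   # gradient magnitude map
    log = ndimage.gaussian_laplace(img, sigma)              # LoG response map
    # Joint normalization: divide both maps by a locally pooled energy term
    energy = np.sqrt(ndimage.gaussian_filter(gm ** 2 + log ** 2, 2 * sigma)) + eps
    return gm / energy, log / energy
```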

535 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: This work addresses the question of what features are effective for differentiating between blurred and unblurred image regions by studying a few blur feature representations in image gradient, Fourier domain, and data-driven local filters.

Abstract: Ubiquitous image blur brings out a practically important question: what are effective features to differentiate between blurred and unblurred image regions? We address it by studying a few blur feature representations in image gradient, Fourier domain, and data-driven local filters. Unlike previous methods, which are often based on restoration mechanisms, our features are constructed to enhance discriminative power and are adaptive to various blur scales in images. To aid evaluation, we build a new blur perception dataset containing thousands of images with labeled ground-truth. Our results are applied to several applications, including blur region segmentation, deblurring, and blur magnification.

228 citations


Proceedings ArticleDOI
28 Aug 2014
TL;DR: An improved Canny edge detection algorithm based on adaptive smooth filtering is proposed in this article; according to the abrupt-change characteristic of image pixel gray levels, the algorithm adaptively changes the coefficients of the filter.

Abstract: This paper introduces the fundamental theory of the Canny operator and carries out its analysis and evaluation. On this foundation, an improved Canny edge detection algorithm based on adaptive smooth filtering is proposed. According to the abrupt-change characteristic of image pixel gray levels, this algorithm adaptively changes the coefficients of the filter. The results on the experiment pictures indicate that the improved algorithm has better accuracy and precision in edge detection.
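A rough sketch of the pipeline (stronger smoothing where local gray-level variation is low, weaker smoothing near abrupt transitions, then standard Canny), assuming OpenCV; the paper's actual adaptive filter-coefficient rule is not reproduced, and all window sizes and thresholds here are illustrative.

```python
import cv2
import numpy as np

def adaptive_smooth_canny(gray, low=50, high=150):
    """Blend a strong and a weak Gaussian smoothing according to local
    gray-level variation, then apply Canny (illustrative sketch only)."""
    g = gray.astype(np.float32)
    local_var = cv2.blur(g * g, (5, 5)) - cv2.blur(g, (5, 5)) ** 2
    local_std = np.sqrt(np.clip(local_var, 0, None))
    w = local_std / (local_std.max() + 1e-6)      # ~1 near edges, ~0 in flat areas
    strong = cv2.GaussianBlur(g, (7, 7), 2.0)     # heavy smoothing for flat regions
    weak = cv2.GaussianBlur(g, (3, 3), 0.8)       # light smoothing near edges
    blended = w * weak + (1.0 - w) * strong
    return cv2.Canny(blended.astype(np.uint8), low, high)
```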

189 citations


Journal ArticleDOI
TL;DR: Compared with the state-of-the-art approaches, the proposed method is among the first to focus on the problem of single color image rain removal and achieves promising results with not only the rain component being removed more completely, but also the visual quality of restored images being improved.
Abstract: Rain removal from a single color image is a challenging problem as no temporal information among successive images can be obtained. In this paper, we propose a single-color-image-based rain removal framework by properly formulating rain removal as an image decomposition problem based on sparse representation. In our framework, an input color image is first decomposed into a low-frequency part and a high-frequency part by using the guided image filter so that the rain streaks would be in the high-frequency part with nonrain textures/edges, and the high-frequency part is then decomposed into a rain component and a nonrain component by performing dictionary learning and sparse coding. To separate rain streaks from the high-frequency part, a hybrid feature set, including histogram of oriented gradients, depth of field, and Eigen color, is employed to further decompose the high-frequency part. With the hybrid feature set applied, most rain streaks can be removed and, simultaneously, the nonrain component can be enhanced. To the best of our knowledge, the proposed method is among the first to focus on the problem of single color image rain removal; compared with the state-of-the-art approaches, it achieves promising results: not only is the rain component removed more completely, but the visual quality of the restored images is also improved.
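The first stage of the framework, splitting the image into low- and high-frequency parts with a guided filter, can be sketched as below using a standard gray-scale guided-filter formulation; the radius and regularization values are illustrative, and the dictionary-learning/sparse-coding stage that separates rain from non-rain components is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=15, eps=1e-3):
    """Gray-scale guided filter: smooth p using I as guidance (standard
    box-filter formulation, used here only to illustrate the decomposition)."""
    win = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, win), uniform_filter(p, win)
    corr_Ip, corr_II = uniform_filter(I * p, win), uniform_filter(I * I, win)
    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, win) * I + uniform_filter(b, win)

# Self-guided filtering gives the low-frequency part; the residual is the
# high-frequency part containing rain streaks plus non-rain textures/edges.
# low = guided_filter(img, img); high = img - low
```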

167 citations


Journal ArticleDOI
TL;DR: A distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block to have a significantly reduced latency and can be easily integrated with other block-based image codecs.
Abstract: The Canny edge detector is one of the most widely used edge detection algorithms due to its superior performance. Unfortunately, not only is it computationally more intensive as compared with other edge detection algorithms, but it also has a higher latency because it is based on frame-level statistics. In this paper, we propose a mechanism to implement the Canny algorithm at the block level without any loss in edge detection performance compared with the original frame-level Canny algorithm. Directly applying the original Canny algorithm at the block level leads to excessive edges in smooth regions and to loss of significant edges in high-detailed regions, since the original Canny computes the high and low thresholds based on the frame-level statistics. To solve this problem, we present a distributed Canny edge detection algorithm that adaptively computes the edge detection thresholds based on the block type and the local distribution of the gradients in the image block. In addition, the new algorithm uses a nonuniform gradient magnitude histogram to compute block-based hysteresis thresholds. The resulting block-based algorithm has a significantly reduced latency and can be easily integrated with other block-based image codecs. It is capable of supporting fast edge detection of images and videos with high resolutions, including full-HD, since the latency is now a function of the block size instead of the frame size. In addition, quantitative conformance evaluations and subjective tests show that the edge detection performance of the proposed algorithm is better than the original frame-based algorithm, especially when noise is present in the images. Finally, this algorithm is implemented using a 32 computing engine architecture and is synthesized on the Xilinx Virtex-5 FPGA. The synthesized architecture takes only 0.721 ms (including the SRAM read/write time and the computation time) to detect edges of 512 × 512 images in the USC SIPI database when clocked at 100 MHz and is faster than existing FPGA and GPU implementations.
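A simplified sketch of the block-adaptive thresholding idea, using a plain per-block percentile of the gradient magnitudes; the paper's block classification and non-uniform gradient-magnitude histogram are not reproduced, and the block size and percentile are illustrative.

```python
import numpy as np
from scipy import ndimage

def block_hysteresis_thresholds(gray, block=64, high_pct=80, low_ratio=0.4):
    """Derive per-block (low, high) hysteresis thresholds from the local
    distribution of gradient magnitudes (illustrative stand-in for the
    paper's block-type-aware threshold computation)."""
    g = gray.astype(np.float64)
    gm = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
    thresholds = {}
    for i in range(0, gm.shape[0], block):
        for j in range(0, gm.shape[1], block):
            tile = gm[i:i + block, j:j + block]
            high = np.percentile(tile, high_pct)
            thresholds[(i, j)] = (low_ratio * high, high)
    return thresholds
```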

149 citations


Journal ArticleDOI
TL;DR: A novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise in image denoising and can well preserve the texture appearance in the denoised images, making them look more natural.
Abstract: Natural image statistics plays an important role in image denoising, and various natural image priors, including gradient-based, sparse representation-based, and nonlocal self-similarity-based ones, have been widely studied and exploited for noise removal. In spite of the great success of many denoising algorithms, they tend to smooth the fine scale image textures when removing noise, degrading the image visual quality. To address this problem, in this paper, we propose a texture enhanced image denoising method by enforcing the gradient histogram of the denoised image to be close to a reference gradient histogram of the original image. Given the reference gradient histogram, a novel gradient histogram preservation (GHP) algorithm is developed to enhance the texture structures while removing noise. Two region-based variants of GHP are proposed for the denoising of images consisting of regions with different textures. An algorithm is also developed to effectively estimate the reference gradient histogram from the noisy observation of the unknown image. Our experimental results demonstrate that the proposed GHP algorithm can well preserve the texture appearance in the denoised images, making them look more natural.
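The core histogram-specification step, pushing the gradient magnitudes of the denoised image toward a reference gradient histogram, can be written as a quantile mapping; this is only the matching operation, not the full GHP iteration or the reference-histogram estimation described in the paper.

```python
import numpy as np

def match_gradient_histogram(grad_mag, ref_grad_mag):
    """Monotone mapping that makes the histogram of grad_mag match that of
    ref_grad_mag (quantile/CDF matching), the basic operation behind a
    gradient-histogram-preservation constraint."""
    src = grad_mag.ravel()
    src_sorted = np.sort(src)
    ranks = np.searchsorted(src_sorted, src, side='right') / src.size
    ref_sorted = np.sort(ref_grad_mag.ravel())
    matched = np.interp(ranks, np.linspace(0, 1, ref_sorted.size), ref_sorted)
    return matched.reshape(grad_mag.shape)
```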

125 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: A dynamic gradient sparsity penalty is proposed for regularization of image fusion from a high resolution panchromatic image and a low resolution multispectral image at the same geographical location to efficiently solve the severely ill-posed problem.
Abstract: In this paper, we propose a novel method for image fusion from a high resolution panchromatic image and a low resolution multispectral image at the same geographical location. Different from previous methods, we do not make any assumption about the upsampled multispectral image, but only assume that the fused image after downsampling should be close to the original multispectral image. This is a severely ill-posed problem and a dynamic gradient sparsity penalty is thus proposed for regularization. Incorporating the intra-correlations of different bands, this penalty can effectively exploit the prior information (e.g. sharp boundaries) from the panchromatic image. A new convex optimization algorithm is proposed to efficiently solve this problem. Extensive experiments on four multispectral datasets demonstrate that the proposed method significantly outperforms the state-of-the-art methods in terms of both spatial and spectral qualities.

118 citations


Journal ArticleDOI
TL;DR: Experimental results indicated that the proposed approach outperforms existing methods in terms of objective criteria and subjective perception, improving the image resolution.

Abstract: This letter addresses the problem of generating a super-resolution (SR) image from a single low-resolution (LR) input image in the wavelet domain. To achieve a sharper image, an intermediate stage for estimating the high-frequency (HF) subbands has been proposed. This stage includes an edge preservation procedure and mutual interpolation between the input LR image and the HF subband images, as performed via the discrete wavelet transform (DWT). Sparse mixing weights are calculated over blocks of coefficients in an image, which provides a sparse signal representation in the LR image. All of the subband images are used to generate the new high-resolution image using the inverse DWT. Experimental results indicated that the proposed approach outperforms existing methods in terms of objective criteria and subjective perception, improving the image resolution.
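A bare-bones skeleton of the wavelet-domain upscaling loop (estimate the missing high-frequency subbands, keep the LR image as the approximation subband, invert the DWT), assuming PyWavelets and OpenCV; the Haar wavelet is chosen only to keep subband shapes consistent, and the paper's edge-preservation and sparse mixing-weight steps are not reproduced.

```python
import numpy as np
import pywt
import cv2

def dwt_upscale_2x(lr):
    """2x super-resolution skeleton: HF subbands are crudely estimated from a
    bicubically upscaled copy (placeholder for the paper's mutual
    interpolation), and the LR image serves as the approximation subband."""
    lr = lr.astype(np.float32)
    up = cv2.resize(lr, (lr.shape[1] * 2, lr.shape[0] * 2),
                    interpolation=cv2.INTER_CUBIC)
    _, (cH, cV, cD) = pywt.dwt2(up, 'haar')   # rough HF-subband estimates
    return pywt.idwt2((lr, (cH, cV, cD)), 'haar')
```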

99 citations


Journal ArticleDOI
TL;DR: A variational approach in which a specific energy is designed to model the color selection and the spatial constraint problems simultaneously is proposed, together with a minimization scheme that computes a local minimum of the defined nonconvex energy.

Abstract: In this paper, we address the problem of recovering a color image from a grayscale one. The input color data comes from a source image considered as a reference image. Reconstructing the missing color of a grayscale pixel is here viewed as the problem of automatically selecting the best color among a set of color candidates while simultaneously ensuring the local spatial coherency of the reconstructed color information. To solve this problem, we propose a variational approach where a specific energy is designed to model the color selection and the spatial constraint problems simultaneously. The contributions of this paper are twofold. First, we introduce a variational formulation modeling the color selection problem under spatial constraints and propose a minimization scheme, which computes a local minimum of the defined nonconvex energy. Second, we combine different patch-based features and distances in order to construct a consistent set of possible color candidates. This set is used as input data and our energy minimization automatically selects the best color to transfer for each pixel of the grayscale image. Finally, the experiments illustrate the potential of our simple methodology and show that our results are very competitive with respect to the state-of-the-art methods.

91 citations


Journal ArticleDOI
TL;DR: Conventional ELM training of the SLFN improves over the classification performance of state-of-the-art algorithms reported in the literature for the data treated in this paper.

89 citations


Proceedings ArticleDOI
01 Oct 2014
TL;DR: This paper introduces a novel effective and efficient local-tuned-global (LTG) model induced IQA metric under the supposition that human visual perception of image quality depends on salient local distortion and global quality degradation.
Abstract: This paper investigates the problem of full-reference (FR) image quality assessment (IQA). In general, the ideal IQA metric should be effective and efficient, yet most existing FR IQA methods cannot reach these two targets simultaneously. Under the supposition that human visual perception of image quality depends on salient local distortion and global quality degradation, we introduce a novel effective and efficient local-tuned-global (LTG) model induced IQA metric. Extensive experiments are conducted on five publicly available subject-rated color image quality databases, including LIVE, TID2008, CSIQ, IVC and TID2013, to evaluate and compare our algorithm with classical and state-of-the-art FR IQA approaches. The proposed LTG is shown to work fast and outperform the competing methods.

Journal ArticleDOI
TL;DR: This paper compares these operators by checking the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) of the resultant images, and finds the Canny operator to be the most accurate at edge detection.
Abstract: Edge detection is a vital task in digital image processing. It makes image segmentation and pattern recognition easier, and it also helps with object detection. There are many edge detectors available for pre-processing in computer vision, but Canny, Sobel, Laplacian of Gaussian (LoG), Roberts, and Prewitt are the most widely applied algorithms. This paper compares these operators by checking the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) of the resultant images. It evaluates the performance of each algorithm with Matlab and Java. A set of four universally standardized test images is used for the experimentation. The PSNR and MSE results are numeric values from which the performance of the algorithms is identified. The time required for each algorithm to detect edges is also documented. After the experimentation, the Canny operator is found to be the most accurate in edge detection. Index Terms: Canny operator, edge detectors, Laplacian of Gaussian, MSE, PSNR, Sobel operator.

Journal ArticleDOI
TL;DR: A novel and powerful local image descriptor that extracts the histograms of second-order gradients (HSOGs) to capture the curvature related geometric properties of the neural landscape, i.e., cliffs, ridges, summits, valleys, basins, and so on is introduced.
Abstract: Recent investigations on human vision discover that the retinal image is a landscape or a geometric surface, consisting of features such as ridges and summits. However, most of the existing popular local image descriptors in the literature, e.g., scale invariant feature transform (SIFT), histogram of oriented gradients (HOG), DAISY, local binary patterns (LBP), and gradient location and orientation histogram, only employ the first-order gradient information related to the slope and the elasticity, i.e., length, area, and so on, of a surface, and thereby only partially characterize the geometric properties of a landscape. In this paper, we introduce a novel and powerful local image descriptor that extracts the histograms of second-order gradients (HSOGs) to capture the curvature-related geometric properties of the neural landscape, i.e., cliffs, ridges, summits, valleys, basins, and so on. We conduct comprehensive experiments on three different applications, including the problem of local image matching, visual object categorization, and scene classification. The experimental results clearly evidence the discriminative power of HSOG as compared with its first-order gradient-based counterparts, e.g., SIFT, HOG, DAISY, and center-symmetric LBP, and its complementarity in terms of image representation, demonstrating the effectiveness of the proposed local descriptor.
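A per-patch sketch of the second-order idea: take first-order gradients, differentiate the resulting gradient-magnitude surface again, and histogram the second-order orientation weighted by magnitude. The bin count and derivative operators are illustrative; the actual HSOG descriptor uses a more elaborate spatial pooling and multi-scale layout.

```python
import numpy as np
from scipy import ndimage

def second_order_gradient_histogram(patch, nbins=8):
    """Orientation histogram of second-order gradients for a single patch
    (HSOG-flavored sketch, not the authors' exact descriptor)."""
    p = patch.astype(np.float64)
    gm = np.hypot(ndimage.sobel(p, axis=1), ndimage.sobel(p, axis=0))  # 1st-order surface
    gxx = ndimage.sobel(gm, axis=1)   # gradients of the gradient-magnitude surface
    gyy = ndimage.sobel(gm, axis=0)
    mag2, ori2 = np.hypot(gxx, gyy), np.arctan2(gyy, gxx)
    hist, _ = np.histogram(ori2, bins=nbins, range=(-np.pi, np.pi), weights=mag2)
    return hist / (np.linalg.norm(hist) + 1e-12)
```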

Journal ArticleDOI
TL;DR: This work proposes simple image enhancement algorithms, which conserve the hue and preserve the range (gamut) of the R, G, B channels in an optimal way and compete with well-established alternative methods for images where hue-preservation is desired.
Abstract: Color image enhancement is a complex and challenging task in digital imaging with abundant applications. Preserving the hue of the input image is crucial in a wide range of situations. We propose simple image enhancement algorithms, which conserve the hue and preserve the range (gamut) of the R, G, B channels in an optimal way. In our setup, the intensity input image is transformed into a target intensity image whose histogram matches a specified, well-behaved histogram. We derive a new color assignment methodology where the resulting enhanced image fits the target intensity image. We analyze the obtained algorithms in terms of chromaticity improvement and compare them with the unique and quite popular histogram-based hue and range preserving algorithm of Naik and Murthy. Numerical tests confirm our theoretical results and show that our algorithms perform much better than the Naik-Murthy algorithm. In spite of their simplicity, they compete with well-established alternative methods for images where hue-preservation is desired.
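The simplest hue-conserving color assignment rescales each channel by the ratio of the target intensity to the original intensity, as sketched below; note that this naive version falls back on clipping when values leave the RGB cube, which is exactly the gamut problem the paper's optimal range-preserving assignment is designed to avoid.

```python
import numpy as np

def hue_preserving_assignment(rgb, target_intensity, eps=1e-6):
    """Multiplicative color assignment: scale R, G, B by the intensity ratio so
    chromaticity is unchanged. Values are clipped to [0, 1] here, unlike the
    paper's gamut-preserving scheme. Assumes rgb and target_intensity in [0, 1]."""
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)
    ratio = target_intensity / (intensity + eps)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```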

Journal ArticleDOI
TL;DR: Computer simulation results support the idea of the proposed fused color image encryption scheme, which provides enlarged key space and hence enhanced security in the asymmetric encryption scheme.
Abstract: Image fusion is a popular method which provides a better quality fused image for interpreting the image data. In this paper, color image fusion using the wavelet transform is applied for securing data through an asymmetric encryption scheme and image hiding. The components of a color image corresponding to different wavelengths (red, green, and blue) are fused together using the discrete wavelet transform for obtaining a better quality retrieved color image. The fused color components are encrypted using an amplitude- and phase-truncation approach in the Fresnel transform domain. Also, the individual color components are transformed into different cover images in order to disguise the information of the input image from an attacker. Asymmetric keys, Fresnel propagation parameters, the weighing factor, and three cover images provide an enlarged key space and hence enhanced security. Computer simulation results support the idea of the proposed fused color image encryption scheme.

Journal ArticleDOI
TL;DR: This paper develops algorithms for constrained minimization of the total p-variation (TpV), the lp quasinorm of the image gradient, and applies them to projection data from a realistic breast CT simulation in which the total X-ray dose is equivalent to two-view digital mammography.
Abstract: Exploiting sparsity in the image gradient magnitude has proved to be an effective means for reducing the sampling rate in the projection view angle in computed tomography (CT). Most of the image reconstruction algorithms developed for this purpose solve a nonsmooth convex optimization problem involving the image total variation (TV). The TV seminorm is the l1 norm of the image gradient magnitude, and reducing the l1 norm is known to encourage sparsity in its argument. Recently, there has been interest in employing nonconvex lp quasinorms with p < 1 for sparsity-exploiting image reconstruction, which is potentially more effective than l1 because nonconvex lp is closer to l0, a direct measure of sparsity. This paper develops algorithms for constrained minimization of the total p-variation (TpV), the lp quasinorm of the image gradient. Use of the algorithms is illustrated in the context of breast CT, an imaging modality that is still in the research phase and for which constraints on X-ray dose are extremely tight. The TpV-based image reconstruction algorithms are demonstrated on computer-simulated data for exploiting gradient magnitude sparsity to reduce the projection view angle sampling. The proposed algorithms are applied to projection data from a realistic breast CT simulation, where the total X-ray dose is equivalent to two-view digital mammography. Following the simulation survey, the algorithms are then demonstrated on a clinical breast CT data set.

Journal ArticleDOI
TL;DR: A new notion of treating vector-valued images which is based on the angle between the spatial gradients of their channels is proposed, which shows that parallel level sets are a suitable concept for color image enhancement.
Abstract: Vector-valued images such as RGB color images or multimodal medical images show a strong interchannel correlation, which is not exploited by most image processing tools. We propose a new notion of treating vector-valued images which is based on the angle between the spatial gradients of their channels. Through minimizing a cost functional that penalizes large angles, images with parallel level sets can be obtained. After formally introducing this idea and the corresponding cost functionals, we discuss their Gateaux derivatives that lead to a diffusion-like gradient descent scheme. We illustrate the properties of this cost functional by several examples in denoising and demosaicking of RGB color images. They show that parallel level sets are a suitable concept for color image enhancement. Demosaicking with parallel level sets gives visually perfect results for low noise levels. Furthermore, the proposed functional yields sharper images than the other approaches in comparison.
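The coupling idea, penalizing the angle between the spatial gradients of different channels, can be evaluated as sketched below; this is only an illustration of the penalty (one of several possible forms), not the paper's exact cost functionals or their Gateaux-derivative descent scheme.

```python
import numpy as np

def parallel_level_set_penalty(rgb, eps=1e-6):
    """Sum over channel pairs of (1 - cos^2 of the angle between gradients):
    zero when the channel gradients (and hence level sets) are parallel."""
    rgb = rgb.astype(np.float64)
    grads = [np.stack(np.gradient(rgb[..., c]), axis=-1) for c in range(rgb.shape[2])]
    total = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            dot = np.sum(grads[i] * grads[j], axis=-1)
            ni = np.linalg.norm(grads[i], axis=-1)
            nj = np.linalg.norm(grads[j], axis=-1)
            total += np.sum(1.0 - (dot / (ni * nj + eps)) ** 2)
    return total
```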

Journal ArticleDOI
TL;DR: A method of locating the axial positions of both opaque and transparent objects in the reconstructed 3D field in the wavelet domain is proposed and validated by both simulated and experimental holograms of transparent spherical water droplets and opaque nonspherical coal particles.
Abstract: Depth-of-field extension and accurate 3D position location are two important issues in digital holography for particle characterization and motion tracking. We propose a method of locating the axial positions of both opaque and transparent objects in the reconstructed 3D field in the wavelet domain. The spatial–frequency property of the reconstructed image is analyzed from the viewpoint of the point spread function of the digital inline holography. The reconstructed image is decomposed into high- and low-frequency subimages. By using the variance of the image gradient in the subimages as focus metrics, the depth-of-field of the synthesis image can be extended with all the particles focalized, and the focal plane of the object can be accurately determined. The method is validated by both simulated and experimental holograms of transparent spherical water droplets and opaque nonspherical coal particles. The extended-focus image is applied to the particle pairing in a digital holographic particle tracking velocimetry to obtain the 3D vector field.
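The focus metric itself is simple to state: the variance of the image gradient within a (sub)image, maximized over candidate reconstruction depths. The sketch below assumes a hypothetical reconstruct_at(hologram, z) routine for numerical reconstruction and leaves out the wavelet-domain decomposition into high- and low-frequency subimages.

```python
import numpy as np

def gradient_variance(img):
    """Focus metric: variance of the gradient magnitude of the image."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.var(np.hypot(gx, gy))

def best_focus_depth(hologram, depths, reconstruct_at):
    """Scan candidate depths z and return the one maximizing the focus metric;
    reconstruct_at is a user-supplied (hypothetical) reconstruction routine."""
    scores = [gradient_variance(np.abs(reconstruct_at(hologram, z))) for z in depths]
    return depths[int(np.argmax(scores))]
```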

Journal ArticleDOI
TL;DR: A fast image upsampling method within a two-scale framework is presented to ensure the sharp construction of the upsampled image for both large-scale edges and small-scale structures; it outperforms current state-of-the-art approaches in quantitative and qualitative evaluations, as well as in a perceptual evaluation by a user study.
Abstract: In this paper, we present a fast image upsampling method within a two-scale framework to ensure the sharp construction of upsampled image for both large-scale edges and small-scale structures. In our approach, the low-frequency image is recovered via a novel sharpness preserving interpolation technique based on a well-constructed displacement field, which is estimated by a cross-resolution sharpness preserving model. Within this model, the distances of pixels on edges are preserved, which enables the recovery of sharp edges in the high-resolution result. Likewise, local high-frequency structures are reconstructed via a sharpness preserving reconstruction algorithm. Extensive experiments show that our method outperforms current state-of-the-art approaches, based on quantitative and qualitative evaluations, as well as perceptual evaluation by a user study. Moreover, our approach is very fast so as to be practical for real applications.

Journal ArticleDOI
TL;DR: A novel color image demosaicking algorithm using a voting-based edge direction detection method and a directional weighted interpolation method that provides superior performance in terms of both objective and subjective image qualities is presented.
Abstract: In this paper, we present a novel color image demosaicking algorithm using a voting-based edge direction detection method and a directional weighted interpolation method. By introducing the voting strategy, the interpolation direction of the center missing color component can be determined accurately. Along the determined interpolation direction, the center missing color component is interpolated using the gradient weighted interpolation method by exploring the intra-channel gradient correlation of the neighboring pixels. As compared with the latest demosaicking algorithms, experiments show that the proposed algorithm provides superior performance in terms of both objective and subjective image qualities.

Proceedings ArticleDOI
30 Mar 2014
TL;DR: This paper proposes a method for segmenting the lung region in CXR images using the Canny edge filter and morphology; it produces convincing results, as most of the segmented images are close to the ground-truth images.
Abstract: Studies of medical image segmentation have long been carried out as a means to distinguish object regions from one another for further image analysis. Segmentation of the lung region in chest X-ray (CXR) images based on object edge detection is one of the popular methods applied. Early edge detection algorithms like Sobel, Prewitt, and Laplacian have been used to segment the lung; however, none of them can generate a truly satisfactory segmentation output. The reason for this failure is that they are high-pass filters that are sensitive to image noise. Hence, a better edge detection algorithm that can cope with reasonable lower and upper threshold values for image noise, such as the Canny edge detector, is needed. Moreover, combining this algorithm with morphological operations (dilation and erosion) produces a better outcome. Therefore, this paper proposes a method for segmenting the lung region in CXR images using the Canny edge filter and morphology. Although the filter can detect the lung edge, the final edge lines produced are still unsatisfactory. To solve this problem, the Euler number method is applied to extract the lung region before executing edge detection with the filter. The implementation produced convincing results, as most of the segmented images are close to the ground-truth images.
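The edge-plus-morphology portion of the pipeline is straightforward to sketch with OpenCV, as below; the Canny thresholds and structuring-element size are illustrative, and the Euler-number lung-region extraction that precedes this step is not reproduced.

```python
import cv2

def lung_edge_map(cxr_gray, low=30, high=90, ksize=5):
    """Canny edges on a chest X-ray followed by dilation then erosion
    (a morphological closing) to join broken edge segments."""
    edges = cv2.Canny(cxr_gray, low, high)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    dilated = cv2.dilate(edges, kernel, iterations=1)
    return cv2.erode(dilated, kernel, iterations=1)
```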

Journal ArticleDOI
TL;DR: A novel salient region model that uses the top-down cue of depth-from-focus from the same single image to guide the generation of final salient regions is proposed, and it outperforms the state-of-the-art models on three publicly available data sets.
Abstract: Recently, some global contrast-based salient region detection models have been proposed based on only the low-level feature of color. It is necessary to consider both color and orientation features to overcome their limitations, and thus improve the performance of salient region detection for images with low contrast in color and high contrast in orientation. In addition, the existing fusion methods for different feature maps, like the simple averaging method and the selective method, are not sufficiently effective. To overcome these limitations of existing salient region detection models, we propose a novel salient region model based on the bottom-up and top-down mechanisms: the color contrast and orientation contrast are adopted to calculate the bottom-up feature maps, while the top-down cue of depth-from-focus from the same single image is used to guide the generation of final salient regions, since depth-from-focus reflects the photographer's preference and knowledge of the task. A more general and effective fusion method is designed to combine the bottom-up feature maps. According to the degree-of-scattering and eccentricities of feature maps, the proposed fusion method can assign adaptive weights to different feature maps to reflect the confidence level of each feature map. The depth-from-focus of the image, as a significant top-down feature for visual attention in the image, is used to guide the salient regions during the fusion process; with its aid, the proposed fusion method can filter out the background and highlight salient regions for the image. Experimental results show that the proposed model outperforms the state-of-the-art models on three publicly available data sets.

Proceedings ArticleDOI
03 Apr 2014
TL;DR: A novel algorithm for Content Based Image Retrieval (CBIR) based on Color Edge Detection and Discrete Wavelet Transform (DWT) is described, different from the existing histogram based methods.
Abstract: Color is one of the most important low-level features used in image retrieval, and most content-based image retrieval (CBIR) systems use color as an image feature. However, image retrieval using only color features often provides very unsatisfactory results because, in many cases, images with similar colors do not have similar content. As a solution to this problem, this paper describes a novel algorithm for Content Based Image Retrieval (CBIR) based on Color Edge Detection and Discrete Wavelet Transform (DWT). This method is different from the existing histogram based methods. The proposed algorithm generates feature vectors that combine both color and edge features. This paper also uses the wavelet transform to reduce the size of the feature vector while simultaneously preserving the content details. The robustness of the system is also tested against query image alterations such as geometric deformations and noise addition. Wang's image database is used for experimental analysis and the results are reported in terms of precision and recall.

Proceedings ArticleDOI
Chulhoon Jang, Chansoo Kim, Dongchul Kim, Minchae Lee, Myoungho Sunwoo
08 Jun 2014
TL;DR: The multiple exposure technique is proposed which enhances the robustness of the color segmentation and recognition accuracy by integrating both low and normal exposure images and solves the color saturation problem and reduces false positives since the low exposure image is exposed for a short time.
Abstract: This paper proposes a traffic light recognition method based on multiple exposure images. For traffic light recognition, color segmentation is widely used to detect traffic light signals; however, the color in an image is easily affected by various illuminations and leads to incorrect recognition results. In order to overcome the problem, we propose the multiple exposure technique, which enhances the robustness of the color segmentation and the recognition accuracy by integrating both low and normal exposure images. The technique solves the color saturation problem and reduces false positives since the low exposure image is exposed for a short time. Based on candidate regions selected from the low exposure image, the status of six three- and four-bulb traffic lights in a normal exposure image is classified using a support vector machine with a histogram of oriented gradients. Our algorithm was finally evaluated in various urban scenarios, and the results show that the proposed method works robustly in outdoor environments.
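The classification stage (HOG features of candidate regions fed to a support vector machine) can be sketched as follows, assuming scikit-image and scikit-learn; the exposure control and candidate-region selection from the low-exposure image are outside this snippet, and the HOG/SVM parameters are illustrative.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    """HOG descriptors for grayscale candidate patches already resized to a
    common shape (parameters are illustrative, not the paper's settings)."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

# Training / prediction sketch, with labels encoding traffic-light states:
# clf = SVC(kernel='linear').fit(hog_features(train_patches), train_labels)
# states = clf.predict(hog_features(candidate_patches))
```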

Journal ArticleDOI
TL;DR: It is demonstrated that the dehazing technique is suitable for the challenging problem of image matching based on local feature points, and the authors are the first to present an image matching evaluation performed for hazy images.
Abstract: In this letter we present a novel strategy to enhance images degraded by the atmospheric phenomenon of haze. Our single-image-based technique does not require any geometrical information or user interaction, enhancing such images by restoring the contrast of the degraded images. The degradation of the finest details and gradients is constrained to a minimum level. Using a simple formulation derived from the lightness predictor, our contrast enhancement technique restores lost discontinuities only in regions that insufficiently represent the original chromatic contrast of the scene. The parameters of our simple formulation are optimized to preserve the original color spatial distribution and the local contrast. We demonstrate that our dehazing technique is suitable for the challenging problem of image matching based on local feature points. Moreover, we are the first to present an image matching evaluation performed for hazy images. Extensive experiments demonstrate the utility of the novel technique.

Journal ArticleDOI
TL;DR: This letter proposes a machine learning based blocking artifacts metric for JPEG images that measures the regularities of pseudo structures, using support vector regression to learn the underlying relations between structural features and perceived blocking artifacts.
Abstract: Image degradation damages genuine visual structures and causes pseudo structures. Pseudo structures are usually present with regularities. This letter proposes a machine learning based blocking artifacts metric for JPEG images by measuring the regularities of pseudo structures. Image corner, block boundary and color change properties are used to differentiate the blocking artifacts. A support vector regression (SVR) model is adopted to learn the underlying relations between these features and perceived blocking artifacts. The blocking artifacts score of a test image is predicted using the trained model. Extensive experiments demonstrate the effectiveness of the method.
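The learning stage reduces to fitting a support vector regressor from per-image feature vectors to subjective blocking scores; a minimal sketch with scikit-learn is below. The corner/block-boundary/color-change feature extraction is not reproduced, and the hyperparameters are illustrative.

```python
from sklearn.svm import SVR

def train_blocking_metric(features, subjective_scores):
    """Fit an SVR mapping pseudo-structure regularity features to perceived
    blocking-artifact scores; features is (n_images, n_features)."""
    model = SVR(kernel='rbf', C=10.0, epsilon=0.1)  # illustrative hyperparameters
    return model.fit(features, subjective_scores)

# score = train_blocking_metric(train_X, train_y).predict(test_X)
```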

Proceedings ArticleDOI
Xueyang Fu, Yue Huang, Delu Zeng, Xiao-Ping Zhang, Xinghao Ding
20 Nov 2014
TL;DR: The proposed method is the first to adopt a fusion-based approach for enhancing a single sandstorm image; the degraded image is improved through color correction, well-enhanced details and local contrast, increased global brightness and visibility, and naturalness preservation.
Abstract: In this paper, a novel enhancement approach focused on single sandstorm images is proposed. The degraded image has some problems, such as color distortion, low visibility, blur, and non-uniform luminance, because the light is absorbed and scattered by particles in the sandstorm. The proposed approach, based on fusion principles, aims to overcome the aforementioned limitations. First, the degraded image is color corrected by adopting a statistical strategy. Then two inputs, which represent different brightness levels, are derived only from the color-corrected image by applying Gamma correction. Three weight maps (sharpness, chromaticity, and prominence), which contain important features for increasing the quality of the degraded image, are computed from the derived inputs. Finally, the enhanced image is obtained by fusing the inputs with the weight maps. The proposed method is the first to adopt a fusion-based approach for enhancing a single sandstorm image. Experimental results show that the enhanced results exhibit corrected color, well-enhanced details and local contrast, increased global brightness and visibility, and good naturalness preservation. Moreover, the proposed algorithm consists mostly of per-pixel operations, which makes it appropriate for real-time applications.
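A compressed sketch of the fusion skeleton (two Gamma-corrected inputs, per-pixel weights, weighted blending); the single stand-in weight here is only a placeholder for the paper's sharpness, chromaticity, and prominence maps, and the statistical color-correction step is omitted.

```python
import numpy as np

def fusion_enhance(img, gammas=(0.5, 1.5), eps=1e-6):
    """Derive two differently exposed inputs by Gamma correction, weight them
    per pixel, and fuse. Assumes img is RGB in [0, 1]; the weight is a crude
    stand-in for the paper's three weight maps."""
    inputs = [np.power(img, g) for g in gammas]
    weights = []
    for x in inputs:
        lum = x.mean(axis=2)
        # Favor pixels whose luminance is well exposed (near mid-gray)
        weights.append(np.exp(-((lum - 0.5) ** 2) / (2 * 0.25 ** 2)))
    wsum = np.sum(weights, axis=0) + eps
    fused = sum(w[..., None] * x for w, x in zip(weights, inputs)) / wsum[..., None]
    return np.clip(fused, 0.0, 1.0)
```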

Journal ArticleDOI
TL;DR: In this paper, the authors propose a method of enhancing the quality of underwater images consisting of two stages, a contrast correction technique and a color correction technique, in which the image is first converted into the hue-saturation-value (HSV) color model.
Abstract: The quality of underwater images is poor due to the properties of water and its impurities. The properties of water cause attenuation of light traveling through the water medium, resulting in low contrast, blur, inhomogeneous lighting, and color diminishing of the underwater images. This paper proposes a method of enhancing the quality of underwater images. The proposed method consists of two stages. At the first stage, the contrast correction technique is applied to the image: the modified Von Kries hypothesis is applied, and the image is stretched into two different intensity images at the average value with respect to the Rayleigh distribution. At the second stage, the color correction technique is applied, where the image is first converted into the hue-saturation-value (HSV) color model. The modification of the color component increases the image color performance. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction.

Journal ArticleDOI
TL;DR: An improved nonlinear IHS (intensity, hue, saturation; iNIHS) color space and related color transformations are proposed in this paper to solve the gamut problem without appealing to color clipping.
Abstract: An image fusion method must ideally preserve both the detail of the panchromatic image and the color of the multispectral image. Existing image fusion methods incur the gamut problem of creating new colors which fall out of the RGB cube. These methods solve the problem by color clipping which yields undesirable color distortions and contrast reductions. An improved nonlinear IHS (intensity, hue, saturation; iNIHS) color space and related color transformations are proposed in this paper to solve the gamut problem without appealing to color clipping. The iNIHS space includes two halves, one being constructed from the lower half of the RGB cube by RGB to IHS transformations, and the other from the upper half of the RGB cube by CMY to IHS transformations. While incurring no out-of-gamut colors, desired intensity substitutions and additions in substitutive and additive image fusions, respectively, are all achievable, with the saturation component regulated within the maximum attainable range. Good experimental results show the feasibility of the proposed method.

Book
02 May 2014
TL;DR: This work proposes strategies and solutions to tackle the problem of building photo-mosaics of very large underwater optical surveys, presenting contributions to the image preprocessing, enhancing and blending steps, and resulting in an improved visual quality of the final photo- mosaic.
Abstract: This work proposes strategies and solutions to tackle the problem of building photo-mosaics of very large underwater optical surveys, presenting contributions to the image preprocessing, enhancing and blending steps, and resulting in an improved visual quality of the final photo-mosaic. The text opens with a comprehensive review of mosaicing and blending techniques, before proposing an approach for large scale underwater image mosaicing and blending. In the image preprocessing step, a depth dependent illumination compensation function is used to solve the non-uniform illumination appearance due to light attenuation. For image enhancement, the image contrast variability due to different acquisition altitudes is compensated using an adaptive contrast enhancement based on an image quality reference selected through a total variation criterion. In the blending step, a graph-cut strategy operating in the image gradient domain over the overlapping regions is suggested. Next, an out-of-core blending strategy for very large scale photo-mosaics is presented and tested on real data. Finally, the performance of the approach is evaluated and compared with other approaches.