Topic

Contourlet

About: Contourlet is a research topic. Over the lifetime, 3533 publications have been published within this topic receiving 38980 citations.


Papers
Proceedings ArticleDOI
12 Dec 2013
TL;DR: This paper describes the basic tenets of the Laplacian pyramid decomposition, analyzes the drawbacks of using user-defined threshold values to distinguish image detail from edges, and proposes obtaining the threshold value directly from global information.
Abstract: Image enhancement is a low-level image processing task for which many methods exist, and algorithms based on the Laplacian pyramid are an essential and crucial part of the field. The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. In this paper, we describe the basic tenets of the Laplacian pyramid decomposition, analyze the drawbacks of using user-defined threshold values to distinguish image detail from edges, and propose obtaining the threshold value directly from global information. Building on these results, we limit the number of remapped layers, which reduces both running time and unnecessary computation. As we demonstrate, our method produces consistently high-quality results for image detail enhancement.
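The enhancement described above operates on the detail bands of a Laplacian pyramid. As a point of reference only, here is a minimal sketch of the pyramid decomposition and reconstruction using OpenCV; the level count is arbitrary, and the paper's global thresholding and layer-limiting steps are not reproduced.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into band-pass detail layers plus a low-pass residual."""
    gaussian = [img.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))

    pyramid = []
    for i in range(levels):
        # Upsample the coarser level back to the finer level's size
        up = cv2.pyrUp(gaussian[i + 1],
                       dstsize=(gaussian[i].shape[1], gaussian[i].shape[0]))
        pyramid.append(gaussian[i] - up)   # band-pass detail at scale i
    pyramid.append(gaussian[-1])           # coarsest low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Collapse the pyramid back into a full-resolution image."""
    img = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return img
```

Detail-enhancement methods of this kind typically amplify (remap) the band-pass layers, subject to a threshold, before the reconstruction step.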

10 citations

Patent
09 Oct 2013
TL;DR: In this paper, a pulse coupled neural network (PCNN) image enhancement algorithm and device based on the Contourlet transformation is presented; the PCNN enhancement is applied to the bandpass subbands, while the low-frequency subband is left unchanged.
Abstract: The invention provides a pulse coupled neural network (PCNN) image enhancement algorithm and device based on the Contourlet transformation. The algorithm mainly comprises the following steps: S1, converting the image to be processed from the red, green and blue (RGB) color space to obtain a hue component H, a luminance component I and a saturation component S; S2, decomposing the luminance component I with the Contourlet transformation to obtain a low-frequency subband image and a series of multi-scale, multidirectional bandpass subband contour image sequences; S3, feeding the bandpass subband contour image sequences obtained from the decomposition to PCNN enhancement operators as external inputs, so as to obtain enhanced bandpass subband contour image sequences; S4, combining the enhanced bandpass subband contour image sequences with the original low-pass subband image and applying the inverse Contourlet transformation to obtain an enhanced luminance component I'; S5, adjusting the saturation component S to obtain a new saturation component S'; and S6, converting the hue component H, the new luminance component I' and the new saturation component S' back to the RGB color space to obtain the enhanced image.
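Steps S1 through S6 describe a per-component pipeline. The skeleton below sketches that flow under loose assumptions: HSV is used in place of the HSI space, and contourlet_decompose, contourlet_reconstruct and pcnn_enhance are hypothetical placeholders, since the patent's Contourlet and PCNN operators are not spelled out here.

```python
import numpy as np
from skimage import color

def enhance_image(rgb, contourlet_decompose, contourlet_reconstruct,
                  pcnn_enhance, saturation_gain=1.2):
    """Pipeline skeleton for steps S1-S6; the three callables are hypothetical
    stand-ins for the patent's Contourlet and PCNN operators."""
    # S1: move to a hue/saturation/intensity-style space (HSV used here)
    hsv = color.rgb2hsv(rgb)
    h, s, i = hsv[..., 0], hsv[..., 1], hsv[..., 2]

    # S2: decompose the luminance channel into a low-pass subband and
    #     multi-scale, multi-directional bandpass subbands
    lowpass, bandpass = contourlet_decompose(i)

    # S3: feed each bandpass subband to a PCNN enhancement operator
    bandpass_enhanced = [pcnn_enhance(sub) for sub in bandpass]

    # S4: recombine with the untouched low-pass subband and invert the transform
    i_new = contourlet_reconstruct(lowpass, bandpass_enhanced)

    # S5: adjust saturation
    s_new = np.clip(s * saturation_gain, 0.0, 1.0)

    # S6: convert back to RGB
    return color.hsv2rgb(np.stack([h, s_new, np.clip(i_new, 0.0, 1.0)], axis=-1))
```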

10 citations

Proceedings ArticleDOI
30 Oct 2009
TL;DR: A comparison of the proposed method with the existing traditional method is presented, which confirms that interpolation and decimation operations do affect recognition accuracy when texture and direction information are used as features.
Abstract: We propose a new normalization method for iris recognition, different from the conventional one in which the annular iris region is unwrapped to a rectangular block under polar coordinates. With this method we investigate, for the first time, the effect that the interpolation and decimation in the conventional normalization method have on the recognition rate. We use the original texture to fill the pupil area, so that a novel normalized image can be obtained with the geometric structure and directional information of the original iris image well preserved, which enables us to choose simpler features than before. Subsequently, we extract multi-direction and multi-scale features of the normalized iris image by the contourlet transform and adopt an SVM to classify the features. Experimental results validate the improvement in recognition rate. In the conventional normalization method, interpolation and decimation operations are required; thus the geometric structure and the directional information of the iris image are altered, and the recognition performance may be affected if texture and directional information are adopted as features. However, this effect has not been studied previously. The purpose of this letter is to study how the distortion introduced by the polar coordinate transform affects the recognition rate. In order to validate and attempt to alleviate this effect, we propose a different normalization method that does not adopt the polar coordinate transformation, so the original geometric structure and directional information are preserved. We also propose to use a simple feature derived from multi-scale and multi-direction local information, so that the effect of the normalization step on the recognition rate is highlighted. After this preprocessing procedure, we use the contourlet transform to extract multi-direction and multi-scale features of the normalized iris image, and then use a support vector machine (SVM) to classify the features and give the classification result (8). The contourlet transform is a typical multi-scale and multi-direction system (7): it contains a Laplacian pyramid to achieve the multi-scale property and a directional filter bank to achieve the multi-direction characteristic. SVMs belong to a family of generalized linear classifiers; a special property is that they simultaneously minimize the empirical classification error and maximize the geometric margin. A comparison of our proposed method with the existing traditional method is presented, which confirms that interpolation and decimation operations will indeed affect recognition accuracy when texture and direction information are used as features. Experimental results show that, with this simple feature, the recognition rate can be improved by approximately 30% compared with conventional methods.
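For the classification stage described above, a rough sketch is given below: sub-band energy statistics serve as features and a support vector machine as the classifier. Here contourlet_subbands is a hypothetical placeholder for the contourlet transform, and the mean/standard-deviation features and RBF kernel are assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.svm import SVC

def subband_features(img, contourlet_subbands):
    """Mean absolute value and standard deviation of each directional subband."""
    feats = []
    for sub in contourlet_subbands(img):        # hypothetical transform call
        feats.extend([np.mean(np.abs(sub)), np.std(sub)])
    return np.asarray(feats)

def train_iris_classifier(train_imgs, labels, contourlet_subbands):
    """Fit an SVM on subband statistics of normalized iris images."""
    X = np.stack([subband_features(im, contourlet_subbands) for im in train_imgs])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # kernel and C are assumptions
    clf.fit(X, labels)
    return clf
```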

10 citations

Journal ArticleDOI
TL;DR: A novel image sharpening detection method based on multiresolution overshoot artifact analysis (MOAA) is proposed, which finds that, although undergoing the same sharpening operation, an edge with a large slope presents a stronger overshoot artifact than one with a small slope.
Abstract: With the wide use of sophisticated photo editing tools, digital image manipulation has become very convenient, which makes the detection of image tampering significant. Image sharpening, which aims to enhance the contrast of edges in an image, is a ubiquitous image tampering operation. The detection of image sharpening can serve as a reliable clue for image forgery. In this paper, we propose a novel image sharpening detection method based on multiresolution overshoot artifact analysis (MOAA). By establishing the relationship between the overshoot artifact strength and the slope of a sharpened edge, we find that, although undergoing the same sharpening operation, an edge with a large slope presents a stronger overshoot artifact than one with a small slope. Based on this finding, we use the nonsubsampled contourlet transform (NSCT) to classify the image edge points into three categories, i.e., weak, middle and strong edge points, and measure the overshoot artifact of each category respectively. A cascaded decision strategy is adopted to decide whether an image has been sharpened or not. Experimental results on digital images with various sharpening operators demonstrate the superiority of our proposed method when compared with state-of-the-art approaches.
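The edge-point classification is the part of the method that is easiest to illustrate. The sketch below splits edge pixels into weak, middle and strong sets by edge strength; note that the original method uses the NSCT for this, whereas a Sobel gradient magnitude is used here as a simple stand-in, and the thresholds are placeholders.

```python
import numpy as np
from scipy import ndimage

def classify_edge_points(gray, weak_th=10.0, mid_th=30.0, strong_th=60.0):
    """Split edge pixels into weak / middle / strong sets by edge strength.
    Stand-in for the paper's NSCT-based classification; thresholds are arbitrary."""
    g = gray.astype(np.float32)
    mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))

    weak = (mag > weak_th) & (mag <= mid_th)
    middle = (mag > mid_th) & (mag <= strong_th)
    strong = mag > strong_th
    return weak, middle, strong
```

In the paper, an overshoot-artifact measure is then computed for each category and fed to the cascaded decision that labels the image as sharpened or not.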

10 citations

Journal ArticleDOI
TL;DR: Extensive simulation results and comparisons show that the algorithm achieves a good trade-off among invisibility, robustness and capacity, preserving image quality while effectively resisting common image-processing, geometric and combined attacks, with high normalized similarity in almost all cases.
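Only the TL;DR of this paper is preserved here, so the sketch below simply shows how the normalized similarity (normalized correlation) between the embedded and the extracted watermark is commonly computed; it is a standard robustness metric, not this paper's specific embedding algorithm.

```python
import numpy as np

def normalized_similarity(w_embedded, w_extracted):
    """Normalized correlation between embedded and extracted watermarks;
    values near 1 indicate the watermark survived the attack."""
    a = np.asarray(w_embedded, dtype=np.float64).ravel()
    b = np.asarray(w_extracted, dtype=np.float64).ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```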

10 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Image processing: 229.9K papers, 3.5M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 84% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Artificial neural network: 207K papers, 4.5M citations, 81% related
Performance
Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 36
2022: 99
2021: 75
2020: 109
2019: 155
2018: 164