Topic

Contourlet

About: Contourlet is a research topic. Over the lifetime, 3533 publications have been published within this topic receiving 38980 citations.


Papers
Journal ArticleDOI
TL;DR: The proposed improved image denoising method removes Gaussian white noise more effectively, attains a higher PSNR, and preserves image texture and detail more clearly, yielding a better visual result.

15 citations

Journal ArticleDOI
TL;DR: The experimental results show that the contourlet HMM-PCNN model proposed in this paper is superior to the contourlet hidden Markov tree model and the wavelet threshold method.
Abstract: In this paper, we propose a novel sparse-representation model for image denoising that we call an adaptive contourlet hidden Markov model (HMM)-pulse-coupled neural network (PCNN). We first apply a contourlet transform to decompose a noisy image into subband coefficients at various directions and scales. The contourlet transform emulates the sparse representation behavior of human visual perception extremely well, owing to its multiscale characteristics, geometric features, and bandpass properties. Second, we use an HMM to build a statistical model of the coefficient relationships within and across bands and scales, and train it with an expectation-maximization algorithm to obtain the state probabilities. The result comprises the state, scale, direction, and position of each coefficient, the noisy image, and the parameter set of the HMM. Third, we feed the state probabilities into the PCNN, which adaptively optimizes the HMM parameters and yields better estimates of the clean-image coefficients. Finally, we cast image denoising as a Bayesian posterior probability estimation problem and reconstruct the denoised image from the clean coefficients obtained by the proposed method. The experimental results show that the contourlet HMM-PCNN model proposed in this paper is superior to the contourlet hidden Markov tree model and the wavelet threshold method.
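The full HMM-PCNN pipeline is involved, but the general transform-shrink-reconstruct shape of multiscale denoising can be sketched compactly. The code below is a hedged, minimal stand-in: PyWavelets replaces a contourlet toolbox and a BayesShrink-style soft threshold replaces the HMM/PCNN posterior estimation, so it illustrates the structure of the approach rather than the paper's actual method.

```python
# Minimal multiscale shrinkage sketch (NOT the paper's HMM-PCNN method).
# PyWavelets stands in for a contourlet toolbox; BayesShrink-style soft
# thresholding stands in for the HMM/PCNN posterior estimation step.
import numpy as np
import pywt

def denoise_multiscale(noisy, wavelet="db8", levels=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=levels)
    # Estimate the noise standard deviation from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]  # keep the coarse approximation untouched
    for (cH, cV, cD) in coeffs[1:]:
        shrunk = []
        for band in (cH, cV, cD):
            # BayesShrink threshold: sigma^2 / (estimated clean-signal std).
            sigma_band = np.sqrt(max(np.var(band) - sigma**2, 1e-12))
            t = sigma**2 / sigma_band
            shrunk.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```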

15 citations

Dissertation
14 Dec 2011
TL;DR: Three types of dictionaries, which correspond to three types of transforms/representations, will be studied for their applicability in some image analysis and pattern recognition tasks: Radon transform, unit disk-based moments, and sparse representation.
Abstract: One of the main requirements in many signal processing applications is to have a "meaningful representation" in which a signal's characteristics are readily apparent. For example, for recognition, the representation should highlight salient features; for denoising, it should efficiently separate signal and noise; and for compression, it should capture a large part of the signal using only a few coefficients. Interestingly, despite these seemingly different goals, good performance in signal processing applications generally has its roots in the appropriateness of the adopted representation. Representing a signal involves designing a set of elementary generating signals, or a dictionary of atoms, which is used to decompose the signal. For many years, dictionary design has been pursued by researchers in various fields of application: the Fourier transform was proposed to solve the heat equation; the Radon transform was created for the reconstruction problem; the wavelet transform was developed for piecewise-smooth, one-dimensional signals with a finite number of discontinuities; and the contourlet transform was designed to efficiently represent two-dimensional signals made of smooth regions separated by smooth boundaries. The dictionaries developed to date can be roughly classified into two families: mathematical models of the data and sets of realizations of the data. Dictionaries of the first family are characterized by analytical formulations, which can sometimes be implemented efficiently; the representation coefficients of a signal in such a dictionary are obtained by performing a signal transform. Dictionaries of the second family, which are generally overcomplete, deliver greater flexibility and the ability to adapt to specific signal data; they are the result of much more recent dictionary design approaches in which dictionaries are learned from the data they are meant to represent. The existence of many dictionaries naturally leads to the problem of selecting the most appropriate one for representing signals in a given situation. The selected dictionary should have distinctive and beneficial properties that are preferable in the targeted applications. Put differently, it is the actual application that drives the selection of the dictionary, not the reverse. In the framework of this thesis, three types of dictionaries, corresponding to three types of transforms/representations, are studied for their applicability in image analysis and pattern recognition tasks: the Radon transform, unit disk-based moments, and sparse representation. The Radon transform and unit disk-based moments address invariant pattern recognition problems, whereas sparse representation addresses image denoising, separation, and classification problems. The thesis contains a number of theoretical contributions accompanied by numerous validating experimental results. For the Radon transform, it discusses possible directions for defining invariant pattern descriptors, leading to two descriptors that are fully invariant to rotation, scaling, and translation. For unit disk-based moments, it presents a unified view of the strategies used to define unit disk-based orthogonal moments, leading to four generic polar harmonic moments and strategies for their fast computation. For sparse representation, it uses sparsity-based techniques for denoising and separation of graphical document images, and proposes a representation framework that balances three criteria, sparsity, reconstruction error, and discrimination power, for classification.
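As a concrete illustration of the second dictionary family (dictionaries learned from realizations of the data), the sketch below denoises an image with patch-based dictionary learning and sparse coding via scikit-learn. It is a minimal illustration under assumed parameters (patch size, number of atoms, sparsity level), not the formulation used in the thesis.

```python
# Hedged sparse-representation denoising sketch: learn an overcomplete
# dictionary from noisy patches, sparse-code every patch, and average the
# overlapping reconstructions. Parameters here are illustrative only.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def sparse_denoise(noisy, patch_size=(7, 7), n_atoms=100, n_nonzero=2):
    # Learn the dictionary from a random subset of patches of the noisy image.
    patches = extract_patches_2d(noisy, patch_size,
                                 max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1)
    X = X - X.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0).fit(X)
    # Sparse-code all patches; the few active atoms cannot represent the
    # noise, so the patch-averaged reconstruction is denoised.
    all_patches = extract_patches_2d(noisy, patch_size)
    Y = all_patches.reshape(len(all_patches), -1)
    m = Y.mean(axis=1, keepdims=True)
    code = dico.transform(Y - m)
    recon = (code @ dico.components_ + m).reshape(all_patches.shape)
    return reconstruct_from_patches_2d(recon, noisy.shape)
```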

15 citations

Journal ArticleDOI
TL;DR: Both qualitative and quantitative evaluations show that the combination of a Least Squares Support Vector Machine (LSSVM) classifier with the statistical-parametric-framework-based reduced feature representation in the Non-Subsampled Contourlet Transform (NSCT) domain, using "pyrexc" and "sinc" filters, gives the best retrieval performance.
Abstract: Recently, Content Based Image Retrieval (CBIR) has emerged as an active research area with applications in various fields. Several state-of-the-art CBIR systems exist that use both spatial and transform features as input. However, hardly any detailed study has been reported so far on the effectiveness of different transform-domain features in the CBIR paradigm. This motivates the current article, in which we present an extensive comparative assessment of five different transform-domain features considering various filter combinations. Three different feature representation schemes and three different classifiers have been used for this purpose. Extensive experiments on four widely used benchmark image databases (Oliva, Caltech101, Caltech256 and MIRFlickr25000) were conducted to determine the best combination of transform, filters, feature representation and classifier. Furthermore, we also attempt to discover the optimal features from the best combinations using the maximal information compression index (MICI). Both qualitative and quantitative evaluations show that the combination of a Least Squares Support Vector Machine (LSSVM) classifier with the statistical-parametric-framework-based reduced feature representation in the Non-Subsampled Contourlet Transform (NSCT) domain, using "pyrexc" and "sinc" filters, gives the best retrieval performance.
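A hedged sketch of such a transform-domain retrieval pipeline is given below. Since no standard Python NSCT implementation is assumed here, PyWavelets stands in for the NSCT and scikit-learn's SVC stands in for LSSVM; only the overall pattern, statistical features computed per subband and fed to a classifier, mirrors the paper.

```python
# Hedged transform-domain CBIR sketch: per-subband statistics as features,
# plus a kernel classifier. The transform, filters and classifier here are
# stand-ins for the paper's NSCT + LSSVM combination.
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_features(image, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = [coeffs[0].mean(), coeffs[0].std()]      # coarse approximation
    for detail in coeffs[1:]:
        for band in detail:                          # horizontal, vertical, diagonal
            feats.extend([np.abs(band).mean(),       # first-order statistic
                          band.std()])               # energy-like statistic
    return np.asarray(feats)

# Usage sketch: build a feature matrix for a labelled image collection, train
# the classifier, then rank database images by the classifier's scores.
# images, labels = ...  (grayscale arrays and their category ids, assumed given)
# X = np.stack([subband_features(im) for im in images])
# clf = SVC(kernel="rbf", probability=True).fit(X, labels)
```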

15 citations

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This paper presents a two-stage fusion framework using the cascaded combination of the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NSCT) domains for images acquired using two distinct medical imaging modalities.
Abstract: Multimodal medical image fusion is used to minimize redundancy while increasing the necessary information from input images obtained using different medical imaging sensors. The sole aim is to yield a single fused image that is more informative for efficient analysis. This paper presents a two-stage fusion framework using the cascaded combination of the discrete wavelet transform (DWT) and the non-subsampled contourlet transform (NSCT) domains for images acquired using two distinct medical imaging modalities (i.e., magnetic resonance imaging and computed tomography). The first stage employs a principal component analysis algorithm in the DWT domain to minimize redundancy. In the second stage, a maximum fusion rule is applied in the NSCT domain to enhance the contrast of the diagnostic features. A quantitative analysis of the fused image is carried out using fusion metrics.
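The sketch below illustrates the two fusion ideas, PCA-based weighting of the coarse band and a maximum-magnitude rule on the detail bands, carried out entirely in the DWT domain for simplicity; the paper's second stage actually operates in the NSCT domain, for which no standard Python implementation is assumed here.

```python
# Hedged single-domain sketch of PCA-weighted + max-rule image fusion.
# Both inputs are assumed to be registered grayscale images of equal size.
import numpy as np
import pywt

def fuse_dwt_pca_max(img_a, img_b, wavelet="db2"):
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)

    # Stage 1: PCA weights from the 2x2 covariance of the approximation bands.
    cov = np.cov(np.vstack([cA1.ravel(), cA2.ravel()]))
    w = np.linalg.eigh(cov)[1][:, -1]          # principal eigenvector
    w = np.abs(w) / np.abs(w).sum()
    fused_A = w[0] * cA1 + w[1] * cA2

    # Stage 2: maximum fusion rule, keep the detail coefficient with the
    # larger magnitude at every position (preserves the stronger edge).
    def max_rule(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused_details = (max_rule(cH1, cH2), max_rule(cV1, cV2), max_rule(cD1, cD2))
    return pywt.idwt2((fused_A, fused_details), wavelet)
```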

15 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Image processing: 229.9K papers, 3.5M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 84% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Artificial neural network: 207K papers, 4.5M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    36
2022    99
2021    75
2020    109
2019    155
2018    164