
Contourlet

About: Contourlet is a research topic. Over the lifetime, 3533 publications have been published within this topic receiving 38980 citations.


Papers
Proceedings ArticleDOI
01 Jan 2008
TL;DR: The results show that the new robust watermarking algorithm using the non-redundant contourlet transform is highly robust to different kinds of attacks, including non-geometrical and geometrical attacks.

Abstract: In contrast to conventional methods operating in the wavelet domain, a new robust watermarking algorithm using the non-redundant contourlet transform is presented in this paper. We exploit the high degree of directionality and anisotropy of this transform to obtain a sparser image representation. Through experiments, we find that most of the energy relations between parent and child non-redundant contourlet coefficients remain invariant before and after JPEG compression. Performance improvement is obtained by embedding a watermark through modulation of these energy relations. Test results based on 16 sets of attacks applied to 6 images are reported. The results show that the non-redundant contourlet method is highly robust to different kinds of attacks, including non-geometrical and geometrical attacks such as JPEG 2000 compression (down to QF=10), circular shifting by 400 pixels, and contrast stretching (as low as 20%). Comparisons with two other wavelet methods further demonstrate the potential of the non-redundant contourlet transform in digital watermarking applications.
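The energy-relation embedding described in this abstract can be illustrated with a minimal sketch. The functions below assume a generic parent/child grouping of multiscale coefficients; the grouping, the strength parameter alpha, and the detection threshold are illustrative placeholders rather than the paper's exact scheme. A bit is encoded by rescaling the parent coefficient so that the parent-to-children energy ratio falls above or below a threshold, the kind of relation the abstract reports as stable under JPEG compression.

```python
import numpy as np

def embed_bit(parent, children, bit, alpha=1.15):
    """Embed one watermark bit by modulating the parent/children energy ratio.

    `parent` is a single coefficient from a coarser directional subband and
    `children` are the corresponding finer-scale coefficients (hypothetical
    grouping; the actual parent-child mapping is defined by the transform's
    pyramid/directional structure).
    """
    e_parent = parent ** 2
    e_children = np.sum(children ** 2) + 1e-12
    ratio = e_parent / e_children
    if bit == 1:
        # Push the ratio above a target so the bit survives compression.
        target = max(ratio, alpha)
    else:
        target = min(ratio, 1.0 / alpha)
    scale = np.sqrt(target / ratio)
    return parent * scale  # rescaled parent coefficient encodes the bit

def extract_bit(parent, children, threshold=1.0):
    """Recover the bit by thresholding the same energy ratio."""
    e_parent = parent ** 2
    e_children = np.sum(children ** 2) + 1e-12
    return 1 if e_parent / e_children > threshold else 0
```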

14 citations

Patent
03 Aug 2011
TL;DR: A neural network based sonar image super-resolution reconstruction method is presented, which performs super-resolution reconstruction on a sonar image r.

Abstract: The invention discloses a neural network based sonar image super-resolution reconstruction method for performing super-resolution reconstruction on a sonar image r. The method comprises the following steps: performing nonsubsampled contourlet decomposition and neural network training on a high-resolution sonar image and four degraded sample images; performing cubic interpolation on the sonar image r and taking the interpolated image as the high-resolution low-pass sub-band coefficient; performing nonsubsampled contourlet decomposition on the sonar image r and inputting the sub-band coefficient of each band-pass direction into the trained neural network to acquire the high-resolution sub-band coefficient of each band-pass direction; and finally performing nonsubsampled contourlet reconstruction to acquire the super-resolution reconstructed sonar image R. The image reconstructed at super resolution has better edge and detail preservation and a better visual effect, and facilitates subsequent processing such as seabed surveying and underwater target positioning and recognition.
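A rough sketch of the reconstruction pipeline described above is given below. The single low-pass/band-pass split is a stand-in for the real nonsubsampled contourlet decomposition (which yields several directional band-pass subbands per scale), and `predict_bandpass` stands for the trained neural network; its training on the degraded sample images is omitted.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def toy_bandpass_decompose(img, sigma=2.0):
    """Stand-in for the nonsubsampled contourlet decomposition:
    a single low-pass / band-pass split."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def super_resolve(lr_img, predict_bandpass, factor=2):
    """Sketch of the reconstruction step: cubic interpolation supplies the
    high-resolution low-pass subband, a (hypothetical) trained network maps
    the band-pass coefficients to high resolution, and the two parts are
    recombined."""
    # Cubic interpolation of the input serves as the high-resolution
    # low-pass subband coefficient.
    hr_low = zoom(lr_img, factor, order=3)
    _, lr_band = toy_bandpass_decompose(lr_img)
    # Upsample the band-pass subband and let the trained network predict
    # its high-resolution counterpart.
    hr_band = predict_bandpass(zoom(lr_band, factor, order=3))
    # Inverse of the toy decomposition: sum of low-pass and band-pass parts.
    return hr_low + hr_band
```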

13 citations

Proceedings ArticleDOI
25 Feb 2013
TL;DR: A novel method for adaptive fusion of multimodal surveillance images, based on the Non-Subsampled Contourlet Transform (NSCT), which improves performance in Visual Sensor Networks (VSN) and achieves better visual quality and objective metrics than state-of-the-art methods.

Abstract: In this paper, we present a novel method for adaptive fusion of multimodal surveillance images based on the Non-Subsampled Contourlet Transform (NSCT), which improves performance in Visual Sensor Networks (VSN). In sensor networks, energy consumption and bandwidth are the main factors that determine the lifetime of the sensors. In order to reduce the energy and bandwidth used in transmission, the proposed method uses compressive sensing (CS), which compresses the input data efficiently during the sampling process. Since CS is more efficient for sparse signals, each sensor image is first decomposed into sparse and dense components. We introduce the contourlet transform for this decomposition because of its ability to capture and represent the smooth boundaries of objects in images, so that the reconstructed images have better quality. The reconstructed input images are fused using an adaptive NSCT-based algorithm in a centralized server. The improvement in the quality of the fused image is achieved by using an image fusion metric and a search algorithm to assign optimum weights to the various regions of the segmented source images. Experimental results show no significant change in the quality of the fused images with and without compression, and the proposed method achieves better visual quality and objective metrics than state-of-the-art methods.
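Two of the building blocks mentioned in this abstract are easy to sketch: the compressive-sensing measurement applied to the sparse component at each sensor, and a region-weighted fusion rule at the server. The sensing-matrix construction and the per-pixel weight map below are generic illustrations; the paper's actual recovery algorithm and weight-search procedure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def cs_measure(sparse_component, m):
    """Compressive-sensing measurement of the flattened sparse component:
    y = Phi @ x with a random Gaussian sensing matrix (m measurements).
    Recovery at the server (e.g. by basis pursuit or OMP) is omitted."""
    x = sparse_component.ravel()
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x, phi

def weighted_fuse(subband_a, subband_b, w):
    """Toy region-weighted fusion rule: blend two subbands with a per-pixel
    weight map `w` in [0, 1]. The paper instead searches for optimum region
    weights guided by an image fusion metric."""
    return w * subband_a + (1.0 - w) * subband_b
```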

13 citations

Book ChapterDOI
28 Jun 2012
TL;DR: It was found that the performance of the proposed fusion method is better than that of wavelet transform (discrete wavelet transform and stationary wavelet transform) based image fusion methods.

Abstract: Image fusion is an emerging area of research with applications in medical imaging, remote sensing, satellite imaging, target tracking, concealed weapon detection and biometrics. In the present work, we propose a new edge-preserving image fusion method based on the contourlet transform. Because the contourlet transform has high directionality and anisotropy, it gives a better image representation than wavelet transforms and represents salient features of images such as edges, curves and contours well, so it is well suited for image fusion. We performed experiments on several image data sets; results are shown for two datasets of multifocus images and one dataset of medical images. On the basis of the experimental results, the performance of the proposed fusion method is better than that of wavelet transform (discrete wavelet transform and stationary wavelet transform) based image fusion methods. We verified the goodness of the proposed fusion algorithm with well-known image fusion measures (entropy, standard deviation, mutual information (MI) and $Q_{AB}^{F}$). The fusion evaluation parameters also imply that the proposed edge-preserving image fusion method is better than wavelet transform based image fusion methods.
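Two of the fusion measures cited in this abstract, entropy and mutual information, can be computed directly from grey-level histograms. The sketch below shows one standard way to estimate them for 8-bit images; the bin count and value range are conventional choices, not taken from the paper.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    """Mutual information between a source image and the fused image,
    estimated from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))
```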

13 citations

Proceedings ArticleDOI
08 Apr 2011
TL;DR: Comparisons between different image fusion methods such as the Brovey method, the proposed RGB method, and Haar and Daubechies wavelet methods are presented, and the final results indicate which method is suitable for a given application.

Abstract: An image fusion algorithm based on the wavelet transform for fusing multisensor images is presented. When images are merged in wavelet space, different frequency ranges are processed differently, so information from the original images can be merged adequately and the abilities of information analysis and feature extraction are improved. Extensive experiments are presented, including the fusion of registered multiband images, multispectral images, multifocus digital camera images, multisensor VISMR images and medical CT/MRI images. The evolution of image fusion research began with simple image fusion attempts, followed by pyramid-decomposition-based image fusion and wavelet-transform-based image fusion. Several types of pyramid decomposition have been used or developed for image fusion, such as the Laplacian pyramid, the ratio-of-low-pass pyramid and the gradient pyramid; since then, image fusion has received increasing attention. More recently, with the development of wavelet theory, wavelet multiscale decomposition has been applied in place of pyramid decomposition for image fusion; in fact, the wavelet transform can be regarded as one special type of pyramid decomposition. This paper presents a comparison between different image fusion methods such as the Brovey method, the proposed RGB method, and Haar and Daubechies wavelet methods. The final results indicate which method is suitable for a given application.
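As a concrete baseline for the wavelet-based fusion this paper compares, the sketch below uses the PyWavelets library with the Haar or a Daubechies wavelet. The max-abs detail selection rule and the Brovey formulation shown are common textbook choices, not necessarily the exact rules used in the paper.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="haar", levels=3):
    """Fuse two registered greyscale images in the wavelet domain:
    average the approximation coefficients and keep the larger-magnitude
    detail coefficient at every position."""
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=levels)

    fused = [(ca[0] + cb[0]) / 2.0]          # approximation: average
    for da, db in zip(ca[1:], cb[1:]):       # details: max-abs selection
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

def brovey_fuse(rgb, pan, eps=1e-6):
    """One common Brovey-transform formulation: rescale each multispectral
    band by the panchromatic image divided by the sum of the bands."""
    total = rgb.sum(axis=2, keepdims=True) + eps
    return rgb * (pan[..., None] / total)
```

For Daubechies wavelets, pass e.g. wavelet="db2" to wavelet_fuse; the fusion rule itself is unchanged.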

13 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 89% related
Image processing: 229.9K papers, 3.5M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 84% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Artificial neural network: 207K papers, 4.5M citations, 81% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    36
2022    99
2021    75
2020    109
2019    155
2018    164