Topic

Contourlet

About: Contourlet is a research topic. Over the lifetime, 3533 publications have been published within this topic, receiving 38980 citations.


Papers
Journal Article • DOI
TL;DR: A minimum regional cross-gradient method is proposed; the cross-gradient is obtained by calculating the gradient between each pixel of the bandpass subbands and the adjacent pixel in the fused image of the low-frequency components.
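The TL;DR gives only the outline of the rule, so the NumPy sketch below is one possible reading of it, not the authors' implementation: `cross_gradient` measures the difference between a bandpass coefficient and the neighbouring pixels of the fused low-frequency image, and `select_min_regional` keeps, per block, the candidate subband with the smallest regional cross-gradient. The 4-neighbourhood, the block size and all function names are illustrative assumptions.

```python
import numpy as np

def cross_gradient(band, fused_low):
    """Sum of absolute differences between each bandpass coefficient and the
    four adjacent pixels of the fused low-frequency image (same shape assumed)."""
    padded = np.pad(fused_low, 1, mode="edge")
    neighbours = [padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
                  padded[1:-1, :-2], padded[1:-1, 2:]]   # left, right
    return sum(np.abs(band - n) for n in neighbours)

def select_min_regional(bands, fused_low, block=8):
    """Per block, keep the candidate bandpass subband whose regional
    cross-gradient is minimal (one interpretation of the TL;DR above)."""
    scores = [cross_gradient(b, fused_low) for b in bands]
    fused = np.zeros_like(fused_low)
    for i in range(0, fused_low.shape[0], block):
        for j in range(0, fused_low.shape[1], block):
            k = int(np.argmin([s[i:i+block, j:j+block].sum() for s in scores]))
            fused[i:i+block, j:j+block] = bands[k][i:i+block, j:j+block]
    return fused

# Toy usage with two candidate bandpass subbands of a 64x64 image.
rng = np.random.default_rng(0)
low = rng.random((64, 64))
print(select_min_regional([rng.random((64, 64)), rng.random((64, 64))], low).shape)
```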

96 citations

Journal Article • DOI
TL;DR: With the RF classifier, the classification accuracies for eating are over 88% in both holdout and cross-validation experiments, demonstrating the effectiveness of the proposed feature extraction method and the importance of the RF classifier in automatically understanding and characterising driver behaviours towards human-centric driver assistance systems.
Abstract: An efficient feature extraction approach for driving postures from a video camera, consisting of homomorphic filtering, skin-like region segmentation and the contourlet transform (CT), was proposed. With features extracted from a driving posture dataset created at Southeast University (SEU), holdout and cross-validation experiments on driving posture classification were then conducted using a random forests (RF) classifier. Compared with a number of commonly used classification methods, including the linear perceptron classifier, the k-nearest-neighbour classifier and the multilayer perceptron (MLP) classifier, the experimental results showed that the RF classifier offers the best classification performance among the four classifiers. Among the four predefined classes, that is, grasping the steering wheel, operating the shift gear, eating and talking on a cellular phone, the class of eating is the most difficult to classify. With the RF classifier, the classification accuracies for eating are over 88% in the holdout and cross-validation experiments, demonstrating the effectiveness of the proposed feature extraction method and the importance of the RF classifier in automatically understanding and characterising driver behaviours towards human-centric driver assistance systems.
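As a concrete illustration of the classification stage described above, here is a minimal, hypothetical Python sketch: a per-frame texture feature vector fed to a random forests classifier (scikit-learn). A 2-D wavelet decomposition from PyWavelets stands in for the contourlet transform, for which no standard Python package is assumed; the homomorphic filtering and skin-like region segmentation steps are omitted, and the data, function names and sizes are placeholders rather than the SEU dataset or the authors' code.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def posture_features(gray_frame, levels=3):
    """Mean absolute energy of each multiscale subband as a compact texture descriptor."""
    coeffs = pywt.wavedec2(gray_frame, "db2", level=levels)
    feats = [np.mean(np.abs(coeffs[0]))]              # approximation subband
    for cH, cV, cD in coeffs[1:]:                     # detail subbands per level
        feats += [np.mean(np.abs(cH)), np.mean(np.abs(cV)), np.mean(np.abs(cD))]
    return np.asarray(feats)

# Stand-in data: real inputs would be segmented driver frames labelled with one of
# the four postures (steering wheel / shift gear / eating / phone).
rng = np.random.default_rng(0)
X = np.stack([posture_features(rng.random((120, 160))) for _ in range(80)])
y = rng.integers(0, 4, size=80)

# Holdout experiment with a random forests classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
```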

94 citations

Journal Article • DOI
TL;DR: This paper proposes a learning-based approach for automatic detection of fabric defects, based on a statistical representation of fabric patterns with the redundant contourlet transform (RCT), whose coefficient distributions are modeled by a finite mixture of generalized Gaussians (MoGG).
Abstract: We propose a learning-based approach for automatic detection of fabric defects. Our approach is based on a statistical representation of fabric patterns using the redundant contourlet transform (RCT). The distribution of the RCT coefficients is modeled using a finite mixture of generalized Gaussians (MoGG); these models constitute statistical signatures distinguishing between defective and defect-free fabrics. In addition to being compact and fast to compute, these signatures enable accurate localization of defects. Our defect detection system is based on three main steps. In the first step, preprocessing is applied to detect the basic pattern size for image decomposition and signature calculation. In the second step, labeled fabric samples are used to train a Bayes classifier (BC) to discriminate between defect-free and defective fabrics. Finally, defects are detected during image inspection by testing local patches using the learned BC. Our approach can deal with multiple types of textile fabrics, from simple to more complex ones. Experiments on the TILDA database have demonstrated that our method yields better results compared with recent state-of-the-art methods. Note to Practitioners: Fabric defect detection is central to automated visual inspection and quality control in textile manufacturing. This paper deals with this problem through a learning-based approach. In contrast to several existing approaches for fabric defect detection, which are effective for only some types of fabrics and/or defects, our method can deal with almost all types of patterned fabric and defects. To enable both detection and localization of defects, a fabric image is first divided into local blocks, which are representative of the repetitive pattern structure of the fabric. Then, statistical signatures are calculated by modeling the distribution of coefficients of an RCT using the finite MoGG. The discrimination between defect-free and defective fabrics is then achieved through supervised classification of RCT-MoGG signatures based on expert-labeled examples of defective fabric images. Experiments have shown that our method yields very good performance in terms of defect detection and localization. In addition to its accuracy, inspection of images can be performed in a fully automatic fashion, while only labeled examples are required initially. Finally, our method can be easily adapted to a real-time scenario, since defect detection on inspected images is performed at the block level, which can be easily parallelized through hardware implementation.
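The sketch below illustrates the block-signature-plus-Bayes-classifier structure of this pipeline under loose assumptions: simple subband statistics (mean absolute value, standard deviation, kurtosis) stand in for the MoGG signatures, a wavelet decomposition (PyWavelets) stands in for the redundant contourlet transform, and scikit-learn's Gaussian naive Bayes stands in for the Bayes classifier; all names, sizes and data are illustrative.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.naive_bayes import GaussianNB

def block_signature(block, levels=2):
    """Per-subband statistics used as a compact stand-in for a MoGG signature."""
    coeffs = pywt.wavedec2(block, "haar", level=levels)
    stats = []
    for detail_level in coeffs[1:]:                     # skip the approximation band
        for d in detail_level:                          # cH, cV, cD
            v = d.ravel()
            stats += [np.mean(np.abs(v)), np.std(v), kurtosis(v)]
    return np.asarray(stats)

def image_signatures(image, block=32):
    """Signatures of every non-overlapping block of an inspected image."""
    h, w = image.shape
    return np.stack([block_signature(image[i:i + block, j:j + block])
                     for i in range(0, h - block + 1, block)
                     for j in range(0, w - block + 1, block)])

# Training on expert-labelled patches: 0 = defect-free, 1 = defective (toy data here).
rng = np.random.default_rng(1)
X_train = np.vstack([image_signatures(rng.random((128, 128))),
                     image_signatures(rng.random((128, 128)) ** 3)])
y_train = np.array([0] * 16 + [1] * 16)
bc = GaussianNB().fit(X_train, y_train)

# Inspection: each block of a new image is tested independently with the learned BC.
labels = bc.predict(image_signatures(rng.random((128, 128))))
print(labels.reshape(4, 4))                             # block-level defect map
```

Because each block is classified independently, the resulting label array is already a block-level defect map, which mirrors the abstract's point that block-wise inspection parallelizes naturally.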

94 citations

Journal Article • DOI
TL;DR: A learning-based, single-image super-resolution reconstruction technique using the contourlet transform, which is capable of capturing the smoothness along contours making use of directional decompositions, which outperforms standard interpolation techniques as well as a standard (Cartesian) wavelet-based learning.
Abstract: We propose a learning-based, single-image super-resolution reconstruction technique using the contourlet transform, which is capable of capturing the smoothness along contours, making use of directional decompositions. The contourlet coefficients at finer scales of the unknown high-resolution image are learned locally from a set of high-resolution training images, the inverse contourlet transform of which recovers the super-resolved image. In effect, we learn the high-resolution representation of an oriented edge primitive from the training data. Our experiments show that the proposed approach outperforms standard interpolation techniques as well as a standard (Cartesian) wavelet-based learning, both visually and in terms of the PSNR values, especially for images with arbitrarily oriented edges.
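The following is a hypothetical, much-simplified sketch of this learning scheme: the finest-scale detail coefficients of the upscaled image are predicted patch by patch from a dictionary built from high-resolution training images, and the inverse transform then recovers the super-resolved image. A separable wavelet (PyWavelets) is used here in place of the contourlet transform, a nearest-neighbour lookup replaces the paper's local learning, and every function name, patch size and image size is an assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

PATCH = 4  # patch size in the subband domain (illustrative)

def patches(band):
    """Non-overlapping PATCH x PATCH patches of a subband, flattened row-major."""
    h, w = band.shape
    return np.stack([band[i:i + PATCH, j:j + PATCH].ravel()
                     for i in range(0, h, PATCH) for j in range(0, w, PATCH)])

def learn_dictionary(hr_images):
    """Pairs: approximation patch -> finest-scale detail patches, from HR training images."""
    keys, vals = [], []
    for hr in hr_images:
        cA, (cH, cV, cD) = pywt.wavedec2(hr, "haar", level=1)
        keys.append(patches(cA))
        vals.append(np.concatenate([patches(cH), patches(cV), patches(cD)], axis=1))
    return np.vstack(keys), np.vstack(vals)

def super_resolve(lr, keys, vals, scale=2):
    """Upscale, predict the missing fine-scale details, and invert the transform."""
    coarse = zoom(lr, scale, order=3)                   # cubic upsampling to target size
    cA, _ = pywt.wavedec2(coarse, "haar", level=1)
    # Nearest-neighbour lookup of detail patches in the training dictionary.
    qk = patches(cA)
    idx = np.argmin(((qk[:, None, :] - keys[None, :, :]) ** 2).sum(-1), axis=1)
    pred = vals[idx]

    def unpatch(flat):                                  # rebuild one subband from patches
        out = np.zeros_like(cA)
        k = 0
        for i in range(0, cA.shape[0], PATCH):
            for j in range(0, cA.shape[1], PATCH):
                out[i:i + PATCH, j:j + PATCH] = flat[k].reshape(PATCH, PATCH)
                k += 1
        return out

    m = PATCH * PATCH
    details = tuple(unpatch(pred[:, s * m:(s + 1) * m]) for s in range(3))
    return pywt.waverec2([cA, details], "haar")

# Toy usage with random data (real training images and PSNR evaluation omitted).
rng = np.random.default_rng(2)
keys, vals = learn_dictionary([rng.random((64, 64)) for _ in range(3)])
print(super_resolve(rng.random((32, 32)), keys, vals).shape)   # (64, 64)
```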

94 citations

Journal Article • DOI
TL;DR: This paper classifies a set of histopathological breast-cancer images using a state-of-the-art CNN model containing a residual block and examines the performance of the novel CNN model as a histopathology image classifier.
Abstract: Identification of the malignancy of tissues from histopathological images has always been an issue of concern to doctors and radiologists. This task is time-consuming, tedious and, moreover, very challenging. Success in finding malignancy from histopathological images primarily depends on long-term experience, though sometimes experts disagree on their decisions. However, Computer Aided Diagnosis (CAD) techniques help the radiologist to give a second opinion that can increase the reliability of the radiologist’s decision. Among the different image analysis techniques, classification of the images has always been a challenging task. Due to the intense complexity of biomedical images, it is always very challenging to provide a reliable decision about an image. The state-of-the-art Convolutional Neural Network (CNN) technique has had great success in natural image classification. Utilizing advanced engineering techniques along with the CNN, in this paper we have classified a set of Histopathological Breast-Cancer (BC) images utilizing a state-of-the-art CNN model containing a residual block. Conventional CNN operation takes raw images as input and extracts the global features; however, object-oriented local features also contain significant information: for example, the Local Binary Pattern (LBP) represents the effective textural information, the histogram represents the pixel strength distribution, the Contourlet Transform (CT) gives detailed information about the smoothness of edges, and the Discrete Fourier Transform (DFT) derives frequency-domain information from the image. Utilizing these advantages, along with our proposed novel CNN model, we have examined the performance of the novel CNN model as a histopathological image classifier. To do so, we have introduced five cases: (a) Convolutional Neural Network Raw Image (CNN-I); (b) Convolutional Neural Network CT Histogram (CNN-CH); (c) Convolutional Neural Network CT LBP (CNN-CL); (d) Convolutional Neural Network Discrete Fourier Transform (CNN-DF); (e) Convolutional Neural Network Discrete Cosine Transform (CNN-DC). We have performed our experiments on the BreakHis image dataset. The best performance is achieved when we utilize the CNN-CH model on the 200× dataset, which provides Accuracy, Sensitivity, False Positive Rate, False Negative Rate, Recall Value, Precision and F-measure of 92.19%, 94.94%, 5.07%, 1.70%, 98.20%, 98.00% and 98.00%, respectively.
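As a rough illustration of the residual-block idea (not the paper's actual architecture, which the abstract does not fully specify), here is a minimal PyTorch sketch of a CNN with one identity-shortcut residual block; the layer sizes, the two-class head and the random input standing in for BreakHis patches are all assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)      # residual connection

class TinyResNet(nn.Module):
    """Stem -> residual block -> global pooling -> benign/malignant head."""
    def __init__(self, in_channels=3, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_channels, 32, 3, padding=1),
                                  nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.res = ResidualBlock(32)
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.res(self.stem(x))
        x = x.mean(dim=(2, 3))                 # global average pooling
        return self.head(x)

# A random tensor stands in for a batch of histopathology patches.
model = TinyResNet()
logits = model(torch.randn(4, 3, 128, 128))
print(logits.shape)                            # torch.Size([4, 2])
```

Swapping the raw three-channel input for a stack of transformed feature maps would correspond loosely to the CNN-CH / CNN-CL / CNN-DF / CNN-DC cases listed in the abstract.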

91 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (89% related)
Image processing: 229.9K papers, 3.5M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (84% related)
Deep learning: 79.8K papers, 2.1M citations (82% related)
Artificial neural network: 207K papers, 4.5M citations (81% related)
Performance
Metrics
No. of papers in the topic in previous years
Year: Papers
2023: 36
2022: 99
2021: 75
2020: 109
2019: 155
2018: 164