
Showing papers on "Contourlet published in 2015"


Journal ArticleDOI
TL;DR: A general image fusion framework by combining MST and SR to simultaneously overcome the inherent defects of both the MST- and SR-based fusion methods is presented and experimental results demonstrate that the proposed fusion framework can obtain state-of-the-art performance.

952 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed algorithm can significantly improve image fusion performance, highlight target information with high contrast while preserving rich detail information, and outperform other typical current methods in both objective evaluation criteria and visual effect.

161 citations


Journal ArticleDOI
TL;DR: A two-stage multimodal fusion framework using the cascaded combination of stationary wavelet transform (SWT) and non sub-sampled Contourlet Transform (NSCT) domains for images acquired using two distinct medical imaging sensor modalities is presented.
Abstract: Multimodal medical image fusion is effectuated to minimize the redundancy while augmenting the necessary information from the input images acquired using different medical imaging sensors. The sole aim is to yield a single fused image, which could be more informative for an efficient clinical analysis. This paper presents a two-stage multimodal fusion framework using the cascaded combination of stationary wavelet transform (SWT) and non sub-sampled Contourlet transform (NSCT) domains for images acquired using two distinct medical imaging sensor modalities (i.e., magnetic resonance imaging and computed tomography scan). The major advantage of using a cascaded combination of SWT and NSCT is to improve upon the shift variance, directionality, and phase information in the finally fused image. The first stage employs a principal component analysis algorithm in the SWT domain to minimize the redundancy. A maximum fusion rule is then applied in the NSCT domain at the second stage to enhance the contrast of the diagnostic features. A quantitative analysis of fused images is carried out using dedicated fusion metrics. The fusion responses of the proposed approach are also compared with other state-of-the-art fusion approaches, demonstrating the superiority of the obtained fusion results.

160 citations
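The two fusion rules described above are simple to prototype. The sketch below is a minimal, hedged illustration only: it uses PyWavelets' stationary wavelet transform as a stand-in for the paper's SWT/NSCT cascade (there is no standard Python NSCT implementation), applies a PCA-derived weighting to the approximation bands and absolute-maximum selection to the detail bands. Function names and parameters are illustrative, not the authors'.

```python
import numpy as np
import pywt

def pca_weights(a, b):
    """Weights from the principal eigenvector of the 2x2 covariance of two bands."""
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def max_abs(a, b):
    """Coefficient-wise absolute-maximum selection rule."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_swt(img_a, img_b, wavelet="db2", level=2):
    """Fuse two co-registered grayscale images in the SWT domain.
    Image dimensions must be divisible by 2**level (SWT requirement)."""
    ca = pywt.swt2(img_a, wavelet, level)
    cb = pywt.swt2(img_b, wavelet, level)
    fused = []
    for (aa, (ha, va, da)), (ab, (hb, vb, db)) in zip(ca, cb):
        w = pca_weights(aa, ab)                       # PCA rule on approximations
        fused.append((w[0] * aa + w[1] * ab,
                      (max_abs(ha, hb), max_abs(va, vb), max_abs(da, db))))
    return pywt.iswt2(fused, wavelet)
```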


Journal ArticleDOI
TL;DR: A novel framework for spatially registered multimodal medical image fusion, which is primarily based on the non-subsampled contourlet transform (NSCT), is proposed that enables the decomposition of source medical images into low- and high-frequency bands in NSCT domain.

143 citations


Journal ArticleDOI
TL;DR: Novel texture classification and retrieval methods that model adjacent shearlet subband dependences using linear regression and outperform the current state-of-the-art are proposed.
Abstract: Statistical modeling of wavelet subbands has frequently been used for image recognition and retrieval. However, traditional wavelets are unsuitable for use with images containing distributed discontinuities, such as edges. Shearlets are a newly developed extension of wavelets that are better suited to image characterization. Here, we propose novel texture classification and retrieval methods that model adjacent shearlet subband dependences using linear regression. For texture classification, we use two energy features to represent each shearlet subband in order to overcome the limitation that subband coefficients are complex numbers. Linear regression is used to model the features of adjacent subbands; the regression residuals are then used to define the distance from a test texture to a texture class. Texture retrieval consists of two processes: the first is based on statistics in contourlet domains, while the second is performed using a pseudo-feedback mechanism based on linear regression modeling of shearlet subband dependences. Comprehensive validation experiments performed on five large texture datasets reveal that the proposed classification and retrieval methods outperform the current state-of-the-art.

111 citations
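The regression-residual distance used for classification can be illustrated compactly. The sketch below assumes each texture sample is already reduced to a vector of per-subband energy features (random placeholders here); the pairing of "adjacent" subbands and the feature definitions are simplified and do not reproduce the paper's shearlet construction.

```python
import numpy as np

def fit_adjacent_regression(features):
    """Fit a linear model predicting each subband energy from the previous
    (adjacent) subband energy over the training samples of one texture class.
    features: (n_samples, n_subbands) energy matrix for one class."""
    x, y = features[:, :-1], features[:, 1:]
    models = []
    for j in range(x.shape[1]):
        A = np.column_stack([x[:, j], np.ones(len(x))])   # slope + intercept
        coef, *_ = np.linalg.lstsq(A, y[:, j], rcond=None)
        models.append(coef)
    return np.array(models)                               # (n_subbands - 1, 2)

def residual_distance(sample, models):
    """Distance of a test sample to a class = sum of squared regression residuals."""
    x, y = sample[:-1], sample[1:]
    pred = models[:, 0] * x + models[:, 1]
    return float(np.sum((y - pred) ** 2))

# Usage: assign the test texture to the class with the smallest residual distance.
rng = np.random.default_rng(0)
class_feats = rng.random((50, 8))        # placeholder energies for one class
models = fit_adjacent_regression(class_feats)
print(residual_distance(rng.random(8), models))
```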


Journal ArticleDOI
TL;DR: Visual and statistical assessments show that the proposed algorithm clearly improves the fusion quality in terms of correlation coefficient, relative dimensionless global error in synthesis, spectral angle mapper, universal image quality index, and quality without reference, as compared with fusion methods, including improved intensity-hue-saturation, multiscale Kalman filter, Bayesian, improved nonsubsampled contourlet transform, and sparse fusion of images.
Abstract: Image fusion aims at improving spectral information in a fused image as well as adding spatial details to it. Among the existing fusion algorithms, filter-based fusion methods are the most frequently discussed cases in recent publications due to their ability to improve spatial and spectral information of multispectral (MS) and panchromatic (PAN) images. Filter-based approaches extract spatial information from the PAN image and inject it into MS images. The design of an optimal filter that is able to extract relevant and nonredundant information from the PAN image is presented in this letter. The optimal filter coefficients extracted from statistical properties of the images are more consistent with the type and texture of the remotely sensed images compared with other kernels such as wavelets. Visual and statistical assessments show that the proposed algorithm clearly improves the fusion quality in terms of correlation coefficient, relative dimensionless global error in synthesis, spectral angle mapper, universal image quality index, and quality without reference, as compared with fusion methods including improved intensity-hue-saturation, multiscale Kalman filter, Bayesian, improved nonsubsampled contourlet transform, and sparse fusion of images.

79 citations
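The general detail-injection scheme that filter-based pan-sharpening relies on can be sketched as follows. This is not the letter's optimal, statistics-derived filter: a plain box filter extracts the high-pass detail of the PAN image, which is then added to each upsampled MS band; the gain and kernel size are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def inject_details(ms, pan, ratio=4, size=5, gain=1.0):
    """Filter-based pan-sharpening sketch.
    ms  : (bands, h, w) low-resolution multispectral cube
    pan : (h*ratio, w*ratio) panchromatic image
    A uniform (box) filter stands in for the optimal filter of the letter."""
    detail = pan.astype(float) - uniform_filter(pan.astype(float), size=size)
    fused = []
    for band in ms.astype(float):
        up = zoom(band, ratio, order=3)            # upsample MS band to the PAN grid
        fused.append(up + gain * detail[:up.shape[0], :up.shape[1]])
    return np.stack(fused)
```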


Journal ArticleDOI
TL;DR: A novel CBIR scheme that abstracts each image in the database in terms of statistical features computed using the Multi-scale Geometric Analysis of Non-subsampled Contourlet Transform (NSCT) and incorporates a Relevance Feedback mechanism that uses a graph-theoretic approach to rank the images in accordance with the user's feedback.
Abstract: Content-Based Image Retrieval (CBIR) is an important problem in the domain of digital data management. There is indeed a growing availability of images, but unfortunately the traditional metadata-based search systems are unable to properly exploit their visual information content. In this article we introduce a novel CBIR scheme that abstracts each image in the database in terms of statistical features computed using the Multi-scale Geometric Analysis (MGA) of Non-subsampled Contourlet Transform (NSCT). Noise resilience is one of the main advantages of this feature representation. To improve the retrieval performance and reduce the semantic gap, our system incorporates a Relevance Feedback (RF) mechanism that uses a graph-theoretic approach to rank the images in accordance with the user's feedback. First, a graph of images is constructed with edges reflecting the similarity of pairs of images with respect to the proposed feature representation. Then, images are ranked at each feedback round in terms of the probability that a random walk on this graph reaches an image tagged as relevant by the user before hitting a non-relevant one. Experimental analyses on three different databases show the effectiveness of our algorithm compared to state-of-the-art approaches in particular when the images are corrupted with different types of noise.

63 citations
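The graph-based relevance-feedback ranking admits a short sketch: given a similarity matrix over the database and the user's relevant/non-relevant tags, each untagged image is scored by the probability that a random walk started there is absorbed at a relevant image before a non-relevant one (a harmonic-function computation). The NSCT feature extraction and the similarity definition are assumed to exist elsewhere; names here are illustrative.

```python
import numpy as np

def random_walk_scores(similarity, relevant, non_relevant):
    """similarity   : (n, n) symmetric non-negative similarity matrix
    relevant      : indices tagged relevant by the user
    non_relevant  : indices tagged non-relevant
    Returns one score per image: P(walk hits a relevant node first)."""
    n = similarity.shape[0]
    P = similarity / similarity.sum(axis=1, keepdims=True)   # row-stochastic
    scores = np.zeros(n)
    scores[list(relevant)] = 1.0                              # absorbing values
    labeled = set(relevant) | set(non_relevant)
    u = np.array([i for i in range(n) if i not in labeled])
    if len(u):
        l = np.array(sorted(labeled))
        A = np.eye(len(u)) - P[np.ix_(u, u)]
        b = P[np.ix_(u, l)] @ scores[l]
        scores[u] = np.linalg.solve(A, b)                     # harmonic solution
    return scores
```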


Journal ArticleDOI
TL;DR: A novel fusion algorithm named pixel information estimation is proposed, which determines the weights by evaluating the information of each pixel and is well suited to visible light and infrared image fusion, achieving better fusion quality and lower time consumption.

62 citations


Journal ArticleDOI
TL;DR: A new algorithm is proposed for breast cancer detection and classification in digital mammography based on the Non-Subsampled Contourlet Transform (NSCT), Super Resolution (SR) and the AdaBoost algorithm, which achieves significant performance and superiority in comparison with state-of-the-art approaches.

60 citations


Journal ArticleDOI
TL;DR: A novel SR method is proposed by exploiting both the directional group sparsity of the image gradients and the directional features in similarity weight estimation to achieve higher quality SR reconstruction than the state-of-the-art algorithms.
Abstract: Single image superresolution (SR) aims to construct a high-resolution version from a single low-resolution (LR) image. The SR reconstruction is challenging because of the missing details in the given LR image. Thus, it is critical to explore and exploit effective prior knowledge for boosting the reconstruction performance. In this paper, we propose a novel SR method by exploiting both the directional group sparsity of the image gradients and the directional features in similarity weight estimation. The proposed SR approach is based on two observations: 1) most of the sharp edges are oriented in a limited number of directions and 2) an image pixel can be estimated by the weighted averaging of its neighbors. In consideration of these observations, we apply the curvelet transform to extract directional features which are then used for region selection and weight estimation. A combined total variation regularizer is presented which assumes that the gradients in natural images have a straightforward group sparsity structure. In addition, a directional nonlocal means regularization term takes pixel values and directional information into account to suppress unwanted artifacts. By assembling the designed regularization terms, we solve the SR problem of an energy function with minimal reconstruction error by applying a framework of templates for first-order conic solvers. The thorough quantitative and qualitative results in terms of peak signal-to-noise ratio, structural similarity, information fidelity criterion, and preference matrix demonstrate that the proposed approach achieves higher quality SR reconstruction than the state-of-the-art algorithms.

59 citations
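A stripped-down version of the reconstruction idea is sketched below: gradient descent on a quadratic data term plus a plain (non-directional) total-variation regularizer, with Gaussian blur and decimation as the forward model. The paper's directional group sparsity, curvelet-based region selection, non-local means term and conic solver are all omitted; step size and weights are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed total-variation functional (circular boundaries)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    px, py = gx / mag, gy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def sr_tv(lr, scale=2, sigma=1.0, lam=0.05, step=0.5, iters=100):
    """Super-resolve lr by gradient descent on ||D(B x) - lr||^2 + lam * TV(x)."""
    hr = np.kron(lr.astype(float), np.ones((scale, scale)))   # replication initial guess
    for _ in range(iters):
        sim_lr = gaussian_filter(hr, sigma)[::scale, ::scale]  # forward model D(B x)
        resid = np.zeros_like(hr)
        resid[::scale, ::scale] = sim_lr - lr                  # adjoint of decimation
        data_grad = gaussian_filter(resid, sigma)              # adjoint of symmetric blur
        hr -= step * (data_grad + lam * tv_grad(hr))
    return hr
```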


Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed image fusion approach, which integrates the non-subsampled shearlet transform (NSST) with the spiking cortical model (SCM), significantly improves fusion quality in both subjective visual performance and objective comparisons, compared with other current popular methods.

Journal ArticleDOI
TL;DR: The advantage of the proposed method lies in preserving the discontinuities without using the discontinuity preserving prior, thus avoiding the use of computationally taxing optimization techniques for regularization purposes.
Abstract: In this paper, we propose a new approach for multiresolution fusion using contourlet transform (CT). The method is based on modeling the low spatial resolution (LR) and high spectral resolution multispectral (MS) image as the degraded and noisy version of their high spatial resolution version. Since this is an ill-posed problem, it requires regularization in order to obtain the final solution. In this paper, we first obtain the initial estimate of the fused image from the available MS image and the panchromatic (Pan) image by using the CT domain learning. Since CT provides better directional edges, the initial estimate has better edge details. Using the initial estimate, we obtain the degradation that accounts for the aliasing between the LR MS image and fused image. Regularization is carried out by modeling the texture of the final fused image as a homogeneous Markov random field (MRF) prior, where the MRF parameter is estimated using the initial estimate. The use of MRF prior on the final fused image takes care of the spatial dependencies among the pixels. A simple gradient-based optimization technique is used to obtain the final fused image. Although we use homogeneous MRF, the proposed approach preserves the edges in the final fused image by retaining the edges from the initial estimate and by carrying out the optimization on nonedge pixels only. Therefore, the advantage of the proposed method lies in preserving the discontinuities without using the discontinuity preserving prior, thus avoiding the use of computationally taxing optimization techniques for regularization purposes. In addition, the proposed method causes minimum spectral distortion since it learns the texture using contourlet coefficients and does not use actual Pan image pixel intensities. We demonstrate the effectiveness of our approach by conducting the experiments using subsampled and nonsubsampled CT on different data sets captured using Ikonos-2, Quickbird, and Worldview-2 satellites.

Journal ArticleDOI
TL;DR: A new transformation function is developed based on the existing sigmoid and tanh functions, which have very useful properties for enhancing images suffering from low illumination or non-uniform lighting conditions.
Abstract: Images captured with insufficient illumination generally have dark shadows and low contrast. This problem seriously affects other forms of image processing schemes such as face detection, security surveillance, and image fusion. In this paper, a new image enhancement algorithm using the important features of the contourlet transform is presented. A new transformation function is developed based on the existing sigmoid and tanh functions, which have very useful properties for enhancing images suffering from low illumination or non-uniform lighting conditions. The literature indicates that the contourlet transform represents salient image features such as edges, lines, curves, and contours better than wavelets, owing to its anisotropy and directionality, and it is therefore well suited for multiscale edge-based image enhancement. The algorithm works for grayscale and color images. A color image is first converted from the RGB (red, green, and blue) to the HSI (hue, saturation, and intensity) color model. Then, the intensity component of the HSI color space is adjusted using a new nonlinear transformation function while preserving the original color. The simulation results show that this approach gives encouraging results for images taken in low-light and/or non-uniform lighting conditions. The results obtained are compared with other enhancement algorithms based on the wavelet transform, curvelet transform, bandlet transform, histogram equalization (HE), and contrast limited adaptive histogram equalization. The performance of the contourlet-based enhancement method is superior. The algorithm is checked on a total of 151 test images: 120 of them are used for subjective evaluation and 31 for objective evaluation. For over 90% of the cases, the system is superior to the other enhancement methods.
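A rough sketch of the intensity-remapping step is given below. It works in the spatial domain on the HSV value channel (scikit-image has no HSI conversion, so HSV stands in), and the sigmoid/tanh mixture and its parameters are illustrative rather than the paper's exact transfer function; the contourlet-domain processing is not reproduced.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def enhance_low_light(rgb, gain=6.0, midpoint=0.35, blend=0.5):
    """Brighten a low-light RGB image (uint8 or float in [0, 1]) by remapping
    only the value/intensity channel; hue and saturation are kept unchanged.
    The curve mixes a sigmoid and a tanh term; all parameters are illustrative."""
    hsv = rgb2hsv(rgb)
    v = hsv[..., 2]
    sig = 1.0 / (1.0 + np.exp(-gain * (v - midpoint)))    # sigmoid stretch
    th = 0.5 * (np.tanh(gain * (v - midpoint)) + 1.0)     # tanh stretch
    hsv[..., 2] = np.clip(blend * sig + (1.0 - blend) * th, 0.0, 1.0)
    return hsv2rgb(hsv)
```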

Journal ArticleDOI
TL;DR: DST-KLPP provides higher classification rates than other methods, including the Wavelet, Curvelet and Contourlet transforms, and can effectively recognize tiny defects in low-contrast images.

Journal ArticleDOI
20 May 2015-Entropy
TL;DR: The signal to noise ratio (SNR) and the visual quality of the denoised images are considerably enhanced using these denoising structures, which combine multiple noisy copies and enable a reduction in the exposure time.
Abstract: Image denoising is a very important step in cryo-transmission electron microscopy (cryo-TEM) and the energy filtering TEM images before the 3D tomography reconstruction, as it addresses the problem of high noise in these images, that leads to a loss of the contained information. High noise levels contribute in particular to difficulties in the alignment required for 3D tomography reconstruction. This paper investigates the denoising of TEM images that are acquired with a very low exposure time, with the primary objectives of enhancing the quality of these low-exposure time TEM images and improving the alignment process. We propose denoising structures to combine multiple noisy copies of the TEM images. The structures are based on Bayesian estimation in the transform domains instead of the spatial domain to build a novel feature preserving image denoising structures; namely: wavelet domain, the contourlet transform domain and the contourlet transform with sharp frequency localization. Numerical image denoising experiments demonstrate the performance of the Bayesian approach in the contourlet transform domain in terms of improving the signal to noise ratio (SNR) and recovering fine details that may be hidden in the data. The SNR and the visual quality of the denoised images are considerably enhanced using these denoising structures that combine multiple noisy copies. The proposed methods also enable a reduction in the exposure time.
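The combining-plus-shrinkage idea can be outlined briefly. In the sketch below, the registered noisy copies are averaged and a BayesShrink-style soft threshold is applied in the wavelet domain as a stand-in for the paper's Bayesian contourlet-domain estimators; the thresholding rule and wavelet choice are illustrative, not the authors' exact method.

```python
import numpy as np
import pywt

def denoise_multi_copy(copies, wavelet="db4", level=3):
    """copies: list/array of co-registered noisy acquisitions of the same field of view."""
    mean_img = np.mean(np.asarray(copies, dtype=float), axis=0)   # combine copies
    coeffs = pywt.wavedec2(mean_img, wavelet, level=level)
    # Noise level from the finest diagonal subband (robust median estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            sig_x = np.sqrt(max(np.var(band) - sigma**2, 1e-12))
            thr = sigma**2 / sig_x                                # BayesShrink-style threshold
            shrunk.append(pywt.threshold(band, thr, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```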

Journal ArticleDOI
TL;DR: A novel 3D RRIQA metric based on 3D natural image statistics in the contourlet domain is proposed, which is highly consistent with subjective human 3D perception and can be implemented in many end-to-end 3D video systems.

Journal ArticleDOI
01 Oct 2015-Optik
TL;DR: A novel multi-focus image fusion method using modified pulse coupled neural network (PCNN) in nonsubsampled contourlet transform (NSCT) domain is presented and the proposed algorithm produces better results compared to other state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A new fusion framework for spatially registered visual and infrared images is described, which utilizes the properties of fractal dimension and phase congruency in the non-subsampled contourlet transform (NSCT) domain.
Abstract: Night-vision image fusion plays a critical role in detecting targets and obstructions in low light or total darkness, which has great importance for pedestrian recognition, vehicle navigation, surveillance and monitoring applications. The central idea is to fuse low-light visible and infrared imagery into a single output. In this paper, we describe a new fusion framework for spatially registered visual and infrared images. The proposed framework utilizes the properties of fractal dimension and phase congruency in the non-subsampled contourlet transform (NSCT) domain. The proposed framework applies the multiscale NSCT on visual and IR images to get low- and high-frequency bands. The varied frequency bands of the transformed images are then fused while exploiting their characteristics. Finally, the inverse NSCT is performed to get the fused image. The performance of the proposed framework is validated by extensive experiments on different scene imagery, where the definite advantages are demonstrated subjectively and objectively.

Journal ArticleDOI
TL;DR: This paper proposes a novel algorithm that uses supervised learning to classify textile textures in defect and non-defect classes based on suitable feature extraction and classification and uses statistical modeling of multi-scale contourlet image decomposition to obtain compact and accurate signatures for texture description.

Journal ArticleDOI
TL;DR: A sewer pipe defect classification based on ensemble methods is proposed and achieves the highest classification rates in the experiments with 239 pipe images obtained by the SSET inspection.
Abstract: Side scanning evaluation technology (SSET) is a visual inspection technique for sewer pipelines. It provides both frontal and 360 degree images of the interior surface of the pipe wall. Image-based pipe defect classification has been widely used for rating sewage structural conditions. The classification of defects in sewer pipe is of vital importance for maintaining the sewerage systems. Usually, a human operator identifies the defect types from the acquired images. However, the diagnosis can be easily influenced by the subjective human factors. To overcome such limitations, a reliable automated sewer pipe defect classification is highly desirable. In this paper, a sewer pipe defect classification based on ensemble methods is proposed. Due to the natural shape irregularities of pipe defects and the complexity of imaging environments, pipe defect images are highly variable. A feature extraction procedure consisting of contourlet transform and the maximum response filter bank is implemented in the proposed method. A feature vector is generated with the statistical features derived from the outputs of contourlet transform and maximum response filter bank. Four ensemble classifiers are trained to classify the feature vector to assign a defect type to the input pipe image. The best ensemble method, namely RobBoot, achieves the highest classification rates in the experiments with 239 pipe images obtained by the SSET inspection. The effectiveness and performance of the proposed method are demonstrated by comparing with other state-of-the-art techniques.
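A minimal sketch of the feature-plus-ensemble pipeline is given below. Statistics of a few Gaussian-derivative filter responses stand in for the contourlet and maximum-response filter bank features, and scikit-learn's random forest stands in for the paper's ensemble (the RobBoot method is not reproduced); all names and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def defect_features(img):
    """Mean/std/energy statistics of a small bank of filter responses (stand-in bank)."""
    img = img.astype(float)
    responses = [ndi.gaussian_filter(img, 2),
                 ndi.gaussian_laplace(img, 2),
                 ndi.sobel(img, axis=0),
                 ndi.sobel(img, axis=1)]
    feats = []
    for r in responses:
        feats += [r.mean(), r.std(), np.mean(r**2)]
    return np.array(feats)

def train_defect_classifier(images, labels):
    """images: list of grayscale pipe images; labels: defect type per image."""
    X = np.vstack([defect_features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)
```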

Journal ArticleDOI
08 May 2015-Sensors
TL;DR: It is shown that MEMD overcomes problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales.
Abstract: A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

Journal ArticleDOI
TL;DR: A selection principle for lowpass frequency coefficients is presented, the connection between a low-frequency image and the defocused image is investigated, and the validity and superiority of the proposed scheme are indicated in terms of both visual quality and quantitative evaluation.
Abstract: An efficient multifocus image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain is proposed. Based on the property of optical imaging and the theory of defocused images, we present a selection principle for the lowpass frequency coefficients and also investigate the connection between a low-frequency image and the defocused image. Generally, the NSCT decomposes the detailed image information residing at different scales and in different directions into the bandpass subband coefficients. In order to correctly pick out the prefused bandpass directional coefficients, we introduce multiscale curvature, which not only inherits the advantages of windows with different sizes, but also correctly recognizes the focused pixels in the source images, and then develop a new fusion scheme for the bandpass subband coefficients. The fused image can be obtained by the inverse NSCT with the different fused coefficients. Several multifocus image fusion methods are compared with the proposed scheme. The experimental results clearly indicate the validity and superiority of the proposed scheme in terms of both the visual qualities and the quantitative evaluation.

Proceedings ArticleDOI
20 Apr 2015
TL;DR: This paper proposes a framework for recognizing human actions from depth video sequences by designing a novel feature descriptor based on Depth Motion Maps (DMMs), the Contourlet Transform (CT) and Histograms of Oriented Gradients (HOGs).
Abstract: This paper proposes a framework for recognizing human actions from depth video sequences by designing a novel feature descriptor based on Depth Motion Maps (DMMs), the Contourlet Transform (CT) and Histograms of Oriented Gradients (HOGs). First, the CT is implemented on the generated DMMs of a depth video sequence and then HOGs are computed for each contourlet sub-band. Finally, the concatenation of these HOG features is used as a feature descriptor for the depth video sequence. With this new feature descriptor, the l2-regularized collaborative representation classifier is utilized to recognize human actions. The experimental results on the Microsoft Research Action3D dataset demonstrate that our proposed method can achieve state-of-the-art performance for human activity recognition due to the precise feature extraction of the contourlet transform on the DMMs.
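The descriptor construction can be outlined as follows. This sketch accumulates absolute frame differences into a motion map and computes a HOG descriptor per subband, with a one-level wavelet decomposition standing in for the contourlet transform; cell sizes and the wavelet are illustrative, and the projection onto the three DMM views used in the paper is omitted.

```python
import numpy as np
import pywt
from skimage.feature import hog

def depth_motion_map(frames):
    """Accumulate absolute differences between consecutive depth frames.
    frames: (n_frames, h, w) array, e.g. a 240x320 depth sequence."""
    frames = np.asarray(frames, dtype=float)
    return np.sum(np.abs(np.diff(frames, axis=0)), axis=0)

def dmm_subband_hog(frames, wavelet="db2", level=1):
    """HOG descriptor per subband of the DMM (wavelet stands in for the contourlet)."""
    dmm = depth_motion_map(frames)
    coeffs = pywt.wavedec2(dmm, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    descs = [hog(b, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2)) for b in bands]
    return np.concatenate(descs)      # final feature vector for the sequence
```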

Journal ArticleDOI
TL;DR: A new image denoising method using a bivariate shrinkage threshold on the DCT coefficients, which achieves better performance than other outstanding denoising algorithms in terms of peak signal-to-noise ratio (PSNR) as well as visual quality.

Proceedings Article
04 May 2015
TL;DR: Experiments demonstrate that the proposed pixel and feature level image fusion methods provide better visual quality, clearer edge information and higher objective quality indexes than individual multiresolution-based methods such as the discrete wavelet transform and NSCT.
Abstract: In recent times, multiple imaging sensors are employed in several applications such as surveillance, medical imaging and machine vision. In these multi-sensor systems there is a need for image fusion techniques to effectively combine the information from disparate imaging sensors into a single composite image which enables a good understanding of the scene. The prevailing fusion algorithms employ either the mean or choose-max fusion rule for selecting the best coefficients for fusion at each pixel location. The choose-max rule distorts constant background information whereas the mean rule blurs the edges. Hence, in this paper, the fusion rule is replaced by a soft computing technique that makes intelligent decisions to improve the accuracy of the fusion process in both pixel and feature based image fusion. The Non Sub-sampled Contourlet Transform (NSCT) is employed for multi-resolution decomposition as it has been demonstrated to capture the intrinsic geometric structures in images effectively. Experiments demonstrate that the proposed pixel and feature level image fusion methods provide better visual quality, clearer edge information and higher objective quality indexes than individual multiresolution-based methods such as the discrete wavelet transform and NSCT.

Journal ArticleDOI
09 Sep 2015-PLOS ONE
TL;DR: An effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain and is supplemented by cycle spinning and Wiener filtering for further improvements.
Abstract: In certain image acquisitions processes, like in fluorescence microscopy or astronomy, only a limited number of photons can be collected due to various physical constraints. The resulting images suffer from signal dependent noise, which can be modeled as a Poisson distribution, and a low signal-to-noise ratio. However, the majority of research on noise reduction algorithms focuses on signal independent Gaussian noise. In this paper, we model noise as a combination of Poisson and Gaussian probability distributions to construct a more accurate model and adopt the contourlet transform which provides a sparse representation of the directional components in images. We also apply hidden Markov models with a framework that neatly describes the spatial and interscale dependencies which are the properties of transformation coefficients of natural images. In this paper, an effective denoising algorithm for Poisson-Gaussian noise is proposed using the contourlet transform, hidden Markov models and noise estimation in the transform domain. We supplement the algorithm by cycle spinning and Wiener filtering for further improvements. We finally show experimental results with simulations and fluorescence microscopy images which demonstrate the improved performance of the proposed approach.
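Of the listed components, cycle spinning is the easiest to isolate. The sketch below averages an arbitrary shift-variant denoiser over circular shifts and shifts the results back; the contourlet/HMM estimator itself is passed in as a callable and is not reproduced here.

```python
import numpy as np

def cycle_spin(img, denoise, max_shift=4):
    """Average a shift-variant denoiser over circular shifts (cycle spinning).
    denoise: any function mapping a 2D array to a denoised 2D array."""
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(max_shift):
        for dx in range(max_shift):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            den = denoise(shifted)
            acc += np.roll(np.roll(den, -dy, axis=0), -dx, axis=1)  # undo the shift
            n += 1
    return acc / n

# usage: denoised = cycle_spin(noisy, my_transform_domain_denoiser)
```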

Journal ArticleDOI
01 Oct 2015
TL;DR: This paper gives contributions for a reliable iris recognition method using a new scale-, shift- and rotation-invariant feature-extraction method in the time-frequency and spatial domains.
Abstract: The conventional iris recognition methods do not perform well for the datasets where the eye image may contain nonideal data such as specular reflection, off-angle view, eyelid, eyelashes and other artifacts. This paper gives contributions for a reliable iris recognition method using a new scale-, shift- and rotation-invariant feature-extraction method in time-frequency and spatial domains. Indeed, a 2-level nonsubsampled contourlet transform (NSCT) is applied on the normalized iris images and a gray level co-occurrence matrix (GLCM) with 3 different orientations is computed on both spatial image and NSCT frequency subbands. Moreover, the effect of the occluded parts is reduced by performing an iris localization algorithm followed by a four regions of interest (ROI) selection. The extracted feature set is transformed and normalized to reduce the effect of extreme values in the feature vector. Next, significant features for iris recognition are selected by a two-step method composed by a filtering stage and wrapper based selection. Finally, the selected feature set is classified using support vector machine (SVM). The proposed iris identification method was tested on the public iris datasets CASIA Ver.1 and CASIA Ver.4-lamp showing a state-of-the-art performance.
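The GLCM-plus-SVM part of the pipeline is easy to sketch. Below, co-occurrence features at three orientations are computed on the spatial (normalized) iris image only, assuming a recent scikit-image (graycomatrix spelling); the NSCT subband features, ROI selection and two-step feature selection are omitted, and the SVM hyperparameters are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img, distances=(1,), angles=(0, np.pi / 4, np.pi / 2)):
    """Contrast/energy/homogeneity/correlation from a normalized GLCM (uint8 input)."""
    img = np.asarray(img, dtype=np.uint8)
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "energy", "homogeneity", "correlation")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def train_iris_classifier(normalized_irises, subject_ids):
    """normalized_irises: list of unwrapped iris images; subject_ids: class labels."""
    X = np.vstack([glcm_features(im) for im in normalized_irises])
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, subject_ids)
```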

Journal ArticleDOI
19 Feb 2015
TL;DR: In this article, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed.
Abstract: In this paper, a multi-focus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and the nonsubsampled shearlet transform (NSST) is proposed. The source images are first decomposed by the NSCT and NSST into low frequency coefficients and high frequency coefficients. Then, the average method is used to fuse the low frequency coefficients of the NSCT. To obtain a more accurate salience measurement, the high frequency coefficients of the NSST and NSCT are combined to measure salience. The high frequency coefficients of the NSCT with larger salience are selected as the fused high frequency coefficients. Finally, the fused image is reconstructed by the inverse NSCT. We adopt three metrics (QAB/F, Qe and Qw) to evaluate the quality of fused images. The experimental results demonstrate that the proposed method outperforms other methods. It retains highly detailed edges and contours.

Journal ArticleDOI
TL;DR: This paper presents a method for the prediction of NOx emissions in a biomass combustion process through the combination of flame radical imaging, contourlet transform and Zernike moment (CTZM), and least squares support vector regression (LS-SVR) modeling.
Abstract: This paper presents a method for the prediction of NOx emissions in a biomass combustion process through the combination of flame radical imaging, the contourlet transform and Zernike moment (CTZM), and least squares support vector regression (LS-SVR) modeling. A novel feature extraction technique based on the CTZM algorithm is developed. The contourlet transform provides the multiscale decomposition of flame radical images and the selected operator based on Zernike moments is designed to provide a well-defined structure for the images. The resulting image features have a variable structure, which originates from the CTZM. Finally, the variable features of the images of four flame radicals (OH*, CN*, CH*, and C2*) are defined. The relationship between the variable features of the radical images and NOx emissions is established through radial basis function network modeling, SVR modeling, and LS-SVR modeling. A comparison between the three modeling approaches shows that the LS-SVR model outperforms the other two methods in terms of root-mean-square error and mean relative error criteria. In addition, the structure of the image features has a significant impact on the performance of the prediction models. The test results obtained on a biomass-gas fired test rig show the effectiveness of the proposed technical approach for the prediction of NOx emissions.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed algorithm can improve the image visual effects, remove the noise and enhance the details of medical images.
Abstract: In order to solve the problems of noise amplification, low contrast and image distortion in medical image enhancement, a new algorithm is proposed which combines the nonsubsampled contourlet transform (NSCT) and an improved fuzzy contrast. The image is decomposed by the NSCT. First, a linear enhancement method is applied to the low frequency coefficients; second, an improved adaptive threshold function is used to deal with the high frequency coefficients. Finally, the improved fuzzy contrast is used to enhance the global contrast and the Laplace operator is used to enhance the details of the medical images. Experimental results show that the proposed algorithm can improve the visual effect, remove noise and enhance the details of medical images. © 2015 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 25, 7-14, 2015.
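As a rough illustration of the fuzzy-contrast step only, the sketch below applies the classic fuzzy intensification (INT) operator: intensities are mapped to memberships, intensified, and mapped back. The paper's NSCT decomposition, adaptive thresholding of the high-frequency coefficients and Laplacian detail enhancement are not reproduced, and the "improved" fuzzy contrast is simplified to the standard operator.

```python
import numpy as np

def fuzzy_contrast_enhance(img, iterations=1):
    """Enhance global contrast with the fuzzy INT (intensification) operator.
    img: grayscale image; values are mapped to [0, 1] fuzzy memberships first."""
    lo, hi = float(img.min()), float(img.max())
    mu = (img.astype(float) - lo) / max(hi - lo, 1e-12)        # fuzzification
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)
    return mu * (hi - lo) + lo                                  # defuzzification
```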