
Showing papers in "IEEE Signal Processing Letters in 2010"


Journal ArticleDOI
TL;DR: A new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS) is proposed, which does not require any knowledge of the distorting process and the framework is modular in that it can be extended to any number of distortions.
Abstract: Present-day no-reference/blind image quality assessment (NR IQA) algorithms usually assume that the distortion affecting the image is known. This is a limiting assumption for practical applications, since in a majority of cases the distortions in an image are unknown. We propose a new two-step framework for no-reference image quality assessment based on natural scene statistics (NSS). Once trained, the framework does not require any knowledge of the distorting process, and it is modular in that it can be extended to any number of distortions. We describe the framework for blind image quality assessment, and a version of this framework, the blind image quality index (BIQI), is evaluated on the LIVE image quality assessment database. A software release of BIQI has been made available online: http://live.ece.utexas.edu/research/quality/BIQI_release.zip.

1,085 citations


Journal ArticleDOI
TL;DR: A novel two-stage noise adaptive fuzzy switching median (NAFSM) filter for salt-and-pepper noise detection and removal that employs fuzzy reasoning to handle uncertainty present in the extracted local information as introduced by noise.
Abstract: This letter presents a novel two-stage noise adaptive fuzzy switching median (NAFSM) filter for salt-and-pepper noise detection and removal. Initially, the detection stage utilizes the histogram of the corrupted image to identify noise pixels. These detected "noise pixels" are then subjected to the second stage of the filtering action, while "noise-free pixels" are retained and left unprocessed. The NAFSM filtering mechanism then employs fuzzy reasoning to handle the uncertainty present in the extracted local information as introduced by noise. Simulation results indicate that the NAFSM outperforms a number of the salt-and-pepper noise filters existing in the literature.

385 citations
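
The two-stage idea above can be sketched with a plain (non-fuzzy) switching median filter: extreme-valued pixels are flagged as salt-and-pepper candidates and only those are filtered. The histogram-based detection and fuzzy weighting of the actual NAFSM are omitted; this is an illustrative sketch only.

```python
import numpy as np

def switching_median(img, window=3):
    """Two-stage salt-and-pepper filtering sketch: flag extreme-valued
    pixels (0 or 255) as noise candidates, then replace only those with
    the median of their local window; noise-free pixels are untouched.
    The fuzzy reasoning of NAFSM is not modeled here."""
    pad = window // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    for i, j in zip(*np.nonzero((img == 0) | (img == 255))):
        out[i, j] = np.median(padded[i:i + window, j:j + window])
    return out

# demo: a single impulse in a flat image is removed
img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255
restored = switching_median(img)
```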


Journal ArticleDOI
TL;DR: The BLIINDS index (BLind Image Integrity Notator using DCT Statistics) is introduced which is a no-reference approach to image quality assessment that does not assume a specific type of distortion of the image and it requires only minimal training.
Abstract: The development of general-purpose no-reference approaches to image quality assessment still lags recent advances in full-reference methods. Additionally, most no-reference or blind approaches are distortion-specific, meaning they assess only a specific type of distortion assumed present in the test image (such as blockiness, blur, or ringing). This limits their application domain. Other approaches rely on training a machine learning algorithm; these methods, however, are only as effective as the features used to train their learning machines. Toward ameliorating this, we introduce the BLIINDS index (BLind Image Integrity Notator using DCT Statistics), a no-reference approach to image quality assessment that does not assume a specific type of distortion in the image. It predicts image quality by observing the statistics of local discrete cosine transform coefficients, and it requires only minimal training. The method is shown to correlate highly with human perception of quality.

383 citations


Journal ArticleDOI
TL;DR: It is found that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
Abstract: We consider the problem of estimating a sparse signal from a set of quantized, Gaussian-noise-corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an l1 regularization term. Using a first-order method developed by Hale et al., we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.

345 citations
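
The letter's exact interval-based objective is not reproduced here, but the flavor of l1-regularized recovery from coarsely quantized measurements can be sketched with plain ISTA applied to quantized measurement values; the step size, threshold, quantizer, and problem sizes below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=1000):
    """Iterative shrinkage-thresholding for min_x 0.5||Ax - y||^2 + lam*||x||_1.
    (The letter minimizes an interval-based convex loss; least squares on
    the quantized values is a simplification of that idea.)"""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60)) / np.sqrt(30)
x0 = np.zeros(60)
x0[[5, 17, 42]] = [3.0, -2.0, 1.5]          # 3-sparse ground truth
step = 0.5                                   # coarse quantization step
y = step * np.round(A @ x0 / step)           # quantized measurements
x_hat = ista(A, y, lam=0.02)
```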


Journal ArticleDOI
TL;DR: Simulations in a system identification context show that the proposed APSA outperforms the normalized least-mean-square (NLMS) algorithm, APA, and normalized sign algorithm (NSA) in terms of convergence rate and steady-state error.
Abstract: A new affine projection sign algorithm (APSA) is proposed which is robust against non-Gaussian impulsive interference and converges quickly. The conventional affine projection algorithm (APA) converges fast at a high cost in computational complexity, and it also suffers performance degradation in the presence of impulsive interference. The family of sign algorithms (SAs) stands out due to its low complexity and robustness against impulsive noise. The proposed APSA combines the benefits of the APA and the SA by updating its weight vector according to the l1-norm optimization criterion while using multiple projections. The features of the APA and the l1-norm minimization make the APSA an excellent candidate for combating impulsive interference and speeding up the convergence rate for colored inputs at a low computational complexity. Simulations in a system identification context show that the proposed APSA outperforms the normalized least-mean-square (NLMS) algorithm, the APA, and the normalized sign algorithm (NSA) in terms of convergence rate and steady-state error. The robustness of the APSA against impulsive interference is also demonstrated.

235 citations
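
A common form of the APSA update normalizes by the norm of X·sign(e); the sketch below uses that form for a toy system identification run. The step size, projection order, and impulsive-noise model are illustrative assumptions, not the letter's exact settings.

```python
import numpy as np

def apsa(x, d, order, proj=4, mu=0.02, eps=1e-3):
    """Affine projection sign algorithm (one common form of the update):
    s = sign(d_vec - X^T w);  w <- w + mu * X s / (||X s|| + eps),
    where X stacks the last `proj` input regressors as columns."""
    w = np.zeros(order)
    for n in range(order + proj - 1, len(x)):
        X = np.column_stack([x[n - p - order + 1:n - p + 1][::-1]
                             for p in range(proj)])
        d_vec = d[n - proj + 1:n + 1][::-1]
        s = np.sign(d_vec - X.T @ w)
        g = X @ s
        w = w + mu * g / (np.linalg.norm(g) + eps)
    return w

rng = np.random.default_rng(0)
h = np.array([1.0, -0.5, 0.25, 0.1])          # unknown system
x = rng.normal(size=10000)
d = np.convolve(x, h)[:len(x)]
# sparse, large impulsive interference on the desired signal
impulses = (rng.random(len(x)) < 0.005) * rng.normal(scale=50.0, size=len(x))
w = apsa(x, d + impulses, order=4)
```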


Journal ArticleDOI
TL;DR: An ensemble based ELM (EN-ELM) algorithm is proposed where ensemble learning and cross-validation are embedded into the training phase so as to alleviate the overtraining problem and enhance the predictive stability.
Abstract: The extreme learning machine (ELM) was proposed as a new class of learning algorithm for the single-hidden-layer feedforward neural network (SLFN). To achieve good generalization performance, ELM minimizes the training error on the entire training data set; it may therefore suffer from overfitting, as the learning model approximates all training samples well. In this letter, an ensemble-based ELM (EN-ELM) algorithm is proposed in which ensemble learning and cross-validation are embedded into the training phase so as to alleviate the overtraining problem and enhance predictive stability. Experimental results on several benchmark databases demonstrate that EN-ELM is robust and efficient for classification.

222 citations
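
The base ELM is a random-hidden-layer network whose output weights are solved by least squares. The ensemble idea can be sketched by training several ELMs on random subsets and averaging their predictions; the exact cross-validation scheme of EN-ELM is not reproduced, and the toy data below are an assumption for illustration.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    """ELM: random input weights and biases, sigmoid hidden layer,
    output weights solved by least squares via the pseudo-inverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return W, b, np.linalg.pinv(H) @ y

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy ensemble on two Gaussian classes (illustrative stand-in for EN-ELM)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]
preds = []
for _ in range(5):                            # ensemble members
    idx = rng.choice(len(X), 150, replace=False)
    W, b, beta = elm_train(X[idx], y[idx], 20, rng)
    preds.append(elm_predict(X, W, b, beta))
acc = np.mean((np.mean(preds, axis=0) > 0.5) == y)
```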


Journal ArticleDOI
TL;DR: Using chaotic Lorenz data and computing the root-mean-square error, Lyapunov exponent, and correlation dimension, it is shown that the adaptive algorithm reduces noise in the chaotic Lorenz system more effectively than wavelet denoising with three different thresholding choices.

Abstract: Time series measured in the real world are often nonlinear, even chaotic. To effectively extract desired information from measured time series, it is important to preprocess data to reduce noise. In this letter, we propose an adaptive denoising algorithm. Using chaotic Lorenz data and computing the root-mean-square error, Lyapunov exponent, and correlation dimension, we show that our adaptive algorithm reduces noise in the chaotic Lorenz system more effectively than wavelet denoising with three different thresholding choices. We further analyze an electroencephalogram (EEG) signal recorded during sleep apnea and show that the adaptive algorithm again removes the electrocardiogram (ECG) artifacts and other types of noise contaminating the EEG more effectively than wavelet approaches.

214 citations


Journal ArticleDOI
TL;DR: This work presents semi-supervised NMF (SSNMF), where they jointly incorporate the data matrix and the (partial) class label matrix into NMF, and develops multiplicative updates for SSNMF to minimize a sum of weighted residuals.
Abstract: Nonnegative matrix factorization (NMF) is a popular method for low-rank approximation of a nonnegative matrix, providing a useful tool for representation learning that is valuable for clustering and classification. When a portion of the data is labeled, the performance of clustering or classification is improved if the information on class labels is incorporated into NMF. To this end, we present semi-supervised NMF (SSNMF), where we jointly incorporate the data matrix and the (partial) class label matrix into NMF. We develop multiplicative updates for SSNMF that minimize a sum of weighted residuals, each of which involves the nonnegative 2-factor decomposition of the data matrix or the label matrix, sharing a common factor matrix. Experiments on document datasets and on EEG datasets from a BCI competition confirm that our method improves clustering as well as classification performance compared to standard NMF, demonstrating that semi-supervised NMF yields semi-supervised feature extraction.

205 citations
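
The base NMF multiplicative updates that SSNMF builds on are compact enough to sketch; SSNMF augments the cost below with a weighted label-matrix residual sharing the factor H, which is omitted here. Problem sizes and iteration count are illustrative.

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||X - WH||_F^2 with
    W, H >= 0. SSNMF adds a weighted label-matrix term sharing H;
    only the base NMF updates are shown in this sketch."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H, W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W, H fixed
    return W, H

# exact low-rank nonnegative data should be reconstructed closely
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 15))
W, H = nmf(X, rank=2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```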


Journal ArticleDOI
TL;DR: This letter proposes to construct the sensing matrix from a chaotic sequence using a simple method and proves that, with overwhelming probability, the RIP of this kind of matrix is guaranteed.

Abstract: Compressive sensing is a new methodology to capture signals at a sub-Nyquist rate. To guarantee exact recovery from compressed measurements, one should choose a specific matrix, which satisfies the Restricted Isometry Property (RIP), to implement the sensing procedure. In this letter, we propose to construct the sensing matrix from a chaotic sequence using a simple method and prove that, with overwhelming probability, the RIP of this kind of matrix is guaranteed. Meanwhile, experimental comparisons with the Gaussian random matrix, the Bernoulli random matrix, and sparse matrices are carried out, showing that the performance of these sensing matrices is almost equal.

190 citations
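
A concrete instance of the idea uses the logistic map; the burn-in, downsampling, and normalization choices below are illustrative assumptions, not necessarily the letter's exact construction.

```python
import numpy as np

def chaotic_sensing_matrix(m, n, x0=0.3, r=4.0, skip=1000, down=5):
    """Build an m x n sensing matrix from the logistic map
    x_{k+1} = r * x_k * (1 - x_k): burn in the sequence, downsample to
    reduce correlation, center, reshape, and normalize the columns."""
    x = x0
    seq = np.empty(skip + m * n * down)
    for k in range(len(seq)):
        x = r * x * (1.0 - x)
        seq[k] = x
    s = seq[skip::down][:m * n]
    Phi = (s - s.mean()).reshape(m, n)
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)

Phi = chaotic_sensing_matrix(32, 64)
```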


Journal ArticleDOI
TL;DR: An efficient integer-transform-based reversible watermarking scheme in which Tian's difference expansion technique is reformulated as an integer transform; the superiority of the proposed method is experimentally verified by comparison with other existing schemes.

Abstract: In this letter, an efficient integer-transform-based reversible watermarking scheme is proposed. We first show that Tian's difference expansion (DE) technique can be reformulated as an integer transform. Then, a generalized integer transform and a payload-dependent location map are constructed to extend the DE technique to pixel blocks of arbitrary length. Meanwhile, the distortion can be controlled by preferentially selecting embeddable blocks that introduce less distortion. Finally, the superiority of the proposed method is experimentally verified by comparison with other existing schemes.

190 citations
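
Tian's DE viewed as an integer transform is simple enough to state exactly: the pixel pair maps to an integer average and difference, the difference is doubled and the payload bit appended, and the inverse transform recovers everything. The sketch below embeds and extracts one bit per pixel pair; the overflow checks and payload-dependent location map of the letter are omitted.

```python
def de_embed(x, y, bit):
    """Tian's difference expansion: average l = floor((x+y)/2),
    difference h = x - y; embed by h' = 2h + bit, then invert the
    transform to get the watermarked pair. No overflow handling."""
    l = (x + y) // 2
    h2 = 2 * (x - y) + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the original pair and the embedded bit."""
    h2 = x2 - y2
    bit = h2 % 2
    h = h2 // 2
    l = (x2 + y2) // 2
    return l + (h + 1) // 2, l - h // 2, bit

# round-trip demo on a few pixel pairs
results = [de_extract(*de_embed(x, y, b))
           for (x, y, b) in [(100, 97, 1), (50, 60, 0), (128, 128, 1)]]
```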


Journal ArticleDOI
TL;DR: Experimental results show that the novel feature-based model performs competitively on the visual saliency detection task, suggesting the potential application of matrix decomposition and convex optimization to image analysis.

Abstract: The saliency mechanism has been considered crucial in the human visual system and helpful for object detection and recognition. This paper presents a novel feature-based model for visual saliency detection. It consists of two steps: first, using learned overcomplete sparse bases to represent image patches; and then, estimating saliency information via low-rank and sparse matrix decomposition. We compare our model with previous methods on natural images. Experimental results on both natural images and psychological patterns show that our model performs competitively on the visual saliency detection task, and suggest the potential application of matrix decomposition and convex optimization to image analysis.

Journal ArticleDOI
TL;DR: A novel approach for LR face recognition without any SR preprocessing, based on coupled mappings and inspired by locality preserving methods for dimensionality reduction, which significantly improves recognition performance.

Abstract: Practical face recognition systems are sometimes confronted with low-resolution face images. Traditional two-step methods solve this problem by employing super-resolution (SR). However, these methods usually have limited performance because the target of SR is not entirely consistent with that of face recognition. Moreover, time-consuming sophisticated SR algorithms are not suitable for real-time applications. To avoid these limitations, we propose a novel approach for LR face recognition without any SR preprocessing. Our method, based on coupled mappings (CMs), projects face images with different resolutions into a unified feature space which favors the task of classification. These CMs are learned by optimizing an objective function that minimizes the difference between correspondences (i.e., a low-resolution image and its high-resolution counterpart). Inspired by locality preserving methods for dimensionality reduction, we introduce a penalty weighting matrix into our objective function. Our method significantly improves recognition performance. Finally, we conduct experiments on publicly available databases to verify the efficacy of our algorithm.

Journal ArticleDOI
TL;DR: The common knowledge that the searching zone can be advantageously limited is verified numerically, and an efficient modification of the central weight based on Stein's unbiased risk estimate (SURE) principle is proposed.

Abstract: Non-local means (NLM) provides a very efficient procedure for denoising digital images. We study the influence of two important parameters on this algorithm: the size of the searching window and the weight given to the central patch. We verify numerically the common knowledge that the searching zone can be advantageously limited, and we propose an efficient modification of the central weight based on Stein's unbiased risk estimate (SURE) principle.
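
For reference, a baseline NLM with a restricted search window is sketched below. The central (self) patch here gets the common max-of-neighbor-weights heuristic, whereas the letter derives a SURE-based central weight; patch size, search radius, and the filtering parameter are illustrative.

```python
import numpy as np

def nlm(img, f=1, t=3, h=0.1):
    """Non-local means with patch radius f and limited search radius t.
    The central patch receives the maximum of the other weights (a
    common heuristic); the letter proposes a SURE-based choice instead."""
    padded = np.pad(img, f, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            P = padded[i:i + 2 * f + 1, j:j + 2 * f + 1]
            acc, wsum, wmax = 0.0, 0.0, 0.0
            for a in range(max(0, i - t), min(H, i + t + 1)):
                for b in range(max(0, j - t), min(W, j + t + 1)):
                    if a == i and b == j:
                        continue
                    Q = padded[a:a + 2 * f + 1, b:b + 2 * f + 1]
                    w = np.exp(-np.mean((P - Q) ** 2) / h ** 2)
                    wmax = max(wmax, w)
                    acc += w * img[a, b]
                    wsum += w
            acc += wmax * img[i, j]            # central (self) weight
            out[i, j] = acc / (wsum + wmax)
    return out

rng = np.random.default_rng(0)
noisy = 0.5 + rng.normal(scale=0.1, size=(16, 16))
denoised = nlm(noisy)
```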

Journal ArticleDOI
TL;DR: A hierarchical statistical model applicable to both wavelet and JPEG-based DCT bases is developed, in which the tree structure in the sparseness pattern is exploited explicitly.
Abstract: In compressive sensing (CS) the known structure in the transform coefficients may be leveraged to improve reconstruction accuracy. We here develop a hierarchical statistical model applicable to both wavelet and JPEG-based DCT bases, in which the tree structure in the sparseness pattern is exploited explicitly. The analysis is performed efficiently via variational Bayesian (VB) analysis, and comparisons are made with MCMC-based inference, and with many of the CS algorithms in the literature. Performance is assessed for both noise-free and noisy CS measurements, based on both JPEG-DCT and wavelet representations.

Journal ArticleDOI
TL;DR: Results show that ASWM provides better performance in terms of PSNR and MAE than many other median filter variants for random-valued impulse noise and can preserve more image details in a high-noise environment.

Abstract: A new Adaptive Switching Median (ASWM) filter for removing impulse noise from corrupted images is presented. The originality of ASWM is that no a priori threshold is needed, as in the case of a classical switching median filter. Instead, the threshold is computed locally from the intensity values of image pixels in a sliding window. Results show that ASWM provides better performance in terms of PSNR and MAE than many other median filter variants for random-valued impulse noise. In addition, it can preserve more image details in a high-noise environment.
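
The locally adaptive threshold can be sketched as follows, using the window's median absolute deviation as the local scale; the letter's iteratively weighted local mean and standard deviation are simplified away, so this is an illustrative stand-in rather than ASWM itself.

```python
import numpy as np

def aswm_like(img, window=3, alpha=3.0):
    """Switching median with a locally computed threshold: a pixel is
    declared noisy when it deviates from its window median by more than
    alpha times the window's median absolute deviation (a simplified
    stand-in for ASWM's weighted local statistics)."""
    pad = window // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = img.astype(float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + window, j:j + window].ravel()
            med = np.median(patch)
            mad = np.median(np.abs(patch - med))
            if abs(float(img[i, j]) - med) > alpha * mad + 1e-6:
                out[i, j] = med                # replace only detected pixels
    return out

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                               # a single random-valued impulse
restored = aswm_like(img)
```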

Journal ArticleDOI
TL;DR: In this analysis, new expressions for the system's outage probability and the average bit error rate are derived and the effects of the rank of the relay chosen, the average SNR imbalance, and the correlation between the delayed and current signal-to-noise ratio (SNR) are investigated.
Abstract: We analyze the impact of outdated channel state information due to feedback delay on the performance of amplify-and-forward relays with the kth worst partial relay selection scheme. In our analysis, new expressions for the system's outage probability and the average bit error rate are derived. The effects of the rank of the relay chosen, the average SNR imbalance, and the correlation between the delayed and current signal-to-noise ratio (SNR) on the system performance are investigated. Additionally, simple and accurate outage and average BER approximations are also derived to quantify the performance at high SNR. We also give simulation results to support the theoretical study.

Journal ArticleDOI
TL;DR: Simulation results indicate that the proposed algorithm outperforms the classical one (achieving faster tracking and lower misadjustment) and has a lower computational complexity due to a recursive implementation of the "proportionate history".

Abstract: Proportionate-type normalized least-mean-square algorithms were developed in the context of echo cancellation. In order to further increase the convergence rate and tracking, the "proportionate" idea was applied to the affine projection algorithm (APA) in a straightforward manner. The objective of this letter is twofold. First, a general framework for the derivation of proportionate-type APAs is proposed. Second, based on this approach, a new proportionate-type APA is developed, taking into account the "history" of the proportionate factors. The benefit is also twofold. Simulation results indicate that the proposed algorithm outperforms the classical one (achieving faster tracking and lower misadjustment). Besides, it also has a lower computational complexity due to a recursive implementation of the "proportionate history".
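
The "proportionate" idea itself, before its affine projection generalization, can be sketched with the classical PNLMS update, where each coefficient's step size scales with its magnitude; the parameter values and toy echo path below are illustrative assumptions.

```python
import numpy as np

def pnlms(x, d, order, mu=0.5, delta=1e-2, rho=0.01):
    """Proportionate NLMS: per-coefficient gains g_k proportional to
    |w_k| (floored so inactive coefficients still adapt), which speeds
    convergence for sparse echo paths. The letter generalizes this idea
    to affine projections with a memory of past proportionate factors."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # regressor, newest first
        e = d[n] - u @ w
        g = np.maximum(np.abs(w), rho * max(np.max(np.abs(w)), delta))
        g /= g.sum()                            # normalized proportionate gains
        w = w + mu * e * g * u / (u @ (g * u) + delta)
    return w

rng = np.random.default_rng(0)
h = np.zeros(16)
h[3], h[10] = 1.0, -0.5                        # sparse echo path
x = rng.normal(size=4000)
d = np.convolve(x, h)[:len(x)]
w = pnlms(x, d, order=16)
```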

Journal ArticleDOI
TL;DR: A speed up technique for the non-local means (NLM) image denoising algorithm based on probabilistic early termination (PET) based on a probability model to achieve early termination is proposed.
Abstract: A speed up technique for the non-local means (NLM) image denoising algorithm based on probabilistic early termination (PET) is proposed. A significant amount of computation in the NLM scheme is dedicated to the distortion calculation between pixel neighborhoods. The proposed PET scheme adopts a probability model to achieve early termination. Specifically, the distortion computation can be terminated and the corresponding contributing pixel can be rejected earlier, if the expected distortion value is too high to be of significance in weighted averaging. Performance comparative with several fast NLM schemes is provided to demonstrate the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: A novel signal-to-interference-plus-noise ratio (SINR) balancing technique for a downlink cognitive radio network (CRN) in which multiple cognitive users coexist and share the licensed spectrum with the primary users (PUs) using the underlay approach.

Abstract: We propose a novel signal-to-interference-plus-noise ratio (SINR) balancing technique for a downlink cognitive radio network (CRN) in which multiple cognitive users (also referred to as secondary users (SUs)) coexist and share the licensed spectrum with the primary users (PUs) using the underlay approach. The proposed beamforming technique maximizes the worst SU SINR while ensuring that the interference leakage to the PUs is below specific thresholds. Due to the additional interference constraints imposed by the PUs, the principle of uplink-downlink duality used in conventional downlink beamformer design can no longer be applied directly. To circumvent this problem, using an algebraic manipulation of the interference constraints, we propose a novel SINR balancing technique for CRNs based on uplink-downlink iterative design techniques. Simulation results illustrate the convergence and the optimality of the proposed beamformer design.

Journal ArticleDOI
TL;DR: Under the framework of switching median filtering, a highly effective algorithm for impulse noise detection is proposed, aiming to provide a solid basis for subsequent filtering; the algorithm is simpler in principle, intuitive, and easy to implement, with an uncomplicated structure and little code.

Abstract: Under the framework of switching median filtering, a highly effective algorithm for impulse noise detection is proposed, aiming to provide a solid basis for subsequent filtering. This algorithm consists of two iterations to make the decision as accurate as possible. Two robust and reliable decision criteria are proposed for each iteration. Extensive simulation results show that the false alarm rate and miss detection rate of the proposed algorithm are both very low and substantially outperform those of existing state-of-the-art algorithms. At the same time, the proposed algorithm is simpler in principle, as it is intuitive, and easy to implement, with an uncomplicated structure and little code.

Journal ArticleDOI
TL;DR: The main feature of the proposed method is that it uses the strength of glottal activity, rather than the periodicity of the signal, to distinguish voiced epochs from random instants detected in nonvoiced regions.

Abstract: In this paper, a new method for voiced/nonvoiced detection based on epoch extraction is proposed. The zero-frequency filtered speech signal is used to extract the instants of significant excitation (or epochs). The robustness of the method in extracting epochs in voiced regions, even with a small amount of additive white noise, is used to distinguish voiced epochs from random instants detected in nonvoiced regions. The main feature of the proposed method is that it uses the strength of glottal activity rather than the periodicity of the signal. The performance of the proposed algorithm is studied on the TIMIT and CMU ARCTIC databases, for two different noise types, white and vehicle noise from the NOISEX database, at different signal-to-noise ratios (SNRs). The proposed method performs similarly to or better than the popular normalized cross-correlation based voiced/nonvoiced detection used in the open-source utility WaveSurfer, especially at lower SNRs.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed saliency-based compressive sampling scheme for image signals improves the reconstructed image quality considerably compared to the case when saliency information is not used.
Abstract: Compressive sampling is a novel framework in signal acquisition and reconstruction, which achieves sub-Nyquist sampling by exploiting the sparse nature of most signals of interest. In this letter, we propose a saliency-based compressive sampling scheme for image signals. The key idea is to exploit the saliency information of images, and allocate more sensing resources to salient regions but fewer to nonsalient regions. The scheme takes human visual attention into consideration because human vision would pay more attention to salient regions. Simulation results on natural images show that the proposed scheme improves the reconstructed image quality considerably compared to the case when saliency information is not used.

Journal ArticleDOI
TL;DR: Analytical expressions are derived for the power spectral density of orthogonal frequency division multiplexing signals employing a cyclic prefix (CP-OFDM) or a zero-padding (ZP-OFDM) time guard interval, and are validated by inspecting the power spectra of some standardized OFDM signals.

Abstract: In this letter, analytical expressions are derived for the power spectral density (PSD) of orthogonal frequency division multiplexing (OFDM) signals employing a cyclic prefix (CP-OFDM) or zero-padding (ZP-OFDM) time guard interval. Under the relatively weak assumptions that (i) the data are independent and identically distributed on all OFDM subcarriers and (ii) the OFDM pulse shape is sufficiently localized in time, simple closed-form PSD expressions can be obtained. These expressions are then compared to existing OFDM PSD expressions and validated by inspecting the power spectra of some standardized OFDM signals.
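
The closed-form expressions are not reproduced here, but the qualitative claim (power concentrated on the occupied subcarriers, low spectral content in the guard region) can be checked numerically with a windowed averaged periodogram of a CP-OFDM signal; the symbol parameters below are illustrative, not from any particular standard.

```python
import numpy as np

# baseband CP-OFDM: QPSK on 32 subcarriers around DC, N=64, CP=16
rng = np.random.default_rng(0)
n_fft, cp, used, n_sym = 64, 16, 32, 400
carriers = np.r_[1:used // 2 + 1, n_fft - used // 2:n_fft]  # DC left empty
symbols = []
for _ in range(n_sym):
    X = np.zeros(n_fft, dtype=complex)
    X[carriers] = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, used)))
    x = np.fft.ifft(X)
    symbols.append(np.r_[x[-cp:], x])          # prepend cyclic prefix
sig = np.concatenate(symbols)

# windowed averaged periodogram, one segment per 80-sample CP-OFDM symbol
seg = sig.reshape(n_sym, n_fft + cp)
psd = np.mean(np.abs(np.fft.fft(seg * np.hanning(n_fft + cp), 512)) ** 2, axis=0)
in_band = np.r_[psd[:128], psd[-128:]]         # |f| < fs/4: occupied band
out_band = psd[192:320]                        # around fs/2: guard region
```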

Journal ArticleDOI
TL;DR: Three methods for selecting the CRs with the best detection performance based only on hard (binary) local decisions from theCRs are proposed and indicate that the proposed CR selection methods are able to offer significant gains in terms of system performance.
Abstract: In cooperative spectrum sensing, information from several cognitive radios (CRs) is used for detecting the primary user. To reduce sensing overhead and total energy consumption, it is recommended to cooperate only with the CRs that have the best detection performance. However, the problem is that it is not known a priori which of the CRs have the best detection performance. In this letter, we propose three methods for selecting the CRs with the best detection performance based only on hard (binary) local decisions from the CRs. Simulations are used to evaluate and compare the methods. The results indicate that the proposed CR selection methods are able to offer significant gains in terms of system performance.

Journal ArticleDOI
TL;DR: A novel method of refining the time-domain synthesis of individual source estimates from a single-channel mixture using a closed-loop architecture; considerable improvements are obtained relative to phase binary masking, given accurate source magnitude spectra.

Abstract: In this letter, we propose a novel method of refining the time-domain synthesis of individual source estimates from a single-channel mixture. Employing a closed-loop architecture, the algorithm refines the synthesis of each source by iteratively estimating the phase of the sources, given the estimates of the source magnitude spectra and a single-channel time-domain mixture. The performance of the algorithm is evaluated on harmonic musical mixtures, and considerable improvements to the synthesized estimates are obtained relative to phase binary masking, given accurate source magnitude spectra.

Journal ArticleDOI
TL;DR: New and improved energy detectors for cognitive radios are derived by considering the effect of primary user traffic on spectrum sensing; numerical results show that the new energy detectors outperform the conventional energy detector in all the cases examined.

Abstract: New and improved energy detectors for cognitive radios are derived by considering the effect of primary user traffic on spectrum sensing. The new energy detectors are designed based on the assumption that the primary user randomly arrives or departs during the sensing period. Numerical results show that the new energy detectors outperform the conventional energy detector in all the cases examined. The performance gain depends on the operating signal-to-noise ratio as well as the sample size used.
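
For context, the conventional energy detector that serves as the baseline can be sketched as follows for real-valued samples; the false-alarm target and sample size are illustrative, and the letter's improved detectors (which model random primary arrivals/departures during sensing) are not reproduced.

```python
import numpy as np
from math import sqrt
from statistics import NormalDist

def energy_detector(y, noise_var, pfa=0.05):
    """Conventional energy detector for real-valued samples: the test
    statistic T = mean(y^2) is approximately N(noise_var, 2*noise_var^2/N)
    under noise only, which fixes the threshold for a target false-alarm
    probability pfa."""
    N = len(y)
    T = float(np.mean(y ** 2))
    thr = noise_var * (1.0 + NormalDist().inv_cdf(1 - pfa) * sqrt(2.0 / N))
    return T > thr

rng = np.random.default_rng(0)
rx = rng.normal(scale=sqrt(2.0), size=2000)    # primary present at 0 dB SNR
detected = energy_detector(rx, noise_var=1.0)
```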

Journal ArticleDOI
TL;DR: Experimental results demonstrate the potential of compressed sensing in speech coding techniques, offering high perceptual quality with a very sparse approximated prediction residual.
Abstract: Encouraged by the promising application of compressed sensing in signal compression, we investigate its formulation and application in the context of speech coding based on sparse linear prediction. In particular, a compressed sensing method can be devised to compute a sparse approximation of speech in the residual domain when sparse linear prediction is involved. We compare the method of computing a sparse prediction residual with the optimal technique based on an exhaustive search of the possible nonzero locations and the well known Multi-Pulse Excitation, the first encoding technique to introduce the sparsity concept in speech coding. Experimental results demonstrate the potential of compressed sensing in speech coding techniques, offering high perceptual quality with a very sparse approximated prediction residual.

Journal ArticleDOI
TL;DR: Inspired by the efficiency of the cross-entropy (CE) method in finding near-optimal solutions in huge search spaces, the application of the CE method to searching for the optimal PRT set is proposed.

Abstract: This letter considers selection of the optimal peak reduction tone (PRT) set for the tone reservation (TR) scheme to reduce the peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) signal. In the TR scheme, the PAPR reduction achieved by a randomly generated PRT set is superior to that achieved by a consecutive PRT set or an interleaved tone set. However, finding the optimal PRT set requires an exhaustive search over all combinations of possible PRT sets, which is known to be NP-hard and cannot be carried out for practical numbers of tones. Inspired by the efficiency of the cross-entropy (CE) method in finding near-optimal solutions in huge search spaces, this letter proposes applying the CE method to search for the optimal PRT set. Computer simulation results show that the proposed CE method obtains near-optimal PRT sets and provides better PAPR performance.
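
The CE search skeleton for binary selection problems is compact; below it is demonstrated on a toy cost (Hamming distance to a hidden mask) standing in for the PAPR evaluation of a candidate PRT set, which is the score the letter actually uses. All parameter values are illustrative.

```python
import numpy as np

def ce_binary_search(cost, n, n_samples=100, elite_frac=0.1,
                     smooth=0.7, n_iter=30, seed=0):
    """Cross-entropy method over binary vectors: sample candidates from
    independent Bernoulli(p), keep the elite (lowest-cost) fraction, and
    move p toward the elite mean. The letter applies this machinery with
    the PAPR of the TR-optimized OFDM symbol as the cost."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iter):
        cand = (rng.random((n_samples, n)) < p).astype(int)
        order = np.argsort([cost(c) for c in cand])
        p = smooth * cand[order[:n_elite]].mean(axis=0) + (1 - smooth) * p
    return (p > 0.5).astype(int)

# toy stand-in cost: Hamming distance to a hidden 20-bit mask
target = (np.arange(20) % 3 == 0).astype(int)
best = ce_binary_search(lambda s: int(np.sum(s != target)), 20)
```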

Journal ArticleDOI
TL;DR: This letter presents a new filtering scheme, based on contrast enhancement within the filtering window, for removing random-valued impulse noise; extensive simulations demonstrate that the proposed method significantly outperforms many other well-known techniques.

Abstract: This letter presents a new filtering scheme based on contrast enhancement within the filtering window for removing random-valued impulse noise. The application of a nonlinear function to increase the difference between noise-free and noisy pixels results in efficient detection of noisy pixels. As the performance of a filtering system generally depends on the number of iterations used, an effective stopping criterion based on noisy image characteristics is also proposed to determine the number of iterations. Extensive simulation results show that the proposed method significantly outperforms many other well-known techniques.

Journal ArticleDOI
TL;DR: A new subspace learning method for face recognition, called uncorrelated discriminant nearest feature line analysis (UDNFLA), which uses the NFL metric to seek a feature subspace in which within-class feature line (FL) distances are minimized and between-class FL distances are maximized simultaneously.

Abstract: We propose in this letter a new subspace learning method, called uncorrelated discriminant nearest feature line analysis (UDNFLA), for face recognition. Motivated by the facts that the nearest feature line (NFL) can effectively characterize the geometrical information of face samples, and that uncorrelated features are desirable for many pattern analysis applications, we propose using the NFL metric to seek a feature subspace in which the within-class feature line (FL) distances are minimized and the between-class FL distances are maximized simultaneously, and we impose an uncorrelated constraint to make the extracted features statistically uncorrelated. Experimental results on two widely used face databases demonstrate the efficacy of the proposed method.