
Showing papers in "IEEE Signal Processing Letters in 2007"


Journal ArticleDOI
TL;DR: It is shown that by replacing the ℓ1 norm with the ℓp norm, exact reconstruction is possible with substantially fewer measurements, and a theorem in this direction is given.
Abstract: Several authors have shown recently that it is possible to reconstruct exactly a sparse signal from fewer linear measurements than would be expected from traditional sampling theory. The methods used involve computing the signal of minimum ℓ1 norm among those having the given measurements. We show that by replacing the ℓ1 norm with the ℓp norm with p < 1, exact reconstruction is possible with substantially fewer measurements. We give a theorem in this direction, and many numerical examples, both in one complex dimension, and larger-scale examples in two real dimensions.
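
A minimal iteratively-reweighted-least-squares (IRLS) sketch of ℓp-norm reconstruction, for readers who want to experiment with the idea; this illustrates the general approach, not necessarily the numerical method used in the letter, and the problem sizes and parameters below are arbitrary:

```python
import numpy as np

# IRLS-style sketch of lp-norm (p < 1) reconstruction from few linear
# measurements: repeatedly solve a weighted least-squares problem subject
# to A @ x = b, with weights derived from the current iterate.
rng = np.random.default_rng(0)
n, m, k, p = 128, 40, 8, 0.5              # signal length, measurements, sparsity, p

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

x = A.T @ np.linalg.solve(A @ A.T, b)     # start from the minimum l2-norm solution
eps = 1.0
for _ in range(100):
    # Minimize sum(x_i^2 / W_inv_i) subject to A x = b, with
    # W_inv_i = (x_i^2 + eps)^(1 - p/2), i.e. weights (x_i^2 + eps)^(p/2 - 1).
    W_inv = (x**2 + eps) ** (1.0 - p / 2.0)
    AW = A * W_inv                         # A @ diag(W_inv)
    x = W_inv * (A.T @ np.linalg.solve(AW @ A.T, b))
    eps = max(0.5 * eps, 1e-8)             # gradually sharpen the lp approximation

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```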

1,321 citations


Journal ArticleDOI
TL;DR: A new decision-based algorithm is proposed for the restoration of images that are highly corrupted by impulse noise; it removes the noise effectively even at noise levels as high as 90% and preserves the edges without any loss up to an 80% noise level.

Abstract: A new decision-based algorithm is proposed for the restoration of images that are highly corrupted by impulse noise. The new algorithm shows significantly better image quality than a standard median filter (SMF), adaptive median filters (AMF), a threshold decomposition filter (TDF), cascade, and recursive nonlinear filters. The proposed method, unlike other nonlinear filters, replaces only the corrupted pixel with the median value or with its neighboring pixel value. As a result, the proposed method removes the noise effectively even at noise levels as high as 90% and preserves the edges without any loss up to an 80% noise level. The proposed algorithm (PA) is tested on different images and is found to produce better results in terms of the qualitative and quantitative measures of the image.
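
A minimal sketch of the general decision-based idea, treating the extreme values 0/255 as corrupted salt-and-pepper pixels; the window size and fallback rule are simplifications and do not reproduce the letter's exact algorithm:

```python
import numpy as np

def decision_based_filter(img):
    """Sketch of decision-based impulse-noise filtering: only pixels at the
    extreme values 0/255 are treated as corrupted and replaced, either by the
    median of uncorrupted neighbors or, failing that, by an already-processed
    neighbor; all other pixels are left untouched."""
    out = img.astype(np.int32).copy()
    h, w = out.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] not in (0, 255):          # uncorrupted: keep as is
                continue
            window = img[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
            good = window[(window != 0) & (window != 255)]
            if good.size:
                out[i, j] = int(np.median(good))
            else:
                out[i, j] = out[i, max(j - 1, 0)]  # fall back to the left neighbor
    return out.astype(np.uint8)
```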

679 citations


Journal ArticleDOI
TL;DR: It is shown that the maximum number of targets that can be uniquely identified by the MIMO radar is up to Mt times that of its phased-array counterpart, where Mt is the number of transmit antennas.
Abstract: A multi-input multi-output (MIMO) radar system, unlike a standard phased-array radar, can transmit multiple linearly independent probing signals via its antennas. We show herein that this waveform diversity enables the MIMO radar to significantly improve its parameter identifiability. Specifically, we show that the maximum number of targets that can be uniquely identified by the MIMO radar is up to Mt times that of its phased-array counterpart, where Mt is the number of transmit antennas.

589 citations


Journal ArticleDOI
TL;DR: The empirical mode decomposition is extended to bivariate time series, generalizing the rationale underlying the EMD to the bivariate framework; where the EMD extracts zero-mean oscillating components, the extension is designed to extract zero-mean rotating components.
Abstract: The empirical mode decomposition (EMD) has been introduced quite recently to adaptively decompose nonstationary and/or nonlinear time series. The method being initially limited to real-valued time series, we propose here an extension to bivariate (or complex-valued) time series that generalizes the rationale underlying the EMD to the bivariate framework. Where the EMD extracts zero-mean oscillating components, the proposed bivariate extension is designed to extract zero-mean rotating components. The method is illustrated on a real-world signal, and properties of the output components are discussed. Free Matlab/C codes are available at http://perso.ens-lyon.fr/patrick.flandrin.

504 citations


Journal ArticleDOI
TL;DR: Extensive simulations show that the proposed filter not only provides better performance in suppressing impulse noise at high noise levels but also preserves more detail features, even thin lines.

Abstract: The known median-based denoising methods tend to work well for restoring images corrupted by random-valued impulse noise at low noise levels but poorly for highly corrupted images. This letter proposes a new impulse detector, which is based on the differences between the current pixel and its neighbors aligned with four main directions. Then, we combine it with the weighted median filter to get a new directional weighted median (DWM) filter. Extensive simulations show that the proposed filter not only provides better performance in suppressing impulse noise at high noise levels but also preserves more detail features, even thin lines. When extended to restoring corrupted color images, this filter also performs very well.
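
A simplified sketch of the direction-based detection idea (absolute differences accumulated along the four main directions, with the minimum over directions compared against a threshold); the 5x5 geometry, weights, iteration, and weighted median of the actual DWM filter are not reproduced, and the threshold below is an arbitrary placeholder:

```python
import numpy as np

# Neighbor offsets for the four main directions: horizontal, vertical,
# main diagonal, anti-diagonal.
DIRS = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
        ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]

def detect_impulses(img, threshold=40):
    """Flag a pixel as an impulse if, along every direction, it differs
    strongly from its neighbors (the minimum directional difference
    exceeds the threshold)."""
    img = img.astype(np.int32)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    flags = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            c = pad[i + 1, j + 1]
            d_min = min(sum(abs(int(pad[i + 1 + di, j + 1 + dj]) - c)
                            for di, dj in pair)
                        for pair in DIRS)
            flags[i, j] = d_min > threshold
    return flags

def directional_filter(img, threshold=40):
    """Replace only the flagged pixels; a plain 3x3 median is used here as a
    stand-in for the directional weighted median of the letter."""
    flags = detect_impulses(img, threshold)
    pad = np.pad(img, 1, mode="edge")
    out = img.copy()
    for i, j in zip(*np.nonzero(flags)):
        out[i, j] = np.median(pad[i:i + 3, j:j + 3])
    return out
```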

460 citations


Journal ArticleDOI
TL;DR: The embedded information bit-rates of the proposed spatial domain reversible watermarking scheme are close to the highest bit-rates reported so far, and the scheme appears to be the lowest-complexity one proposed up to now.

Abstract: Reversible contrast mapping (RCM) is a simple integer transform that applies to pairs of pixels. For some pairs of pixels, RCM is invertible, even if the least significant bits (LSBs) of the transformed pixels are lost. The data space occupied by the LSBs is suitable for data hiding. The embedded information bit-rates of the proposed spatial domain reversible watermarking scheme are close to the highest bit-rates reported so far. The scheme does not need additional data compression and, in terms of mathematical complexity, appears to be the lowest-complexity one proposed up to now. A very fast lookup table implementation is proposed. Robustness against cropping can be ensured as well.

321 citations


Journal ArticleDOI
TL;DR: A method for the empirical mode decomposition (EMD) of complex-valued data is proposed, based on the filter bank interpretation of the EMD mapping and on the relationship between the positive and negative frequency components of the Fourier spectrum.

Abstract: A method for the empirical mode decomposition (EMD) of complex-valued data is proposed. This is achieved based on the filter bank interpretation of the EMD mapping and by making use of the relationship between the positive and negative frequency components of the Fourier spectrum. The so-generated intrinsic mode functions (IMFs) are complex-valued, which facilitates the extension of the standard EMD to the complex domain. The analysis is supported by simulations on both synthetic and real-world complex-valued signals.

267 citations


Journal ArticleDOI
TL;DR: Simulations for an interference suppression application show that the proposed scheme outperforms state-of-the-art reduced-rank schemes in convergence and tracking at significantly lower complexity.

Abstract: This letter proposes a novel adaptive reduced-rank filtering scheme based on joint iterative optimization of adaptive filters. The novel scheme consists of a joint iterative optimization of a bank of full-rank adaptive filters that forms the projection matrix and an adaptive reduced-rank filter that operates at the output of the bank of filters. We describe minimum mean-squared error (MMSE) expressions for the design of the projection matrix and the reduced-rank filter and low-complexity normalized least-mean squares (NLMS) adaptive algorithms for its efficient implementation. Simulations for an interference suppression application show that the proposed scheme outperforms state-of-the-art reduced-rank schemes in convergence and tracking at significantly lower complexity.

232 citations


Journal ArticleDOI
TL;DR: It is shown that the harmonic mean of two exponential random variables can be approximated, at high signal-to-noise ratio (SNR), by an exponential random variable.

Abstract: In this letter, a novel approach for outage probability analysis of the multinode amplify-and-forward relay network is provided. It is shown that the harmonic mean of two exponential random variables can be approximated, at high signal-to-noise ratio (SNR), by an exponential random variable. The single-relay case considered before is a special case of our analysis. Based on that approximation, an outage probability bound is derived, which proves to be tight at high SNR. Based on the derived outage probability bound, optimal power allocation is studied. Simulation results show a performance improvement, in terms of symbol error rate, of the optimal power allocation compared to the equal power allocation scheme.
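
A quick Monte Carlo sanity check of the high-SNR approximation described above, using the product-over-sum form X1X2/(X1+X2) that arises as the end-to-end SNR of a two-hop amplify-and-forward link (this differs from the textbook harmonic mean by a factor of two, and the rates below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_snr = 1e3                          # "high SNR": large mean per-hop gains
lam1, lam2 = 1.0 / mean_snr, 2.0 / mean_snr
n = 200_000

x1 = rng.exponential(1.0 / lam1, n)     # per-hop exponential gains
x2 = rng.exponential(1.0 / lam2, n)
h = x1 * x2 / (x1 + x2)                 # two-hop equivalent gain

# Compare the empirical CDF with that of an exponential of rate lam1 + lam2
# in the low-outage region, where the approximation is claimed to be tight.
for z in (10.0, 50.0, 100.0):
    empirical = np.mean(h < z)
    approx = 1.0 - np.exp(-(lam1 + lam2) * z)
    print(f"z = {z:6.1f}   empirical = {empirical:.5f}   exp. approx = {approx:.5f}")
```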

182 citations


Journal ArticleDOI
TL;DR: In this letter, Z-complementary sequences are introduced, and it is shown that, unlike normal complementary pairs of binary sequences, which exist only for very limited lengths, a Z-complementary pair of binary sequences exists for many more lengths.

Abstract: In this letter, Z-complementary sequences are introduced. These sequences include the conventional complementary sequences as special cases. It is shown that, unlike normal complementary pairs of binary sequences, which exist only for very limited lengths, i.e., 2^a 10^b 26^c for a, b, c ≥ 0, a Z-complementary pair of binary sequences exists for many more lengths. In addition, for a Z-complementary set of P binary sequences, each of length N, with zero correlation zone Z, the maximum number of distinct Z-complementary mates is smaller than or equal to P[N/Z].
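
A small sketch of the defining property as it is usually stated: a pair is Z-complementary when the summed aperiodic autocorrelations vanish for all shifts inside the zone, and a conventional (Golay) complementary pair is the special case Z = N. The helper below uses a known length-8 Golay pair as the example; the letter's formal definitions may differ in detail:

```python
import numpy as np

def aperiodic_acf(seq):
    """Aperiodic autocorrelation A(tau) = sum_i seq[i] * seq[i + tau]."""
    seq = np.asarray(seq)
    n = len(seq)
    return np.array([np.dot(seq[:n - t], seq[t:]) for t in range(n)])

def zero_correlation_zone(a, b):
    """Largest Z such that the summed aperiodic autocorrelations of the pair
    (a, b) vanish for every shift 0 < tau < Z."""
    s = aperiodic_acf(a) + aperiodic_acf(b)
    z = 1
    while z < len(a) and s[z] == 0:
        z += 1
    return z, s

# A length-8 Golay complementary pair: the summed autocorrelation is zero at
# every nonzero shift, so the zone covers the whole length (Z = N = 8).
a = [1, 1, 1, -1, 1, 1, -1, 1]
b = [1, 1, 1, -1, -1, -1, 1, -1]
z, s = zero_correlation_zone(a, b)
print("summed ACF:", s)                  # [16, 0, 0, 0, 0, 0, 0, 0]
print("zero correlation zone Z =", z)    # 8
```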

179 citations


Journal ArticleDOI
TL;DR: This letter proposes a simple orthogonal frequency-division multiplexing (OFDM) scheme for an asynchronous cooperative system, where OFDM is implemented at the source node, and time-reversion and complex conjugation are implemented at the relay nodes.

Abstract: In this letter, we propose a simple orthogonal frequency-division multiplexing (OFDM) scheme for an asynchronous cooperative system, where OFDM is implemented at the source node, and time-reversion and complex conjugation are implemented at the relay nodes. The cyclic prefix (CP) at the source node is used for combating the timing errors from the relay nodes. In this scheme, the received signals at the destination node have the Alamouti code structure on each subcarrier, and thus fast symbol-wise ML decoding is possible. It should be emphasized that the relay nodes only need to implement the time-reversion, some sign changes from plus to minus, and/or the complex conjugation of the received signals, and no IDFT or DFT operation is needed. It is shown that this simple scheme achieves second-order diversity gain without the synchronization requirement at the relay nodes.

Journal ArticleDOI
TL;DR: A near-field source localization algorithm with one-dimensional (1-D) search via symmetric subarrays is proposed; it transforms the two-dimensional search involved in the parameter estimation into a 1-D search and does not require high-order statistics computation, in contrast with the traditional near-field high-order ESPRIT algorithm.

Abstract: We propose a near-field source localization algorithm with one-dimensional (1-D) search via symmetric subarrays. By dividing the uniform linear array (ULA) into two symmetric subarrays, the steering vectors of the subarrays yield the 1-D (only bearing-related) property of rotational invariance in signal subspace, which allows for the bearing estimation using the generalized far-field ESPRIT. With the estimated bearing, the range estimate of each source is consequently obtained by defining the 1-D MUSIC spectrum. This algorithm transforms the two-dimensional search involved in the parameter estimation into a 1-D search, and it does not require high-order statistics computation, in contrast with the traditional near-field high-order ESPRIT algorithm.

Journal ArticleDOI
TL;DR: The concept of position-based synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation and the root mean square error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques.
Abstract: In this letter, we obtain the maximum likelihood estimator of position in the framework of global navigation satellite systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, consisting of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach that copes with signal fading, and it mitigates multipath and jamming interferences. Besides, the concept of position-based synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The root mean square error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A sequential Monte Carlo-based method is used to deal with the multivariate optimization problem resulting from the maximum likelihood solution in an iterative way.

Journal ArticleDOI
TL;DR: This paper derives analytic expressions for the minimum mean-square error (MMSE) in the STFT domain and shows that the system identification performance does not necessarily improve by increasing the length of the analysis window.
Abstract: The multiplicative transfer function (MTF) approximation is widely used for modeling a linear time invariant system in the short-time Fourier transform (STFT) domain. It relies on the assumption of a long analysis window compared with the length of the system impulse response. In this paper, we investigate the influence of the analysis window length on the performance of a system identifier that utilizes the MTF approximation. We derive analytic expressions for the minimum mean-square error (MMSE) in the STFT domain and show that the system identification performance does not necessarily improve by increasing the length of the analysis window. The optimal window length, which achieves the MMSE, depends on the signal-to-noise ratio and the length of the input signal. The theoretical analysis is supported by simulation results.

Journal ArticleDOI
Sarp Erturk
TL;DR: In this article, a multiplication-free one-bit transform (1BT) for low-complexity block-based motion estimation is presented, which can be implemented in integer arithmetic using addition and shifts only, reducing the computational complexity, processing time, and power consumption.
Abstract: A multiplication-free one-bit transform (1BT) for low-complexity block-based motion estimation is presented in this letter. A novel filter kernel is utilized to construct the 1BT of image frames using addition and shift operations only. It is shown that the proposed approach provides the same motion estimation accuracy at the macroblock level and even better accuracy for smaller block sizes compared to previously proposed 1BT methods. Because the proposed 1BT approach does not require multiplication operations, it can be implemented in integer arithmetic using additions and shifts only, reducing the computational complexity, processing time, and power consumption.
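
A rough sketch of 1BT-based motion estimation in general: each frame is binarized by comparing pixels with a locally filtered version, and block matching then counts bit mismatches instead of computing a sum of absolute differences. The simple power-of-two averaging kernel below is only a stand-in for the letter's multiplication-free filter kernel, which is not reproduced here:

```python
import numpy as np
from scipy.signal import convolve2d

def one_bit_transform(frame):
    """Binarize a frame by comparing each pixel with a local mean obtained with
    a 1/16-weighted kernel (power-of-two weights, so a fixed-point version
    needs only additions and shifts)."""
    kernel = np.ones((4, 4)) / 16.0
    filtered = convolve2d(frame, kernel, mode="same", boundary="symm")
    return (frame > filtered).astype(np.uint8)

def block_motion_1bt(bt_cur, bt_ref, block=16, search=8):
    """Full-search block matching on the 1-bit planes: the matching cost is the
    number of non-matching bits (an XOR count) rather than the SAD."""
    h, w = bt_cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur = bt_cur[by:by + block, bx:bx + block]
            best, best_cost = (0, 0), block * block + 1
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cost = np.count_nonzero(cur ^ bt_ref[y:y + block, x:x + block])
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
            vectors[(by, bx)] = best
    return vectors
```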

Journal ArticleDOI
TL;DR: A robust phase unwrapping algorithm is proposed with applications in radar signal processing and a type of robust CRT is derived from this algorithm.
Abstract: In the conventional Chinese remainder theorem (CRT), a small error in a remainder may cause a large error in the solution of an integer, i.e., CRT is not robust. In this letter, we first propose a robust phase unwrapping algorithm with applications in radar signal processing. Motivated by the phase unwrapping algorithm, we then derive a type of robust CRT.
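
For context, a plain CRT reconstruction and a tiny demonstration of the non-robustness the letter addresses: a unit error in a single remainder shifts the reconstructed integer by a large amount. The moduli and the test integer are arbitrary, and the robust variant itself is not reproduced here:

```python
from math import prod

def crt(remainders, moduli):
    """Standard Chinese remainder theorem reconstruction for pairwise
    co-prime moduli (Python 3.8+ for the three-argument pow)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(remainders, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # pow(Mi, -1, m) = modular inverse of Mi mod m
    return x % M

moduli = [9, 10, 11, 13]                 # pairwise co-prime, product 12870
n_true = 8731
remainders = [n_true % m for m in moduli]
print(crt(remainders, moduli))           # 8731: exact recovery from exact remainders

# Non-robustness of the conventional CRT: one remainder off by 1 yields a
# wildly different integer, which is what motivates a robust formulation.
noisy = list(remainders)
noisy[0] = (noisy[0] + 1) % moduli[0]
print(crt(noisy, moduli))
```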

Journal ArticleDOI
TL;DR: This work proposes a simple but very flexible method for solving a generalized TV functional that includes both the ℓ2-TV and ℓ1-TV problems as special cases and is comparable to or faster than any other ℓ1-TV algorithms of which the authors are aware.

Abstract: Total variation (TV) regularization has become a popular method for a wide variety of image restoration problems, including denoising and deconvolution. A number of authors have recently noted the advantages of replacing the standard ℓ2 data fidelity term with an ℓ1 norm. We propose a simple but very flexible method for solving a generalized TV functional that includes both the ℓ2-TV and ℓ1-TV problems as special cases. This method offers competitive computational performance for ℓ2-TV and is comparable to or faster than any other ℓ1-TV algorithms of which we are aware.
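
To make the generalized functional concrete, here is a crude smoothed gradient-descent sketch of TV restoration with an ℓq data term (q = 1 or 2). It only illustrates the objective being discussed; the authors' method is different and far more efficient, and the smoothing constant, step size, and regularization weight below are arbitrary:

```python
import numpy as np

def tv_denoise_lq(f, lam=0.8, q=1, iters=600, step=0.02, eps=1e-2):
    """Gradient descent on a smoothed version of
        (1/q) * sum |u - f|^q  +  lam * sum sqrt(|grad u|^2 + eps),
    with q = 1 (impulse-like noise) or q = 2 (Gaussian-like noise)."""
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences with replicated last row/column
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences);
        # the smoothed TV gradient is minus this divergence
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        if q == 2:
            data_grad = u - f
        else:                              # smoothed l1 data term
            data_grad = (u - f) / np.sqrt((u - f) ** 2 + eps)
        u -= step * (data_grad - lam * div)
    return u

# Toy example: a piecewise-constant image with impulse-like corruption,
# the situation where the l1 data term (q = 1) is usually preferred.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
noisy = img.copy()
mask = rng.random(img.shape) < 0.1
noisy[mask] = rng.random(mask.sum())
restored = tv_denoise_lq(noisy, q=1)
print("MSE noisy:   ", np.mean((noisy - img) ** 2))
print("MSE restored:", np.mean((restored - img) ** 2))
```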

Journal ArticleDOI
TL;DR: A novel filter based on mathematical morphology for high probability impulse noise removal is presented, which outperforms a number of existing algorithms and is particularly effective for the very highly corrupted images.
Abstract: A novel filter based on mathematical morphology for high probability impulse noise removal is presented. First, an impulse noise detector using mathematical residues is proposed to identify pixels that are contaminated by salt or pepper noise. Then the image is restored using specialized open-close sequence algorithms that are applied only to the noisy pixels. Finally, black and white blocks that degrade the quality of the image are recovered by a block smart erase method. Experimental results demonstrate that the proposed filter outperforms a number of existing algorithms and is particularly effective for very highly corrupted images.
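
A rough sketch of the morphological idea using SciPy's grey-scale opening and closing: residues against the opening/closing flag bright (salt) and dark (pepper) impulses, and open-close / close-open results supply replacement values only at the flagged pixels. The structuring element, threshold, and the block smart erase stage of the letter are not reproduced:

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def morphological_impulse_filter(img, thresh=50, size=(3, 3)):
    """Detect impulses by morphological residues, then restore only the
    flagged pixels with open-close (for salt) or close-open (for pepper)."""
    img = img.astype(np.int32)
    opened = grey_opening(img, size=size)
    closed = grey_closing(img, size=size)
    salt = (img - opened) > thresh       # bright outliers stick out above the opening
    pepper = (closed - img) > thresh     # dark outliers fall below the closing
    out = img.copy()
    out[salt] = grey_closing(opened, size=size)[salt]
    out[pepper] = grey_opening(closed, size=size)[pepper]
    return out.astype(np.uint8)
```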

Journal ArticleDOI
TL;DR: The accuracy in determining the instants of significant excitation and the time complexity of the proposed method are compared with those of the group delay based approach.

Abstract: This letter proposes a time-effective method for determining the instants of significant excitation in speech signals. The instants of significant excitation correspond to the instants of glottal closure (epochs) in the case of voiced speech, and to some random excitations like onset of burst in the case of nonvoiced speech. The proposed method consists of two phases: the first phase determines the approximate epoch locations using the Hilbert envelope of the linear prediction residual of the speech signal. The second phase determines the accurate locations of the instants of significant excitation by computing the group delay around the approximate epoch locations derived from the first phase. The accuracy in determining the instants of significant excitation and the time complexity of the proposed method are compared with those of the group delay based approach.

Journal ArticleDOI
TL;DR: In this letter, a new threshold algorithm based on wavelet analysis is applied to smooth noise in a nonlinear time series, with thresholds updated to the characteristics of the noisy nonlinear signal.

Abstract: In this letter, a new threshold algorithm based on wavelet analysis is applied to smooth noise in a nonlinear time series. The signal is decomposed onto different scales, and the details are smoothed using thresholds updated to the characteristics of the noisy nonlinear signal. This method is an improvement of Donoho's wavelet methods for nonlinear signals. The approach has been successfully applied to smoothing the noisy chaotic time series generated by the Lorenz system as well as the observed annual runoff of the Yellow River. For the nonlinear dynamical system, an attempt is made to analyze the noise-reduced data by using multiresolution analysis, i.e., the false nearest neighbors, correlation integral, and autocorrelation function, to verify the proposed noise smoothing algorithm.
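
A baseline wavelet-shrinkage sketch with PyWavelets, using a standard Donoho-style per-level soft threshold as a stand-in for the letter's updated, signal-adapted thresholds; the logistic map below is only a convenient chaotic-looking test signal:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(x, wavelet="db4", level=4):
    """Decompose, soft-threshold the detail coefficients level by level with a
    universal threshold, and reconstruct; the noise scale is estimated from
    the finest details via the median absolute deviation."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    sigma = np.median(np.abs(details[-1])) / 0.6745
    shrunk = [approx]
    for d in details:
        thr = sigma * np.sqrt(2.0 * np.log(len(d)))
        shrunk.append(pywt.threshold(d, thr, mode="soft"))
    return pywt.waverec(shrunk, wavelet)[: len(x)]

# Noisy chaotic-looking test series (logistic map as a stand-in). A fixed
# universal threshold may over-smooth chaotic detail, which is the kind of
# behavior that motivates adapting the thresholds to the signal.
x = np.empty(2048)
x[0] = 0.4
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
noisy = x + 0.05 * np.random.default_rng(0).standard_normal(len(x))
smoothed = wavelet_denoise(noisy)
print("error std before:", np.std(noisy - x), "after:", np.std(smoothed - x))
```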

Journal ArticleDOI
TL;DR: It is proved that, with respect to a natural definition of secure capacity and in a suitably asymptotic sense, the conjecture that "secure" steganographic capacity is proportional only to the square root of the number of covers is true.

Abstract: The problems of batch steganography and pooled steganalysis, proposed previously, generalize the problems of hiding and detecting hidden data to multiple covers. It was conjectured that, given covers of uniform capacity and a quantitative steganalysis method satisfying certain assumptions, "secure" steganographic capacity is proportional only to the square root of the number of covers. We now prove that, with respect to a natural definition of secure capacity, and in a suitably asymptotic sense, this conjecture is true. This is in sharp contrast to capacity results for noisy channels.

Journal ArticleDOI
TL;DR: This letter presents a method of two-dimensional canonical correlation analysis (2D-CCA) where the standard CCA is extended in such a way that relations between two different sets of image data are directly sought without reshaping images into vectors.
Abstract: In this letter, we present a method of two-dimensional canonical correlation analysis (2D-CCA) where we extend the standard CCA in such a way that relations between two different sets of image data are directly sought without reshaping images into vectors. We stress that 2D-CCA dramatically reduces the computational complexity, compared to the standard CCA. We show the useful behavior of 2D-CCA through numerical examples of correspondence learning between face images in different poses and illumination conditions.

Journal ArticleDOI
TL;DR: A novel statistical scheme of fragile watermarking is proposed, in which a set of tailor-made authentication data for each pixel together with some additional test data are embedded into the host image.
Abstract: Capability of accurately locating tampered pixels is desirable in image authentication. We propose a novel statistical scheme of fragile watermarking, in which a set of tailor-made authentication data for each pixel together with some additional test data are embedded into the host image. On the authentication side, examining the pixels and their corresponding authentication data will reveal the exact pattern of the content modification. As long as the tampered area is not too extensive, two distinct probability distributions corresponding to tampered and original pixels can be used to exactly identify the tampered pixels.

Journal ArticleDOI
TL;DR: A new approach for the EMD based on the direct construction of the mean envelope of the signal is presented, achieved through the resolution of a quadratic programming problem with equality and inequality constraints.
Abstract: The empirical mode decomposition (EMD) is an algorithmic construction that aims at decomposing a signal into several modes called intrinsic mode functions. In this letter, we present a new approach for the EMD based on the direct construction of the mean envelope of the signal. The definition of the mean envelope is achieved through the resolution of a quadratic programming problem with equality and inequality constraints. Some numerical experiments conclude this letter, and comparisons are carried out with the classical EMD.

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the FMFED algorithm can extract thin edges and remove false edges from the image, giving it better performance than the Sobel operator, the Canny operator, the traditional fuzzy edge detection algorithm, and other multilevel fuzzy edge detection algorithms.

Abstract: To realize fast and accurate detection of edges in blurry images, the fast multilevel fuzzy edge detection (FMFED) algorithm is proposed. The FMFED algorithm first enhances the image contrast by means of the fast multilevel fuzzy enhancement (FMFE) algorithm, which uses a simple transformation function based on two image thresholds. Second, the edges are extracted from the enhanced image by a two-stage edge detection operator that identifies the edge candidates based on the local characteristics of the image and then determines the true edge pixels using an edge detection operator based on the extremum of the gradient values. Experimental results demonstrate that the FMFED algorithm can extract thin edges and remove false edges from the image, giving it better performance than the Sobel operator, the Canny operator, the traditional fuzzy edge detection algorithm, and other multilevel fuzzy edge detection algorithms.

Journal ArticleDOI
TL;DR: A known-plaintext attack is presented to show that the MHTs used for encryption should be carefully selected to avoid the weak-keys problem, and two empirical criteria for Huffman table selection are suggested, based on which the stream cipher integrated scheme can be simplified.

Abstract: This letter addresses the security issues of multimedia encryption schemes using multiple Huffman tables (MHT). A known-plaintext attack is presented to show that the MHTs used for encryption should be carefully selected to avoid the weak-keys problem. We then propose chosen-plaintext attacks on the basic MHT algorithm as well as the enhanced scheme with random bit insertion. In addition, we suggest two empirical criteria for Huffman table selection, based on which we can simplify the stream cipher integrated scheme while ensuring a high level of security.

Journal ArticleDOI
TL;DR: The experimental results on the AR face database demonstrate the effectiveness of the KLD-based LGBP face recognition method for partially occluded face images.
Abstract: Partial occlusion is one of the key issues in the face recognition community. To resolve the problem of partial occlusion, based on our previous work on local Gabor binary patterns (LGBP) for face recognition, we further propose Kullback-Leibler divergence (KLD)-based LGBP for partially occluded face recognition. The method makes thorough use of the local property of LGBP face recognition by introducing the KLD between the LGBP feature of a local region and that of the non-occluded local region to estimate the probability of occlusion. This probability is used as the weight of the local region in the final feature matching. The experimental results on the AR face database demonstrate the effectiveness of the KLD-based LGBP face recognition method for partially occluded face images.

Journal ArticleDOI
TL;DR: This letter describes a speaker verification system that uses complementary acoustic features derived from the vocal source excitation and the vocal tract system, and a new feature set, named the wavelet octave coefficients of residues (WOCOR), to capture the spectro-temporal sourceexcitation characteristics embedded in the linear predictive residual signal.
Abstract: This letter describes a speaker verification system that uses complementary acoustic features derived from the vocal source excitation and the vocal tract system. A new feature set, named the wavelet octave coefficients of residues (WOCOR), is proposed to capture the spectro-temporal source excitation characteristics embedded in the linear predictive residual signal. WOCOR is used to supplement the conventional vocal tract-related features, in this case the Mel-frequency cepstral coefficients (MFCC), for speaker verification. A novel confidence measure-based score fusion technique is applied to integrate WOCOR and MFCC. Speaker verification experiments are carried out on the NIST 2001 database. The equal error rate (EER) attained with the proposed method is 7.67%, compared with 9.30% for the conventional MFCC-based system.

Journal ArticleDOI
TL;DR: The new joint entropy-based TDE algorithm shows the potential to outperform the MCCC-based method, which was itself found to be more robust to noise and reverberation than GCC-based algorithms.

Abstract: Time delay estimation (TDE) is a basic technique for numerous applications where there is a need to localize and track a radiating source. The most important TDE algorithms for two sensors are based on the generalized cross-correlation (GCC) method. These algorithms perform reasonably well when reverberation or noise is not too high. In an earlier study by the authors, a more sophisticated approach was proposed. It employs more sensors and takes advantage of their delay redundancy to improve the precision of the time difference of arrival (TDOA) estimate between the first two sensors. The approach is based on the multichannel cross-correlation coefficient (MCCC) and was found to be more robust to noise and reverberation. In this letter, we show that this approach can also be developed on the basis of joint entropy. For Gaussian signals, we show that, in the search for the TDOA estimate, maximizing the MCCC is equivalent to minimizing the joint entropy. However, with the generalization of the idea to non-Gaussian signals (e.g., speech), the new joint entropy-based TDE algorithm shows the potential to outperform the MCCC-based method.
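
A numerical illustration of the Gaussian-case statement: across candidate delays, maximizing the MCCC and minimizing the Gaussian joint entropy select the same delay, since both are monotone in the determinant of the (normalized) covariance matrix of the aligned channels. Free-field delays on an equally spaced array and white noise are assumed; reverberation is not simulated:

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_delay, mics = 4000, 7, 3
src = rng.standard_normal(n + 200)

def mic_signal(m):
    """Mic m observes the source delayed by m * true_delay samples plus noise."""
    start = 60 + m * true_delay
    return src[start:start + n] + 0.3 * rng.standard_normal(n)

x = [mic_signal(m) for m in range(mics)]
cut = 60                                  # trim edges to hide np.roll wrap-around

def aligned_cov(d):
    frames = [np.roll(x[m], m * d)[cut:n - cut] for m in range(mics)]
    return np.cov(np.vstack(frames))

results = []
for d in range(-15, 16):
    R = aligned_cov(d)
    Rn = R / np.sqrt(np.outer(np.diag(R), np.diag(R)))     # correlation matrix
    mccc_sq = 1.0 - np.linalg.det(Rn)                       # squared MCCC
    entropy = 0.5 * np.log(((2 * np.pi * np.e) ** mics) * np.linalg.det(R))
    results.append((d, mccc_sq, entropy))

print("true delay:", true_delay)
print("argmax MCCC:   ", max(results, key=lambda r: r[1])[0])
print("argmin entropy:", min(results, key=lambda r: r[2])[0])
```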

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method can yield more comprehensible images for color-deficient viewers while maintaining the naturalness of the recolored images for standard viewers.
Abstract: In this letter, we propose a new recoloring method for people with protanopic and deuteranopic color deficiencies. We present a color transformation that aims to preserve the color information in the original images while keeping the recolored images as natural as possible. Two error functions are introduced and combined to form an objective function using the Lagrange multiplier with a user-specified parameter lambda. This objective function is then minimized to obtain the optimal settings. Experimental results show that the proposed method can yield more comprehensible images for color-deficient viewers while maintaining the naturalness of the recolored images for standard viewers.