
Showing papers in "IEEE Transactions on Signal Processing in 2009"


Journal ArticleDOI
TL;DR: This work proposes iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian plus the original sparsity-inducing regularizer, and proves convergence of the proposed iterative algorithm to a minimum of the objective function.
Abstract: Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard ℓ2-ℓ1 case, our framework yields efficient solution techniques for other regularizers, such as an ℓ∞ norm and group-separable regularizers. It also generalizes immediately to the case in which the data are complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2-ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
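A minimal sketch of the separable-subproblem idea for the standard ℓ2-ℓ1 case: with a diagonal Hessian αI, each subproblem reduces to element-wise soft-thresholding (plain iterative shrinkage/thresholding). Function names, the step-size rule, and the iteration count are illustrative choices, not the paper's.

```python
import numpy as np

def soft_threshold(v, t):
    # Closed-form minimizer of 0.5*(x - v)**2 + t*|x|, applied element-wise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_l2_l1(A, y, lam, n_iter=200):
    # Minimize 0.5*||y - A x||^2 + lam*||x||_1 via diagonal-Hessian subproblems.
    x = np.zeros(A.shape[1])
    alpha = np.linalg.norm(A, 2) ** 2      # any alpha >= ||A||_2^2 keeps the surrogate valid
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth quadratic term
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x
```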

1,723 citations


Journal ArticleDOI
TL;DR: This paper describes a heuristic, based on convex optimization, that gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements.
Abstract: We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m³ operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
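A sketch of this kind of convex-relaxation heuristic under stated assumptions: relax the Boolean selection variables to [0, 1], maximize the log-determinant of the resulting information matrix (a D-optimal-style criterion), then round. The relaxed optimum also upper-bounds what any k-subset can achieve. Requires cvxpy; all names are illustrative.

```python
import cvxpy as cp
import numpy as np

def select_sensors(A, k):
    # A: (m, n) array whose rows are the candidate measurement vectors a_i.
    m, _ = A.shape
    z = cp.Variable(m)
    info = A.T @ cp.diag(z) @ A              # sum_i z_i a_i a_i^T
    prob = cp.Problem(cp.Maximize(cp.log_det(info)),
                      [z >= 0, z <= 1, cp.sum(z) == k])
    bound = prob.solve()                     # upper bound on any k-subset's performance
    chosen = np.sort(np.argsort(z.value)[-k:])   # simple rounding: keep the k largest z_i
    return chosen, bound
```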

1,251 citations


Journal ArticleDOI
TL;DR: A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid and the techniques of compressed sensing are employed to reconstruct the target scene.
Abstract: A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), we can transmit a sufficiently "incoherent" pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution than classical radar.

1,113 citations


Journal ArticleDOI
TL;DR: A fast algorithm for overcomplete sparse decomposition, called SL0, is proposed, which tries to directly minimize the ℓ0 norm.
Abstract: In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the ℓ1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the ℓ0 norm via a smoothed approximation. It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.
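A compact sketch of the smoothed-ℓ0 idea behind SL0: replace the ℓ0 count by a smooth Gaussian surrogate, take a few gradient steps on it, project back onto the constraint set {s : A s = x}, and anneal sigma downward. The schedule and step size below are illustrative defaults.

```python
import numpy as np

def sl0(A, x, sigma_decay=0.5, n_sigma=8, n_inner=3, mu=2.0):
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                          # minimum-l2-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(s))
    for _ in range(n_sigma):
        for _ in range(n_inner):
            grad = s * np.exp(-s**2 / (2 * sigma**2))   # gradient of the smooth surrogate
            s = s - mu * grad                           # move toward a sparser s
            s = s - A_pinv @ (A @ s - x)                # project back onto A s = x
        sigma *= sigma_decay                            # sharpen the l0 approximation
    return s
```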

1,033 citations


Journal ArticleDOI
TL;DR: This paper describes how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples, and develops a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery.
Abstract: We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.

769 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed spectrum sensing schemes can considerably improve system performance, and useful principles for the design of distributed wideband spectrum sensing algorithms in cognitive radio networks are established.
Abstract: Spectrum sensing is an essential functionality that enables cognitive radios to detect spectral holes and to opportunistically use under-utilized frequency bands without causing harmful interference to legacy (primary) networks. In this paper, a novel wideband spectrum sensing technique referred to as multiband joint detection is introduced, which jointly detects the primary signals over multiple frequency bands rather than over one band at a time. Specifically, the spectrum sensing problem is formulated as a class of optimization problems, which maximize the aggregated opportunistic throughput of a cognitive radio system under some constraints on the interference to the primary users. By exploiting the hidden convexity in the seemingly nonconvex problems, optimal solutions can be obtained for multiband joint detection under practical conditions. The situation in which individual cognitive radios might not be able to reliably detect weak primary signals due to channel fading/shadowing is also considered. To address this issue by exploiting the spatial diversity, a cooperative wideband spectrum sensing scheme referred to as spatial-spectral joint detection is proposed, which is based on a linear combination of the local statistics from multiple spatially distributed cognitive radios. The cooperative sensing problem is also mapped into an optimization problem, for which suboptimal solutions can be obtained through mathematical transformation under conditions of practical interest. Simulation results show that the proposed spectrum sensing schemes can considerably improve system performance. This paper establishes useful principles for the design of distributed wideband spectrum sensing algorithms in cognitive radio networks.

742 citations


Journal ArticleDOI
TL;DR: It is shown analytically that the multi-target multi-Bernoulli (MeMBer) recursion, proposed by Mahler, has a significant bias in the number of targets; to reduce the cardinality bias, a novel multi-Bernoulli approximation to the multi-target Bayes recursion is derived.
Abstract: It is shown analytically that the multi-target multi-Bernoulli (MeMBer) recursion, proposed by Mahler, has a significant bias in the number of targets. To reduce the cardinality bias, a novel multi-Bernoulli approximation to the multi-target Bayes recursion is derived. Under the same assumptions as the MeMBer recursion, the proposed recursion is unbiased. In addition, a sequential Monte Carlo (SMC) implementation (for generic models) and a Gaussian mixture (GM) implementation (for linear Gaussian models) are proposed. The latter is also extended to accommodate mildly nonlinear models by linearization and the unscented transform.

741 citations


Journal ArticleDOI
TL;DR: It is shown that A-ND represents the best of both worlds, zero bias and low variance, at the cost of a slow convergence rate; rescaling the weights balances the variance versus the rate of bias reduction (convergence rate).
Abstract: The paper studies average consensus with random topologies (intermittent links) and noisy channels. Consensus with noise in the network links leads to the bias-variance dilemma: running consensus for long reduces the bias of the final average estimate but increases its variance. We present two different compromises to this tradeoff: the A-ND algorithm modifies conventional consensus by forcing the weights to satisfy a persistence condition (slowly decaying to zero); and the A-NC algorithm, where the weights are constant but consensus is run for a fixed number of iterations î, then restarted and rerun for a total of p̂ runs, with the final states of the p̂ runs averaged at the end (Monte Carlo averaging). We use controlled Markov processes and stochastic approximation arguments to prove almost sure convergence of A-ND to a finite consensus limit and compute explicitly the mean square error (mse) (variance) of the consensus limit. We show that A-ND represents the best of both worlds, zero bias and low variance, at the cost of a slow convergence rate; rescaling the weights balances the variance versus the rate of bias reduction (convergence rate). In contrast, A-NC, because of its constant weights, converges fast but presents a different bias-variance tradeoff. For the same total number of iterations îp̂, shorter runs (smaller î) lead to higher bias but smaller variance (a larger number p̂ of runs to average over). For a static nonrandom network with Gaussian noise, we compute the optimal gain for A-NC to reach, in the shortest number of iterations îp̂ and with high probability (1 − δ), (ε, δ)-consensus (ε residual bias). Our results hold under fairly general assumptions on the random link failures and communication noise.
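An illustrative simulation of the A-ND flavor described above: consensus over noisy links with persistent, slowly decaying weights (the sum of the weights diverges while the sum of their squares converges). The graph, noise model, and schedule are assumptions of the sketch, not the paper's exact setup.

```python
import numpy as np

def noisy_consensus(x0, neighbors, n_iter=20000, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(n_iter):
        alpha = 1.0 / (t + 2)            # persistence: sum(alpha) = inf, sum(alpha^2) < inf
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:               # each received value is corrupted by channel noise
                x_new[i] += alpha * (x[j] + noise_std * rng.standard_normal() - x[i])
        x = x_new
    return x

# 4-node ring: all states converge to a value close to the initial average (1.5).
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(noisy_consensus([0.0, 1.0, 2.0, 3.0], ring))
```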

687 citations


Journal ArticleDOI
TL;DR: In this paper, the wavelet thresholding principle is used in the decomposition modes resulting from applying EMD to a signal, and it is shown that although a direct application of this principle is not feasible in the EMD case, it can be appropriately adapted by exploiting the special characteristics of the EMD decomposition modes.
Abstract: One of the tasks for which empirical mode decomposition (EMD) is potentially useful is nonparametric signal denoising, an area for which wavelet thresholding has been the dominant technique for many years. In this paper, the wavelet thresholding principle is used in the decomposition modes resulting from applying EMD to a signal. We show that although a direct application of this principle is not feasible in the EMD case, it can be appropriately adapted by exploiting the special characteristics of the EMD decomposition modes. In the same manner, inspired by the translation invariant wavelet thresholding, a similar technique adapted to EMD is developed, leading to enhanced denoising performance.

553 citations


Journal ArticleDOI
TL;DR: It is proved that the random consensus value is, in expectation, the average of initial node measurements and that it can be made arbitrarily close to this value in mean squared error sense, under a balanced connectivity model and by trading off convergence speed with accuracy of the computation.
Abstract: Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we study distributed broadcasting algorithms for exchanging information and computing in an arbitrarily connected network of nodes. Specifically, we study a broadcasting-based gossiping algorithm to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. We show that the broadcast gossip algorithm converges almost surely to a consensus. We prove that the random consensus value is, in expectation, the average of initial node measurements and that it can be made arbitrarily close to this value in the mean squared error sense, under a balanced connectivity model and by trading off convergence speed with accuracy of the computation. We provide theoretical and numerical results on the mean square error performance and the convergence rate, and study the effect of the "mixing parameter" on the convergence rate of the broadcast gossip algorithm. The results indicate that the mean squared error strictly decreases through iterations until the consensus is achieved. Finally, we assess and compare the communication cost of the broadcast gossip algorithm to achieve a given distance to consensus through theoretical and numerical results.
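A direct sketch of broadcast gossip: at each tick a uniformly random node broadcasts its state, and every neighbor mixes toward it with the mixing parameter gamma. The topology and gamma are illustrative choices.

```python
import numpy as np

def broadcast_gossip(x0, neighbors, gamma=0.5, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        i = rng.integers(len(x))                       # the broadcasting node
        for j in neighbors[i]:                         # only its neighbors hear it
            x[j] = gamma * x[j] + (1 - gamma) * x[i]   # receivers mix toward x_i
    return x   # random consensus value; equals the initial average in expectation
```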

516 citations


Journal ArticleDOI
TL;DR: A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.
Abstract: Bayesian compressive sensing (CS) is considered for signals and images that are sparse in a wavelet basis. The statistical structure of the wavelet coefficients is exploited explicitly in the proposed model, and, therefore, this framework goes beyond simply assuming that the data are compressible in a wavelet basis. The structure exploited within the wavelet coefficients is consistent with that used in wavelet-based compression algorithms. A hierarchical Bayesian model is constituted, with efficient inference via Markov chain Monte Carlo (MCMC) sampling. The algorithm is fully developed and demonstrated using several natural images, with performance comparisons to many state-of-the-art compressive-sensing inversion algorithms.

Journal ArticleDOI
TL;DR: It has been demonstrated that, with appropriate design of the compressive measurements used to define v, the decompressive mapping v → u may be performed with error whose asymptotic properties are analogous to those of the best adaptive transform-coding algorithm applied in the basis Psi.
Abstract: Compressive sensing (CS) is a framework whereby one performs N nonadaptive measurements to constitute a vector v ∈ R^N used to recover an approximation of a desired signal u ∈ R^M, with N < M. In many applications, L > 1 sets of compressive measurements {v_i}, i = 1, ..., L, are performed, and each of the associated signals {u_i}, i = 1, ..., L, is recovered one at a time, independently. In many applications the L "tasks" defined by the mappings v_i → u_i are not statistically independent, and it may be possible to improve the performance of the inversion if statistical interrelationships are exploited. In this paper, we address this problem within a multitask learning setting, wherein the mapping v → u for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal u_i, and a shared prior is placed across all of the L tasks. Under this hierarchical Bayesian modeling, data from all L tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the L individual tasks are then employed to estimate the task-dependent wavelet coefficients. An empirical Bayesian procedure for the estimation of hyperparameters is considered; two fast inference algorithms extending the relevance vector machine (RVM) are developed. Example results on several data sets demonstrate the effectiveness and robustness of the proposed algorithms.

Journal ArticleDOI
TL;DR: The relaxation given in (*) can be solved in polynomial time using semi-definite programming.
Abstract: Let A be an M by N matrix (M < N) ... 1 − 1/d, and d = Ω(log(1/ε)/ε³). The relaxation given in (*) can be solved in polynomial time using semi-definite programming.

Journal ArticleDOI
TL;DR: This paper investigates a new model reduction criterion that makes computationally demanding sparsification procedures unnecessary and incorporates the coherence criterion into a new kernel-based affine projection algorithm for time series prediction.
Abstract: Kernel-based algorithms have been a topic of considerable interest in the machine learning community over the last ten years. Their attractiveness resides in their elegant treatment of nonlinear problems. They have been successfully applied to pattern recognition, regression and density estimation. A common characteristic of kernel-based methods is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications. Recently, several solutions have been proposed to circumvent this computational burden in time series prediction problems. Nevertheless, most of them require excessively elaborate and costly operations. In this paper, we investigate a new model reduction criterion that makes computationally demanding sparsification procedures unnecessary. The increase in the number of variables is controlled by the coherence parameter, a fundamental quantity that characterizes the behavior of dictionaries in sparse approximation problems. We incorporate the coherence criterion into a new kernel-based affine projection algorithm for time series prediction. We also derive the kernel-based normalized LMS algorithm as a particular case. Finally, experiments are conducted to compare our approach to existing methods.
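A sketch of the coherence-based model reduction combined with a kernel NLMS update, under stated assumptions (Gaussian kernel, illustrative parameter values): a new input is admitted into the dictionary only if its largest kernel value against the stored atoms stays below a coherence threshold mu0.

```python
import numpy as np

def gauss_kernel(X, y, bw=1.0):
    # Gaussian kernel between each row of X and the vector y.
    return np.exp(-np.sum((X - y) ** 2, axis=1) / (2 * bw**2))

def knlms_step(dictionary, alpha, u, d, mu0=0.5, eta=0.1, eps=1e-3):
    # One online step; dictionary is a nonempty list of stored input vectors.
    k = gauss_kernel(np.array(dictionary), u)
    err = d - alpha @ k                       # prediction error on the new sample
    if np.max(np.abs(k)) <= mu0:              # low coherence: admit u as a new atom
        dictionary.append(u)
        alpha = np.append(alpha, 0.0)
        k = np.append(k, 1.0)                 # kappa(u, u) = 1 for the Gaussian kernel
    alpha = alpha + eta * err * k / (eps + k @ k)   # normalized LMS update
    return dictionary, alpha
```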

Journal ArticleDOI
TL;DR: This paper presents several cyclic algorithms for the local minimization of ISL-related metrics and presents a number of examples, including the design of sequences that have virtually zero autocorrelation sidelobes in a specified lag interval and of long sequences that could hardly be handled by means of other algorithms previously suggested in the literature.
Abstract: Unimodular (i.e., constant modulus) sequences with good autocorrelation properties are useful in several areas, including communications and radar. The integrated sidelobe level (ISL) of the correlation function is often used to express the goodness of the correlation properties of a given sequence. In this paper, we present several cyclic algorithms for the local minimization of ISL-related metrics. These cyclic algorithms can be initialized with a good existing sequence such as a Golomb sequence, a Frank sequence, or even a (pseudo)random sequence. To illustrate the performance of the proposed algorithms, we present a number of examples, including the design of sequences that have virtually zero autocorrelation sidelobes in a specified lag interval and of long sequences that could hardly be handled by means of other algorithms previously suggested in the literature.
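A sketch of one FFT-based member of this cyclic family (a CAN-style iteration): alternate between choosing optimal auxiliary phases in the frequency domain and re-projecting onto unit-modulus sequences. Initialization and the iteration count are illustrative.

```python
import numpy as np

def can_design(N, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    y = np.exp(2j * np.pi * rng.random(N))    # random unimodular initialization
    for _ in range(n_iter):
        f = np.fft.fft(y, 2 * N)              # zero-padded 2N-point spectrum
        v = np.exp(1j * np.angle(f))          # frequency-domain phase step
        g = np.fft.ifft(v)[:N]                # back to the time domain
        y = np.exp(1j * np.angle(g))          # project onto constant modulus
    return y
```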

Journal ArticleDOI
TL;DR: It is proved that for Schur-concave objective functions, the optimal source precoding matrix and relay amplifying matrix jointly diagonalize the source-relay-destination channel matrix and convert the multicarrier MIMO relay channel into parallel single-input single-output (SISO) relay channels.
Abstract: In this paper, we develop a unified framework for linear nonregenerative multicarrier multiple-input multiple-output (MIMO) relay communications in the absence of the direct source-destination link. This unified framework classifies most commonly used design objectives such as the minimal mean-square error and the maximal mutual information into two categories: Schur-concave and Schur-convex functions. We prove that for Schur-concave objective functions, the optimal source precoding matrix and relay amplifying matrix jointly diagonalize the source-relay-destination channel matrix and convert the multicarrier MIMO relay channel into parallel single-input single-output (SISO) relay channels. For Schur-convex objectives, by contrast, such joint diagonalization occurs after a specific rotation of the source precoding matrix. After the optimal structure of the source and relay matrices is determined, the linear nonregenerative relay design problem boils down to the issue of power loading among the resulting SISO relay channels. We show that this power loading problem can be efficiently solved by an alternating technique. Numerical examples demonstrate the effectiveness of the proposed framework.

Journal ArticleDOI
TL;DR: A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution; the unknown parameters are then estimated from the generated samples by the joint Bayesian estimator.
Abstract: This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for nonnegativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images.

Journal ArticleDOI
TL;DR: A fully distributed least mean-square algorithm is developed in this paper, offering simplicity and flexibility while solely requiring single-hop communications among sensors, and stability of the novel D-LMS algorithm is established to guarantee that local sensor estimation error norms remain bounded most of the time.
Abstract: Adaptive algorithms based on in-network processing of distributed observations are well-motivated for online parameter estimation and tracking of (non)stationary signals using ad hoc wireless sensor networks (WSNs). To this end, a fully distributed least mean-square (D-LMS) algorithm is developed in this paper, offering simplicity and flexibility while solely requiring single-hop communications among sensors. The resultant estimator minimizes a pertinent squared-error cost by resorting to i) the alternating-direction method of multipliers so as to gain the desired degree of parallelization and ii) a stochastic approximation iteration to cope with the time-varying statistics of the process under consideration. Information is efficiently percolated across the WSN using a subset of "bridge" sensors, which further trade off communication cost for robustness to sensor failures. For a linear data model and under mild assumptions aligned with those considered in the centralized LMS, stability of the novel D-LMS algorithm is established to guarantee that local sensor estimation error norms remain bounded most of the time. Interestingly, this weak stochastic stability result extends to the pragmatic setup where intersensor communications are corrupted by additive noise. In the absence of observation and communication noise, consensus is achieved almost surely as local estimates are shown exponentially convergent to the parameter of interest with probability one. Mean-square error performance of D-LMS is also assessed. Numerical simulations: i) illustrate that D-LMS outperforms existing alternatives that rely either on information diffusion among neighboring sensors or on local sensor filtering; ii) highlight its tracking capabilities; and iii) corroborate the stability and performance analysis results.

Journal ArticleDOI
TL;DR: This paper introduces the complex LLL algorithm for direct application to reducing the basis of a complex lattice which is naturally defined by a complex-valued channel matrix, and derives an upper bound on proximity factors which not only shows the full diversity of complex LLL reduction-aided detectors but also characterizes the performance gap relative to the lattice decoder.
Abstract: Recently, lattice-reduction-aided detectors have been proposed for multiinput multioutput (MIMO) systems to achieve performance with full diversity like the maximum likelihood receiver. However, these lattice-reduction-aided detectors are based on the traditional Lenstra-Lenstra-Lovasz (LLL) reduction algorithm that was originally introduced for reducing real lattice bases, in spite of the fact that the channel matrices are inherently complex-valued. In this paper, we introduce the complex LLL algorithm for direct application to reducing the basis of a complex lattice which is naturally defined by a complex-valued channel matrix. We derive an upper bound on proximity factors, which not only shows the full diversity of complex LLL reduction-aided detectors, but also characterizes the performance gap relative to the lattice decoder. Our analysis reveals that the complex LLL algorithm can reduce the complexity by nearly 50% compared to the traditional LLL algorithm, and this is confirmed by simulation. Interestingly, our simulation results suggest that the complex LLL algorithm has practically the same bit-error-rate performance as the traditional LLL algorithm, in spite of its lower complexity.
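A sketch of LLL reduction run directly on a complex basis, written from the standard LLL template: size reduction rounds the real and imaginary parts of the Gram-Schmidt coefficients, and a Lovasz-type swap test is used with delta = 0.75. The repeated Gram-Schmidt recomputation favors clarity over efficiency; details of the paper's variant may differ.

```python
import numpy as np

def complex_lll(B, delta=0.75):
    # B: (m, n) complex matrix whose columns are the lattice basis vectors.
    B = B.astype(complex).copy()
    n = B.shape[1]

    def gso(B):
        # Gram-Schmidt orthogonalization with complex inner products.
        Q = np.zeros_like(B)
        mu = np.zeros((n, n), dtype=complex)
        for k in range(n):
            Q[:, k] = B[:, k]
            for j in range(k):
                mu[k, j] = (Q[:, j].conj() @ B[:, k]) / (Q[:, j].conj() @ Q[:, j])
                Q[:, k] -= mu[k, j] * Q[:, j]
        return Q, mu

    Q, mu = gso(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size reduction with complex rounding
            c = np.round(mu[k, j].real) + 1j * np.round(mu[k, j].imag)
            if c != 0:
                B[:, k] -= c * B[:, j]
                Q, mu = gso(B)                  # recompute for clarity, not speed
        if (np.linalg.norm(Q[:, k]) ** 2
                >= (delta - abs(mu[k, k - 1]) ** 2) * np.linalg.norm(Q[:, k - 1]) ** 2):
            k += 1                              # Lovasz-type condition holds
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]] # swap the two columns and back up
            Q, mu = gso(B)
            k = max(k - 1, 1)
    return B
```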

Journal ArticleDOI
TL;DR: This paper incorporates convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing, and provides a non-heuristic guarantee of the MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers.
Abstract: Hyperspectral unmixing aims at identifying the hidden spectral signatures (or endmembers) and their corresponding proportions (or abundances) from an observed hyperspectral scene. Many existing hyperspectral unmixing algorithms were developed under a commonly used assumption that pure pixels exist. However, the pure-pixel assumption may be seriously violated for highly mixed data. Based on intuitive grounds, Craig reported an unmixing criterion without requiring the pure-pixel assumption, which estimates the endmembers by vertices of a minimum-volume simplex enclosing all the observed pixels. In this paper, we incorporate convex analysis and Craig's criterion to develop a minimum-volume enclosing simplex (MVES) formulation for hyperspectral unmixing. A cyclic minimization algorithm for approximating the MVES problem is developed using linear programs (LPs), which can be practically implemented by readily available LP solvers. We also provide a non-heuristic guarantee of our MVES problem formulation, where the existence of pure pixels is proved to be a sufficient condition for MVES to perfectly identify the true endmembers. Some Monte Carlo simulations and real data experiments are presented to demonstrate the efficacy of the proposed MVES algorithm over several existing hyperspectral unmixing methods.

Journal ArticleDOI
TL;DR: New computationally efficient cyclic algorithms for MIMO radar waveform synthesis can be used for the design of unimodular MIMo sequences that have very low auto- and cross-correlation sidelobes in a specified lag interval, and of very long sequences that could hardly be handled by other algorithms previously suggested in the literature.
Abstract: A multiple-input multiple-output (MIMO) radar system that transmits orthogonal waveforms via its antennas can achieve a greatly increased virtual aperture compared with its phased-array counterpart. This increased virtual aperture enables many of the MIMO radar advantages, including enhanced parameter identifiability and improved resolution. Practical radar requirements such as unit peak-to-average power ratio and range compression dictate that we use MIMO radar waveforms that have constant modulus and good auto- and cross-correlation properties. We present in this paper new computationally efficient cyclic algorithms for MIMO radar waveform synthesis. These algorithms can be used for the design of unimodular MIMO sequences that have very low auto- and cross-correlation sidelobes in a specified lag interval, and of very long sequences that could hardly be handled by other algorithms previously suggested in the literature. A number of examples are provided to demonstrate the performances of the new waveform synthesis algorithms.

Journal ArticleDOI
TL;DR: Simulation experiments are provided that show the benefits of the proposed cyclostationary approach compared to energy detection, the importance of collaboration among spatially displaced secondary users for overcoming shadowing and fading effects, as well as the reliable performance of the suggested algorithms even in very low signal-to-noise ratio (SNR) regimes and under strict communication rate constraints for collaboration overhead.
Abstract: This paper proposes an energy efficient collaborative cyclostationary spectrum sensing approach for cognitive radio systems. An existing statistical hypothesis test for the presence of cyclostationarity is extended to multiple cyclic frequencies and its asymptotic distributions are established. Collaborative test statistics are proposed for the fusion of local test statistics of the secondary users, and a censoring technique in which only informative test statistics are transmitted to the fusion center (FC) during the collaborative detection is further proposed for improving energy efficiency in mobile applications. Moreover, a technique for numerical approximation of the asymptotic distribution of the censored FC test statistic is proposed. The proposed tests are nonparametric in the sense that no assumptions on data or noise distributions are required. In addition, the tests allow dichotomizing between the desired signal and interference. Simulation experiments are provided that show the benefits of the proposed cyclostationary approach compared to energy detection, the importance of collaboration among spatially displaced secondary users for overcoming shadowing and fading effects, as well as the reliable performance of the proposed algorithms even in very low signal-to-noise ratio (SNR) regimes and under strict communication rate constraints for collaboration overhead.
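A sketch of the basic quantity underlying such tests: the estimated cyclic autocorrelation of x at cyclic frequency alpha and lag tau. The full detector stacks several (alpha, tau) estimates into an asymptotically chi-square statistic and fuses them across users; that machinery is omitted here.

```python
import numpy as np

def cyclic_autocorr(x, alpha, tau):
    # Estimate (1/N) * sum_n x(n) * conj(x(n + tau)) * exp(-j*2*pi*alpha*n).
    N = len(x) - tau
    n = np.arange(N)
    return np.mean(x[:N] * np.conj(x[tau:tau + N]) * np.exp(-2j * np.pi * alpha * n))
```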

Journal ArticleDOI
TL;DR: An iterative least squares (LS) procedure to jointly optimize the interpolation, decimation and filtering tasks for reduced-rank adaptive filtering for interference suppression in code-division multiple-access (CDMA) systems is described.
Abstract: We present an adaptive reduced-rank signal processing technique for performing dimensionality reduction in general adaptive filtering problems. The proposed method is based on the concept of joint and iterative interpolation, decimation and filtering. We describe an iterative least squares (LS) procedure to jointly optimize the interpolation, decimation and filtering tasks for reduced-rank adaptive filtering. In order to design the decimation unit, we present the optimal decimation scheme and also propose low-complexity decimation structures. We then develop low-complexity least-mean squares (LMS) and recursive least squares (RLS) algorithms for the proposed scheme along with automatic rank and branch adaptation techniques. An analysis of the convergence properties and issues of the proposed algorithms is carried out and the key features of the optimization problem such as the existence of multiple solutions are discussed. We consider the application of the proposed algorithms to interference suppression in code-division multiple-access (CDMA) systems. Simulation results show that the proposed algorithms outperform the best known reduced-rank schemes with lower complexity.

Journal ArticleDOI
TL;DR: A stochastic approximation version extending DILOC to random environments is introduced, i.e., when the communication among nodes is noisy, the communication links among neighbors may fail at random times, and the internode distances are subject to errors.
Abstract: The paper introduces DILOC, a distributed, iterative algorithm to locate M sensors (with unknown locations) in R^m, m ≥ 1, with respect to a minimal number of m + 1 anchors with known locations. The sensors and anchors, nodes in the network, exchange data with their neighbors only; no centralized data processing or communication occurs, nor is there a centralized fusion center to compute the sensors' locations. DILOC uses the barycentric coordinates of a node with respect to its neighbors; these coordinates are computed using the Cayley-Menger determinants, i.e., the determinants of matrices of internode distances. We show convergence of DILOC by associating with it an absorbing Markov chain whose absorbing states are the states of the anchors. We introduce a stochastic approximation version extending DILOC to random environments, i.e., when the communication among nodes is noisy, the communication links among neighbors may fail at random times, and the internode distances are subject to errors. We show a.s. convergence of the modified DILOC and characterize the error between the true values of the sensors' locations and their final estimates given by DILOC. Numerical studies illustrate DILOC under a variety of deterministic and random operating conditions.
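An illustrative DILOC-style iteration in the plane (m = 2): each unknown sensor repeatedly replaces its state by the barycentric combination of a triangle of neighbors containing it, and the states are absorbed toward the true locations. For brevity the weights are computed here from coordinates; the paper obtains them from inter-node distances alone via Cayley-Menger determinants.

```python
import numpy as np

def barycentric_weights(p, tri):
    # Weights (w1, w2, w3) with p = w1*T1 + w2*T2 + w3*T3 for tri = (T1, T2, T3).
    T1, T2, T3 = tri
    M = np.column_stack([T1 - T3, T2 - T3])
    w12 = np.linalg.solve(M, p - T3)
    return np.array([w12[0], w12[1], 1.0 - w12.sum()])

# Three anchors (m + 1 = 3 in the plane) and two sensors to localize.
a1, a2, a3 = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
s1_true, s2_true = np.array([0.3, 0.1]), np.array([0.4, 0.3])

w1 = barycentric_weights(s1_true, (a1, a2, s2_true))   # s1's triangulation set
w2 = barycentric_weights(s2_true, (a2, a3, s1_true))   # s2's triangulation set

x1, x2 = np.zeros(2), np.zeros(2)                      # arbitrary initial states
for _ in range(100):                                   # absorbing Markov chain iteration
    x1, x2 = (w1[0]*a1 + w1[1]*a2 + w1[2]*x2,
              w2[0]*a2 + w2[1]*a3 + w2[2]*x1)
print(x1, x2)   # -> approaches (0.3, 0.1) and (0.4, 0.3)
```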

Journal ArticleDOI
TL;DR: A generative model of joint BSS based on the correlation of latent sources within and between datasets using multiset canonical correlation analysis (M-CCA) and its utility in estimating meaningful brain activations from a visuomotor task is proposed.
Abstract: In this paper, we introduce a simple and effective scheme to achieve joint blind source separation (BSS) of multiple datasets using multiset canonical correlation analysis (M-CCA) [J. R. Kettenring, "Canonical analysis of several sets of variables", Biometrika, vol. 58, pp. 433-451, 1971]. We first propose a generative model of joint BSS based on the correlation of latent sources within and between datasets. We specify source separability conditions, and show that, when the conditions are satisfied, the group of corresponding sources from each dataset can be jointly extracted by M-CCA through maximization of correlation among the extracted sources. We compare source separation performance of the M-CCA scheme with other joint BSS methods and demonstrate the superior performance of the M-CCA scheme in achieving joint BSS for a large number of datasets, group of corresponding sources with heterogeneous correlation values, and complex-valued sources with circular and non-circular distributions. We apply M-CCA to analysis of functional magnetic resonance imaging (fMRI) data from multiple subjects and show its utility in estimating meaningful brain activations from a visuomotor task.

Journal ArticleDOI
TL;DR: Experimental results demonstrate the effectiveness of the proposed generic framework compared to existing algorithms, including iterative reweighted least-squares methods, and several algorithms in the literature dealing with nonconvex penalties are particular instances of the algorithm.
Abstract: This paper considers the problem of recovering a sparse signal representation according to a signal dictionary. This problem could be formalized as a penalized least-squares problem in which sparsity is usually induced by an ℓ1-norm penalty on the coefficients. Such an approach, known as the Lasso or Basis Pursuit Denoising, has been shown to perform reasonably well in some situations. However, it was also proved that nonconvex penalties like the pseudo ℓq-norm with q < 1 or the smoothly clipped absolute deviation (SCAD) penalty are able to recover sparsity in a more efficient way than the Lasso. Several algorithms have been proposed for solving the resulting nonconvex least-squares problem. This paper proposes a generic algorithm to address such a sparsity recovery problem for some class of nonconvex penalties. Our main contribution is that the proposed methodology is based on an iterative algorithm which solves at each iteration a convex weighted Lasso problem. It relies on the family of nonconvex penalties which can be decomposed as a difference of convex functions (DC). This allows us to apply DC programming, which is a generic and principled way of solving nonsmooth and nonconvex optimization problems. We also show that several algorithms in the literature dealing with nonconvex penalties are particular instances of our algorithm. Experimental results demonstrate the effectiveness of the proposed generic framework compared to existing algorithms, including iterative reweighted least-squares methods.
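A sketch of the resulting scheme for one common DC-decomposable penalty (the log-sum penalty, which yields the familiar reweighted-ℓ1 iteration): each outer step solves a convex weighted Lasso, here with a plain iterative-shrinkage inner solver, and the weights are refreshed from the current iterate. Parameter values are illustrative.

```python
import numpy as np

def weighted_ista(A, y, weights, lam, n_iter=300):
    # Inner solver: min 0.5*||y - A x||^2 + lam * sum_i weights_i * |x_i|.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        v = x - A.T @ (A @ x - y) / L
        t = lam * weights / L
        x = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # weighted soft threshold
    return x

def reweighted_lasso(A, y, lam=0.1, eps=1e-2, n_outer=5):
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        x = weighted_ista(A, y, w, lam)
        w = 1.0 / (np.abs(x) + eps)      # linearization of the concave log-sum penalty
    return x
```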

Journal ArticleDOI
TL;DR: A novel data acquisition and imaging method is presented for stepped-frequency continuous-wave ground penetrating radars (SFCW GPRs); it is shown that if the target space is sparse, it is enough to make measurements at only a small number of random frequencies to construct an image of the target space by solving a convex optimization problem which enforces sparsity through ℓ1 minimization.
Abstract: A novel data acquisition and imaging method is presented for stepped-frequency continuous-wave ground penetrating radars (SFCW GPRs). It is shown that if the target space is sparse, i.e., contains only a small number of point-like targets, it is enough to make measurements at only a small number of random frequencies to construct an image of the target space by solving a convex optimization problem which enforces sparsity through ℓ1 minimization. This measurement strategy greatly reduces the data acquisition time at the expense of higher computational costs. Imaging results for both simulated and experimental GPR data exhibit less clutter than the standard migration methods and are robust to noise and random spatial sampling. The images also have increased resolution: closely spaced targets that cannot be resolved by the standard migration methods can be resolved by the proposed method.

Journal ArticleDOI
TL;DR: This paper presents a new algorithm for detection of the number of sources via a sequence of hypothesis tests, and theoretically analyze the consistency and detection performance of the proposed algorithm, showing its superiority compared to the standard minimum description length (MDL)-based estimator.
Abstract: Detection of the number of signals embedded in noise is a fundamental problem in signal and array processing. This paper focuses on the non-parametric setting where no knowledge of the array manifold is assumed. First, we present a detailed statistical analysis of this problem, including an analysis of the signal strength required for detection with high probability, and the form of the optimal detection test under certain conditions where such a test exists. Second, combining this analysis with recent results from random matrix theory, we present a new algorithm for detection of the number of sources via a sequence of hypothesis tests. We theoretically analyze the consistency and detection performance of the proposed algorithm, showing its superiority compared to the standard minimum description length (MDL)-based estimator. A series of simulations confirm our theoretical analysis.

Journal ArticleDOI
TL;DR: This paper introduces a simple and computationally efficient spectrum sensing scheme for an Orthogonal Frequency Division Multiplexing (OFDM) based primary user signal using its autocorrelation coefficient, and shows that the log likelihood ratio test (LLRT) statistic is the maximum likelihood estimate of the autocorrelation coefficient in the low signal-to-noise ratio (SNR) regime.
Abstract: This paper introduces a simple and computationally efficient spectrum sensing scheme for Orthogonal Frequency Division Multiplexing (OFDM) based primary user signal using its autocorrelation coefficient. Further, it is shown that the log likelihood ratio test (LLRT) statistic is the maximum likelihood estimate of the autocorrelation coefficient in the low signal-to-noise ratio (SNR) regime. Performance of the local detector is studied for the additive white Gaussian noise (AWGN) and multipath channels using theoretical analysis. Obtained results are verified in simulation. The performance of the local detector in the face of shadowing is studied by simulations. A sequential detection (SD) scheme where many secondary users cooperate to detect the same primary user is proposed. User cooperation provides diversity gains as well as facilitates using simpler local detectors. The sequential detection reduces the delay and the amount of data needed in identification of the underutilized spectrum. The decision statistics from individual detectors are combined at the fusion center (FC). The statistical properties of the decision statistics are established. The performance of the scheme is studied through theory and validated by simulations. A comparison of the SD scheme with the Neyman-Pearson fixed sample size (FSS) test for the same false alarm and missed detection probabilities is also carried out.
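A sketch of the local decision statistic behind this idea: the cyclic prefix makes OFDM samples correlate at a lag equal to the useful symbol length Td, so the detector compares a normalized empirical autocorrelation at that lag against a threshold chosen for the target false-alarm rate. Names and the normalization are illustrative.

```python
import numpy as np

def ofdm_autocorr_statistic(x, Td):
    # Normalized empirical autocorrelation of x at lag Td (real part).
    N = len(x) - Td
    num = np.sum(np.real(x[:N] * np.conj(x[Td:Td + N])))
    den = np.sum(np.abs(x) ** 2)    # total received energy
    return num / den                # compare to a threshold set for the desired P_fa
```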

Journal ArticleDOI
TL;DR: This paper studies the robust beamforming design for a multi-antenna cognitive radio (CR) network, which transmits to multiple secondary users (SUs) and coexists with a primary network of multiple users, and proposes iterative algorithms for obtaining the robust optimal beamforming solution.
Abstract: This paper studies the robust beamforming design for a multi-antenna cognitive radio (CR) network, which transmits to multiple secondary users (SUs) and coexists with a primary network of multiple users. We aim to maximize the minimum of the received signal-to-interference-plus-noise ratios (SINRs) of the SUs, subject to the constraints of the total SU transmit power and the received interference power at the primary users (PUs) by optimizing the beamforming vectors at the SU transmitter based on imperfect channel state information (CSI). To model the uncertainty in CSI, we consider a bounded region for both cases of channel matrices and channel covariance matrices. As such, the optimization is done while satisfying the interference constraints for all possible CSI error realizations. We shall first derive equivalent conditions for the interference constraints and then convert the problems into the form of semi-definite programming (SDP) with the aid of rank relaxation, which leads to iterative algorithms for obtaining the robust optimal beamforming solution. Results demonstrate the achieved robustness and the performance gain over conventional approaches and that the proposed algorithms can obtain the exact robust optimal solution with high probability.