
Showing papers in "IEEE Signal Processing Letters in 2009"


Journal ArticleDOI
TL;DR: In this article, a general approximation of the l0 norm, a typical metric of system sparsity, is proposed and integrated into the cost function of the LMS algorithm; this is equivalent to adding a zero attractor to the iterations, which effectively improves the convergence rate of the small coefficients that dominate a sparse system.
Abstract: In order to improve the performance of least mean square (LMS) based system identification of sparse systems, a new adaptive algorithm is proposed which utilizes the sparsity property of such systems. A general approximation of the l0 norm, a typical metric of system sparsity, is proposed and integrated into the cost function of the LMS algorithm. This integration is equivalent to adding a zero attractor to the iterations, which effectively improves the convergence rate of the small coefficients that dominate the sparse system. Moreover, a partial updating method reduces the computational complexity. Simulations demonstrate that the proposed algorithm effectively improves the performance of LMS-based identification algorithms on sparse systems.

343 citations
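The zero-attractor idea is compact enough to sketch directly: a standard LMS update plus a term that pulls every tap toward zero, which mainly speeds up the shrinkage of the near-zero taps of a sparse system. This is an illustrative simplification in the spirit of the letter (the actual algorithm uses a smoother l0 approximation and partial updating); the system, step size, and attractor strength below are assumed for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse unknown system: 64 taps, only 4 of them nonzero.
h = np.zeros(64)
h[[3, 17, 40, 55]] = [1.0, -0.7, 0.5, 0.3]

x = rng.standard_normal(20000)           # white input signal
d = np.convolve(x, h)[:len(x)]           # desired output (noiseless for clarity)

mu, rho = 0.005, 2e-4                    # step size and attractor strength (assumed)
w = np.zeros_like(h)
for n in range(len(h), len(x)):
    u = x[n - len(h) + 1:n + 1][::-1]    # regressor, most recent sample first
    e = d[n] - w @ u
    # Standard LMS term plus a zero attractor pulling all taps toward zero;
    # it barely biases the large taps but speeds up shrinkage of the small ones.
    w += mu * e * u - rho * np.sign(w)

mse = np.mean((w - h) ** 2)
```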


Journal ArticleDOI
TL;DR: Numerical results show that the proposed PF scheduler provides a superior fairness performance with a modest loss in throughput, as long as the user average SINRs are fairly uniform.
Abstract: The challenge of scheduling user transmissions on the downlink of a long term evolution (LTE) cellular communication system is addressed. A maximum rate algorithm which does not consider fairness among users was proposed in earlier work. Here, a multiuser scheduler with proportional fairness (PF) is proposed. Numerical results show that the proposed PF scheduler provides superior fairness with a modest loss in throughput, as long as the user average SINRs are fairly uniform. A suboptimal PF scheduler is also proposed, which has much lower complexity at the cost of some throughput degradation.

270 citations
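The PF rule itself is a one-line metric: schedule the user maximizing instantaneous rate divided by smoothed average throughput. A toy sketch with symmetric users (fading model, averaging factor, and all parameters assumed, not taken from the letter):

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_slots, beta = 4, 5000, 0.01    # beta: throughput averaging factor (assumed)

avg_thr = np.full(n_users, 1e-6)          # smoothed user throughputs
served = np.zeros(n_users)

for _ in range(n_slots):
    # Per-slot achievable rates from Rayleigh-faded SINRs with equal means.
    sinr = rng.exponential(scale=4.0, size=n_users)
    rate = np.log2(1.0 + sinr)
    k = int(np.argmax(rate / avg_thr))    # PF metric: instantaneous over average
    served[k] += 1
    inst = np.zeros(n_users)
    inst[k] = rate[k]                     # only the scheduled user adds rate
    avg_thr = (1 - beta) * avg_thr + beta * inst

share = served / n_slots                  # fraction of slots given to each user
```

With equal average SINRs, the long-run shares converge toward 1/n_users, which is the fairness behavior the letter reports for fairly uniform users.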


Journal ArticleDOI
TL;DR: Stein's unbiased risk estimate (SURE) is used to monitor the mean square error of the NLM algorithm for restoration of an image corrupted by additive white Gaussian noise and an explicit analytical expression for SURE is derived in the setting of NLM that can be incorporated in the implementation at low computational cost.
Abstract: Non-local means (NLM) provides a powerful framework for denoising. However, a few parameters of the algorithm, most notably the width of the smoothing kernel, are data-dependent and difficult to tune. Here, we propose to use Stein's unbiased risk estimate (SURE) to monitor the mean square error (MSE) of the NLM algorithm for restoration of an image corrupted by additive white Gaussian noise. The SURE principle allows the MSE to be assessed without knowledge of the noise-free signal. We derive an explicit analytical expression for SURE in the setting of NLM that can be incorporated in the implementation at low computational cost. Finally, we present experimental results that confirm the optimality of the proposed parameter selection.

252 citations
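The defining property of SURE, that its expectation equals the MSE without access to the clean signal, can be checked numerically for any linear denoiser f(y) = Hy, whose divergence term is trace(H). This is a generic illustration of the principle with a moving-average smoother, not the NLM-specific expression derived in the letter; signal, noise level, and window are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, m = 512, 0.5, 4                # length, noise std, half-window (assumed)

x = np.sin(2 * np.pi * np.arange(N) / 64)          # clean signal
H = np.zeros((N, N))                               # circular moving-average smoother
for i in range(N):
    H[i, np.arange(i - m, i + m + 1) % N] = 1.0 / (2 * m + 1)

mses, sures = [], []
for _ in range(200):
    y = x + sigma * rng.standard_normal(N)
    x_hat = H @ y
    mses.append(np.mean((x_hat - x) ** 2))         # oracle MSE: needs the clean x
    # SURE for a linear estimator Hy: residual energy - sigma^2 + divergence term.
    sures.append(np.mean((y - x_hat) ** 2) - sigma ** 2
                 + 2 * sigma ** 2 * np.trace(H) / N)

mse, sure = np.mean(mses), np.mean(sures)          # should agree closely
```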


Journal ArticleDOI
TL;DR: Modifications of CSP for subject-to-subject transfer are presented, in which a linear combination of the covariance matrices of the subjects under consideration is exploited, leading to composite CSP.
Abstract: Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most existing CSP-based methods exploit covariance matrices on a subject-by-subject basis, so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of the subjects under consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of the covariance matrices of the subjects involved, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially for subjects with fewer training samples.

220 citations
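A composite covariance is just a weighted sum of per-subject covariances, after which the usual CSP machinery (whitening followed by an eigendecomposition) applies unchanged. A minimal numpy sketch on synthetic two-class data; the toy subjects, weights, and dimensions are assumptions, not the BCI competition data.

```python
import numpy as np

rng = np.random.default_rng(3)

def avg_cov(trial_list):
    # Average trace-normalized spatial covariance over trials (channels x samples).
    return np.mean([X @ X.T / np.trace(X @ X.T) for X in trial_list], axis=0)

def csp_filters(c1, c2):
    # Classic CSP: whiten the composite covariance, then eigendecompose class 1.
    d, U = np.linalg.eigh(c1 + c2)
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T     # symmetric whitening transform
    _, V = np.linalg.eigh(P @ c1 @ P)
    return V.T @ P                              # rows are spatial filters

def make_trials(scales, n=30, t=200):
    # Toy EEG: channels with class-dependent variances (synthetic, assumed).
    return [np.diag(scales) @ rng.standard_normal((len(scales), t)) for _ in range(n)]

subj1_c1 = avg_cov(make_trials([2.0, 1.0, 1.0, 1.0]))   # subject 1, class 1
subj1_c2 = avg_cov(make_trials([1.0, 1.0, 1.0, 2.0]))   # subject 1, class 2
subj2_c1 = avg_cov(make_trials([1.9, 1.0, 1.0, 1.0]))   # subject 2, class 1
subj2_c2 = avg_cov(make_trials([1.0, 1.0, 1.0, 1.9]))   # subject 2, class 2

lam = 0.5                                  # weight on the other subject (assumed)
comp_c1 = (1 - lam) * subj1_c1 + lam * subj2_c1
comp_c2 = (1 - lam) * subj1_c2 + lam * subj2_c2
W = csp_filters(comp_c1, comp_c2)          # composite CSP spatial filters
```

The filters retain the defining CSP properties with respect to the composite covariances: they jointly diagonalize both classes and sum the two diagonalized covariances to the identity.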


Journal ArticleDOI
TL;DR: A two-stage algorithm, called switching-based adaptive weighted mean filter, is proposed to remove salt-and-pepper noise from the corrupted images by replacing each noisy pixel with the weighted mean of its noise-free neighbors in the filtering window.
Abstract: A two-stage algorithm, called the switching-based adaptive weighted mean filter, is proposed to remove salt-and-pepper noise from corrupted images. First, a directional-difference-based noise detector identifies the noisy pixels by comparing the minimum absolute value of four mean differences between the current pixel and its neighbors in four directional windows with a predefined threshold. Then, an adaptive weighted mean filter removes the detected impulses by replacing each noisy pixel with the weighted mean of its noise-free neighbors in the filtering window. Numerous simulations demonstrate that the proposed filter outperforms many existing algorithms in terms of noise detection, image restoration, and computational efficiency.

183 citations
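The two stages can be sketched compactly: a directional-difference detector followed by replacement with the mean of the noise-free neighbors. This simplified version uses two neighbors per direction, a fixed threshold, and uniform rather than adaptive weights; the toy image, noise density, and threshold are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy image: horizontal gradient corrupted by 10% salt-and-pepper noise.
img = np.tile(np.linspace(64, 192, 32), (32, 1))
noisy = img.copy()
mask = rng.random(img.shape) < 0.1
noisy[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))

H, W = noisy.shape
pad = np.pad(noisy, 1, mode='edge')
T = 30.0                                   # detection threshold (assumed)

# Stage 1: directional-difference detector. Compare each pixel with the mean of
# its two neighbors along four directions; it is flagged as noisy only if the
# minimum absolute difference over all four directions exceeds the threshold.
dirs = [((0, 1), (2, 1)), ((1, 0), (1, 2)), ((0, 0), (2, 2)), ((0, 2), (2, 0))]
flag = np.zeros((H, W), dtype=bool)
for i in range(H):
    for j in range(W):
        diffs = [abs(pad[i + 1, j + 1]
                     - 0.5 * (pad[i + a[0], j + a[1]] + pad[i + b[0], j + b[1]]))
                 for a, b in dirs]
        flag[i, j] = min(diffs) > T

# Stage 2: replace each detected pixel with the mean of its noise-free 3x3
# neighbors (uniform weights here; the paper uses adaptive weights).
restored = noisy.copy()
for i in range(H):
    for j in range(W):
        if flag[i, j]:
            i0, i1, j0, j1 = max(i - 1, 0), min(i + 2, H), max(j - 1, 0), min(j + 2, W)
            good = ~flag[i0:i1, j0:j1]
            if good.any():
                restored[i, j] = noisy[i0:i1, j0:j1][good].mean()
```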


Journal ArticleDOI
TL;DR: Empirical results indicate that incremented-rank PF is significantly more successful than NNM at recovering low-rank matrices, in addition to being faster.
Abstract: Algorithms to construct/recover low-rank matrices satisfying a set of linear equality constraints have important applications in many signal processing contexts. Recently, theoretical guarantees for minimum-rank matrix recovery have been proven for nuclear norm minimization (NNM), which can be solved using standard convex optimization approaches. While nuclear norm minimization is effective, it can be computationally demanding. In this work, we explore the use of the power factorization (PF) algorithm as a tool for rank-constrained matrix recovery. Empirical results indicate that incremented-rank PF is significantly more successful than NNM at recovering low-rank matrices, in addition to being faster.

180 citations
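Power factorization writes the unknown matrix as a thin product UV and alternates least-squares updates of U and V over the observed entries only. The sketch below is the fixed-rank version (the letter's incremented-rank variant grows the rank gradually); the problem sizes and sampling rate are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 30, 30, 2                        # matrix size and rank (assumed)

X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
mask = rng.random((m, n)) < 0.6            # 60% of entries observed

U = rng.standard_normal((m, r))
V = rng.standard_normal((r, n))
for _ in range(100):
    # Fix V, solve each row of U by least squares over that row's observed entries.
    for i in range(m):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[:, obs].T, X[i, obs], rcond=None)[0]
    # Fix U, solve each column of V likewise.
    for j in range(n):
        obs = mask[:, j]
        V[:, j] = np.linalg.lstsq(U[obs], X[obs, j], rcond=None)[0]

rel_err = np.linalg.norm(U @ V - X) / np.linalg.norm(X)
```

With this oversampling level the alternation typically converges to an exact completion; unlike NNM, each step is a batch of tiny least-squares problems rather than a semidefinite-scale convex solve.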


Journal ArticleDOI
TL;DR: This letter shows that the optimal collaborative-relay beamforming (CRB) solution achieves the full diversity of a MISO antenna system and develops a distributed algorithm that allows each individual relay to learn its own weight, based on the Karush-Kuhn-Tucker (KKT) analysis.
Abstract: This letter studies the collaborative use of amplify-and-forward (AF) relays to form a virtual multiple-input single-output (MISO) beamforming system with the aid of perfect channel state information (CSI) in a flat-fading channel. In particular, we optimize the relay weights jointly to maximize the received signal-to-noise ratio (SNR) at the destination terminal with both individual and total power constraints at the relays. We show that the optimal collaborative-relay beamforming (CRB) solution achieves the full diversity of a MISO antenna system. Another main contribution of this letter is a distributed algorithm that allows each individual relay to learn its own weight, based on the Karush-Kuhn-Tucker (KKT) analysis.

167 citations


Journal ArticleDOI
TL;DR: A statistical reverberation model is proposed that takes the energy contribution of the direct path into account and is then used to derive a more general LRSV estimator, which in a particular case reduces to an existing LRSV estimator.
Abstract: In speech communication systems the received microphone signals are degraded by room reverberation and ambient noise that decrease the fidelity and intelligibility of the desired speaker. Reverberant speech can be separated into two components, viz. early speech and late reverberant speech. Recently, various algorithms have been developed to suppress late reverberant speech. One of the main challenges is to develop an estimator for the so-called late reverberant spectral variance (LRSV) which is required by most of these algorithms. In this letter a statistical reverberation model is proposed that takes the energy contribution of the direct-path into account. This model is then used to derive a more general LRSV estimator, which in a particular case reduces to an existing LRSV estimator. Experimental results show that the developed estimator is advantageous in case the source-microphone distance is smaller than the critical distance.

163 citations


Journal ArticleDOI
TL;DR: This paper reduces the complexity of compressed imaging by a factor of 10^6 for megapixel images by using a two-dimensional separable sensing operator, and shows that applying this method requires only a reasonable number of additional samples.
Abstract: Compressive imaging (CI) is a natural branch of compressed sensing (CS). Although a number of CI implementations have started to appear, the design of an efficient CI system remains a challenging problem. One of the main difficulties in implementing CI is that it involves huge amounts of data, which has far-reaching implications for the complexity of the optical design, calibration, data storage, and computational burden. In this paper, we solve these problems by using a two-dimensional separable sensing operator. By doing so, we reduce the complexity by a factor of 10^6 for megapixel images. We show that applying this method requires only a reasonable number of additional samples.

159 citations
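The complexity reduction comes from replacing one huge dense operator acting on the flattened image with two small matrices acting on the rows and columns of X: for an n x n image with m measurements per dimension, storage drops from m^2 n^2 entries to 2mn. The equivalence kron(A, B) vec(X) = vec(A X B^T) (with row-major flattening) can be checked directly on a toy size:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 16, 8                      # image side and measurements per dimension (toy sizes)

X = rng.standard_normal((n, n))   # "image"
A = rng.standard_normal((m, n))   # row sensing matrix
B = rng.standard_normal((m, n))   # column sensing matrix

Y_sep = A @ X @ B.T               # separable operator: two small matrix products

# The equivalent dense operator acts on the flattened image with a Kronecker
# matrix of size (m*m) x (n*n); with numpy's row-major flattening the identity
# is kron(A, B) @ X.reshape(-1) == (A @ X @ B.T).reshape(-1).
Phi = np.kron(A, B)
y_full = Phi @ X.reshape(-1)

storage_ratio = Phi.size / (A.size + B.size)   # equals m*n/2; ~1e6 at megapixel scale
```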


Journal ArticleDOI
TL;DR: The derivation of detection and false-alarm probabilities for energy detectors in cognitive radio networks when a sensing node of the secondary system has correlated multiple antennas is described.
Abstract: This paper describes the derivation of detection and false-alarm probabilities for energy detectors in cognitive radio networks when a sensing node of the secondary system has correlated multiple antennas. The sensing performance degradation due to the antenna correlation is then investigated based on the performance analysis. The conclusions of the analysis are verified by numerical simulation results.

141 citations
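For intuition, the detection and false-alarm probabilities of a plain energy detector can be estimated by Monte Carlo. This baseline assumes a single antenna with uncorrelated complex Gaussian noise and signal; the letter's actual contribution, closed-form probabilities under antenna correlation, is not reproduced here, and all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
N, trials = 32, 20000                      # samples per sensing window (assumed)
snr = 10 ** (3 / 10)                       # 3 dB average SNR

def cgauss(shape, var):
    # Circularly symmetric complex Gaussian samples with per-sample variance var.
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

# H0: unit-variance noise only. H1: complex Gaussian signal plus noise.
e0 = np.sum(np.abs(cgauss((trials, N), 1.0)) ** 2, axis=1)
e1 = np.sum(np.abs(cgauss((trials, N), snr) + cgauss((trials, N), 1.0)) ** 2, axis=1)

thr = np.quantile(e0, 0.9)                 # threshold for a 10% false-alarm target
pfa = np.mean(e0 > thr)                    # false-alarm probability
pd = np.mean(e1 > thr)                     # detection probability
```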


Journal ArticleDOI
TL;DR: In addition to a nonparametric background model, this paper introduces a foreground model based on a small spatial neighborhood to improve discrimination sensitivity, and applies a Markov model to the change labels to improve the spatial coherence of the detections.
Abstract: Background subtraction is a powerful mechanism for detecting change in a sequence of images that finds many applications. The most successful background subtraction methods apply probabilistic models to background intensities evolving in time; nonparametric and mixture-of-Gaussians models are but two examples. The main difficulty in designing a robust background subtraction algorithm is the selection of a detection threshold. In this paper, we adapt this threshold to varying video statistics by means of two statistical models. In addition to a nonparametric background model, we introduce a foreground model based on a small spatial neighborhood to improve discrimination sensitivity. We also apply a Markov model to the change labels to improve the spatial coherence of the detections. The proposed methodology is applicable to other background models as well.

Journal ArticleDOI
TL;DR: The use of a zero-frequency resonator is proposed to extract the characteristics of the excitation source from speech signals by filtering out most of the time-varying vocal-tract information; the regions of glottal activity and the strengths of excitation estimated from the speech signal are in close agreement with those observed from simultaneously recorded electroglottograph signals.
Abstract: The objective of this work is to characterize certain important features of the excitation of speech, namely, detecting the regions of glottal activity and estimating the strength of excitation in each glottal cycle. The proposed method is based on the assumption that the excitation of the vocal-tract system can be approximated by a sequence of impulses of varying strengths. The effect of an impulse in the time domain is spread uniformly across the frequency domain, including at zero frequency. We propose the use of a zero-frequency resonator to extract the characteristics of the excitation source from speech signals by filtering out most of the time-varying vocal-tract information. The regions of glottal activity and the strengths of excitation estimated from the speech signal are in close agreement with those observed from simultaneously recorded electroglottograph signals. The performance of the proposed glottal activity detection is evaluated under different noisy environments at varying levels of degradation.

Journal ArticleDOI
TL;DR: Compared with previous work, it is shown that a suitable G-LSB-M can further reduce the ENMPP and lead to more secure steganographic schemes.
Abstract: Recently, a significant improvement of the well-known least significant bit (LSB) matching steganography has been proposed, reducing the changes to the cover image for the same amount of embedded secret data. When the embedding rate is 1, this method decreases the expected number of modifications per pixel (ENMPP) from 0.5 to 0.375. In this letter, we propose the so-called generalized LSB matching (G-LSB-M) scheme, which generalizes this method and LSB matching. The lower bound of the ENMPP for G-LSB-M is investigated, and a construction of G-LSB-M is presented using the sum and difference covering set of a finite cyclic group. Compared with previous work, we show that a suitable G-LSB-M can further reduce the ENMPP and lead to more secure steganographic schemes. Experimental results clearly illustrate the better resistance of G-LSB-M to steganalysis.
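The 0.375 figure comes from the pair-wise LSB matching scheme that G-LSB-M generalizes: two message bits are carried by a pixel pair with at most one change of 1, the second bit via the binary function f(a, b) = LSB(floor(a/2) + b). A sketch restricted to mid-range pixel values (boundary values 0 and 255 need special handling, omitted here):

```python
import numpy as np

rng = np.random.default_rng(8)

def f(a, b):
    # Binary function of pair-wise LSB matching: LSB of floor(a/2) + b.
    return ((a // 2) + b) & 1

def embed_pair(x1, x2, b1, b2):
    # Embed bits (b1, b2) into pixel pair (x1, x2), changing at most one pixel by 1.
    if (x1 & 1) == b1:
        if f(x1, x2) != b2:
            x2 = x2 + 1                   # x2 - 1 would work equally well
    else:
        # Flipping x1 by +1 or -1 changes its LSB either way; exactly one of the
        # two directions also makes f match b2, because floor((x1+1)/2) and
        # floor((x1-1)/2) always differ by 1.
        x1 = x1 - 1 if f(x1 - 1, x2) == b2 else x1 + 1
    return x1, x2

def extract_pair(y1, y2):
    return y1 & 1, f(y1, y2)

pairs = rng.integers(10, 246, size=(4000, 2))       # mid-range cover pixels
bits = rng.integers(0, 2, size=(4000, 2))           # message bits
stego = np.array([embed_pair(p[0], p[1], b[0], b[1]) for p, b in zip(pairs, bits)])
recovered = np.array([extract_pair(y[0], y[1]) for y in stego])
enmpp = np.mean(np.abs(stego - pairs))              # should be near 0.375 at rate 1
```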

Journal ArticleDOI
Deyun Wei1, Qiwen Ran1, Yuan-Min Li1, Jing Ma1, Liying Tan1 
TL;DR: A new convolution structure for the LCT is introduced that preserves the convolution theorem for the Fourier transform and is also easy to implement in the designing of filters.
Abstract: The linear canonical transform (LCT) plays an important role in many fields of optics and signal processing. Many properties of this transform are already known; however, its convolution theorems lack the elegance and simplicity of the Fourier transform (FT) counterpart, which states that the Fourier transform of the convolution of two functions is the product of their Fourier transforms. The purpose of this letter is to introduce a new convolution structure for the LCT that preserves the convolution theorem of the Fourier transform and is also easy to implement in the design of filters. Some well-known results about the convolution theorem in the FT and fractional Fourier transform (FRFT) domains are shown to be special cases of our results.

Journal ArticleDOI
TL;DR: A novel least-mean-square (LMS) algorithm for filtering speech sounds in the adaptive noise cancellation (ANC) problem, based on the minimization of the squared Euclidean norm of the difference weight vector under a stability constraint defined over the a posteriori estimation error.
Abstract: In this letter, we propose a novel least-mean-square (LMS) algorithm for filtering speech sounds in the adaptive noise cancellation (ANC) problem. It is based on the minimization of the squared Euclidean norm of the difference weight vector under a stability constraint defined over the a posteriori estimation error. To this end, the Lagrangian methodology has been used to derive a nonlinear adaptation rule defined in terms of the product of differential inputs and errors, which yields a generalization of the normalized (N)LMS algorithm. The proposed method shows better tracking ability in this context, as demonstrated in experiments carried out on the AURORA 2 and 3 speech databases. These provide an extensive performance evaluation along with an exhaustive comparison to standard LMS algorithms with almost the same computational load, including the NLMS and other recently reported LMS algorithms such as the modified (M)-NLMS, the error nonlinearity (EN)-LMS, and the normalized data nonlinearity (NDN)-LMS adaptation.

Journal ArticleDOI
TL;DR: A depth reconstruction filter and depth down/up sampling techniques to improve depth coding performance and achieve better rendering quality are proposed.
Abstract: A depth image represents three-dimensional (3-D) scene information and is commonly used for depth image-based rendering (DIBR) to support 3-D video and free-viewpoint video applications. The virtual view is generally rendered by the DIBR technique and its quality depends highly on the quality of depth image. Thus, efficient depth coding is crucial to realize the 3-D video system. In this letter, we propose a depth reconstruction filter and depth down/up sampling techniques to improve depth coding performance. Experimental results demonstrate that the proposed methods reduce the bit-rate for depth coding and achieve better rendering quality.

Journal ArticleDOI
TL;DR: Two criteria that can be used to design sequences with impulse-like periodic correlations, and which lead to rather different results in the aperiodic correlation case, are shown to be identical in the periodic case.
Abstract: Sequences with impulse-like correlations are at the core of several radar and communication applications. Two criteria that can be used to design such sequences, and which lead to rather different results in the aperiodic correlation case, are shown to be identical in the periodic case. Furthermore, two simplified versions of these two criteria, which similarly yield completely different sequences in the aperiodic case, are also shown to be equivalent. A corollary of these unexpected equivalences is that the periodic correlations of an arbitrary sequence must satisfy an intriguing identity, which is also presented in this letter.
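Periodic correlations are cheap to compute via the FFT, r = IFFT(|FFT(s)|^2), and a Zadoff-Chu sequence gives a concrete impulse-like example with exactly zero periodic sidelobes. This only illustrates the objects the letter studies; the equivalence results and the identity itself are in the paper.

```python
import numpy as np

N, u = 63, 1                                   # odd length and root index, gcd(u, N) = 1
n = np.arange(N)
zc = np.exp(-1j * np.pi * u * n * (n + 1) / N) # Zadoff-Chu sequence (odd-length form)

# Periodic autocorrelation via the FFT: r[k] = IFFT(|FFT(s)|^2)[k].
r = np.fft.ifft(np.abs(np.fft.fft(zc)) ** 2)

peak = abs(r[0])                               # equals the sequence energy N
max_sidelobe = np.max(np.abs(r[1:]))           # exactly zero up to rounding error
```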

Journal ArticleDOI
TL;DR: This letter introduces the GMM-UBM mean interval (GUMI) concept based on the Bhattacharyya distance, leading to a new kernel for the SVM classifier that exploits information not only from the mean but also from the covariance.
Abstract: Gaussian mixture model (GMM) and support vector machine (SVM) have become popular classifiers in text-independent speaker recognition. A GMM-supervector characterizes a speaker's voice with the parameters of a GMM, which include mean vectors, covariance matrices, and mixture weights. GMM-supervector SVM benefits from both the GMM and SVM frameworks to achieve state-of-the-art performance. The conventional Kullback-Leibler (KL) kernel in the GMM-supervector SVM classifier limits the adaptation of the GMM to the mean values and leaves the covariances unchanged. In this letter, we introduce the GMM-UBM mean interval (GUMI) concept based on the Bhattacharyya distance. This leads to a new kernel for the SVM classifier. Compared with the KL kernel, the new kernel allows us to exploit information not only from the mean but also from the covariance. We demonstrate the effectiveness of the new kernel on the 2006 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) dataset.

Journal ArticleDOI
TL;DR: It is shown that the underlying estimation problem is inefficient and that the maximum likelihood estimate yields a bias and a mean-square error (MSE) that both increase exponentially with the noise power.
Abstract: In source localization, one estimates the location of a source using a variety of relative position information. Many algorithms use certain powers of distances to effect localization. In practice, exact distance measurement is not directly available and must be estimated from information such as received signal strength (RSS), time of arrival, or time difference of arrival. This letter considers bias and variance issues in estimating powers of distances from RSS affected by practical log-normal shadowing. We show that the underlying estimation problem is inefficient and that the maximum likelihood estimate yields a bias and a mean-square error (MSE) that both increase exponentially with the noise power. We then characterize the class of unbiased estimates and show that there is only one estimator in this class, but that its MSE also grows exponentially with the noise power. Finally, we provide the linear minimum mean-square error (MMSE) estimate and show that its bias and MSE are both bounded in the noise power.
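The exponential growth of the bias is easy to reproduce: inverting the log-distance path-loss law maps zero-mean Gaussian shadowing (in dB) into a lognormal multiplicative error whose mean exceeds 1. A Monte Carlo sketch with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(9)
d_true, n_p, sigma = 100.0, 3.0, 8.0     # distance (m), path-loss exponent, shadowing std (dB)

# RSS relative to the 1 m reference: P = -10 n_p log10(d) + Gaussian shadowing.
P = -10 * n_p * np.log10(d_true) + sigma * rng.standard_normal(200000)

d_hat = 10 ** (-P / (10 * n_p))          # naive (ML) inversion of the path-loss law

# The shadowing makes d_hat lognormal: its mean exceeds d_true by the factor
# exp((a * sigma)^2 / 2), which grows exponentially with the noise power sigma^2.
a = np.log(10) / (10 * n_p)
predicted_mean = d_true * np.exp((a * sigma) ** 2 / 2)
empirical_mean = d_hat.mean()
```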

Journal ArticleDOI
Ran-Zan Wang1
TL;DR: The basis matrices for constructing the proposed n-level RIVC with small values of n=2, 3, 4 are introduced, and the results from two experiments are presented.
Abstract: This letter presents a novel visual cryptography scheme, called region incrementing visual cryptography (RIVC), for sharing visual secrets with multiple secrecy levels in a single image. In the proposed n-level RIVC scheme, the content of an image S is designated to multiple regions associated with n secret levels, and encoded to n+1 shares with the following features: (a) a single share reveals none of the secrets in S, (b) any t (2 ≤ t ≤ n+1) shares can be used to reveal t-1 levels of secrets, (c) the number and locations of the not-yet-revealed secrets are unknown to users, (d) all secrets in S can be disclosed when all of the n+1 shares are available, and (e) the secrets are recognized by visually inspecting correctly stacked shares without computation. The basis matrices for constructing the proposed n-level RIVC with small values of n=2, 3, 4 are introduced, and the results from two experiments are presented.

Journal ArticleDOI
TL;DR: A general analytical model is developed to analyze the performance of adaptive decode-and-forward cooperative networks with best-relay selection, showing that best-relay selection not only reduces the number of required channels but also maintains a full diversity order.
Abstract: Cooperative diversity networks have recently been proposed as a way to form virtual antenna arrays without using collocated multiple antennas. In this letter, we consider an adaptive decode-and-forward cooperative diversity system where a source node communicates with a destination node directly and indirectly (through multiple relays), and we investigate the performance of the best-relay selection scheme, in which only the best relay participates in relaying. Therefore, only two channels are needed in this case (one for the direct link and one for the best indirect link) regardless of the total number of relays. The best relay is selected as the relay node that can achieve the highest signal-to-noise ratio at the destination node. We develop a general analytical model to analyze the performance of adaptive decode-and-forward cooperative networks with best-relay selection. In particular, exact closed-form expressions for the error probability and Shannon capacity are derived over independent and nonidentical Rayleigh fading channels. Results show that best-relay selection not only reduces the number of required channels but also maintains a full diversity order.

Journal ArticleDOI
TL;DR: An algebraic norm-maximizing (ANOMAX) transmit strategy is derived by finding the relay amplification matrix that maximizes the weighted sum of the Frobenius norms of the effective channels, and the implications of this solution for the resulting signal-to-noise ratios are discussed.
Abstract: Two-way relaying is a promising scheme to achieve the ubiquitous mobile access to a reliable high data rate service, which is targeted for future mobile communication systems. In this contribution, we investigate two-way relaying with an amplify and forward relay, where the relay as well as the terminals are equipped with multiple antennas. Assuming that the terminals possess channel knowledge, the bidirectional two-way relaying channel is decoupled into two parallel effective single-user MIMO channels by subtracting the self-interference at the terminals. Thereby, any single-user MIMO technique can be applied to transmit the data. We derive an algebraic norm-maximizing (ANOMAX) transmit strategy by finding the relay amplification matrix which maximizes the weighted sum of the Frobenius norms of the effective channels and discuss the implications of this solution on the resulting signal to noise ratios. Finally, we compare ANOMAX to other existing transmission strategies via numerical computer simulations.

Journal ArticleDOI
TL;DR: It is shown that MIAA can outperform an existing competitive approach at a much lower computational cost.
Abstract: We introduce a missing-data recovery methodology based on a weighted least squares iterative adaptive approach (IAA). The proposed method is referred to as the missing-data IAA (MIAA), and it can be used for uniform or nonuniform sampling as well as for arbitrary missing-data patterns. MIAA uses the IAA spectrum estimates to retrieve the missing data by means of either a frequency-domain or a time-domain approach. Numerical examples are presented to show the effectiveness of MIAA for missing-data reconstruction. In particular, we show that MIAA can outperform an existing competitive approach at a much lower computational cost.

Journal ArticleDOI
TL;DR: A method for the instantaneous frequency estimation of a monocomponent nonlinear frequency modulated (FM) signal based on the pseudo Wigner-Ville distribution with an adaptive window width with an additional criterion for a proper window width selection is presented.
Abstract: A method for the instantaneous frequency (IF) estimation of a monocomponent nonlinear frequency modulated (FM) signal based on the pseudo Wigner-Ville distribution (PWVD) with an adaptive window width is presented. In order to improve the IF estimation accuracy, the original sliding pair-wise intersection of confidence intervals (SPICI) rule has been modified. An additional criterion for a proper window width selection is introduced, which takes into account the amount of overlap between the current and the previous confidence interval relative to the current interval length. The presented results show that the proposed method outperforms the original SPICI-based method by up to 42% in terms of the mean absolute error and up to 73% in terms of the mean squared error. It is also less sensitive to the window widths set selection than the original method.

Journal ArticleDOI
TL;DR: This work performs alignment by learning two explicit mappings that project point-pairs from the original coupled manifolds into the embeddings of a common manifold (CM); it treats HRM as the target manifold and employs manifold regularization to guarantee that the local geometry of CM is more consistent with that of HRM than LRM is.
Abstract: Many learning-based super-resolution (SR) methods are based on the manifold assumption, which claims that point-pairs from the low-resolution representation manifold (LRM) and the corresponding high-resolution representation manifold (HRM) possess similar local geometry. However, the manifold assumption does not hold well on the original coupled manifolds (i.e., LRM and HRM) due to the nonisometric one-to-multiple mappings from low-resolution (LR) image patches to high-resolution (HR) ones. To overcome this limitation, we propose a solution from the perspective of manifold alignment. In this context, we perform alignment by learning two explicit mappings which project the point-pairs from the original coupled manifolds into the embeddings of the common manifold (CM). For the task of SR reconstruction, we treat HRM as the target manifold and employ manifold regularization to guarantee that the local geometry of CM is more consistent with that of HRM than LRM is. After alignment, we carry out the SR reconstruction based on neighbor embedding between the new pair of the CM and the target HRM. In addition, we extend our method by aligning the multiple coupled subsets instead of the whole coupled manifolds to address the issue of global nonlinearity. Experimental results on face image super-resolution verify the effectiveness of our method.

Journal ArticleDOI
TL;DR: The proposed scheme exploits the time-domain signal properties of SFBC MIMO-OFDM systems to achieve a low-complexity architecture for candidate signal generation.
Abstract: The multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system with space-frequency block coding (SFBC) is an attractive technique due to its robustness for time selective fading channels. However, the SFBC MIMO-OFDM system also inherits from OFDM systems the drawback of high peak-to-average power ratio (PAPR) of the transmitted signal. The selected mapping (SLM) method is a major scheme for PAPR reduction. Unfortunately, computational complexity of the traditional SLM scheme is relatively high since it requires a number of inverse fast Fourier transforms (IFFTs). In this letter, a low-complexity PAPR reduction scheme is proposed for SFBC MIMO-OFDM systems, needing only one IFFT. The proposed scheme exploits the time-domain signal properties of SFBC MIMO-OFDM systems to achieve a low-complexity architecture for candidate signal generation.
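As a baseline, conventional SLM multiplies the frequency-domain symbol by U random phase sequences, runs one IFFT per candidate, and keeps the lowest-PAPR result; the letter's contribution is generating the candidates without the per-candidate IFFTs. A sketch of the baseline with assumed QPSK data and parameters:

```python
import numpy as np

rng = np.random.default_rng(10)
N, U = 256, 8                                  # subcarriers and SLM candidates (assumed)

def papr_db(x):
    # Peak-to-average power ratio of a time-domain block, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

X = rng.choice([1, -1, 1j, -1j], size=N)       # QPSK frequency-domain symbol
original = papr_db(np.fft.ifft(X))

best = original
for _ in range(U - 1):
    phases = rng.choice([1, -1, 1j, -1j], size=N)       # random phase sequence
    best = min(best, papr_db(np.fft.ifft(X * phases)))  # one IFFT per candidate
```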

Journal ArticleDOI
TL;DR: A nonparametric estimator for the differential entropy of a multidimensional distribution, computed from a limited set of data points by recursive rectilinear partitioning, is several orders of magnitude faster than other estimators.
Abstract: We describe a nonparametric estimator for the differential entropy of a multidimensional distribution, given a limited set of data points, based on recursive rectilinear partitioning. The estimator uses an adaptive partitioning method and runs in Θ(N log N) time, with low memory requirements. In experiments using known distributions, the estimator is several orders of magnitude faster than other estimators, with only a modest increase in bias and variance.
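A simplified version of such a partitioning estimator: recursively split at the median along the widest axis until cells are small, then apply the plug-in estimate sum_i (n_i/N) log(N V_i / n_i) over the cells. This sketch omits the adaptive stopping tests of the actual estimator; the sample size and leaf size are assumed.

```python
import numpy as np

rng = np.random.default_rng(11)

def entropy_rec(pts, lo, hi, n_total, min_pts=32, depth=0):
    # Split at the median along the widest axis until cells hold few points,
    # then use the plug-in cell estimate (n / N) * log(N * V / n), in nats.
    n = len(pts)
    if n <= min_pts or depth > 20:
        if n == 0:
            return 0.0
        vol = float(np.prod(hi - lo))
        return (n / n_total) * np.log(n_total * vol / n)
    axis = int(np.argmax(hi - lo))
    med = float(np.median(pts[:, axis]))
    hi_l, lo_r = hi.copy(), lo.copy()
    hi_l[axis], lo_r[axis] = med, med
    left, right = pts[pts[:, axis] <= med], pts[pts[:, axis] > med]
    return (entropy_rec(left, lo, hi_l, n_total, min_pts, depth + 1)
            + entropy_rec(right, lo_r, hi, n_total, min_pts, depth + 1))

data = rng.random((4096, 2))               # uniform on the unit square: H = 0 nats
h_est = entropy_rec(data, np.zeros(2), np.ones(2), len(data))
```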

Journal ArticleDOI
TL;DR: The exact relation between continuous and discrete LCTs is presented, which generalizes the corresponding relation for Fourier transforms, and is expressed in terms of a new definition of the discrete LCT (DLCT), which is independent of the sampling interval.
Abstract: Linear canonical transforms (LCTs) are a family of integral transforms with wide application in optical, acoustical, electromagnetic, and other wave propagation problems. The Fourier and fractional Fourier transforms are special cases of LCTs. We present the exact relation between continuous and discrete LCTs (which generalizes the corresponding relation for Fourier transforms), and also express it in terms of a new definition of the discrete LCT (DLCT), which is independent of the sampling interval. This provides the foundation for approximately computing the samples of the LCT of a continuous signal with the DLCT. The DLCT in this letter is analogous to the DFT and approximates the continuous LCT in the same sense that the DFT approximates the continuous Fourier transform. We also define the bicanonical width product which is a generalization of the time-bandwidth product.

Journal ArticleDOI
TL;DR: Numerical comparison with the state-of-the-art algorithms shows that the proposed algorithm is favorable when the design matrix is poorly conditioned or dense and very large.
Abstract: We propose an efficient algorithm for sparse signal reconstruction problems. The proposed algorithm is an augmented Lagrangian method based on the dual problem. It is efficient when the number of unknown variables is much larger than the number of observations because of the dual formulation. Moreover, the primal variable is explicitly updated and the sparsity in the solution is exploited. Numerical comparison with the state-of-the-art algorithms shows that the proposed algorithm is favorable when the design matrix is poorly conditioned or dense and very large.

Journal ArticleDOI
TL;DR: This work develops a parametric model that accounts for the measurements at multiple frequencies including the Doppler shift and proposes an algorithm to adaptively design the parameters for the next transmitting waveform.
Abstract: We address the problem of detecting a target moving in clutter environment using an orthogonal frequency division multiplexing (OFDM) radar. The broadband OFDM signal provides frequency diversity to improve the performance of the system. First, we develop a parametric model that accounts for the measurements at multiple frequencies including the Doppler shift. Then, we present a statistical detection test and evaluate its performance characteristics. Based on this, we propose an algorithm to adaptively design the parameters for the next transmitting waveform. Numerical examples illustrate our analytical results, demonstrating the achieved performance improvement due to the OFDM signaling method and adaptive waveform design.