# Papers in "IEEE Transactions on Signal Processing" (2006)

---

TL;DR: The K-SVD algorithm, a novel method for adapting dictionaries to achieve sparse signal representations, is an iterative procedure that alternates between sparse coding of the examples based on the current dictionary and updating the dictionary atoms to better fit the data.

Abstract: In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field has concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a prespecified set of linear transforms or adapting the dictionary to a set of training signals. Both of these techniques have been considered, but this topic is largely still open. In this paper, we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method, the K-SVD algorithm, generalizing the K-means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results both on synthetic tests and in applications on real image data.

8,905 citations
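
The alternation the abstract describes can be illustrated at sparsity level one, where K-SVD reduces to a generalization of K-means: assign each signal to its best single atom, then refit each atom by a rank-1 SVD of its group. A minimal NumPy sketch on toy data (not the paper's full algorithm, which pairs any pursuit method with the same atom-update step):

```python
import numpy as np

rng = np.random.default_rng(0)

def ksvd_1sparse(Y, n_atoms, n_iter=10):
    """Minimal K-SVD at sparsity level one (the regime where K-SVD
    reduces to a generalization of K-means clustering)."""
    n, m = Y.shape
    # Initialize atoms from randomly chosen, normalized training signals.
    D = Y[:, rng.choice(m, n_atoms, replace=False)].copy()
    D /= np.linalg.norm(D, axis=0)
    errs = []
    for _ in range(n_iter):
        # Sparse coding: each signal is approximated by its best single atom.
        corr = D.T @ Y
        idx = np.argmax(np.abs(corr), axis=0)
        X = np.zeros((n_atoms, m))
        X[idx, np.arange(m)] = corr[idx, np.arange(m)]
        # Dictionary update: rank-1 SVD fit of each atom's group of signals
        # (at sparsity one, the residual without atom k is just the data).
        for k in range(n_atoms):
            members = np.flatnonzero(idx == k)
            if members.size == 0:
                continue
            U, s, Vt = np.linalg.svd(Y[:, members], full_matrices=False)
            D[:, k] = U[:, 0]
            X[k, members] = s[0] * Vt[0]
        errs.append(np.linalg.norm(Y - D @ X))
    return D, X, errs

# Train on signals drawn from a random ground-truth dictionary.
D0 = rng.standard_normal((8, 5))
D0 /= np.linalg.norm(D0, axis=0)
Y = D0[:, rng.integers(0, 5, 300)] * rng.standard_normal(300)
D, X, errs = ksvd_1sparse(Y, n_atoms=5)
```

Both stages can only reduce the representation error, so the error sequence is non-increasing across iterations, mirroring the convergence-acceleration remark in the abstract.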

---

TL;DR: Under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture, and closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived.

Abstract: A new recursive algorithm is proposed for jointly estimating the time-varying number of targets and their states from a sequence of observation sets in the presence of data association uncertainty, detection uncertainty, noise, and false alarms. The approach involves modelling the respective collections of targets and measurements as random finite sets and applying the probability hypothesis density (PHD) recursion to propagate the posterior intensity, which is a first-order statistic of the random finite set of targets, in time. At present, there is no closed-form solution to the PHD recursion. This paper shows that under linear, Gaussian assumptions on the target dynamics and birth process, the posterior intensity at any time step is a Gaussian mixture. More importantly, closed-form recursions for propagating the means, covariances, and weights of the constituent Gaussian components of the posterior intensity are derived. The proposed algorithm combines these recursions with a strategy for managing the number of Gaussian components to increase efficiency. This algorithm is extended to accommodate mildly nonlinear target dynamics using approximation strategies from the extended and unscented Kalman filters.

1,805 citations
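
The closed-form measurement update can be sketched in the simplest setting: a scalar state observed directly (H = 1), one measurement, with `p_d` the detection probability and `kappa` the clutter intensity (all toy values, not from the paper):

```python
import math

def gm_phd_update(components, z, p_d=0.9, kappa=1e-3, r=1.0):
    """One measurement update of the Gaussian-mixture PHD recursion for a
    scalar state observed directly (H = 1). `components` is a list of
    (weight, mean, variance) triples; returns the updated mixture."""
    # Missed-detection terms: weights scaled by (1 - p_d), moments unchanged.
    updated = [((1 - p_d) * w, m, p) for (w, m, p) in components]
    # Detection terms generated by measurement z (one per prior component).
    detected = []
    for (w, m, p) in components:
        s = p + r                         # innovation variance
        k = p / s                         # Kalman gain
        q = math.exp(-0.5 * (z - m) ** 2 / s) / math.sqrt(2 * math.pi * s)
        detected.append((p_d * w * q, m + k * (z - m), (1 - k) * p))
    norm = kappa + sum(w for (w, _, _) in detected)
    updated += [(w / norm, m, p) for (w, m, p) in detected]
    return updated

# Two prior components; the measurement z = 0.2 supports the first one.
prior = [(0.5, 0.0, 1.0), (0.5, 10.0, 1.0)]
post = gm_phd_update(prior, z=0.2)
```

Each detection term is an ordinary Kalman update of one component, which is why the whole posterior intensity stays a Gaussian mixture; the component count grows each scan, motivating the pruning/merging strategy the abstract mentions.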

---

New York University, New Jersey Institute of Technology, Lehigh University, University of Delaware, Alcatel-Lucent

TL;DR: The optimal detector in the Neyman–Pearson sense is developed and analyzed for the statistical MIMO radar, and it is shown that the optimal detector consists of noncoherent processing of the receiver sensors' outputs and that, for cases of practical interest, detection performance is superior to that obtained through coherent processing.

Abstract: Inspired by recent advances in multiple-input multiple-output (MIMO) communications, this proposal introduces the statistical MIMO radar concept. To the authors' knowledge, this is the first time that statistical MIMO is being proposed for radar. The fundamental difference between statistical MIMO and other radar array systems is that the latter seek to maximize the coherent processing gain, while statistical MIMO radar capitalizes on the diversity of target scattering to improve radar performance. Coherent processing is made possible by highly correlated signals at the receiver array, whereas in statistical MIMO radar, the signals received by the array elements are uncorrelated. Radar targets generally consist of many small elemental scatterers that are fused by the radar waveform and the processing at the receiver to result in echoes with fluctuating amplitude and phase. It is well known that in conventional radar, slow fluctuations of the target radar cross section (RCS) result in target fades that degrade radar performance. By spacing the antenna elements at the transmitter and at the receiver such that the target angular spread is manifested, the MIMO radar can exploit the spatial diversity of target scatterers, opening the way to a variety of new techniques that can improve radar performance. This paper focuses on the application of the target spatial diversity to improve detection performance. The optimal detector in the Neyman–Pearson sense is developed and analyzed for the statistical MIMO radar. It is shown that the optimal detector consists of noncoherent processing of the receiver sensors' outputs and that, for cases of practical interest, detection performance is superior to that obtained through coherent processing. An optimal detector invariant to the signal and noise levels is also developed and analyzed. In this case as well, statistical MIMO radar provides great improvements over other types of array radars.

1,413 citations

---

TL;DR: This paper considers the problem of downlink transmit beamforming for wireless transmission and downstream precoding for digital subscriber wireline transmission, in the context of common information broadcasting or multicasting applications wherein channel state information (CSI) is available at the transmitter.

Abstract: This paper considers the problem of downlink transmit beamforming for wireless transmission and downstream precoding for digital subscriber wireline transmission, in the context of common information broadcasting or multicasting applications wherein channel state information (CSI) is available at the transmitter. Unlike the usual "blind" isotropic broadcasting scenario, the availability of CSI allows transmit optimization. A minimum transmission power criterion is adopted, subject to prescribed minimum received signal-to-noise ratios (SNRs) at each of the intended receivers. A related max-min SNR "fair" problem formulation is also considered subject to a transmitted power constraint. It is proven that both problems are NP-hard; however, suitable reformulation allows the successful application of semidefinite relaxation (SDR) techniques. SDR yields an approximate solution plus a bound on the optimum value of the associated cost/reward. SDR is motivated from a Lagrangian duality perspective, and its performance is assessed via pertinent simulations for the case of Rayleigh fading wireless channels. We find that SDR typically yields solutions that are within 3-4 dB of the optimum, which is often good enough in practice. In several scenarios, SDR generates exact solutions that meet the associated bound on the optimum value. This is illustrated using measured very-high-bit-rate Digital Subscriber Line (VDSL) channel data, and far-field beamforming for a uniform linear transmit antenna array.

1,345 citations
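
After solving the relaxation, SDR pipelines typically extract a feasible beamformer by Gaussian randomization. A hedged sketch of that final step, assuming the relaxed covariance X has already been produced by an SDP solver (here a placeholder X = I stands in for it):

```python
import numpy as np

rng = np.random.default_rng(1)

def sdr_randomize(X, H, gamma=1.0, n_draws=300):
    """Gaussian-randomization step used after semidefinite relaxation:
    draw beamformer candidates w ~ CN(0, X), scale each so every
    receiver meets |h_i^H w|^2 >= gamma, and keep the cheapest one."""
    n = X.shape[0]
    L = np.linalg.cholesky(X + 1e-9 * np.eye(n))
    best_w, best_power = None, np.inf
    for _ in range(n_draws):
        g = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        w = L @ g
        gains = np.abs(H.conj() @ w) ** 2      # |h_i^H w|^2 per receiver
        scale = np.sqrt(gamma / gains.min())   # boost worst receiver to gamma
        power = np.linalg.norm(scale * w) ** 2
        if power < best_power:
            best_w, best_power = scale * w, power
    return best_w, best_power

# Toy instance: 3 single-antenna receivers, 4 transmit antennas.
H = (rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))) / np.sqrt(2)
w, power = sdr_randomize(np.eye(4), H)
```

Every candidate is rescaled until the worst receiver exactly meets its SNR target, so the returned beamformer is always feasible; its power upper-bounds the (NP-hard) optimum, while the SDP objective lower-bounds it.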

---

TL;DR: This paper relates the general Volterra representation to the classical Wiener, Hammerstein, Wiener-Hammerstein, and parallel Wiener structures, describes some state-of-the-art predistortion models based on memory polynomials, and proposes a new generalized memory polynomial that achieves the best performance to date.

Abstract: Conventional radio-frequency (RF) power amplifiers operating with wideband signals, such as wideband code-division multiple access (WCDMA) in the Universal Mobile Telecommunications System (UMTS), must be backed off considerably from their peak power level in order to control out-of-band spurious emissions, also known as "spectral regrowth." Adapting these amplifiers to wideband operation therefore entails larger size and higher cost than would otherwise be required for the same power output. An alternative solution, which is gaining widespread popularity, is to employ digital baseband predistortion ahead of the amplifier to compensate for the nonlinearity effects, hence allowing it to run closer to its maximum output power while maintaining low spectral regrowth. Recent improvements to the technique have included memory effects in the predistortion model, which are essential as the bandwidth increases. In this paper, we relate the general Volterra representation to the classical Wiener, Hammerstein, Wiener-Hammerstein, and parallel Wiener structures, and go on to describe some state-of-the-art predistortion models based on memory polynomials. We then propose a new generalized memory polynomial that achieves the best performance to date, as demonstrated herein with experimental results obtained from a testbed using an actual 30-W, 2-GHz power amplifier.

1,305 citations
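
The least-squares identification behind such predistorters can be sketched with a plain memory polynomial and indirect learning on a toy amplifier model (the paper's generalized memory polynomial adds lagged cross terms not shown here; all coefficients below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def mp_basis(x, K=3, M=2):
    """Odd-order memory-polynomial regressors x[n-m] * |x[n-m]|^(2k)."""
    cols = []
    for m in range(M + 1):
        xm = np.roll(x, m)
        xm[:m] = 0                  # zero-fill instead of wrap-around
        for k in range(K):
            cols.append(xm * np.abs(xm) ** (2 * k))
    return np.stack(cols, axis=1)

# Toy power amplifier: mild cubic compression plus a one-tap memory term.
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) / np.sqrt(2)
xd = np.roll(x, 1); xd[0] = 0
y = x - 0.03 * x * np.abs(x) ** 2 + 0.1 * xd

# Indirect learning: fit a post-inverse mapping the PA output back to its
# input; the same coefficients can then be applied as a predistorter.
c, *_ = np.linalg.lstsq(mp_basis(y), x, rcond=None)
x_hat = mp_basis(y) @ c
nmse = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

The normalized error of the fitted post-inverse is far below the raw distortion level, which is the mechanism that lets the amplifier run closer to saturation with low spectral regrowth.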

---

TL;DR: This paper shows that the configuration with spatially orthogonal signal transmission is equivalent to additional virtual sensors which extend the array aperture with virtual spatial tapering and provide higher performance in target detection, angular estimation accuracy, and angular resolution.

Abstract: In this paper, we propose a new space-time coding configuration for target detection and localization by radar or sonar systems. In common active array systems, the transmitted signal is usually coherent between the different elements of the array. This configuration does not allow array processing in the transmit mode. However, space-time coding of the transmitted signals allows the beam pattern to be steered digitally on transmit as well as on receive. The ability to steer the transmitted beam pattern helps to avoid beam shape loss. We show that the configuration with spatially orthogonal signal transmission is equivalent to additional virtual sensors which extend the array aperture with virtual spatial tapering. These virtual sensors can be used to form narrower beams with lower sidelobes and, therefore, provide higher performance in target detection, angular estimation accuracy, and angular resolution. The generalized likelihood ratio test for target detection and the maximum likelihood and Cramer-Rao bound for target direction estimation are derived for an arbitrary signal coherence matrix. It is shown that the optimal performance is achieved for orthogonal transmitted signals. Target detection and localization performances are evaluated and studied theoretically and via simulations.

990 citations
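
The virtual-sensor equivalence is easy to verify numerically: with orthogonal waveforms, the two-way steering vector is the Kronecker product of the receive and transmit steering vectors, so (in this illustrative geometry) a 4-element transmit ULA at half-wavelength spacing paired with a 4-element receive ULA at four times that spacing behaves like a filled 16-element ULA:

```python
import numpy as np

def steering(n, d, theta):
    """ULA steering vector; element spacing d in wavelengths."""
    return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

theta = np.deg2rad(15)
a_tx = steering(4, 0.5, theta)   # transmit array, half-wavelength spacing
a_rx = steering(4, 2.0, theta)   # receive array, 4x sparser spacing
v = np.kron(a_rx, a_tx)          # virtual steering vector after matched filtering
```

Element k = 4i + j of the Kronecker product sits at virtual position 2i + 0.5j = 0.5k wavelengths, i.e., exactly the aperture of a 16-element half-wavelength ULA, which is where the narrower beams and lower sidelobes come from.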

---

TL;DR: The proposed precoder design is general, and as a special case, it solves the transmit rank-one beamforming problem and can significantly outperform existing linear precoders.

Abstract: In this paper, the problem of designing linear precoders for fixed multiple-input-multiple-output (MIMO) receivers is considered. Two different design criteria are considered. In the first, the transmitted power is minimized subject to signal-to-interference-plus-noise-ratio (SINR) constraints. In the second, the worst case SINR is maximized subject to a power constraint. It is shown that both problems can be solved using standard conic optimization packages. In addition, conditions are developed for the optimal precoder for both of these problems, and two simple fixed-point iterations are proposed to find the solutions that satisfy these conditions. The relation to the well-known uplink-downlink duality in the context of joint transmit beamforming and power control is also explored. The proposed precoder design is general, and as a special case, it solves the transmit rank-one beamforming problem. Simulation results in a multiuser system show that the resulting precoders can significantly outperform existing linear precoders.

987 citations

---

TL;DR: This paper considers the popular linear least squares and minimum mean-square-error approaches and proposes new scaled LS (SLS) and relaxed MMSE techniques which require less knowledge of the channel second-order statistics and/or have better performance than the conventional LS and MMSE channel estimators.

Abstract: In this paper, we study the performance of multiple-input multiple-output channel estimation methods using training sequences. We consider the popular linear least squares (LS) and minimum mean-square-error (MMSE) approaches and propose new scaled LS (SLS) and relaxed MMSE techniques which require less knowledge of the channel second-order statistics and/or have better performance than the conventional LS and MMSE channel estimators. The optimal choice of training signals is investigated for the aforementioned techniques. In the case of multiple LS channel estimates, the best linear unbiased estimation (BLUE) scheme for their linear combining is developed and studied.

924 citations
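
The conventional LS estimator, and the shrinkage idea behind the scaled variant, can be sketched for an orthogonal training block. The scaling factor below uses only the channel and noise energies, in the spirit of the paper's SLS estimator; the exact expressions there differ, so treat it as a toy version:

```python
import numpy as np

rng = np.random.default_rng(3)
nt, nr, L = 4, 4, 8

# Orthogonal training block: nt rows of an 8-point DFT, so P @ P^H = L * I.
P = np.exp(-2j * np.pi * np.outer(np.arange(nt), np.arange(L)) / L)
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

def ls_estimate(Y, P):
    """Conventional LS channel estimate: H_hat = Y P^H (P P^H)^{-1}."""
    Ph = P.conj().T
    return Y @ Ph @ np.linalg.inv(P @ Ph)

H_ls = ls_estimate(H @ P, P)          # noiseless: LS is exact

# With noise, scaled LS shrinks the estimate toward zero using only the
# channel and noise energies (illustrative shrinkage, not the paper's formula).
sigma = 0.5
N = sigma * (rng.standard_normal((nr, L)) + 1j * rng.standard_normal((nr, L))) / np.sqrt(2)
gamma = 1.0 / (1.0 + sigma**2 / L)    # E||H||^2 / (E||H||^2 + LS error energy)
H_sls = gamma * ls_estimate(H @ P + N, P)
```

With orthogonal training, the LS error energy per channel entry is sigma^2 / L, which is what makes a shrinkage factor computable from energies alone, with no need for the full second-order channel statistics.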

---

TL;DR: Simulations show that the predictions made by the proved theorems tend to be very conservative; this is consistent with some recent advances in probabilistic analysis based on random matrix theory.

Abstract: The sparse representation of a multiple-measurement vector (MMV) is a relatively new problem in sparse representation, and efficient methods have been proposed. Although many theoretical results are available for the simpler single-measurement vector (SMV) case, the corresponding theoretical analysis for MMV is lacking. In this paper, some known SMV results are generalized to MMV. Some of these new results take advantage of the additional information in the MMV formulation. We consider uniqueness under both an ℓ0-norm-like criterion and an ℓ1-norm-like criterion. The consequent equivalence between the ℓ0-norm approach and the ℓ1-norm approach indicates a computationally efficient way of finding the sparsest representation in a redundant dictionary. For greedy algorithms, it is proven that under certain conditions, orthogonal matching pursuit (OMP) can find the sparsest representation of an MMV with computational efficiency, just as in SMV. Simulations show that the predictions made by the proved theorems tend to be very conservative; this is consistent with some recent advances in probabilistic analysis based on random matrix theory. These connections are discussed.

821 citations
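
The MMV extension of OMP discussed above can be sketched as simultaneous OMP: each step selects the atom whose correlations with all residual columns have the largest row norm, then refits all coefficient columns jointly on the chosen support (a minimal sketch on a toy instance, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(4)

def somp(A, Y, k):
    """Simultaneous OMP for the MMV model A X = Y with a common row support."""
    support, R = [], Y.copy()
    res_norms = [np.linalg.norm(Y)]
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)   # row norms of correlations
        scores[support] = -1.0                     # never reselect an atom
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        Xs, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ Xs                            # joint least-squares residual
        res_norms.append(np.linalg.norm(R))
    return support, res_norms

A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)
X_true = np.zeros((40, 5))
X_true[[3, 17]] = rng.standard_normal((2, 5))      # common 2-row support
support, res = somp(A, A @ X_true, k=2)
```

Because the support only grows, the joint residual is projected onto a nested sequence of subspaces and its norm cannot increase from step to step.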

---

TL;DR: For deterministic mean-location parameter estimation from quantized observations, a class of maximum-likelihood estimators is introduced that requires transmitting just one bit per sensor to achieve an estimation variance close to that of the clairvoyant sample-mean estimator.

Abstract: We study deterministic mean-location parameter estimation when only quantized versions of the original observations are available, due to bandwidth constraints. When the dynamic range of the parameter is small or comparable with the noise variance, we introduce a class of maximum-likelihood estimators that require transmitting just one bit per sensor to achieve an estimation variance close to that of the (clairvoyant) sample mean estimator. When the dynamic range is comparable or larger than the noise standard deviation, we show that an optimum quantization step exists to achieve the best possible variance for a given bandwidth constraint. We will also establish that in certain cases the sample mean estimator formed by quantized observations is preferable for complexity reasons. We finally touch upon algorithm implementation issues and guarantee that all the numerical maximizations required by the proposed estimators are concave.

578 citations
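
The one-bit construction can be sketched for a known threshold tau: each sensor sends the indicator of its observation exceeding tau, and the fusion center inverts the Gaussian CDF at the observed exceedance fraction (toy parameter values; the paper analyzes this estimator's variance in detail):

```python
import random
from statistics import NormalDist

random.seed(5)
theta, sigma, tau, n = 0.3, 1.0, 0.0, 100_000

# Each sensor transmits a single bit: is its noisy observation above tau?
bits = sum(random.gauss(theta, sigma) > tau for _ in range(n))
p_hat = bits / n

# ML inversion of P(x > tau) = 1 - Phi((tau - theta) / sigma):
theta_hat = tau - sigma * NormalDist().inv_cdf(1 - p_hat)
```

When tau sits within a noise standard deviation of theta, this estimator's variance is only a small constant factor above that of the clairvoyant sample mean, which is the regime the TL;DR refers to.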

---

TL;DR: This work proposes two low-complexity suboptimal user selection algorithms for multiuser MIMO systems with block diagonalization that aim to select a subset of users such that the total throughput is nearly maximized.

Abstract: Block diagonalization (BD) is a precoding technique that eliminates interuser interference in downlink multiuser multiple-input multiple-output (MIMO) systems. With the assumptions that all users have the same number of receive antennas and utilize all receive antennas when scheduled for transmission, the number of simultaneously supportable users with BD is limited by the ratio of the number of base station transmit antennas to the number of user receive antennas. In a downlink MIMO system with a large number of users, the base station may select a subset of users to serve in order to maximize the total throughput. The brute-force search for the optimal user set, however, is computationally prohibitive. We propose two low-complexity suboptimal user selection algorithms for multiuser MIMO systems with BD. Both algorithms aim to select a subset of users such that the total throughput is nearly maximized. The first user selection algorithm greedily maximizes the total throughput, whereas the criterion of the second algorithm is based on the channel energy. We show that both algorithms have linear complexity in the total number of users and achieve around 95% of the total throughput of the complete search method in simulations.
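
The greedy idea can be sketched with a sum-rate surrogate, log det(I + H_S H_S^H) over the stacked channels of the selected users. This is a simplification: the paper's two algorithms score candidates by BD throughput and by channel energy, respectively, so the metric below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(6)

def greedy_select(channels, n_select):
    """Greedy user selection: repeatedly add the user that most increases
    log det(I + H_S H_S^H) over the stacked channels of the selected set."""
    selected, rates = [], []
    for _ in range(n_select):
        best_u, best_rate = None, -np.inf
        for u in range(len(channels)):
            if u in selected:
                continue
            Hs = np.vstack([channels[v] for v in selected + [u]])
            rate = np.linalg.slogdet(np.eye(Hs.shape[0]) + Hs @ Hs.conj().T)[1]
            if rate > best_rate:
                best_u, best_rate = u, rate
        selected.append(best_u)
        rates.append(best_rate)
    return selected, rates

# 8 candidate users, 2 receive antennas each, 4 base-station antennas:
# BD can then serve at most 4 / 2 = 2 users simultaneously.
users = [(rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
         for _ in range(8)]
selected, rates = greedy_select(users, n_select=2)
```

Each greedy pass scans the remaining users once, which is the source of the linear-in-users complexity claimed for the paper's algorithms.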

---

TL;DR: The robust H∞ filtering problem is studied for stochastic uncertain discrete time-delay systems with missing measurements, and filters are designed such that, for all possible missing observations and all admissible parameter uncertainties, the filtering error system is exponentially mean-square stable.

Abstract: In this paper, the robust H∞ filtering problem is studied for stochastic uncertain discrete time-delay systems with missing measurements. The missing measurements are described by a binary switching sequence satisfying a conditional probability distribution. We aim to design filters such that, for all possible missing observations and all admissible parameter uncertainties, the filtering error system is exponentially mean-square stable, and the prescribed H∞ performance constraint is met. In terms of certain linear matrix inequalities (LMIs), sufficient conditions for the solvability of the addressed problem are obtained. When these LMIs are feasible, an explicit expression of a desired robust H∞ filter is also given. An optimization problem is subsequently formulated by optimizing the H∞ filtering performances. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.

---

TL;DR: This paper studies the mean-square performance of a convex combination of two transversal filters and shows how the universality of the scheme can be exploited to design filters with improved tracking performance.

Abstract: Combination approaches provide an interesting way to improve adaptive filter performance. In this paper, we study the mean-square performance of a convex combination of two transversal filters. The individual filters are independently adapted using their own error signals, while the combination is adapted by means of a stochastic gradient algorithm in order to minimize the error of the overall structure. General expressions are derived that show that the method is universal with respect to the component filters, i.e., in steady-state, it performs at least as well as the best component filter. Furthermore, when the correlation between the a priori errors of the components is low enough, their combination is able to outperform both of them. Using energy conservation relations, we specialize the results to a combination of least mean-square filters operating both in stationary and in nonstationary scenarios. We also show how the universality of the scheme can be exploited to design filters with improved tracking performance.
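
A minimal sketch of the scheme on a toy system-identification task: two LMS filters adapt independently on their own errors, while a sigmoid-parametrized mixing weight adapts by stochastic gradient on the combined error (all step sizes below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(7)

# Unknown FIR plant, white input, small observation noise.
L, n = 8, 20000
w_true = rng.standard_normal(L)
x = rng.standard_normal(n + L)
d = np.array([w_true @ x[i:i + L] for i in range(n)]) + 0.01 * rng.standard_normal(n)

w1, w2 = np.zeros(L), np.zeros(L)   # fast and slow LMS components
mu1, mu2, mu_a = 0.05, 0.005, 100.0
a, sq_err = 0.0, []                 # lambda = sigmoid(a) mixes the two outputs
for i in range(n):
    u = x[i:i + L]
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    e = d[i] - (lam * y1 + (1 - lam) * y2)
    sq_err.append(e * e)
    w1 += mu1 * (d[i] - y1) * u     # each filter adapts on its own error
    w2 += mu2 * (d[i] - y2) * u
    # Stochastic-gradient step on the mixing parameter, kept in [-4, 4]
    # so that lambda never saturates completely.
    a = float(np.clip(a + mu_a * e * (y1 - y2) * lam * (1 - lam), -4.0, 4.0))
```

The fast filter dominates during initial convergence and the slow one in steady state; adapting lambda on the overall error is what yields the universality property described in the abstract.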

---

TL;DR: A new generalized correlation measure is developed that includes the information of both the distribution and that of the time structure of a stochastic process.

Abstract: With an abundance of tools based on kernel methods and information theoretic learning, a void still exists in incorporating both the time structure and the statistical distribution of the time series in the same functional measure. In this paper, a new generalized correlation measure is developed that includes the information of both the distribution and that of the time structure of a stochastic process. It is shown how this measure can be interpreted from a kernel method as well as from an information theoretic learning point of view, demonstrating some relevant properties. To underscore the effectiveness of the new measure, a simple blind equalization problem is considered using a coded signal.
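
The generalized correlation measure described here (widely known as correntropy) is straightforward to estimate from samples with a Gaussian kernel: average the kernel evaluated on lagged differences of the process. A small sketch on a noisy sinusoid:

```python
import numpy as np

rng = np.random.default_rng(8)

def correntropy(x, lag, sigma=1.0):
    """Sample estimate of V(lag) = E[ G_sigma(x_t - x_{t-lag}) ] with a
    Gaussian kernel, blending distribution and time-structure information."""
    a, b = x[lag:], x[:len(x) - lag]
    d = a - b
    return float(np.mean(np.exp(-d * d / (2 * sigma**2))
                         / (sigma * np.sqrt(2 * np.pi))))

x = np.sin(0.3 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
v0 = correntropy(x, 0)
v5 = correntropy(x, 5)
```

At zero lag the differences vanish, so the estimate equals the kernel peak 1 / (sigma * sqrt(2 * pi)) exactly; no other lag can exceed it, which gives the measure a correlation-function-like shape.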

---

TL;DR: The proposed power scheduling scheme suggests that the sensors with bad channels or poor observation qualities should decrease their quantization resolutions or simply become inactive in order to save power.

Abstract: We consider the optimal power scheduling problem for the decentralized estimation of a noise-corrupted deterministic signal in an inhomogeneous sensor network. Sensor observations are first quantized into discrete messages, then transmitted to a fusion center where a final estimate is generated. Supposing that the sensors use a universal decentralized quantization/estimation scheme and an uncoded quadrature amplitude modulated (QAM) transmission strategy, we determine the optimal quantization and transmit power levels at local sensors so as to minimize the total transmit power, while ensuring a given mean squared error (mse) performance. The proposed power scheduling scheme suggests that the sensors with bad channels or poor observation qualities should decrease their quantization resolutions or simply become inactive in order to save power. For the remaining active sensors, their optimal quantization and transmit power levels are determined jointly by individual channel path losses, local observation noise variance, and the targeted mse performance. Numerical examples show that in an inhomogeneous sensing environment, significant energy savings are possible when compared to the uniform quantization strategy.

---

TL;DR: An algorithm for estimating the mixing matrix that can be viewed as an extension of the DUET and the TIFROM methods is first developed and a necessary and sufficient condition for recoverability of a source vector is obtained.

Abstract: This paper discusses underdetermined (i.e., with more sources than sensors) blind source separation (BSS) using a two-stage sparse representation approach. The first challenging task of this approach is to estimate precisely the unknown mixing matrix. In this paper, an algorithm for estimating the mixing matrix that can be viewed as an extension of the DUET and the TIFROM methods is first developed. Standard clustering algorithms (e.g., the K-means method) can also be used for estimating the mixing matrix if the sources are sufficiently sparse. Compared with the DUET and TIFROM methods and with standard clustering algorithms, the proposed method can solve a broader class of problems, because the required key condition on the sparsity of the sources can be considerably relaxed. The second task of the two-stage approach is to estimate the source matrix using a standard linear programming algorithm. Another main contribution of the work described in this paper is the development of a recoverability analysis. After extending results from the literature, a necessary and sufficient condition for recoverability of a source vector is obtained. Based on this condition and various types of source sparsity, several probability inequalities and probability estimates for the recoverability issue are established. Finally, simulation results that illustrate the effectiveness of the theoretical results are presented.

---

TL;DR: Building on algorithms and tradeoffs previously studied for the simplest distributed setup, estimating a scalar location parameter in zero-mean additive white Gaussian noise of known variance, this paper derives distributed estimators based on binary observations, along with their fundamental error-variance limits, for more pragmatic signal models.

Abstract: Wireless sensor networks (WSNs) deployed to perform surveillance and monitoring tasks have to operate under stringent energy and bandwidth limitations. These constraints motivate distributed estimation scenarios where sensors quantize and transmit only one, or a few, bits per observation, for use in forming parameter estimators of interest. In a companion paper, we developed algorithms and studied interesting tradeoffs that emerge even in the simplest distributed setup of estimating a scalar location parameter in the presence of zero-mean additive white Gaussian noise of known variance. Herein, we derive distributed estimators based on binary observations along with their fundamental error-variance limits for more pragmatic signal models: i) known univariate but generally non-Gaussian noise probability density functions (pdfs); ii) known noise pdfs with a finite number of unknown parameters; iii) completely unknown noise pdfs; and iv) practical generalizations to multivariate and possibly correlated pdfs. Estimators utilizing either independent or colored binary observations are developed and analyzed. Corroborating simulations present comparisons with the clairvoyant sample-mean estimator based on unquantized sensor observations, and include a motivating application entailing distributed parameter estimation where a WSN is used for habitat monitoring.

---

TL;DR: This paper proposes a new likelihood ratio (LR)-based fusion rule which requires only the knowledge of channel statistics instead of instantaneous CSI, and shows that when the channel SNR is low, this fusion rule reduces to an equal gain combiner (EGC), which explains why EGC is a very good choice with low or medium SNR.

Abstract: In this paper, we revisit the problem of fusing decisions transmitted over fading channels in a wireless sensor network. Previous development relies on instantaneous channel state information (CSI). However, acquiring channel information may be too costly for resource constrained sensor networks. In this paper, we propose a new likelihood ratio (LR)-based fusion rule which requires only the knowledge of channel statistics instead of instantaneous CSI. Based on the assumption that all the sensors have the same detection performance and the same channel signal-to-noise ratio (SNR), we show that when the channel SNR is low, this fusion rule reduces to a statistic in the form of an equal gain combiner (EGC), which explains why EGC is a very good choice with low or medium SNR; at high-channel SNR, it is equivalent to the Chair-Varshney fusion rule. Performance evaluation shows that the new fusion rule exhibits only slight performance degradation compared with the optimal LR-based fusion rule using instantaneous CSI.

---

TL;DR: This paper develops efficient adaptive sequential and batch-sequential methods for an early detection of attacks that lead to changes in network traffic, such as denial-of-service attacks, worm-based attacks, port-scanning, and man-in-the-middle attacks.

Abstract: Large-scale computer network attacks in their final stages can readily be identified by observing very abrupt changes in the network traffic. In the early stage of an attack, however, these changes are hard to detect and difficult to distinguish from usual traffic fluctuations. Rapid response, a minimal false-alarm rate, and the capability to detect a wide spectrum of attacks are the crucial features of intrusion detection systems. In this paper, we develop efficient adaptive sequential and batch-sequential methods for an early detection of attacks that lead to changes in network traffic, such as denial-of-service attacks, worm-based attacks, port-scanning, and man-in-the-middle attacks. These methods employ a statistical analysis of data from multiple layers of the network protocol to detect very subtle traffic changes. The algorithms are based on change-point detection theory and utilize a thresholding of test statistics to achieve a fixed rate of false alarms while allowing us to detect changes in statistical models as soon as possible. There are three attractive features of the proposed approach. First, the developed algorithms are self-learning, which enables them to adapt to various network loads and usage patterns. Secondly, they allow for the detection of attacks with a small average delay for a given false-alarm rate. Thirdly, they are computationally simple and thus can be implemented online. Theoretical frameworks for detection procedures are presented. We also give the results of the experimental study with the use of a network simulator testbed as well as real-life testing for TCP SYN flooding attacks.
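
Change-point detection with a thresholded test statistic can be illustrated with Page's CUSUM for a mean shift in a Gaussian traffic statistic; this is a textbook special case, not the paper's adaptive multichannel procedure, and all parameters below are invented:

```python
import random

random.seed(9)

def cusum(samples, mu0, mu1, sigma, h):
    """Page's CUSUM: accumulate the log-likelihood ratio of 'mean shifted
    to mu1' vs 'mean still mu0', clip at zero, alarm when it exceeds h."""
    w = 0.0
    for t, x in enumerate(samples):
        llr = ((mu1 - mu0) / sigma**2) * (x - (mu0 + mu1) / 2.0)
        w = max(0.0, w + llr)
        if w > h:
            return t       # index at which the alarm is raised
    return None

# 200 in-control samples, then an attack shifts the statistic's mean to 1.
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(1.0, 1.0) for _ in range(100)]
alarm = cusum(data, mu0=0.0, mu1=1.0, sigma=1.0, h=12.0)
```

Raising the threshold h trades a longer detection delay for a lower false-alarm rate, which is exactly the tuning knob the abstract describes for fixing the false-alarm rate while detecting changes as soon as possible.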

---

Bell Labs

TL;DR: It is shown that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases, so the advantages of having more antennas at the base station extend from achieving network gains to learning the channel information.

Abstract: Knowledge of accurate and timely channel state information (CSI) at the transmitter is becoming increasingly important in wireless communication systems. While it is often assumed that the receiver (whether base station or mobile) needs to know the channel for accurate power control, scheduling, and data demodulation, it is now known that the transmitter (especially the base station) can also benefit greatly from this information. For example, recent results in multiantenna multiuser systems show that large throughput gains are possible when the base station uses multiple antennas and a known channel to transmit distinct messages simultaneously and selectively to many single-antenna users. In time-division duplex systems, where the base station and mobiles share the same frequency band for transmission, the base station can exploit reciprocity to obtain the forward channel from pilots received over the reverse channel. Frequency-division duplex systems are more difficult because the base station transmits and receives on different frequencies and therefore cannot use the received pilot to infer anything about the multiantenna transmit channel. Nevertheless, we show that the time occupied in frequency-duplex CSI transfer is generally less than one might expect and falls as the number of antennas increases. Thus, although the total amount of channel information increases with the number of antennas at the base station, the burden of learning this information at the base station paradoxically decreases. Thus, the advantages of having more antennas at the base station extend from having network gains to learning the channel information. We quantify our gains using linear analog modulation which avoids digitizing and coding the CSI and therefore can convey information very rapidly and can be readily analyzed. 
The old paradigm that it is not worth the effort to learn channel information at the transmitter should be revisited since the effort decreases and the gain increases with the number of antennas.

---

TL;DR: This paper considers a wireless communication system with multiple transmit and receive antennas, i.e., a multiple-input-multiple-output (MIMO) channel, to design the transmitter according to an imperfect channel estimate, where the errors are explicitly taken into account to obtain a robust design under the maximin or worst case philosophy.

Abstract: This paper considers a wireless communication system with multiple transmit and receive antennas, i.e., a multiple-input-multiple-output (MIMO) channel. The objective is to design the transmitter according to an imperfect channel estimate, where the errors are explicitly taken into account to obtain a robust design under the maximin or worst case philosophy. The robust transmission scheme is composed of an orthogonal space-time block code (OSTBC), whose outputs are transmitted through the eigenmodes of the channel estimate with an appropriate power allocation among them. At the receiver, the signal is detected assuming a perfect channel knowledge. The optimization problem corresponding to the design of the power allocation among the estimated eigenmodes, whose goal is the maximization of the signal-to-noise ratio (SNR), is transformed to a simple convex problem that can be easily solved. Different sources of errors are considered in the channel estimate, such as the Gaussian noise from the estimation process and the errors from the quantization of the channel estimate, among others. For the case of Gaussian noise, the robust power allocation admits a closed-form expression. Finally, the benefits of the proposed design are evaluated and compared with the pure OSTBC and nonrobust approaches.

••

TL;DR: A MUSIC-like algorithm is presented that allows estimation of the waves' DOAs and polarization parameters; it halves the memory required to represent the data covariance model and reduces the computational effort, for equivalent performance.

Abstract: This paper considers the problem of direction of arrival (DOA) and polarization parameters estimation in the case of multiple polarized sources impinging on a vector-sensor array. The quaternion model is used, and a data covariance model is proposed using quaternion formalism. A comparison between long vector orthogonality and quaternion vector orthogonality is also performed, and its implications for signal subspace estimation are discussed. Consequently, a MUSIC-like algorithm is presented, allowing estimation of the waves' DOAs and polarization parameters. The algorithm is tested in numerical simulations, and performance analysis is conducted. When compared with other MUSIC-like algorithms for vector-sensor arrays, the newly proposed algorithm halves the memory required to represent the data covariance model and reduces the computational effort, for equivalent performance. This paper also illustrates a compact and elegant way of dealing with multicomponent complex-valued data.

••

TL;DR: This paper deals with design and performance analysis of transmit beamformers for multiple-input multiple-output (MIMO) systems based on bandwidth-limited information that is fed back from the receiver to the transmitter.

Abstract: This paper deals with design and performance analysis of transmit beamformers for multiple-input multiple-output (MIMO) systems based on bandwidth-limited information that is fed back from the receiver to the transmitter. By casting the design of transmit beamforming based on limited-rate feedback as an equivalent sphere vector quantization (SVQ) problem, multiantenna beamformed transmissions through independent and identically distributed (i.i.d.) Rayleigh fading channels are first considered. The rate-distortion function of the vector source is upper-bounded, and the operational rate-distortion performance achieved by the generalized Lloyd's algorithm is lower-bounded. Although different in nature, the two bounds yield asymptotically equivalent performance analysis results. The average signal-to-noise ratio (SNR) performance is also quantified. Finally, beamformer codebook designs are studied for correlated Rayleigh fading channels, and a low-complexity codebook design that achieves near-optimal performance is derived.
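The limited-feedback idea above can be illustrated with a minimal sketch (not the paper's actual SVQ construction): the receiver quantizes its preferred beamforming vector by searching a codebook shared with the transmitter for the codeword best matched to the channel, then feeds back only the codeword's index. The random-codebook generator and all parameter values below are hypothetical stand-ins; a real design would come from the generalized Lloyd's algorithm or a structured construction.

```python
import math, random

def select_beamformer(h, codebook):
    """Receiver side: pick the unit-norm codeword maximizing |c^H h|^2 and
    feed back only its index (log2 of the codebook size, in bits)."""
    def gain(c):
        return abs(sum(ci.conjugate() * hi for ci, hi in zip(c, h))) ** 2
    return max(range(len(codebook)), key=lambda i: gain(codebook[i]))

def random_codebook(num_words, num_tx, rng):
    """Hypothetical random VQ codebook of unit-norm complex vectors."""
    book = []
    for _ in range(num_words):
        v = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(num_tx)]
        norm = math.sqrt(sum(abs(x) ** 2 for x in v))
        book.append([x / norm for x in v])
    return book

rng = random.Random(0)
codebook = random_codebook(num_words=16, num_tx=4, rng=rng)   # 4 feedback bits
h = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(4)]  # i.i.d. Rayleigh channel
idx = select_beamformer(h, codebook)                          # index fed back to transmitter
```

The rate-distortion analysis in the paper bounds how much beamforming gain such a finite codebook loses relative to unquantized feedback as the number of feedback bits grows.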

••

TL;DR: A common base is provided for the first time to analyze and compare Gaussian filters with respect to accuracy, efficiency, and stability, and to help design more efficient filters by employing better numerical-integration methods.

Abstract: This paper proposes a numerical-integration perspective on Gaussian filters. A Gaussian filter approximates Bayesian inference under the assumption that the posterior probability density is Gaussian. The Gaussian filters in the literature were derived from very different backgrounds; from the numerical-integration viewpoint, they differ only in how they approximate the required statistical integrals. A common base is provided for the first time to analyze and compare Gaussian filters with respect to accuracy, efficiency, and stability. This study is expected to facilitate the selection of appropriate Gaussian filters in practice and to help design more efficient filters by employing better numerical-integration methods.
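A minimal sketch of this viewpoint: the moment computations of two well-known Gaussian filters reduce to two quadrature rules for the same Gaussian-weighted integral E[g(x)]. The scalar unscented rule (with the common choice kappa = 2) and the 3-point Gauss-Hermite rule below are standard; the test function and moment values are illustrative, not from the paper.

```python
import math

def unscented_expectation(g, mean, var, kappa=2.0):
    """Scalar unscented rule: deterministic sigma points and weights (n = 1)."""
    n = 1.0
    spread = math.sqrt((n + kappa) * var)
    pts = [mean, mean + spread, mean - spread]
    wts = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    return sum(w * g(p) for w, p in zip(wts, pts))

def gauss_hermite_expectation(g, mean, var):
    """3-point Gauss-Hermite rule for the same Gaussian-weighted integral."""
    std = math.sqrt(var)
    nodes = [0.0, math.sqrt(3.0), -math.sqrt(3.0)]
    wts = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]
    return sum(w * g(mean + std * x) for w, x in zip(wts, nodes))
```

In the scalar case with kappa = 2 the two rules coincide; both integrate polynomials up to degree 5 exactly (e.g., E[x^2] = mean^2 + var), which is the kind of accuracy comparison the paper's framework makes systematic.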

••

TL;DR: Experimental results show that the proposed watermarking scheme is inaudible and robust against various signal-processing operations such as noise addition, resampling, requantization, random cropping, and MPEG-1 Layer III (MP3) compression.

Abstract: Synchronization attack is one of the key issues of digital audio watermarking. In this correspondence, a blind digital audio watermarking scheme against synchronization attack using adaptive quantization is proposed. The features of the proposed scheme are as follows: 1) a more steady synchronization code and a new embedding strategy are adopted to resist the synchronization attack more effectively; 2) the multiresolution characteristics of the discrete wavelet transform (DWT) and the energy-compaction characteristics of the discrete cosine transform (DCT) are combined to improve the transparency of the digital watermark; 3) the watermark is embedded into the low-frequency components by adaptive quantization according to human auditory masking; and 4) the scheme can extract the watermark without the help of the original digital audio signal. Experimental results show that the proposed watermarking scheme is inaudible and robust against various signal-processing operations such as noise addition, resampling, requantization, random cropping, and MPEG-1 Layer III (MP3) compression.
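The quantization-based embedding can be illustrated with a plain quantization-index-modulation (QIM) sketch: the watermark bit selects one of two offset quantizer lattices for a low-frequency transform coefficient, and blind extraction picks the nearer lattice, with no reference to the original signal. The fixed step size below is a hypothetical stand-in for the paper's adaptive, masking-driven step.

```python
def embed_bit(coeff, bit, step):
    """QIM embedding sketch: snap a transform coefficient to the quantizer
    lattice selected by the watermark bit (lattices offset by step/2)."""
    offset = 0.0 if bit == 0 else step / 2.0
    return step * round((coeff - offset) / step) + offset

def extract_bit(coeff, step):
    """Blind extraction: decide which lattice the received coefficient is
    closer to, without access to the original audio."""
    d0 = abs(coeff - embed_bit(coeff, 0, step))
    d1 = abs(coeff - embed_bit(coeff, 1, step))
    return 0 if d0 <= d1 else 1
```

Decoding survives any perturbation smaller than step/4, which is why a larger masking-permitted step buys robustness at the cost of audibility.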

••

TL;DR: This correspondence addresses the problem of locating an acoustic source using a sensor network in a distributed manner without transmitting the full data set to a central point for processing by applying a distributed version of the projection-onto-convex-sets (POCS) method.

Abstract: This correspondence addresses the problem of locating an acoustic source using a sensor network in a distributed manner, i.e., without transmitting the full data set to a central point for processing. This problem has been traditionally addressed through the maximum-likelihood framework or nonlinear least squares. These methods, even though asymptotically optimal under certain conditions, pose a difficult global optimization problem. It is shown that the associated objective function may have multiple local optima and saddle points, and hence any local search method might stagnate at a suboptimal solution. In this correspondence, we formulate the problem as a convex feasibility problem and apply a distributed version of the projection-onto-convex-sets (POCS) method. We give a closed-form expression for the projection phase, which usually constitutes the heaviest computational aspect of POCS. Conditions are given under which, when the number of samples increases to infinity or in the absence of measurement noise, the convex feasibility problem has a unique solution at the true source location. In general, the method converges to a limit point or a limit cycle in the neighborhood of the true location. Simulation results show convergence to the global optimum with extremely fast convergence rates compared to the previous methods.
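The projection phase does admit a closed form when each sensor's measurement is turned into a disk constraint centered at the sensor, a common convex-feasibility formulation for this problem. The sketch below cyclically projects onto three such disks; the sensor positions and radii (true distances inflated by a small margin so the intersection has an interior) are hypothetical, not from the paper.

```python
import math

def project_onto_disk(x, center, radius):
    """Closed-form projection of x onto the disk {p : ||p - center|| <= radius}."""
    dx, dy = x[0] - center[0], x[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist <= radius:
        return x                          # already feasible for this sensor
    scale = radius / dist
    return (center[0] + dx * scale, center[1] + dy * scale)

def pocs_localize(sensors, radii, x0=(0.0, 0.0), sweeps=300):
    """Cyclic POCS: sweep over the sensors, projecting onto each disk in turn."""
    x = x0
    for _ in range(sweeps):
        for c, r in zip(sensors, radii):
            x = project_onto_disk(x, c, r)
    return x

# Hypothetical setup: three sensors, radii = distance to source (1, 1) + 5% margin
sensors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
radii = [1.05 * math.hypot(1.0 - cx, 1.0 - cy) for cx, cy in sensors]
estimate = pocs_localize(sensors, radii)
```

In the distributed version each sensor applies only its own projection and passes the running estimate to a neighbor, so no sensor ever sees the full data set.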

••

TL;DR: This paper derives and analyzes distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation.

Abstract: When dealing with decentralized estimation, it is important to reduce the cost of communicating the distributed observations-a problem receiving revived interest in the context of wireless sensor networks. In this paper, we derive and analyze distributed state estimators of dynamical stochastic processes, whereby the low communication cost is effected by requiring the transmission of a single bit per observation. Following a Kalman filtering (KF) approach, we develop recursive algorithms for distributed state estimation based on the sign of innovations (SOI). Even though the SOI-KF can afford minimal communication overhead, we prove that in terms of performance and complexity it comes very close to the clairvoyant KF, which is based on the analog-amplitude observations. Reinforcing our conclusions, we show that the SOI-KF applied to distributed target tracking based on distance-only observations yields accurate estimates at low communication cost.
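A scalar sketch of the sign-of-innovation idea, assuming the measurement model y = x + v with v ~ N(0, r) and the standard sqrt(2/pi)-scaled correction reported in the SOI-KF literature (formulas hedged; all numeric values below are illustrative):

```python
import math, random

SQRT_2_OVER_PI = math.sqrt(2.0 / math.pi)

def soi_kf_update(x_pred, p_pred, sign_innov, r):
    """One SOI-KF measurement update for the scalar model y = x + v, v ~ N(0, r).
    Only sign_innov = sign(y - x_pred) -- a single bit -- crosses the network."""
    s = math.sqrt(p_pred + r)                        # innovation standard deviation
    x_upd = x_pred + SQRT_2_OVER_PI * (p_pred / s) * sign_innov
    p_upd = p_pred - (2.0 / math.pi) * p_pred ** 2 / (p_pred + r)
    return x_upd, p_upd

# Illustrative run: track a random-walk state from one-bit messages
rng = random.Random(1)
q, r = 0.01, 0.25                                    # process / measurement noise variances
x_true, x_est, p = 0.0, 0.0, 1.0
for _ in range(500):
    x_true += rng.gauss(0.0, math.sqrt(q))           # state evolves as a random walk
    y = x_true + rng.gauss(0.0, math.sqrt(r))        # sensor's local analog observation
    p += q                                           # time update of the error variance
    bit = 1.0 if y >= x_est else -1.0                # transmitted sign of the innovation
    x_est, p = soi_kf_update(x_est, p, bit, r)
```

Note the covariance recursion shrinks p by a factor of only 2/pi relative to the clairvoyant KF's reduction, which is the sense in which one bit per observation comes "very close" to the analog-amplitude filter.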

••

TL;DR: This paper examines the asymptotic performance of MUSIC-like algorithms for estimating directions of arrival (DOA) of narrowband complex noncircular sources, using closed-form expressions of the covariance of the asymptotic distribution of different projection matrices to provide a unifying framework for investigating the asymptotic performance of arbitrary subspace-based algorithms.

Abstract: This paper examines the asymptotic performance of MUSIC-like algorithms for estimating directions of arrival (DOA) of narrowband complex noncircular sources. Using closed-form expressions of the covariance of the asymptotic distribution of different projection matrices, it provides a unifying framework for investigating the asymptotic performance of arbitrary subspace-based algorithms valid for Gaussian or non-Gaussian and complex circular or noncircular sources. We also derive different robustness properties from the asymptotic covariance of the estimated DOA given by such algorithms. These results are successively applied to four algorithms: to two attractive MUSIC-like algorithms previously introduced in the literature, to an extension of these algorithms, and to an optimally weighted MUSIC algorithm proposed in this paper. Numerical examples illustrate the performance of the studied algorithms compared to the asymptotically minimum variance (AMV) algorithms introduced as benchmarks.

••

TL;DR: A new direction-of-arrival (DOA) estimation algorithm for wideband sources, called the test of orthogonality of projected subspaces (TOPS), is introduced; it fills a gap between coherent and incoherent methods.

Abstract: This paper introduces a new direction-of-arrival (DOA) estimation algorithm for wideband sources called test of orthogonality of projected subspaces (TOPS). This new technique estimates DOAs by measuring the orthogonal relation between the signal and the noise subspaces of multiple frequency components of the sources. TOPS can be used with arbitrary shaped one-dimensional (1-D) or two-dimensional (2-D) arrays. Unlike other coherent wideband methods, such as the coherent signal subspace method (CSSM) and WAVES, the new method does not require any preprocessing for initial values. The performance of those wideband techniques and incoherent MUSIC is compared with that of the new method through computer simulations. The simulations show that this new technique performs better than others in mid signal-to-noise ratio (SNR) ranges, while coherent methods work best in low SNR and incoherent methods work best in high SNR. Thus, TOPS fills a gap between coherent and incoherent methods.

••

TL;DR: A signal-intensity-based maximum-likelihood target location estimator that uses quantized data is proposed for wireless sensor networks (WSNs); it is much more accurate than the heuristic weighted-average methods and can reach the CRLB even with a relatively small amount of data.

Abstract: A signal-intensity-based maximum-likelihood (ML) target location estimator that uses quantized data is proposed for wireless sensor networks (WSNs). The signal intensity received at local sensors is assumed to be inversely proportional to the square of the distance from the target. The ML estimator and its corresponding Cramér-Rao lower bound (CRLB) are derived. Simulation results show that this estimator is much more accurate than the heuristic weighted-average methods, and it can reach the CRLB even with a relatively small amount of data. In addition, the optimal design method for quantization thresholds is presented, along with two heuristic design methods. The heuristic design methods, which require minimal prior information about the system, prove to be very robust under various situations.
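A hedged sketch of the quantized-data ML estimator for the one-bit case: each sensor reports whether its received intensity (proportional to 1/d^2, as in the paper's model) exceeds a threshold, and the fusion center searches a grid for the location that maximizes the likelihood of the observed bits under Gaussian sensing noise. The parameter values, the one-bit quantizer, and the coarse grid search are all illustrative assumptions.

```python
import math

def q_func(z):
    """Gaussian tail probability Q(z) = P(N(0,1) > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def intensity(src, sensor, p0):
    """Assumed model: received intensity inversely proportional to squared distance."""
    d2 = (src[0] - sensor[0]) ** 2 + (src[1] - sensor[1]) ** 2
    return p0 / max(d2, 1e-6)

def log_likelihood(src, sensors, bits, p0, thresh, sigma):
    """Log-likelihood of one-bit reports b_i = 1{intensity + noise > thresh}."""
    ll = 0.0
    for s, b in zip(sensors, bits):
        p1 = q_func((thresh - intensity(src, s, p0)) / sigma)
        p1 = min(max(p1, 1e-12), 1.0 - 1e-12)        # guard the logarithms
        ll += math.log(p1) if b else math.log(1.0 - p1)
    return ll

def ml_grid_search(sensors, bits, p0, thresh, sigma, span=10.0, steps=50):
    """Coarse grid search for the ML location estimate from the quantized data."""
    best, best_ll = None, -float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            cand = (span * i / steps, span * j / steps)
            ll = log_likelihood(cand, sensors, bits, p0, thresh, sigma)
            if ll > best_ll:
                best, best_ll = cand, ll
    return best

# Illustrative run: 6x6 sensor grid, noiseless one-bit reports about a source at (3, 7)
sensors = [(2.0 * i, 2.0 * j) for i in range(6) for j in range(6)]
p0, thresh, sigma = 100.0, 4.0, 1.0
bits = [1 if intensity((3.0, 7.0), s, p0) > thresh else 0 for s in sensors]
estimate = ml_grid_search(sensors, bits, p0, thresh, sigma)
```

The threshold choice directly controls how informative each bit is, which is why the paper's optimal and heuristic threshold designs matter for reaching the CRLB.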