
Showing papers in "IEEE Transactions on Signal Processing in 2004"


Journal Article•DOI•
TL;DR: While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.
Abstract: The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem (maximizing the sum information rate subject to a power constraint) and the power control problem (minimizing transmitted power such that a certain quality-of-service metric is met for each user). Neither of these problems possesses a closed-form solution for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as "block-diagonalization," is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as "successive optimization," is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.

3,291 citations
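
A minimal numerical sketch of the block-diagonalization idea (assuming, as the paper requires, at least as many transmit antennas as total receive antennas): each user's precoder is chosen in the null space of the other users' stacked channels, so inter-user interference vanishes. The function name, dimensions, and toy channels below are illustrative, not the authors' exact construction.

import numpy as np

def block_diagonalization_precoders(H_list):
    # H_list: per-user channel matrices, each of shape (rx_antennas_k, Nt).
    precoders = []
    for k, Hk in enumerate(H_list):
        # Null space of the other users' stacked channels.
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, s, Vh = np.linalg.svd(H_others)
        rank = int(np.sum(s > 1e-10))
        V_null = Vh.conj().T[:, rank:]
        # Transmit along the strongest directions of H_k inside that null space.
        _, _, Vh_eff = np.linalg.svd(Hk @ V_null)
        precoders.append(V_null @ Vh_eff.conj().T)
    return precoders

# Toy example: Nt = 4 transmit antennas, two users with 2 receive antennas each.
rng = np.random.default_rng(0)
H_list = [rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4)) for _ in range(2)]
B = block_diagonalization_precoders(H_list)
print(np.linalg.norm(H_list[0] @ B[1]))   # ~0: user 1's precoder causes no interference to user 0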


Journal Article•DOI•
TL;DR: The results demonstrate that there exist ideal binary time-frequency masks that can separate several speech signals from one mixture and show that these ideal masks can be approximated in the case where two anechoic mixtures are provided.
Abstract: Binary time-frequency masks are powerful tools for the separation of sources from a single mixture. Perfect demixing via binary time-frequency masks is possible provided the time-frequency representations of the sources do not overlap: a condition we call W-disjoint orthogonality. We introduce here the concept of approximate W-disjoint orthogonality and present experimental results demonstrating the level of approximate W-disjoint orthogonality of speech in mixtures of various orders. The results demonstrate that there exist ideal binary time-frequency masks that can separate several speech signals from one mixture. While determining these masks blindly from just one mixture is an open problem, we show that we can approximate the ideal masks in the case where two anechoic mixtures are provided. Motivated by the maximum likelihood mixing parameter estimators, we define a power weighted two-dimensional (2-D) histogram constructed from the ratio of the time-frequency representations of the mixtures that is shown to have one peak for each source with peak location corresponding to the relative attenuation and delay mixing parameters. The histogram is used to create time-frequency masks that partition one of the mixtures into the original sources. Experimental results on speech mixtures verify the technique. Example demixing results can be found online at http://alum.mit.edu/www/rickard/bss.html.

1,543 citations
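
A small oracle sketch of the ideal binary time-frequency mask the paper refers to: with the sources assumed known, each time-frequency cell of the mixture is assigned to whichever source dominates it. The blind, two-mixture histogram method is not reproduced here; the signals and STFT parameters are illustrative, and scipy's stft/istft are assumed available.

import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask_demix(s1, s2, fs=8000, nperseg=256):
    mix = s1 + s2                                    # single observed mixture
    _, _, S1 = stft(s1, fs, nperseg=nperseg)
    _, _, S2 = stft(s2, fs, nperseg=nperseg)
    _, _, M = stft(mix, fs, nperseg=nperseg)
    mask1 = np.abs(S1) > np.abs(S2)                  # oracle (ideal) binary mask for source 1
    _, est1 = istft(M * mask1, fs, nperseg=nperseg)
    _, est2 = istft(M * ~mask1, fs, nperseg=nperseg)
    return est1, est2

# Toy "sources": two tones standing in for approximately W-disjoint orthogonal signals.
fs = 8000
n = np.arange(2 * fs)
s1 = np.sin(2 * np.pi * 300 * n / fs)
s2 = np.sin(2 * np.pi * 800 * n / fs)
est1, est2 = ideal_binary_mask_demix(s1, s2, fs)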


Journal Article•DOI•
TL;DR: This paper adapts SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application.
Abstract: Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we have shown that SBL retains a desirable property of the ℓ0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We have also demonstrated that the local minima that do exist are achieved at sparse solutions. Later, we provide a novel interpretation of SBL that gives us valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian learning with basis pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance.

1,339 citations


Journal Article•DOI•
TL;DR: A nonlinear version of the recursive least squares (RLS) algorithm whose sequential sparsification process admits a new input sample into the kernel representation only if its feature-space image cannot be sufficiently well approximated by combining the images of previously admitted samples.
Abstract: We present a nonlinear version of the recursive least squares (RLS) algorithm. Our algorithm performs linear regression in a high-dimensional feature space induced by a Mercer kernel and can therefore be used to recursively construct minimum mean-squared-error solutions to nonlinear least-squares problems that are frequently encountered in signal processing applications. In order to regularize solutions and keep the complexity of the algorithm bounded, we use a sequential sparsification process that admits into the kernel representation a new input sample only if its feature space image cannot be sufficiently well approximated by combining the images of previously admitted samples. This sparsification procedure allows the algorithm to operate online, often in real time. We analyze the behavior of the algorithm, compare its scaling properties to those of support vector machines, and demonstrate its utility in solving two signal processing problems-time-series prediction and channel equalization.

1,011 citations
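
A simplified sketch of the sparsification idea described above: a new sample enters the kernel dictionary only when its feature-space image is poorly approximated by the images of already-admitted samples (an approximate-linear-dependence style test). For brevity, the weights here are refit by regularized kernel regression on the dictionary rather than by the paper's recursive updates, so this illustrates the admission rule, not the full kernel RLS algorithm; the threshold, kernel width, and regularization are illustrative.

import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class SparsifiedKernelRegressor:
    def __init__(self, nu=0.01, lam=1e-3, gamma=1.0):
        self.nu, self.lam, self.gamma = nu, lam, gamma   # nu: admission threshold
        self.dict_x, self.dict_y = [], []

    def _kvec(self, x):
        return np.array([rbf(x, d, self.gamma) for d in self.dict_x])

    def _refit(self):
        n = len(self.dict_x)
        self.K = np.array([[rbf(a, b, self.gamma) for b in self.dict_x]
                           for a in self.dict_x])
        self.alpha = np.linalg.solve(self.K + self.lam * np.eye(n),
                                     np.array(self.dict_y))

    def update(self, x, y):
        if not self.dict_x:
            self.dict_x.append(x); self.dict_y.append(y); self._refit(); return
        k = self._kvec(x)
        a = np.linalg.solve(self.K + self.lam * np.eye(len(self.dict_x)), k)
        delta = rbf(x, x, self.gamma) - k @ a     # distance to the span of dictionary images
        if delta > self.nu:                       # admit only sufficiently novel samples
            self.dict_x.append(x); self.dict_y.append(y); self._refit()

    def predict(self, x):
        return float(self._kvec(x) @ self.alpha)

# Online use on a noisy sinc: the dictionary stays much smaller than the sample count.
rng = np.random.default_rng(0)
model = SparsifiedKernelRegressor(nu=0.01, gamma=2.0)
for _ in range(500):
    x = rng.uniform(-3, 3, size=1)
    model.update(x, np.sinc(x[0]) + 0.05 * rng.standard_normal())
print(len(model.dict_x), model.predict(np.array([0.5])), np.sinc(0.5))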


Journal Article•DOI•
TL;DR: This paper presents two extensions to the coded cooperation framework, which increase the diversity of coded cooperation in the fast-fading scenario via ideas borrowed from space-time codes and investigates the application of turbo codes to this framework.
Abstract: When mobiles cannot support multiple antennas due to size or other constraints, conventional space-time coding cannot be used to provide uplink transmit diversity. To address this limitation, the concept of cooperation diversity has been introduced, where mobiles achieve uplink transmit diversity by relaying each other's messages. A particularly powerful variation of this principle is coded cooperation. Instead of a simple repetition relay, coded cooperation partitions the codewords of each mobile and transmits portions of each codeword through independent fading channels. This paper presents two extensions to the coded cooperation framework. First, we increase the diversity of coded cooperation in the fast-fading scenario via ideas borrowed from space-time codes. We calculate bounds for the bit- and block-error rates to demonstrate the resulting gains. Second, since cooperative coding contains two code components, it is natural to apply turbo codes to this framework. We investigate the application of turbo codes in coded cooperation and demonstrate the resulting gains via error bounds and simulations.

956 citations


Journal Article•DOI•
TL;DR: In this article, online learning in a reproducing kernel Hilbert space is proposed for a wide range of problems such as classification, regression, and novelty detection, and worst-case loss bounds are derived.
Abstract: Kernel-based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting, where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large margin idea. There has been little use of these methods in an online setting suitable for real-time applications. In this paper, we consider online learning in a reproducing kernel Hilbert space. By considering classical stochastic gradient descent within a feature space and the use of some straightforward tricks, we develop simple and computationally efficient algorithms for a wide range of problems such as classification, regression, and novelty detection. In addition to allowing the exploitation of the kernel trick in an online setting, we examine the value of large margins for classification in the online setting with a drifting target. We derive worst-case loss bounds, and moreover, we show the convergence of the hypothesis to the minimizer of the regularized risk functional. We present some experimental results that support the theory as well as illustrate the power of the new algorithms for online novelty detection.

925 citations
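
A compact sketch of kernel-based online learning by stochastic gradient descent on a regularized hinge loss, in the spirit of the algorithms described above; the step size, regularization, kernel width, and toy data are illustrative choices, not the paper's.

import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class OnlineKernelClassifier:
    def __init__(self, eta=0.1, lam=0.01, gamma=0.5):
        self.eta, self.lam, self.gamma = eta, lam, gamma
        self.sv, self.coef = [], []              # stored samples and their coefficients

    def decision(self, x):
        return sum(c * rbf(x, s, self.gamma) for c, s in zip(self.coef, self.sv))

    def partial_fit(self, x, y):                 # y in {-1, +1}
        margin = y * self.decision(x)
        # Shrinkage induced by the regularizer, applied at every step.
        self.coef = [(1.0 - self.eta * self.lam) * c for c in self.coef]
        if margin < 1:                           # hinge loss active: take a gradient step
            self.sv.append(x)
            self.coef.append(self.eta * y)

# Streaming use on a toy nonlinear problem (label is the sign of x0*x1).
rng = np.random.default_rng(0)
clf = OnlineKernelClassifier()
for _ in range(1000):
    x = rng.uniform(-1, 1, 2)
    clf.partial_fit(x, 1 if x[0] * x[1] > 0 else -1)
print(np.sign(clf.decision(np.array([0.5, 0.5]))))   # expect +1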


Journal Article•DOI•
K.C. Ho1, Wenwei Xu1•
TL;DR: The accuracy of the estimated source position and velocity is shown to achieve the Cramér-Rao lower bound for Gaussian TDOA and FDOA noise at moderate noise levels, before the thresholding effect occurs.
Abstract: This paper proposes an algebraic solution for the position and velocity of a moving source using the time differences of arrival (TDOAs) and frequency differences of arrival (FDOAs) of a signal received at a number of receivers. The method employs several weighted least-squares minimizations only and does not require initial solution guesses to obtain a location estimate. Unlike the conventional iterative method, it does not suffer from initialization and local convergence problems. The accuracy of the estimated source position and velocity is shown to achieve the Cramér-Rao lower bound for Gaussian TDOA and FDOA noise at moderate noise levels, before the thresholding effect occurs. Simulations are included to examine the algorithm's performance and compare it with the Taylor-series iterative method.

543 citations


Journal Article•DOI•
Philip Schniter1•
TL;DR: This work proposes a novel two-stage equalizer whose complexity (apart from the FFT) is linear in the OFDM symbol length, and results indicate that the equalizer has significant performance and complexity advantages over the classical linear MMSE estimator in doubly selective channels.
Abstract: Orthogonal frequency division multiplexing (OFDM) systems may experience significant inter-carrier interference (ICI) when used in time- and frequency-selective, or doubly selective, channels. In such cases, the classical symbol estimation schemes, e.g., minimum mean-squared error (MMSE) and zero-forcing (ZF) estimation, require matrix inversion that is prohibitively complex for large symbol lengths. An analysis of the ICI generation mechanism leads us to propose a novel two-stage equalizer whose complexity (apart from the FFT) is linear in the OFDM symbol length. The first stage applies optimal linear preprocessing to restrict ICI support, and the second stage uses iterative MMSE estimation to estimate finite-alphabet frequency-domain symbols. Simulation results indicate that our equalizer has significant performance and complexity advantages over the classical linear MMSE estimator in doubly selective channels.

542 citations


Journal Article•DOI•
TL;DR: It is shown that the CWLS estimator yields better performance than the LS method and achieves both the Cramér-Rao lower bound and the optimal circular error probability at sufficiently high signal-to-noise ratios.
Abstract: Localization of mobile phones is of considerable interest in wireless communications. In this correspondence, two algorithms are developed for accurate mobile location using the time-of-arrival measurements of the signal from the mobile station received at three or more base stations. The first algorithm is an unconstrained least squares (LS) estimator that has implementation simplicity. The second algorithm solves a nonconvex constrained weighted least squares (CWLS) problem for improved estimation accuracy. It is shown that the CWLS estimator yields better performance than the LS method and achieves both the Cramér-Rao lower bound and the optimal circular error probability at sufficiently high signal-to-noise ratios.

531 citations
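
A hedged sketch of the unconstrained least-squares step for time-of-arrival localization: squaring the range equations makes them linear in the mobile position and in the auxiliary variable R = x^2 + y^2. The constrained weighted refinement (CWLS) is not reproduced here, and the geometry and noise level are illustrative.

import numpy as np

def toa_ls(base_stations, ranges):
    # base_stations: (N, 2) positions; ranges: (N,) measured distances.
    P = np.asarray(base_stations, float)
    d = np.asarray(ranges, float)
    A = np.hstack([-2.0 * P, np.ones((len(d), 1))])   # unknowns: [x, y, R]
    b = d ** 2 - np.sum(P ** 2, axis=1)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:2]                                  # estimated (x, y)

# Toy example: 4 base stations, noisy ranges to a mobile at (300, 400).
rng = np.random.default_rng(1)
bs = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000]])
true = np.array([300.0, 400.0])
r = np.linalg.norm(bs - true, axis=1) + rng.normal(0, 1.0, 4)
print(toa_ls(bs, r))                                  # close to (300, 400)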


Journal Article•DOI•
TL;DR: An efficient, hybrid Fourier-wavelet regularized deconvolution (ForWaRD) algorithm that performs noise regularization via scalar shrinkage in both the Fourier and wavelet domains is proposed and it is found that signals with more economical wavelet representations require less Fourier shrinkage.
Abstract: We propose an efficient, hybrid Fourier-wavelet regularized deconvolution (ForWaRD) algorithm that performs noise regularization via scalar shrinkage in both the Fourier and wavelet domains. The Fourier shrinkage exploits the Fourier transform's economical representation of the colored noise inherent in deconvolution, whereas the wavelet shrinkage exploits the wavelet domain's economical representation of piecewise smooth signals and images. We derive the optimal balance between the amount of Fourier and wavelet regularization by optimizing an approximate mean-squared error (MSE) metric and find that signals with more economical wavelet representations require less Fourier shrinkage. ForWaRD is applicable to all ill-conditioned deconvolution problems, unlike the purely wavelet-based wavelet-vaguelette deconvolution (WVD); moreover, its estimate features minimal ringing, unlike the purely Fourier-based Wiener deconvolution. Even in problems for which the WVD was designed, we prove that ForWaRD's MSE decays with the optimal WVD rate as the number of samples increases. Further, we demonstrate that over a wide range of practical sample-lengths, ForWaRD improves on WVD's performance.

480 citations
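
A rough 1-D sketch of the two-stage idea: a regularized (Fourier-domain) inverse filter followed by wavelet-domain soft thresholding of the remaining colored noise. The regularization constant and threshold are ad hoc assumptions rather than the paper's optimized MSE balance, and PyWavelets is assumed available for the wavelet step.

import numpy as np
import pywt

def fourier_wavelet_deconv(y, h, reg=1e-2, wavelet="db4", thresh=0.05):
    n = len(y)
    H = np.fft.fft(h, n)
    # Stage 1: Fourier-domain shrinkage (regularized inverse filter).
    X_hat = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + reg)
    x_hat = np.real(np.fft.ifft(X_hat))
    # Stage 2: wavelet-domain soft thresholding of the residual colored noise.
    coeffs = pywt.wavedec(x_hat, wavelet)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:n]

# Toy usage: circularly blur a piecewise-constant signal and add noise.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(128), np.ones(128), 0.5 * np.ones(256)])
h = np.ones(9) / 9.0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, x.size)))
y += 0.01 * rng.standard_normal(x.size)
x_rec = fourier_wavelet_deconv(y, h)
print(np.mean((y - x) ** 2), np.mean((x_rec - x) ** 2))   # error before / after deconvolution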


Journal Article•DOI•
TL;DR: The complete MIMO OFDM processing is implemented in a system with three transmit and three receive antennas, and its performance is evaluated with both simulations and experimental test results.
Abstract: The combination of multiple-input multiple-output (MIMO) signal processing with orthogonal frequency division multiplexing (OFDM) is regarded as a promising solution for enhancing the data rates of next-generation wireless communication systems operating in frequency-selective fading environments. To realize this extension of OFDM with MIMO, a number of changes are required in the baseband signal processing. An overview is given of the necessary changes, including time and frequency synchronization, channel estimation, synchronization tracking, and MIMO detection. As a test case, the OFDM-based wireless local area network (WLAN) standard IEEE 802.11a is considered, but the results are applicable more generally. The complete MIMO OFDM processing is implemented in a system with three transmit and three receive antennas, and its performance is evaluated with both simulations and experimental test results. Results from measurements with this MIMO OFDM system in a typical office environment show, on average, a doubling of the system throughput, compared with a single antenna OFDM system. The expected tripling of the throughput was not achieved on average, most likely due to coupling between the transmitter and receiver branches.

Journal Article•DOI•
TL;DR: Two alternative fusion schemes, namely, the maximum ratio combining statistic and a two-stage approach using the Chair-Varshney fusion rule, are proposed that alleviate these requirements and are shown to be the low and high signal-to-noise ratio (SNR) equivalents of the likelihood-based fusion rule.
Abstract: Information fusion by utilizing multiple distributed sensors is studied in this work. Extending the classical parallel fusion structure by incorporating the fading channel layer that is omnipresent in wireless sensor networks, we derive the likelihood ratio based fusion rule given fixed local decision devices. This optimum fusion rule, however, requires perfect knowledge of the local decision performance indices as well as the fading channel. To address this issue, two alternative fusion schemes, namely, the maximum ratio combining statistic and a two-stage approach using the Chair-Varshney fusion rule, are proposed that alleviate these requirements and are shown to be the low and high signal-to-noise ratio (SNR) equivalents of the likelihood-based fusion rule. To further robustify the fusion rule and motivated by the maximum ratio combining statistics, we also propose a statistic analogous to an equal gain combiner that requires minimum a priori information. Performance evaluation is performed both analytically and through simulation.
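
A toy sketch of the two low-complexity fusion statistics mentioned above: an MRC-style statistic that weights each sensor's faded observation by its channel gain, and an equal-gain statistic that simply sums the observations. The local detection probability, channel model, noise level, and zero threshold are illustrative assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
K = 20                                        # number of local sensors
h = np.abs(rng.normal(0, 1, K))               # fading amplitudes, assumed known at the fusion center
# Local binary decisions when the target is present (detection probability 0.8, illustrative).
u = rng.choice([1, -1], K, p=[0.8, 0.2])
y = h * u + 0.5 * rng.standard_normal(K)      # observations after the fading channels

lambda_mrc = np.sum(h * y)                    # MRC-style statistic (low-SNR surrogate for the LR rule)
lambda_egc = np.sum(y)                        # equal-gain statistic (needs minimal prior information)
print(lambda_mrc > 0, lambda_egc > 0)         # declare "target present" if above the threshold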

Journal Article•DOI•
TL;DR: This paper proposes a fast antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems that achieves almost the same outage capacity as the optimal selection technique while having lower computational complexity than the existing nearly optimal antenna selection methods.
Abstract: Multiple antenna wireless communication systems have recently attracted significant attention due to their higher capacity and better immunity to fading as compared to systems that employ a single-sensor transceiver. Increasing the number of transmit and receive antennas makes it possible to improve system performance at the price of higher hardware costs and computational burden. For systems with a large number of antennas, there is a strong motivation to develop techniques with reduced hardware and computational costs. An efficient approach to achieve this goal is optimal antenna subset selection. In this paper, we propose a fast antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Our algorithm achieves almost the same outage capacity as the optimal selection technique while having lower computational complexity than the existing nearly optimal antenna selection methods. The optimality of the proposed technique is established in several important specific cases. A QR decomposition-based interpretation of our algorithm is provided that sheds new light on the optimal antenna selection problem.
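
A short sketch of greedy (incremental) receive-antenna selection driven by the capacity expression, to make the flavor of fast, near-optimal selection concrete; it is a generic greedy procedure under an illustrative SNR, not the authors' specific algorithm or its QR-based interpretation.

import numpy as np

def greedy_antenna_selection(H, n_select, snr=10.0):
    # H: (Nr, Nt) channel matrix; returns indices of the selected receive antennas.
    Nr, Nt = H.shape
    selected, remaining = [], list(range(Nr))
    for _ in range(n_select):
        best_idx, best_cap = None, -np.inf
        for i in remaining:
            Hs = H[selected + [i], :]
            cap = np.log2(np.linalg.det(np.eye(Nt) + (snr / Nt) * Hs.conj().T @ Hs).real)
            if cap > best_cap:
                best_idx, best_cap = i, cap     # keep the antenna maximizing log det
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected

rng = np.random.default_rng(2)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
print(greedy_antenna_selection(H, n_select=4))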

Journal Article•DOI•
Hai Deng1•
TL;DR: The proposed algorithm integrates a statistical simulated annealing algorithm with the traditional iterative code selection method and is demonstrated to be effective for the design of polyphase signals used in ONRS.
Abstract: Orthogonal netted radar systems (ONRS) can fundamentally improve radar performance by using a group of specially designed orthogonal signals. A novel hybrid algorithm is proposed to numerically optimize such orthogonal polyphase code sets. The proposed algorithm integrates a statistical simulated annealing algorithm with the traditional iterative code selection method and is demonstrated to be effective for the design of polyphase signals used in ONRS. Some of the design results are presented and discussed. The effect of Doppler frequency shift on the performance of the designed signals is also investigated.
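
A toy sketch of simulated-annealing optimization of an orthogonal polyphase code set: single-element phase perturbations are accepted or rejected under an annealing schedule applied to a cost built from aperiodic autocorrelation sidelobes and cross-correlations. The cost function, cooling schedule, and code sizes are illustrative; the paper's hybrid method additionally integrates an iterative code selection step.

import numpy as np

def correlation_cost(codes):
    # Sum of squared aperiodic autocorrelation sidelobes and cross-correlations.
    n, L = codes.shape
    total = 0.0
    for i in range(n):
        for j in range(n):
            c = np.correlate(codes[i], codes[j], mode="full") / L
            if i == j:
                c[L - 1] = 0.0                 # remove the autocorrelation mainlobe
            total += np.sum(np.abs(c) ** 2)
    return total

def anneal_polyphase(n_codes=3, length=40, n_phases=4, iters=3000, T0=1.0, cool=0.999, seed=0):
    rng = np.random.default_rng(seed)
    phases = rng.integers(0, n_phases, (n_codes, length))
    codes = np.exp(2j * np.pi * phases / n_phases)
    current, T = correlation_cost(codes), T0
    for _ in range(iters):
        i, k = rng.integers(n_codes), rng.integers(length)
        old_phase = phases[i, k]
        phases[i, k] = rng.integers(n_phases)
        codes[i, k] = np.exp(2j * np.pi * phases[i, k] / n_phases)
        new = correlation_cost(codes)
        if new < current or rng.random() < np.exp((current - new) / T):
            current = new                      # accept the (possibly uphill) move
        else:
            phases[i, k] = old_phase           # revert the move
            codes[i, k] = np.exp(2j * np.pi * old_phase / n_phases)
        T *= cool                              # geometric cooling
    return codes, current

codes, final_cost = anneal_polyphase()
print(final_cost)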

Journal Article•DOI•
TL;DR: In this paper, a delay-dependent approach to robust H∞ filtering is proposed for linear discrete-time uncertain systems with multiple delays in the state.
Abstract: A delay-dependent approach to robust H∞ filtering is proposed for linear discrete-time uncertain systems with multiple delays in the state. The uncertain parameters are assumed to reside in a polytope, and attention is focused on the design of robust filters guaranteeing a prescribed H∞ noise attenuation level. The proposed filter design methodology incorporates some recent results, such as Moon's new version of the upper bound for the inner product of two vectors and de Oliveira's idea of parameter-dependent stability, which greatly reduce the overdesign introduced in the derivation process. In addition to the full-order filtering problem, the challenging reduced-order case is also addressed by using different linearization procedures. Both full- and reduced-order filters can be obtained from the solution of convex optimization problems in terms of linear matrix inequalities, which can be solved via efficient interior-point algorithms. Numerical examples are presented to illustrate the feasibility and advantages of the proposed methodologies.

Journal Article•DOI•
TL;DR: This paper provides a complete analysis of a norm constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve the robustness against array steering vector errors and noise, and provides a natural extension of the SCB to the case of uncertain steering vectors.
Abstract: The standard Capon beamformer (SCB) is known to have better resolution and much better interference rejection capability than the standard data-independent beamformer when the array steering vector is accurately known. However, the major problem of the SCB is that it lacks robustness in the presence of array steering vector errors. In this paper, we will first provide a complete analysis of a norm constrained Capon beamforming (NCCB) approach, which uses a norm constraint on the weight vector to improve the robustness against array steering vector errors and noise. Our analysis of NCCB is thorough and sheds more light on the choice of the norm constraint than was commonly known. We also provide a natural extension of the SCB, which has been obtained via covariance matrix fitting, to the case of uncertain steering vectors by enforcing a double constraint on the array steering vector, viz. a constant norm constraint and a spherical uncertainty set constraint, which we refer to as the doubly constrained robust Capon beamformer (DCRCB). NCCB and DCRCB can both be efficiently computed at a cost comparable to that of the SCB. Performance comparisons of NCCB, DCRCB, and several other adaptive beamformers via a number of numerical examples are also presented.
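
A small sketch of the standard Capon beamformer with optional diagonal loading, which is the mechanism that norm-constrained designs effectively implement; here the loading level is a fixed illustrative value rather than the value tied to the paper's norm and uncertainty-set constraints, and the array scenario is a toy example.

import numpy as np

def capon_weights(R, a, loading=0.0):
    # R: sample covariance; a: presumed steering vector; loading: diagonal load.
    Rl = R + loading * np.eye(R.shape[0])
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: 8-element half-wavelength ULA, desired signal at 0 deg, strong interferer at 30 deg.
M = 8
def steer(theta_deg):
    return np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(0)
N = 200
snapshots = (np.outer(steer(0), rng.standard_normal(N))
             + 10 * np.outer(steer(30), rng.standard_normal(N))
             + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))))
R = snapshots @ snapshots.conj().T / N
w = capon_weights(R, steer(0), loading=1.0)
print(abs(w.conj() @ steer(0)), abs(w.conj() @ steer(30)))   # distortionless at 0 deg, interferer suppressed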

Journal Article•DOI•
TL;DR: This work considers the uplink of a multiuser system where the transmitters as well as the receiver are equipped with multiple antennas and proposes algorithms to find the jointly optimum linear precoder at each transmitter and linear decoders at the receiver.
Abstract: We consider the uplink of a multiuser system where the transmitters as well as the receiver are equipped with multiple antennas. Each user multiplexes its symbols by a linear precoder through its transmit antennas. We work with the system-wide mean squared error as the performance measure and propose algorithms to find the jointly optimum linear precoders at each transmitter and linear decoders at the receiver. We first work with the case where the number of symbols to be transmitted by each user is given. We then investigate how the symbol rate should be chosen for each user with optimum transmitters and receivers. The convergence analysis of the algorithms is given, and numerical evidence that supports the analysis is presented.

Journal Article•DOI•
TL;DR: Although counterintuitive, it is shown that, through the use of coding with side information principles, this reversal of order is indeed possible in some settings of interest without loss of either optimal coding efficiency or perfect secrecy.
Abstract: When it is desired to transmit redundant data over an insecure and bandwidth-constrained channel, it is customary to first compress the data and then encrypt it. In this paper, we investigate the possibility of reversing the order of these steps, i.e., first encrypting and then compressing, without compromising either the compression efficiency or the information-theoretic security. Although counterintuitive, we show that, through the use of coding with side information principles, this reversal of order is indeed possible in some settings of interest without loss of either optimal coding efficiency or perfect secrecy. We show that in certain scenarios our scheme requires no more randomness in the encryption key than the conventional system where compression precedes encryption. In addition to proving the theoretical feasibility of this reversal of operations, we also describe a system which implements compression of encrypted data.

Journal Article•DOI•
TL;DR: A unified treatment of the mean-square error, tracking, and transient performances of a family of affine projection algorithms based on energy conservation arguments and does not restrict the regressors to specific models or to a Gaussian distribution.
Abstract: Affine projection algorithms are useful adaptive filters whose main purpose is to speed up the convergence of LMS-type filters. Most analytical results on affine projection algorithms assume special regression models or Gaussian regression data. The available analyses also treat different affine projection filters separately. This paper provides a unified treatment of the mean-square error, tracking, and transient performances of a family of affine projection algorithms. The treatment relies on energy conservation arguments and does not restrict the regressors to specific models or to a Gaussian distribution. Simulation results illustrate the analysis and the derived performance expressions.
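
To make the family of filters being analyzed concrete, here is a textbook affine projection update with projection order K applied to a system-identification toy problem; the filter length, order, step size, and regularization are illustrative choices and the sketch is not tied to the paper's analysis.

import numpy as np

def apa_filter(x, d, M=8, K=4, mu=0.5, eps=1e-6):
    # x: input signal; d: desired signal; M: filter length; K: projection order.
    w = np.zeros(M)
    for n in range(max(M, K) - 1 + K, len(x)):
        # Regression matrix: the K most recent length-M input vectors.
        U = np.array([x[n - k - M + 1:n - k + 1][::-1] for k in range(K)])   # (K, M)
        dn = d[n - K + 1:n + 1][::-1]                                        # (K,)
        e = dn - U @ w
        w = w + mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(K), e)
    return w

# Identify an unknown 8-tap FIR system from noisy observations.
rng = np.random.default_rng(4)
h_true = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = apa_filter(x, d)
print(np.max(np.abs(w - h_true)))   # estimated taps approach h_true after convergence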

Journal Article•DOI•
TL;DR: The new active-set method proposed here converges very quickly toward a minimum-PAR solution at a lower computational cost than previous methods; the performance of oversampling before applying PAR reduction is also analyzed, and the results show that oversampling is necessary to adequately handle the analog PAR problem.
Abstract: Common to all orthogonal frequency division multiplexing (OFDM) systems is a large peak-to-average-power ratio (PAR), which can lead to low power efficiency and nonlinear distortion at the transmit power amplifier. Tone reservation uses unused or reserved tones to design a peak-cancelling signal that lowers the PAR of a transmit OFDM block. In contrast to previous methods, the new active-set method proposed here converges very quickly toward a minimum-PAR solution at a lower computational cost. An efficient real-baseband algorithm is well suited for discrete multitone (DMT) modulation over twisted-pair copper wiring, where some subchannels may have an insufficient SNR to reliably send data. The real PAR problem occurs in the analog signal before the power amplifier, and the results focus on this figure of merit. The performance of oversampling before applying PAR reduction is analyzed, and the results show that oversampling is necessary to adequately handle the analog PAR problem. An extension of the real-baseband technique can be applied to complex-baseband signals to help reduce PAR in wireless and broadcast systems. By sacrificing 11 out of 256 OFDM tones (4.3%) for tone reservation, over 3 dB of analog PAR reduction can be obtained for a wireless system.
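
A crude sketch of the tone-reservation idea: the time-domain signal is clipped and only the part of the correction that lies on the reserved tones is retained, so the data tones are never perturbed. This is a simple iterative-clipping scheme under illustrative parameters (11 of 256 reserved tones, an assumed clip ratio), not the paper's active-set method or its real-baseband DMT variant.

import numpy as np

def tone_reservation_clip(X, reserved, clip_ratio=1.5, iters=30):
    # X: frequency-domain OFDM symbol; reserved: indices of tones carrying no data.
    N = len(X)
    mask = np.zeros(N, bool)
    mask[reserved] = True
    X = X.copy()
    X[mask] = 0.0
    C = np.zeros(N, complex)                     # peak-cancelling signal on reserved tones
    for _ in range(iters):
        x = np.fft.ifft(X + C)
        level = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
        mag = np.abs(x)
        clipped = np.where(mag > level, x * level / np.maximum(mag, 1e-12), x)
        # Keep only the portion of the clipping correction that lives on reserved tones.
        C += np.where(mask, np.fft.fft(clipped - x), 0.0)
    return X + C

def par_db(x):
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(0)
N = 256
X = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)   # QPSK data tones
reserved = rng.choice(N, size=11, replace=False)                   # 11 of 256 tones reserved
X[reserved] = 0.0                                                  # reserved tones carry no data
Xc = tone_reservation_clip(X, reserved)
print(par_db(np.fft.ifft(X)), par_db(np.fft.ifft(Xc)))             # PAR before / after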

Journal Article•DOI•
TL;DR: This work addresses optimal estimation of correlated multiple-input multiple-output (MIMO) channels using pilot signals, assuming knowledge of the second-order channel statistics at the transmitter and designing the transmitted signal to optimize two criteria: MMSE and the conditional mutual information between the MIMO channel and the received signal.
Abstract: We address optimal estimation of correlated multiple-input multiple-output (MIMO) channels using pilot signals, assuming knowledge of the second-order channel statistics at the transmitter. Assuming a block fading channel model and minimum mean square error (MMSE) estimation at the receiver, we design the transmitted signal to optimize two criteria: MMSE and the conditional mutual information between the MIMO channel and the received signal. Our analysis is based on the recently proposed virtual channel representation, which corresponds to beamforming in fixed virtual directions and exposes the structure and the true degrees of freedom in the correlated channel. However, our design framework is applicable to more general channel models, which include known channel models, such as the transmit and receive correlated model, as special cases. We show that optimal signaling is in a block form, where the block length depends on the signal-to-noise ratio (SNR) as well as the channel correlation matrix. The block signal corresponds to transmitting beams in successive symbol intervals along fixed virtual transmit angles, whose powers are determined by (nonidentical) water filling solutions based on the optimization criteria. Our analysis shows that these water filling solutions identify exactly which virtual transmit angles are important for channel estimation. In particular, at low SNR, the block length reduces to one, and all the power is transmitted on the beam corresponding to the strongest transmit angle, whereas at high SNR, the block length has a maximum length equal to the number of active virtual transmit angles, and the power is assigned equally to all active transmit angles. Consequently, from a channel estimation viewpoint, a faster fading rate can be tolerated at low SNRs relative to higher SNRs.
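
A generic water-filling sketch of the kind of power allocation described above over the virtual transmit angles; the channel gains and power budget are illustrative, and the paper's exact MMSE and mutual-information objectives and block-length rules are not reproduced.

import numpy as np

def waterfill(gains, total_power):
    # Allocate total_power across channels with the given gains by water-filling.
    gains = np.asarray(gains, float)
    order = np.argsort(gains)[::-1]
    g = gains[order]
    p = np.zeros_like(g)
    for m in range(len(g), 0, -1):
        # Candidate water level using the m strongest angles.
        mu = (total_power + np.sum(1.0 / g[:m])) / m
        cand = mu - 1.0 / g[:m]
        if np.all(cand > 0):
            p[:m] = cand
            break
    out = np.zeros_like(p)
    out[order] = p                              # map back to the original ordering
    return out

# At a low power budget, only the strongest angles receive power.
print(waterfill([2.0, 1.0, 0.3, 0.05], total_power=2.0))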

Journal Article•DOI•
TL;DR: This paper focuses on the geodesic-minimal-spanning-tree (GMST) method, which uses the overall lengths of the MSTs to simultaneously estimate manifold dimension and entropy.
Abstract: In the manifold learning problem, one seeks to discover a smooth low dimensional surface, i.e., a manifold embedded in a higher dimensional linear vector space, based on a set of measured sample points on the surface. In this paper, we consider the closely related problem of estimating the manifold's intrinsic dimension and the intrinsic entropy of the sample points. Specifically, we view the sample points as realizations of an unknown multivariate density supported on an unknown smooth manifold. We introduce a novel geometric approach based on entropic graph methods. Although the theory presented applies to this general class of graphs, we focus on the geodesic-minimal-spanning-tree (GMST) method to obtain asymptotically consistent estimates of the manifold dimension and the Rényi α-entropy of the sample density on the manifold. The GMST approach is striking in its simplicity and does not require reconstruction of the manifold or estimation of the multivariate density of the samples. The GMST method simply constructs a minimal spanning tree (MST) sequence using a geodesic edge matrix and uses the overall lengths of the MSTs to simultaneously estimate manifold dimension and entropy. We illustrate the GMST approach on standard synthetic manifolds as well as on real data sets consisting of images of faces.
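
A brief sketch of the GMST recipe under illustrative parameter choices: build a k-nearest-neighbor graph, use graph shortest-path lengths as geodesic distances, compute minimal-spanning-tree lengths over growing subsets, and read the intrinsic dimension off the log-log growth rate; the entropy estimate, which the same fit also provides, is omitted here. scipy and scikit-learn utilities are assumed available.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from sklearn.neighbors import kneighbors_graph

def gmst_dimension(X, k=8, sizes=(100, 200, 400, 800), seed=0):
    rng = np.random.default_rng(seed)
    log_n, log_L = [], []
    for n in sizes:
        idx = rng.choice(len(X), size=n, replace=False)
        G = kneighbors_graph(X[idx], k, mode="distance")
        D = shortest_path(G, directed=False)              # geodesic distances on the k-NN graph
        L = minimum_spanning_tree(D).sum()                # total MST length
        log_n.append(np.log(n))
        log_L.append(np.log(L))
    slope = np.polyfit(log_n, log_L, 1)[0]                # slope is roughly (d - 1) / d
    return 1.0 / (1.0 - slope)

# Swiss-roll-like 2-D surface embedded in 3-D.
rng = np.random.default_rng(0)
t = 3 * np.pi * (1 + 2 * rng.random(1000)) / 2
X = np.column_stack([t * np.cos(t), 20 * rng.random(1000), t * np.sin(t)])
print(gmst_dimension(X))                                  # intrinsic dimension estimate, roughly 2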

Journal Article•DOI•
TL;DR: It is shown that SVMs provide a significant improvement in performance on a static pattern classification task based on the Deterding vowel data and an application of SVMs to large vocabulary speech recognition is described.
Abstract: Recent work in machine learning has focused on models, such as the support vector machine (SVM), that automatically control generalization and parameterization as part of the overall optimization process. In this paper, we show that SVMs provide a significant improvement in performance on a static pattern classification task based on the Deterding vowel data. We also describe an application of SVMs to large vocabulary speech recognition and demonstrate an improvement in error rate on a continuous alphadigit task (OGI Alphadigits) and a large vocabulary conversational speech task (Switchboard). Issues related to the development and optimization of an SVM/HMM hybrid system are discussed.

Journal Article•DOI•
TL;DR: The paper develops a design procedure to obtain finite impulse response (FIR) filters that satisfy the numerous constraints imposed and have vanishing moments, compact support, a high degree of smoothness, and are nearly shift-invariant.
Abstract: This paper introduces the double-density dual-tree discrete wavelet transform (DWT), which is a DWT that combines the double-density DWT and the dual-tree DWT, each of which has its own characteristics and advantages. The transform corresponds to a new family of dyadic wavelet tight frames based on two scaling functions and four distinct wavelets. One pair of the four wavelets are designed to be offset from the other pair of wavelets so that the integer translates of one wavelet pair fall midway between the integer translates of the other pair. Simultaneously, one pair of wavelets are designed to be approximate Hilbert transforms of the other pair of wavelets so that two complex (approximately analytic) wavelets can be formed. Therefore, they can be used to implement complex and directional wavelet transforms. The paper develops a design procedure to obtain finite impulse response (FIR) filters that satisfy the numerous constraints imposed. This design procedure employs a fractional-delay allpass filter, spectral factorization, and filterbank completion. The solutions have vanishing moments, compact support, a high degree of smoothness, and are nearly shift-invariant.

Journal Article•DOI•
TL;DR: This paper provides a partial CSI model for orthogonal frequency division multiplexed (OFDM) transmissions over multi-input multi-output (MIMO) frequency-selective fading channels and develops an adaptive MIMO-OFDM transmitter, relying on the available partial CSI at the transmitter to maximize the transmission rate.
Abstract: Relative to designs assuming no channel knowledge at the transmitter, considerably improved communications become possible when adapting the transmitter to the intended propagation channel. As perfect knowledge is rarely available, transmitter designs based on partial (statistical) channel state information (CSI) are of paramount importance not only because they are more practical but also because they encompass the perfect- and no-knowledge paradigms. In this paper, we first provide a partial CSI model for orthogonal frequency division multiplexed (OFDM) transmissions over multi-input multi-output (MIMO) frequency-selective fading channels. We then develop an adaptive MIMO-OFDM transmitter by applying an adaptive two-dimensional (2-D) coder-beamformer we derived recently on each OFDM subcarrier, along with an adaptive power and bit loading scheme across OFDM subcarriers. Relying on the available partial CSI at the transmitter, our objective is to maximize the transmission rate, while guaranteeing a prescribed error performance, under the constraint of fixed transmit-power. Numerical results confirm that the adaptive 2-D space-time coder-beamformer (with two basis beams as the two "strongest" eigenvectors of the channel's correlation matrix perceived at the transmitter) combined with adaptive OFDM (power and bit loaded with M-ary quadrature amplitude modulated (QAM) constellations) improves the transmission rate considerably.

Journal Article•DOI•
Andreas F. Molisch1•
TL;DR: A geometry-based model is proposed that includes the propagation effects that are critical for MIMO performance: i) single scattering around the base station (BS) and mobile station (MS), ii) scattering by far clusters, iii) double-scattering, iv) waveguiding, and v) diffraction by roof edges.
Abstract: This paper derives a generic model for the multiple-input multiple-output (MIMO) wireless channel. The model incorporates important effects, including i) interdependency of directions-of-arrival and directions-of-departure, ii) large delay and angle dispersion by propagation via far clusters, and iii) rank reduction of the transfer function matrix. We propose a geometry-based model that includes the propagation effects that are critical for MIMO performance: i) single scattering around the BS and MS, ii) scattering by far clusters, iii) double-scattering, iv) waveguiding, and v) diffraction by roof edges. The required parameters for the complete definition of the model are enumerated, and typical parameter values in macro and microcellular environments are discussed.

Journal Article•DOI•
TL;DR: In this article, a fast algorithm for estimating the parameters of a quadratic frequency modulated (FM) signal is proposed; it requires only one-dimensional (1-D) maximizations, whereas the optimal maximum likelihood method requires a three-dimensional (3-D) maximization that can only be realized with an exhaustive 3-D grid search.
Abstract: This paper describes a fast algorithm that can be used for estimating the parameters of a quadratic frequency modulated (FM) signal. The proposed algorithm is fast in that it requires only one-dimensional (1-D) maximizations. The optimal maximum likelihood method, by contrast, requires a three-dimensional (3-D) maximization, which can only be realized with an exhaustive 3-D grid search. Asymptotic statistical results are derived for all the estimated parameters. The amplitude estimate is seen to be optimal, whereas the phase parameters are, in general, suboptimal. Of the four phase parameter estimates, two approach optimality as the signal-to-noise ratio (SNR) tends to infinity. The other two have mean-square errors that are within 50% of the theoretical lower bounds for high SNR. Simulations are provided to support the theoretical results. Extensions to multiple components and higher order FM signals are also discussed.

Journal Article•DOI•
TL;DR: It is shown that along with the optimized irregular LDPC codes, a turbo iterative receiver that consists of a soft maximum a posteriori (MAP) demodulator and a belief-propagation LDPC decoder can perform within 1 dB from the ergodic capacity of the MIMO OFDM systems under consideration.
Abstract: We consider the performance analysis and design optimization of low-density parity check (LDPC) coded multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems for high data rate wireless transmission. The tools of density evolution with mixture Gaussian approximations are used to optimize irregular LDPC codes and to compute minimum operational signal-to-noise ratios (SNRs) for ergodic MIMO OFDM channels. In particular, the optimization is done for various MIMO OFDM system configurations, which include a different number of antennas, different channel models, and different demodulation schemes; the optimized performance is compared with the corresponding channel capacity. It is shown that along with the optimized irregular LDPC codes, a turbo iterative receiver that consists of a soft maximum a posteriori (MAP) demodulator and a belief-propagation LDPC decoder can perform within 1 dB from the ergodic capacity of the MIMO OFDM systems under consideration. It is also shown that compared with the optimal MAP demodulator-based receivers, the receivers employing a low-complexity linear minimum mean-square-error soft-interference-cancellation (LMMSE-SIC) demodulator have a small performance loss (< 1dB) in spatially uncorrelated MIMO channels but suffer extra performance loss in MIMO channels with spatial correlation. Finally, from the LDPC profiles that already are optimized for ergodic channels, we heuristically construct small block-size irregular LDPC codes for outage MIMO OFDM channels; as shown from simulation results, the irregular LDPC codes constructed here are helpful in expediting the convergence of the iterative receivers.

Journal Article•DOI•
TL;DR: This paper develops a new method for multiple-variable regression estimation based on support vector machines, a state-of-the-art technique within the machine learning community for regression estimation, and shows how this new method can be efficiently applied.
Abstract: This paper addresses the problem of multiple-input multiple-output (MIMO) frequency nonselective channel estimation. We develop a new method for multiple-variable regression estimation based on support vector machines (SVMs), a state-of-the-art technique within the machine learning community for regression estimation. We show how this new method, which we call M-SVR, can be efficiently applied. The proposed regression method is evaluated in a MIMO system under a channel estimation scenario, showing its benefits in comparison to previous proposals when nonlinearities are present on either the transmitter or the receiver side of the MIMO system.

Journal Article•DOI•
TL;DR: It is shown that while the RPP-1 scheme performs better at high SNR and for slowly varying channels, the superimposed scheme outperforms RPP-1 in the other regimes, demonstrating the potential for using superimposed training in relatively fast time-varying environments.
Abstract: Two major training techniques for wireless channels are time-division multiplexed (TDM) training and superimposed training. For the TDM schemes with regular periodic placements (RPPs), the closed-form expression for the steady-state minimum mean square error (MMSE) of the channel estimate is obtained as a function of placement for Gauss-Markov flat fading channels. We then show that among all periodic placements, the single pilot RPP scheme (RPP-1) minimizes the maximum steady-state channel MMSE. For binary phase-shift keying (BPSK) and quadrature phase-shift keying (QPSK) signaling, we further show that the optimal placement that minimizes the maximum uncoded bit error rate (BER) is also RPP-1. We next compare the MMSE and BER performance under the superimposed training scheme with those under the optimal TDM scheme. It is shown that while the RPP-1 scheme performs better at high SNR and for slowly varying channels, the superimposed scheme outperforms RPP-1 in the other regimes. This demonstrates the potential for using superimposed training in relatively fast time-varying environments.