Author

Dirk Slock

Other affiliations: Stanford University
Bio: Dirk Slock is an academic researcher at Institut Eurécom. His research focuses on MIMO and communication channels. He has an h-index of 40 and has co-authored 397 publications receiving 7,521 citations. Previous affiliations of Dirk Slock include Stanford University.


Papers
Journal ArticleDOI
TL;DR: The per-user channel correlation model requires a novel deterministic equivalent of the empirical Stieltjes transform of large-dimensional random matrices with a generalized variance profile; the resulting deterministic SINR approximations enable the solution of various practical optimization problems.
Abstract: In this paper, we study the sum rate performance of zero-forcing (ZF) and regularized ZF (RZF) precoding in large MISO broadcast systems under the assumptions of imperfect channel state information at the transmitter and per-user channel transmit correlation. Our analysis assumes that the number of transmit antennas M and the number of single-antenna users K are large while their ratio remains bounded. We derive deterministic approximations of the empirical signal-to-interference plus noise ratio (SINR) at the receivers, which are tight as M, K → ∞. In the course of this derivation, the per-user channel correlation model requires the development of a novel deterministic equivalent of the empirical Stieltjes transform of large dimensional random matrices with generalized variance profile. The deterministic SINR approximations enable us to solve various practical optimization problems. Under sum rate maximization, we derive 1) for RZF the optimal regularization parameter; 2) for ZF the optimal number of users; 3) for ZF and RZF the optimal power allocation scheme; and 4) the optimal amount of feedback in large FDD/TDD multiuser systems. Numerical simulations suggest that the deterministic approximations are accurate even for small M, K.

648 citations
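The RZF precoder studied in this paper is easy to sketch numerically. The snippet below is a minimal illustration, not the paper's analysis: it draws an i.i.d. Gaussian channel (omitting the per-user transmit correlation and imperfect CSI the paper models), applies the precoder F = H^H (HH^H + αI)^{-1} with an illustrative MMSE-style regularization α = Kσ²/P, and measures the per-user SINR.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 32, 8          # transmit antennas, single-antenna users (illustrative)
P = 1.0               # total transmit power
sigma2 = 0.1          # receiver noise variance
# i.i.d. Rayleigh channel; the paper additionally models per-user correlation
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def rzf_precoder(H, alpha, P):
    """RZF: F = H^H (H H^H + alpha I)^{-1}, scaled to meet the power constraint."""
    K = H.shape[0]
    F = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    return F * np.sqrt(P / np.trace(F @ F.conj().T).real)

alpha = K * sigma2 / P        # illustrative MMSE-style regularization choice
F = rzf_precoder(H, alpha, P)

G = H @ F                                    # effective K x K channel after precoding
sig = np.abs(np.diag(G)) ** 2                # desired-signal power per user
intf = np.sum(np.abs(G) ** 2, axis=1) - sig  # multiuser interference power
sinr = sig / (intf + sigma2)
print(np.round(10 * np.log10(sinr), 1))      # per-user SINR in dB
```

Setting α → 0 recovers plain ZF; the paper derives the regularization that actually maximizes the sum rate in the large-system limit.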

Journal ArticleDOI
TL;DR: It is argued that location information can aid in addressing several of the key challenges in 5G, complementary to existing and planned technological developments.
Abstract: Fifth-generation (5G) networks will be the first generation to benefit from location information that is sufficiently precise to be leveraged in wireless network design and optimization. We argue that location information can aid in addressing several of the key challenges in 5G, complementary to existing and planned technological developments. These challenges include an increase in traffic and number of devices, robustness for mission-critical services, and a reduction in total energy consumption and latency. This article gives a broad overview of the growing research area of location-aware communications across different layers of the protocol stack. We highlight several promising trends, tradeoffs, and pitfalls.

424 citations

Proceedings ArticleDOI
Dirk Slock
19 Apr 1994
TL;DR: It is shown that in the OS case FIR ZF equalizers exist for a FIR channel, and zero-forcing (ZF) equalization corresponds to a perfect-reconstruction filter bank.
Abstract: Equalization for digital communications constitutes a very particular blind deconvolution problem in that the received signal is cyclostationary. Oversampling (OS) (w.r.t. the symbol rate) of the cyclostationary received signal leads to a stationary vector-valued signal (polyphase representation (PR)). OS also leads to a fractionally-spaced channel model and equalizer. In the PR, channel and equalizer can be considered as an analysis and synthesis filter bank. Zero-forcing (ZF) equalization corresponds to a perfect-reconstruction filter bank. We show that in the OS case FIR ZF equalizers exist for an FIR channel. In the PR, the multichannel linear prediction of the noiseless received signal becomes singular eventually, reminiscent of the single-channel prediction of a sum of sinusoids. As a result, the channel can be identified from the received signal second-order statistics by linear prediction in the noise-free case, and by using the Pisarenko method when there is additive noise. In the given data case, MUSIC (subspace) or ML techniques can be applied.

423 citations
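The existence claim above, that oversampling makes finite-length (FIR) zero-forcing equalizers possible for an FIR channel, can be checked numerically. A minimal sketch under assumed parameters (oversampling factor L = 2, random subchannels, which generically share no common zeros): stacking the subchannel convolution matrices side by side in the polyphase representation yields a square system, and solving it gives an exact FIR ZF equalizer.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 2, 4           # oversampling factor (subchannels), FIR channel length
Lf = N - 1            # equalizer length per subchannel: L*Lf >= N + Lf - 1
h = rng.standard_normal((L, N))   # random subchannels (generically no common zeros)

def conv_matrix(hi, Lf):
    """(N+Lf-1) x Lf convolution (Sylvester) matrix of one subchannel."""
    T = np.zeros((len(hi) + Lf - 1, Lf))
    for j in range(Lf):
        T[j:j + len(hi), j] = hi
    return T

# Polyphase view: stack the subchannel convolution matrices side by side.
T = np.hstack([conv_matrix(h[i], Lf) for i in range(L)])  # square 6 x 6 here

d = np.zeros(N + Lf - 1)
d[0] = 1.0            # perfect-reconstruction target: a pure impulse
f, *_ = np.linalg.lstsq(T, d, rcond=None)
residual = np.linalg.norm(T @ f - d)
print(residual)       # ~0: an exact FIR zero-forcing equalizer exists
```

With symbol-rate sampling (L = 1) the same system is tall and generally has no exact solution, which is the contrast the paper exploits.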

Journal ArticleDOI
Dirk Slock
TL;DR: It is shown that the normalized least mean square (NLMS) algorithm potentially converges faster than the LMS algorithm when the design of the adaptive filter is based on the usually quite limited knowledge of its input signal statistics.
Abstract: It is shown that the normalized least mean square (NLMS) algorithm potentially converges faster than the LMS algorithm when the design of the adaptive filter is based on the usually quite limited knowledge of its input signal statistics. A very simple model for the input signal vectors that greatly simplifies analysis of the convergence behavior of the LMS and NLMS algorithms is proposed. Using this model, answers can be obtained to questions for which no answers are currently available using other (perhaps more realistic) models. Examples are given to illustrate that, even quantitatively, the answers obtained can be good approximations. It is emphasized that the convergence of the NLMS algorithm can be sped up significantly by employing a time-varying step size. The optimal step-size sequence can be specified a priori for the case of a white input signal with arbitrary distribution.

418 citations
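A minimal NLMS system-identification sketch illustrates the normalization discussed above. The white-input setting matches the case the paper analyzes, but the fixed step size μ = 1 and the problem sizes are illustrative choices, not the paper's optimal time-varying sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 5000, 16                 # samples, adaptive filter length (illustrative)
w_true = rng.standard_normal(N) # unknown FIR system to identify
x = rng.standard_normal(n)      # white input: the case analyzed in the paper
d = np.convolve(x, w_true)[:n] + 0.01 * rng.standard_normal(n)

def nlms(x, d, N, mu=1.0, eps=1e-8):
    """Normalized LMS: the step is divided by the instantaneous input energy."""
    w = np.zeros(N)
    for k in range(N - 1, len(x)):
        u = x[k - N + 1:k + 1][::-1]     # most recent N input samples
        e = d[k] - w @ u                 # a priori output error
        w += mu * e * u / (u @ u + eps)  # normalization removes ||u||^2 dependence
    return w

w = nlms(x, d, N)
print(np.linalg.norm(w - w_true))        # small residual misadjustment
```

Dividing by `u @ u` is what makes the convergence rate insensitive to the input power, the key difference from plain LMS with a fixed step size.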

Journal ArticleDOI
TL;DR: A solution is proposed to the long-standing problem of the numerical instability of fast recursive least squares transversal filter (FTF) algorithms with exponential weighting, an important class of algorithms for adaptive filtering.
Abstract: A solution is proposed to the long-standing problem of the numerical instability of fast recursive least squares transversal filter (FTF) algorithms with exponential weighting, an important class of algorithms for adaptive filtering. A framework for the analysis of the error propagation in FTF algorithms is first developed; within this framework, it is shown that the computationally most efficient 7N form is exponentially unstable. However, by introducing redundancy into this algorithm, feedback of numerical errors becomes possible; a judicious choice of the feedback gains then leads to a numerically stable FTF algorithm with a complexity of 8N multiplications and additions per time recursion. The results are presented for the complex multichannel joint-process filtering problem.

320 citations
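FTF algorithms compute, in O(N) operations per step, the same exponentially weighted least-squares solution as the standard O(N²) RLS recursion. The sketch below shows that reference recursion on a noise-free identification problem, with illustrative sizes and forgetting factor; it is not the stabilized 8N FTF itself, which additionally needs the error-feedback machinery the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)
n, N, lam = 2000, 8, 0.99       # samples, filter order, exponential weight (assumed)
w_true = rng.standard_normal(N)
x = rng.standard_normal(n)
d = np.array([w_true @ x[k - N + 1:k + 1][::-1] for k in range(N - 1, n)])

# Exponentially weighted RLS: the O(N^2) reference recursion that FTF
# algorithms reproduce in O(N) by exploiting the shift structure of u.
P = 1e3 * np.eye(N)             # inverse-correlation estimate (large initialization)
w = np.zeros(N)
for k in range(N - 1, n):
    u = x[k - N + 1:k + 1][::-1]
    g = P @ u / (lam + u @ P @ u)       # gain vector
    e = d[k - (N - 1)] - w @ u          # a priori error
    w = w + g * e
    P = (P - np.outer(g, u @ P)) / lam
print(np.linalg.norm(w - w_true))       # ~0 in this noise-free setting
```

The exponential weighting (lam < 1) is exactly what makes the fast 7N form unstable: rounding errors are amplified by 1/lam each step unless they are fed back as the paper proposes.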


Cited by
Journal ArticleDOI
TL;DR: The article presents background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions with parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.

4,410 citations
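A minimal sketch of one subspace method this review covers, MUSIC for direction finding with a uniform linear array. The half-wavelength spacing, source directions, and SNR below are assumptions for illustration: the noise subspace of the sample covariance is orthogonal to the true steering vectors, so the pseudo-spectrum peaks at the source angles.

```python
import numpy as np

rng = np.random.default_rng(4)
M, d_src, n_snap = 8, 2, 200          # sensors, sources, snapshots (illustrative)
true_deg = np.array([-10.0, 25.0])    # assumed source directions

def steering(theta_deg, M):
    """ULA steering vectors, half-wavelength element spacing."""
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_deg, M)
S = rng.standard_normal((d_src, n_snap)) + 1j * rng.standard_normal((d_src, n_snap))
noise = 0.1 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))
X = A @ S + noise

R = X @ X.conj().T / n_snap                     # sample covariance
eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
En = eigvec[:, :M - d_src]                      # noise subspace

grid = np.arange(-90.0, 90.0, 0.1)
p = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid, M)) ** 2, axis=0)

loc = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1   # local maxima
est = np.sort(grid[loc[np.argsort(p[loc])[-2:]]])               # two strongest peaks
print(est)                                       # close to the true directions
```

This is the "spectral-based" family the review describes; the parametric (maximum likelihood) methods it contrasts them with replace the grid search with a multidimensional optimization.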

Book
01 Mar 1995
TL;DR: Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding, and developed the theory in both continuous and discrete time.
Abstract: First published in 1995, Wavelets and Subband Coding offered a unified view of the exciting field of wavelets and their discrete-time cousins, filter banks, or subband coding. The book developed the theory in both continuous and discrete time, and presented important applications. During the past decade, it filled a useful need in explaining a new view of signal processing based on flexible time-frequency analysis and its applications. Since 2007, the authors now retain the copyright and allow open access to the book.

2,793 citations

Journal ArticleDOI
TL;DR: In this paper, the tradeoff between energy efficiency and spectral efficiency is quantified for a channel model that includes small-scale but not large-scale fading, and it is shown that moderately large antenna arrays can improve the spectral and energy efficiency by orders of magnitude compared to a single-antenna system.
Abstract: A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square root of the number of base station antennas with no reduction in performance. In contrast, if perfect channel-state information were available, the power could be made inversely proportional to the number of antennas. Lower capacity bounds for maximum-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) detection are derived. An MRC receiver normally performs worse than ZF and MMSE. However, as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level and this simple receiver becomes a viable option. The tradeoff between the energy efficiency (as measured in bits/J) and spectral efficiency (as measured in bits/channel use/terminal) is quantified for a channel model that includes small-scale fading but not large-scale fading. It is shown that the use of moderately large antenna arrays can improve the spectral and energy efficiency by orders of magnitude compared to a single-antenna system.

2,770 citations
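The perfect-CSI power-scaling law in the abstract (per-user power inversely proportional to the number of antennas) is easy to observe in simulation. A hedged sketch with illustrative parameters: cutting the per-user power as p = c/M, the average uplink MRC SINR settles near c/σ² as the array grows, rather than vanishing, thanks to the growing array gain.

```python
import numpy as np

rng = np.random.default_rng(5)
K, sigma2 = 4, 1.0          # users, noise variance (illustrative)

def avg_mrc_sinr(M, p, trials=200):
    """Average uplink SINR of user 0 under maximum-ratio combining, perfect CSI."""
    acc = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
        g = H[:, 0]                                  # MRC combiner = user-0 channel
        sig = p * np.abs(g.conj() @ g) ** 2
        intf = p * sum(np.abs(g.conj() @ H[:, k]) ** 2 for k in range(1, K))
        acc += sig / (intf + sigma2 * (g.conj() @ g).real)
    return acc / trials

# Scale per-user power as c/M: SINR approaches c/sigma2 instead of vanishing.
c = 10.0
for M in (10, 100, 1000):
    print(M, round(avg_mrc_sinr(M, c / M), 2))
```

With imperfect CSI from pilots, the paper shows the sustainable scaling weakens to 1/sqrt(M), since channel-estimate quality itself degrades as the power is cut.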

Journal ArticleDOI
TL;DR: The paper derives how many antennas per UT are needed to achieve η% of the ultimate performance limit with infinitely many antennas, and how many more antennas MF and BF need to match the performance of minimum mean-square error (MMSE) detection and regularized zero-forcing (RZF), respectively.
Abstract: We consider the uplink (UL) and downlink (DL) of non-cooperative multi-cellular time-division duplexing (TDD) systems, assuming that the number N of antennas per base station (BS) and the number K of user terminals (UTs) per cell are large. Our system model accounts for channel estimation, pilot contamination, and an arbitrary path loss and antenna correlation for each link. We derive approximations of achievable rates with several linear precoders and detectors which are proven to be asymptotically tight, but accurate for realistic system dimensions, as shown by simulations. It is known from previous work assuming uncorrelated channels, that as N→∞ while K is fixed, the system performance is limited by pilot contamination, the simplest precoders/detectors, i.e., eigenbeamforming (BF) and matched filter (MF), are optimal, and the transmit power can be made arbitrarily small. We analyze to which extent these conclusions hold in the more realistic setting where N is not extremely large compared to K. In particular, we derive how many antennas per UT are needed to achieve η% of the ultimate performance limit with infinitely many antennas and how many more antennas are needed with MF and BF to achieve the performance of minimum mean-square error (MMSE) detection and regularized zero-forcing (RZF), respectively.

2,433 citations

Posted Content
TL;DR: It is shown that the use of moderately large antenna arrays can improve the spectral and energy efficiency by orders of magnitude compared to a single-antenna system.
Abstract: A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square root of the number of base station antennas with no reduction in performance. In contrast, if perfect channel-state information were available, the power could be made inversely proportional to the number of antennas. Lower capacity bounds for maximum-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) detection are derived. An MRC receiver normally performs worse than ZF and MMSE. However, as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level and this simple receiver becomes a viable option. The tradeoff between the energy efficiency (as measured in bits/J) and spectral efficiency (as measured in bits/channel use/terminal) is quantified. It is shown that the use of moderately large antenna arrays can improve the spectral and energy efficiency by orders of magnitude compared to a single-antenna system.

2,421 citations