TL;DR: A textbook treatment of the elements of hypothesis testing and parameter estimation, with signal detection and signal estimation developed in both discrete and continuous time.
Abstract: Contents: Preface; I. Introduction; II. Elements of Hypothesis Testing; III. Signal Detection in Discrete Time; IV. Elements of Parameter Estimation; V. Elements of Signal Estimation; VI. Signal Detection in Continuous Time; VII. Signal Estimation in Continuous Time; References; Index.
TL;DR: This work computes a lower bound on the capacity of a channel that is learned by training, and maximizes the bound as a function of the received signal-to-noise ratio (SNR), fading coherence time, and number of transmitter antennas.
Abstract: Multiple-antenna wireless communication links promise very high data rates with low error probabilities, especially when the wireless channel response is known at the receiver. In practice, knowledge of the channel is often obtained by sending known training symbols to the receiver. We show how training affects the capacity of a fading channel: too little training and the channel is improperly learned; too much training and there is no time left for data transmission before the channel changes. We compute a lower bound on the capacity of a channel that is learned by training, and maximize the bound as a function of the received signal-to-noise ratio (SNR), fading coherence time, and number of transmitter antennas. When the training and data powers are allowed to vary, we show that the optimal number of training symbols is equal to the number of transmit antennas; this number is also the smallest training interval length that guarantees meaningful estimates of the channel matrix. When the training and data powers are instead required to be equal, the optimal number of symbols may be larger than the number of antennas. We show that training-based schemes can be optimal at high SNR, but suboptimal at low SNR.
TL;DR: The Shannon capacity of a fading channel is obtained with channel side information at the transmitter and receiver, and at the receiver alone; the optimal power adaptation is water-pouring in time, analogous to water-pouring in frequency for time-invariant frequency-selective fading channels.
Abstract: We obtain the Shannon capacity of a fading channel with channel side information at the transmitter and receiver, and at the receiver alone. The optimal power adaptation in the former case is "water-pouring" in time, analogous to water-pouring in frequency for time-invariant frequency-selective fading channels. Inverting the channel results in a large capacity penalty in severe fading.
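The water-pouring power adaptation described above can be sketched numerically. The helper below is an illustrative implementation over a finite set of equally likely fading states (the function name, discretization, and parameter values are assumptions, not the paper's code): each state with power gain g_i receives p_i = max(0, mu - 1/g_i), with the water level mu set so the average power meets the budget.

```python
import numpy as np

def waterfill(gains, P):
    """Water-pouring power allocation over equally likely fading states.

    Illustrative sketch (not the paper's code): allocates
    p_i = max(0, mu - 1/g_i), with the water level mu chosen so the
    average power (1/N) * sum_i p_i equals the budget P.
    """
    g = np.sort(np.asarray(gains, dtype=float))[::-1]  # strongest first
    N = len(g)
    for k in range(N, 0, -1):
        # water level if exactly the k strongest states are active
        mu = (N * P + np.sum(1.0 / g[:k])) / k
        if mu > 1.0 / g[k - 1]:  # weakest active state still gets power
            break
    p = np.maximum(mu - 1.0 / g, 0.0)
    return g, p, mu

# Three fading states with gains 4, 1, 0.25 and unit average power:
g, p, mu = waterfill([4.0, 1.0, 0.25], 1.0)
ergodic_rate = np.mean(np.log2(1.0 + g * p))  # bits per channel use
```

Note how the weakest state can be cut off entirely when the channel is poor, which is exactly why channel inversion (which must serve every state) incurs a large penalty in severe fading.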
"Rate Gap Analysis for Rate-Adaptive..." cites this paper as background.
TL;DR: Lower and upper bounds of mutual information under channel estimation error and tight lower bounds of ergodic and outage capacities and optimal transmitter power allocation strategies that achieve the bounds under perfect feedback are studied.
Abstract: In this correspondence, we investigate the effect of channel estimation error on the capacity of multiple-input-multiple-output (MIMO) fading channels. We study lower and upper bounds of mutual information under channel estimation error, and show that the two bounds are tight for Gaussian inputs. Assuming Gaussian inputs we also derive tight lower bounds of ergodic and outage capacities and optimal transmitter power allocation strategies that achieve the bounds under perfect feedback. For the ergodic capacity, the optimal strategy is a modified waterfilling over the spatial (antenna) and temporal (fading) domains. This strategy is close to optimum under small feedback delays, but when the delay is large, equal powers should be allocated across spatial dimensions. For the outage capacity, the optimal scheme is a spatial waterfilling and temporal truncated channel inversion. Numerical results show that some capacity gain is obtained by spatial power allocation. Temporal power adaptation, on the other hand, gives negligible gain in terms of ergodic capacity, but greatly enhances outage performance.
Q1. What is the distribution of A conditioned on Ht?
The distribution of A conditioned on H_t is a noncentral chi-squared distribution with 2N_r degrees of freedom and noncentrality parameter δ = 2μσ_t²‖Ĥ‖².
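This characterization can be sanity-checked by Monte Carlo. The snippet below uses placeholder values for μ, σ_t², and the predicted channel Ĥ (all assumptions, not the paper's numbers), and relies only on the standard fact that a noncentral chi-squared variable with df degrees of freedom and noncentrality δ is the squared norm of a df-dimensional unit-variance Gaussian vector whose mean has squared norm δ, with mean df + δ.

```python
import numpy as np

rng = np.random.default_rng(0)
Nr = 4                          # number of receive antennas (illustrative)
mu_c, sigma_t2 = 0.9, 0.1       # placeholder prediction parameters

# Placeholder predicted channel; only its squared norm matters here.
H_hat = rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)
delta = 2 * mu_c * sigma_t2 * np.linalg.norm(H_hat) ** 2  # noncentrality

# Squared norm of a df-dim unit-variance Gaussian whose mean vector
# has squared norm delta -> noncentral chi-squared(df, delta).
df = 2 * Nr
mean_vec = np.zeros(df)
mean_vec[0] = np.sqrt(delta)
samples = ((rng.standard_normal((200_000, df)) + mean_vec) ** 2).sum(axis=1)

# Theoretical mean of the noncentral chi-squared: df + delta.
empirical_mean = samples.mean()
```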
Q2. What is the way to denote the perfect CSI rate?
Since the mutual information lower bound (outage upper bound) is used for calculating the rate, the authors have P_out > P(outage | H_t). (6) Let the perfect CSI (H_t = H_r = H) rate be denoted by R_ideal.
Q3. What is the CSI rate for the MEB scheme?
For the MEB scheme, the perfect CSI rate is R_ideal = log(1 + SNR·‖H̆‖²), (7) where SNR = P_d/σ_n², H̆ = Hu, and u is the singular vector corresponding to the maximum singular value of H, the actual channel matrix.
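A minimal sketch of computing R_ideal for this maximum-eigenmode beamforming rate, assuming an illustrative i.i.d. Rayleigh test channel and placeholder power values (the base-2 logarithm is also an assumption; the excerpt does not specify the log base). Since u is the dominant right singular vector of H, ‖H̆‖ = ‖Hu‖ equals the largest singular value of H.

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt = 4, 4
# Illustrative i.i.d. Rayleigh channel (not from the paper)
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

Pd, sigma_n2 = 10.0, 1.0        # assumed data power and noise variance
snr = Pd / sigma_n2

# u: right singular vector of H for the maximum singular value
U, s, Vh = np.linalg.svd(H)
u = Vh.conj().T[:, 0]
H_breve = H @ u                 # effective channel after beamforming
R_ideal = np.log2(1.0 + snr * np.linalg.norm(H_breve) ** 2)
```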
Q4. What is the correlation model assumed in (14)?
The correlation model assumed in (14) obeys the Paley-Wiener condition (this can occur when S is absolutely log-integrable over its support), which is: ∫_{−π}^{π} log S(e^{jω}) dω > −∞.
Q5. What is the Doppler process in noise?
For the Doppler process in noise, the power spectral density is given by:

S(e^{jω}) = P_t² F(e^{jω}) / (P_t + σ_n²)² + P_t σ_n² / (P_t + σ_n²)²  for |ω| < ω_m,
S(e^{jω}) = P_t σ_n² / (P_t + σ_n²)²  for ω_m < |ω| < π.  (14)

Specifically, for the Jakes correlation model, F(e^{jω}) has the following form:

F(e^{jω}) = 2 / (ω_m √(1 − (ω/ω_m)²))  for |ω| < ω_m.  (15)
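A quick numerical illustration of (14) and (15): because the additive noise contributes a constant floor P_t σ_n²/(P_t + σ_n²)² at every frequency, S is bounded away from zero, so the Paley-Wiener integral of log S over [−π, π] is finite even though the Jakes spectrum itself is band-limited. The parameter values below are assumed for illustration only.

```python
import numpy as np

# Illustrative parameters (assumed, not taken from the paper)
Pt, sigma_n2 = 1.0, 0.1
w_m = 0.2 * np.pi               # maximum Doppler frequency

def F_jakes(w):
    # Jakes Doppler spectrum, supported on |w| < w_m, eq. (15)
    return 2.0 / (w_m * np.sqrt(1.0 - (w / w_m) ** 2))

def S(w):
    # PSD of the Doppler process observed in noise, eq. (14)
    floor = Pt * sigma_n2 / (Pt + sigma_n2) ** 2   # noise floor, all frequencies
    out = np.full_like(w, floor)
    inband = np.abs(w) < w_m
    out[inband] += Pt ** 2 * F_jakes(w[inband]) / (Pt + sigma_n2) ** 2
    return out

# Paley-Wiener check: log S is integrable over [-pi, pi] because the
# noise floor keeps S strictly positive everywhere.
w = np.linspace(-np.pi, np.pi, 200_000)
integral = np.sum(np.log(S(w))) * (w[1] - w[0])
pw_holds = np.isfinite(integral)
```

Without the noise term the out-of-band PSD would be zero, log S would diverge to −∞ there, and the Paley-Wiener condition would fail; the noise floor is what makes the condition hold here.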
Q6. What is the slope of the rate gap growth for the constant Pout case?
The authors observe that at very high SNR the rate gap grows linearly in the constant P_out case, and that the slope increases when P_out is increased from 0.05 to 0.08.
Q7. What is the difference between the two?
In the above result, note that while the prediction error goes to zero, the rate at which the prediction error tends to zero does not matter.