Topic
Codebook
About: Codebook is a research topic. Over its lifetime, 8492 publications have been published within this topic, receiving 115995 citations.
Papers published on a yearly basis
Papers
TL;DR: In this paper, the authors quantitatively analyzed the performance of the channel-statistics-based codebook and showed that the number of feedback bits required to ensure a constant rate gap scales only linearly with the rank of the channel correlation matrix.
Abstract: The channel feedback overhead for massive multiple-input multiple-output systems with a large number of base station (BS) antennas is very high since the number of feedback bits of traditional codebooks scales linearly with the number of BS antennas. To reduce the feedback overhead, an effective codebook based on channel statistics has been designed, where the required number of feedback bits only scales linearly with the rank of the channel correlation matrix. However, this attractive conclusion was only proved under a particular channel assumption in the literature. To provide a rigorous theoretical proof under a general channel assumption, in this paper, we quantitatively analyze the performance of the channel-statistics-based codebook. Specifically, we first introduce the rate gap between the ideal case of perfect channel state information at the transmitter and the practical case of limited channel feedback, where we find that the rate gap depends on the quantization error of the codebook. Then, we derive an upper bound of the quantization error, based on which we prove that the required number of feedback bits to ensure a constant rate gap only scales linearly with the rank of the channel correlation matrix. Finally, numerical results are provided to verify this conclusion.
32 citations
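The scaling claim can be sketched in the style of classical random-vector-quantization (RVQ) rate-gap analysis, with the antenna count replaced by the rank r of the channel correlation matrix; the constants below are illustrative, not the paper's exact bound:

```latex
% Hedged sketch: an RVQ-style upper bound on the rate gap, with B
% feedback bits, transmit power P, and effective dimension r
% (the rank of the channel correlation matrix):
\Delta R \;\le\; \log_2\!\left(1 + P \cdot 2^{-\frac{B}{r-1}}\right)
% Holding \Delta R constant as P grows then requires approximately
B \;\approx\; (r-1)\,\log_2 P ,
% i.e. the feedback overhead scales linearly in r rather than in the
% (much larger) number of BS antennas.
```

This mirrors the well-known limited-feedback analysis for uncorrelated channels, where the antenna count plays the role that r plays here; the paper derives its own upper bound on the quantization error to make this rigorous.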
05 Dec 2007
TL;DR: In this article, a codebook for channel state information is generated by generating a random codebook, partitioning channel state information into a set of nearest neighbors for each codebook entry based on a distance metric, and updating the codebook by finding a centroid for each partition.
Abstract: Systems and methods are disclosed to generate a codebook for channel state information by generating a random codebook; partitioning channel state information into a set of nearest neighbors for each codebook entry based on a distance metric; and updating the codebook by finding a centroid for each partition.
32 citations
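The three steps in the abstract (random initialization, nearest-neighbor partitioning, centroid update) are essentially the generalized Lloyd algorithm. A minimal sketch in Python with NumPy, assuming Euclidean distance as the metric; function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def generate_codebook(samples, num_entries, num_iters=20, seed=0):
    """Lloyd-style codebook design: random initialization, then
    alternating nearest-neighbor partition and centroid update."""
    rng = np.random.default_rng(seed)
    # Step 1: random initial codebook, drawn from the training samples.
    codebook = samples[rng.choice(len(samples), num_entries, replace=False)]
    for _ in range(num_iters):
        # Step 2: partition samples into nearest-neighbor sets under
        # the Euclidean distance metric.
        dists = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Step 3: update each entry to the centroid of its partition
        # (entries with an empty partition are left unchanged).
        for k in range(num_entries):
            members = samples[assign == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook
```

With vector channel-state samples as input, the returned array is the trained codebook; in practice the distance metric would be matched to the quantization criterion (e.g. chordal distance for precoders) rather than plain Euclidean distance.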
01 Aug 2002
TL;DR: The existence of universal mixture codebooks is demonstrated, and it is shown that memoryless sources can be universally encoded with redundancy of approximately (d/2) log n bits, where d is the dimension of the simplex of probability distributions on the reproduction alphabet.
Abstract: We characterize the best achievable performance of lossy compression algorithms operating on arbitrary random sources, and with respect to general distortion measures. Direct and converse coding theorems are given for variable-rate codes operating at a fixed distortion level, emphasizing: (a) nonasymptotic results, (b) optimal or near-optimal redundancy bounds, and (c) results with probability one. This development is based in part on the observation that there is a precise correspondence between compression algorithms and probability measures on the reproduction alphabet. This is analogous to the Kraft inequality in lossless data compression. In the case of stationary ergodic sources our results reduce to the classical coding theorems. As an application of these general results, we examine the performance of codes based on mixture codebooks for discrete memoryless sources. A mixture codebook (or Bayesian codebook) is a random codebook generated from a mixture over some class of reproduction distributions. We demonstrate the existence of universal mixture codebooks, and show that it is possible to universally encode memoryless sources with redundancy of approximately (d/2) log n bits, where d is the dimension of the simplex of probability distributions on the reproduction alphabet.
32 citations
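A mixture (Bayesian) codebook as described in the abstract can be sampled in two stages: draw a reproduction distribution from a prior over the simplex, then draw the codeword symbols i.i.d. from it. A minimal sketch in Python with NumPy, assuming a Dirichlet prior over a finite reproduction alphabet (the choice of prior and the names are illustrative):

```python
import numpy as np

def mixture_codebook(alphabet_size, block_len, num_codewords, seed=0):
    """Sample a mixture codebook: each codeword is drawn from the
    mixture measure obtained by integrating i.i.d. codeword
    distributions against a Dirichlet prior on the simplex."""
    rng = np.random.default_rng(seed)
    codebook = np.empty((num_codewords, block_len), dtype=int)
    for i in range(num_codewords):
        # First draw a reproduction distribution q from the prior
        # (the simplex here has dimension d = alphabet_size - 1),
        # then draw block_len symbols i.i.d. from q.
        q = rng.dirichlet(np.ones(alphabet_size))
        codebook[i] = rng.choice(alphabet_size, size=block_len, p=q)
    return codebook
```

The simplex dimension d = alphabet_size - 1 is the same d that appears in the (d/2) log n redundancy term quoted in the TL;DR.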
TL;DR: This work reviews recent results showing that strongly correlated population codes can be explained by minimal models relying on low-order relations among cells, and discusses how such models enable mapping the semantic organization of the neural codebook and stimulus space, as well as decoding.
32 citations
10 Jun 2011
TL;DR: In this paper, a method for reporting uplink control information (UCI) on a user equipment (UE) is described, where a first precoding matrix indicator (PMI) corresponding to a multiple-user multiple-input and multiple-output (MU-MIMO) downlink transmission is generated using a first codebook set.
Abstract: A method for reporting uplink control information (UCI) on a user equipment (UE) is described. A first precoding matrix indicator (PMI) corresponding to a multiple-user multiple-input and multiple-output (MU-MIMO) downlink transmission is generated using a first codebook set. A second PMI corresponding to the MU-MIMO downlink transmission is generated using a second codebook set. The first PMI and the second PMI are sent to an eNode B in a channel state information (CSI) report.
32 citations
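The reporting flow in the abstract (select a PMI from each codebook set, bundle both into a CSI report) can be sketched as plain data handling; the field and function names below are illustrative and are not taken from the patent or from the 3GPP specifications:

```python
from dataclasses import dataclass

@dataclass
class CsiReport:
    """Illustrative container for the two-PMI CSI report described in
    the abstract: one PMI per codebook set, sent together to the eNode B."""
    first_pmi: int   # index into the first codebook set
    second_pmi: int  # index into the second codebook set

def select_pmi(metric_per_codeword):
    """Pick the codebook index maximizing some per-codeword quality
    metric (e.g. an expected-rate estimate); the actual metric is
    implementation-specific."""
    return max(range(len(metric_per_codeword)),
               key=lambda i: metric_per_codeword[i])
```

A UE implementation would run `select_pmi` once per codebook set against its channel measurements and transmit the resulting `CsiReport` as part of the UCI.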