scispace - formally typeset
Topic

Codebook

About: Codebook is a research topic. Over its lifetime, 8,492 publications have been published on this topic, receiving 115,995 citations.


Papers
Patent
13 Dec 2010
TL;DR: In this article, the mobile calculates a covariance matrix at time t (R) as a function of a received downlink signal; R is then normalized and quantized using multiple codebook entries plus at least one constant.
Abstract: A method and apparatus for providing channel feedback is provided herein. During operation a covariance matrix at time t (R) is calculated by the mobile as a function of a received downlink signal. In order to reduce overhead, R is normalized and quantized by the mobile using multiple codebook entries plus at least one constant for quantization. The mobile then transmits the normalized and quantized covariance matrix back to the base station as bit values indicating the selected entries from the codebook plus bit values corresponding to the at least one constant. The base unit then uses the covariance matrix estimate to determine appropriate channel beamforming weights, and instructs transmit beamforming circuitry to use the appropriate weights.
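The patent abstract does not spell out the quantizer itself; a minimal sketch of the feedback step it describes, under the assumptions that normalization means dividing by the trace and that the codebook entry is chosen by nearest Frobenius distance (the `quantize_covariance` helper and both choices are hypothetical), might look like:

```python
import numpy as np

def quantize_covariance(R, codebook):
    """Hypothetical sketch: normalize the covariance R by its trace,
    pick the nearest codebook entry in Frobenius distance, and return
    the entry index plus the trace as the fed-back 'constant'."""
    trace = float(np.trace(R).real)
    Rn = R / trace  # normalized covariance, comparable to codebook entries
    idx = int(np.argmin([np.linalg.norm(Rn - C) for C in codebook]))
    return idx, trace
```

The mobile would feed back only the bits for `idx` and a quantized `trace`, which is far cheaper than sending R itself; the base station reconstructs an estimate as `trace * codebook[idx]` for beamforming.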

90 citations

Proceedings ArticleDOI
11 Apr 1988
TL;DR: The application of simulated annealing to the design of a codebook for a vector quantizer (VQ) used to code images is studied, with the mean-squared error (MSE) as the distortion measure during design.
Abstract: The application of simulated annealing to the design of a codebook for a vector quantizer (VQ) that is used to code images is studied. The traditional method for VQ codebook design is the generalized Lloyd algorithm (GLA), an iterative optimization procedure in which an initial codebook is continually refined so that each iteration reduces the distortion involved in coding a given training set. However, this algorithm easily gets trapped in local minima of the distortion, resulting in a suboptimal codebook. Simulated annealing is a procedure that uses randomness in a search algorithm and tends to skirt relatively poor local minima in favor of better ones. The mean-squared error (MSE) is used as the distortion measure during the design, and coded images are evaluated both subjectively and in terms of the peak signal-to-noise ratio.
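The GLA baseline described above alternates a nearest-codeword assignment step with a centroid update step. A minimal plain-Python sketch (the `gla` helper name and the fixed iteration count are illustrative, assuming MSE distortion) might look like:

```python
def gla(training, codebook, iters=20):
    """Generalized Lloyd algorithm sketch: refine an initial codebook so
    that each iteration reduces MSE distortion on the training set."""
    for _ in range(iters):
        # Nearest-neighbor step: assign each training vector to the
        # codeword with minimum squared error.
        cells = [[] for _ in codebook]
        for x in training:
            j = min(range(len(codebook)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(x, codebook[i])))
        # Centroid step happens after all assignments.
            cells[j].append(x)
        # Move each codeword to the mean of its assigned cell.
        for j, vecs in enumerate(cells):
            if vecs:
                codebook[j] = [sum(c) / len(vecs) for c in zip(*vecs)]
    return codebook
```

The local-minimum problem the paper targets is visible here: the update only ever moves codewords downhill. A simulated-annealing variant would occasionally accept random codeword perturbations with a temperature-dependent probability, allowing escapes from poor minima.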

90 citations

Posted Content
TL;DR: A new algorithm, called sequential OMP, is presented that illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high SNR performance and provides insight into the roles of power control and multiuser detection on random-access signalling.
Abstract: This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously to a single receiver over m degrees of freedom. Each user transmits with probability lambda, where typically lambda n < m << n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d. Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly better performance than single-user detection. These methods do achieve some near-far resistance but, at high signal-to-noise ratios (SNRs), may achieve capacities far below optimal maximum likelihood detection. We then present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high-SNR performance. Sequential OMP is analogous to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight into the roles of power control and multiuser detection in random-access signalling.
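To make the OMP-as-multiuser-detection equivalence concrete, here is a sketch of plain greedy OMP (not the paper's sequential OMP variant; the `omp_detect` helper is hypothetical, assuming an i.i.d. Gaussian codebook `A` with unit-norm columns and a known number of active users `k`):

```python
import numpy as np

def omp_detect(A, y, k):
    """OMP as a multiuser detector: each column of A is one user's
    codeword; greedily pick the codeword most correlated with the
    residual, then re-fit the selected set by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Matching step: user whose codeword best explains the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonalization step: least-squares fit on the support,
        # then subtract the explained part of y.
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x
    return sorted(support)
```

The returned support is the detected set of transmitting users. The near-far issue the abstract mentions shows up in the matching step: a strong user's codeword can mask a weak user's correlation peak, which is what power ordering in sequential OMP exploits.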

90 citations

Journal ArticleDOI
Hsueh-Ming Hang1, B.G. Haskell1
TL;DR: Interpolative vector quantization has been devised to alleviate the visible block structure of coded images and the codebook sensitivity problems produced by a simple vector quantizer.
Abstract: Interpolative vector quantization has been devised to alleviate the visible block structure of coded images and the codebook sensitivity problems produced by a simple vector quantizer. In addition, the problem of selecting color components for color picture vector quantization is discussed. Computer simulations demonstrate the success of this coding technique for color image compression at approximately 0.3 b/pel. Some background information on vector quantization is provided.

90 citations

Journal ArticleDOI
TL;DR: This paper uses a multi-armed bandit framework to develop online learning algorithms for beam pair selection and refinement; the selection uses an upper confidence bound with a newly proposed risk-aware feature, while the refinement uses a modified optimistic optimization algorithm.
Abstract: Accurate beam alignment is essential for beam-based millimeter wave communications. Conventional beam sweeping solutions often have large overhead, which is unacceptable for mobile applications such as vehicle-to-everything (V2X). Learning-based solutions that leverage sensor data (e.g., position) to identify good beam directions are one approach to reduce this overhead. Most existing solutions, though, use supervised learning, where the training data are collected beforehand. In this paper, we use a multi-armed bandit framework to develop online learning algorithms for beam pair selection and refinement. The beam pair selection algorithm learns coarse beam directions in some predefined beam codebook, e.g., in discrete angles separated by the 3 dB beamwidths. The beam refinement fine-tunes the identified directions to match the peak of the power angular spectrum at that position. The beam pair selection uses the upper confidence bound with a newly proposed risk-aware feature, while the beam refinement uses a modified optimistic optimization algorithm. The proposed algorithms learn to recommend good beam pairs quickly. When using 16x16 arrays at both transmitter and receiver, they achieve, on average, a 1 dB gain over exhaustive search (over 271x271 beam pairs) on the unrefined codebook within 100 time steps, with a training budget of only 30 beam pairs.
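The bandit framing above treats each candidate beam pair as an arm whose reward is the measured received power. A minimal sketch of plain UCB1 selection (without the paper's risk-aware feature or refinement stage; the `ucb_beam_select` helper and its parameters are illustrative) might look like:

```python
import math

def ucb_beam_select(reward_fn, n_beams, steps, c=2.0):
    """UCB1 over a beam codebook: each arm is a beam (pair) index and
    reward_fn(i) returns the measured power for trying beam i."""
    counts = [0] * n_beams
    means = [0.0] * n_beams
    for t in range(1, steps + 1):
        if t <= n_beams:
            arm = t - 1  # initialization: try every beam once
        else:
            # Pick the beam maximizing mean reward plus an
            # exploration bonus that shrinks as the beam is re-tried.
            arm = max(range(n_beams),
                      key=lambda i: means[i]
                      + math.sqrt(c * math.log(t) / counts[i]))
        r = reward_fn(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean
    return max(range(n_beams), key=lambda i: means[i])
```

This captures the online, training-free property the paper exploits: measurements are spent mostly on promising beams, rather than sweeping the whole codebook each time.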

90 citations


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (88% related)
- Wireless network: 122.5K papers, 2.1M citations (88% related)
- Network packet: 159.7K papers, 2.2M citations (87% related)
- Wireless: 133.4K papers, 1.9M citations (87% related)
- Wireless sensor network: 142K papers, 2.4M citations (86% related)
Performance
Metrics
No. of papers in the topic in previous years

Year  Papers
2023  217
2022  495
2021  237
2020  383
2019  432
2018  364