Topic

Codebook

About: Codebook is a research topic. Over the lifetime, 8492 publications have been published within this topic receiving 115995 citations.


Papers
Journal Article (DOI)
TL;DR: This work proposes a low-complexity, near-optimal algorithm developed from a cross-entropy optimization framework; simulation results reveal that it achieves near-optimal performance at much lower complexity than the optimal ESA.
Abstract: Hybrid beamforming architecture, consisting of a low-dimensional baseband digital beamforming component and a high-dimensional analog beamforming component, has received considerable attention in the context of millimeter-wave massive multiple-input multiple-output systems, because it achieves an effective compromise between hardware complexity and system performance. To avoid accurate estimation of the channel, a codebook-based technique is widely used for the analog beamforming component, wherein a transmitter and receiver jointly select an analog precoder and analog combiner pair according to predesigned codebooks, without using a priori channel information. However, identifying the optimal analog precoder and analog combiner pair with the exhaustive search algorithm (ESA) incurs complexity that grows exponentially with the number of radio frequency chains and the resolution of the phase shifters, making the search intractable even for reasonable system parameters. To reduce the search complexity while maximizing the achievable rate, we propose a low-complexity, near-optimal algorithm developed from a cross-entropy optimization framework. Our simulation results reveal that our algorithm achieves near-optimal performance at a much lower complexity than does the optimal ESA.
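To make the search idea concrete, here is a minimal sketch of a cross-entropy codebook search. It assumes a hypothetical rate(f, w) function that scores a precoder/combiner index pair on the estimated effective channel; the sample counts, smoothing factor, and all names are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ce_codebook_search(rate, n_prec, n_comb,
                       n_samples=50, n_elite=10, iters=20, smooth=0.9):
    """Cross-entropy search over precoder/combiner codebook indices.

    rate(f, w) is a hypothetical scoring function (achievable rate) for
    precoder index f and combiner index w; it is an assumption here,
    not something defined by the paper.
    """
    # Independent categorical distributions over the two index sets.
    p_prec = np.full(n_prec, 1.0 / n_prec)
    p_comb = np.full(n_comb, 1.0 / n_comb)
    for _ in range(iters):
        fi = rng.choice(n_prec, size=n_samples, p=p_prec)  # sampled precoders
        wi = rng.choice(n_comb, size=n_samples, p=p_comb)  # sampled combiners
        scores = np.array([rate(f, w) for f, w in zip(fi, wi)])
        elite = np.argsort(scores)[-n_elite:]              # best-scoring pairs
        # Re-fit each distribution to the elite samples, with smoothing
        # so the search keeps exploring instead of collapsing too early.
        p_prec = smooth * np.bincount(fi[elite], minlength=n_prec) / n_elite \
                 + (1 - smooth) * p_prec
        p_comb = smooth * np.bincount(wi[elite], minlength=n_comb) / n_elite \
                 + (1 - smooth) * p_comb
    return int(np.argmax(p_prec)), int(np.argmax(p_comb))
```

Instead of scoring every one of the n_prec * n_comb pairs as the ESA would, each iteration scores only n_samples pairs, which is where the complexity saving comes from.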

42 citations

01 Jan 2008
TL;DR: The essential advantage of the proposed VQ approach over state-of-the-art similarity measures is that the proposed audio similarity metric forms a normed vector space, allowing for more powerful search strategies, e.g. KD-trees or locality-sensitive hashing, and making content-based audio similarity available for even larger music archives.
Abstract: Modeling audio signals by the long-term statistical distribution of their local spectral features - often denoted as the bag-of-frames (BOF) approach - is a popular and powerful method to describe audio content. While modeling the distribution of local spectral features by semi-parametric distributions (e.g. Gaussian Mixture Models) has been studied intensively, we investigate a non-parametric variant based on vector quantization (VQ) in this paper. The essential advantage of the proposed VQ approach over state-of-the-art similarity measures is that the proposed audio similarity metric forms a normed vector space. This allows for more powerful search strategies, e.g. KD-trees or locality-sensitive hashing (LSH), making content-based audio similarity available for even larger music archives. Standard VQ approaches are known to be computationally very expensive; to counter this problem, we propose a multi-level clustering architecture. Additionally, we show that the multi-level vector quantization approach (ML-VQ), in contrast to standard VQ approaches, is comparable to state-of-the-art frame-level similarity measures in terms of quality. Another important finding w.r.t. the ML-VQ approach is that, in contrast to GMM models of songs, our approach does not seem to suffer from the recently discovered hub problem.
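A minimal sketch of the multi-level clustering idea, here with two levels built from scikit-learn's KMeans; the level sizes k1 and k2 and the function names are illustrative choices, not the paper's parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_level_codebook(frames, k1=32, k2=32):
    """Cluster frames coarsely, then refine each cell with its own k-means,
    so a frame is quantized against roughly k1 + k2 centers instead of
    a single flat codebook of k1 * k2 centers."""
    coarse = KMeans(n_clusters=k1, n_init=4, random_state=0).fit(frames)
    sub_codebooks = []
    for c in range(k1):
        cell = frames[coarse.labels_ == c]
        k = min(k2, len(cell))  # a cell may hold fewer than k2 frames
        fine = KMeans(n_clusters=k, n_init=4, random_state=0).fit(cell)
        sub_codebooks.append(fine.cluster_centers_)
    return coarse.cluster_centers_, sub_codebooks

def quantize(frame, coarse_centers, sub_codebooks):
    """Return the (coarse, fine) code index pair for one spectral frame."""
    c = int(np.argmin(np.linalg.norm(coarse_centers - frame, axis=1)))
    j = int(np.argmin(np.linalg.norm(sub_codebooks[c] - frame, axis=1)))
    return c, j
```

A song would then be summarized as a histogram over these code indices, and distances between such histograms (e.g. L1) live in a normed vector space, which is what makes KD-trees or LSH applicable.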

42 citations

Patent
David E. Penna
18 Sep 2002
TL;DR: In this patent, variable-length decoding of DCT coefficients in MPEG video data is performed using a standard processor (400) and a small look-up table (LUT 530); the processor performs an integer-to-floating-point conversion on a portion of the received bitstream (BS).
Abstract: Variable-length decoding of DCT coefficients in MPEG video data is performed using a standard processor (400) and a small look-up table (LUT 530). The processor performs (520) an integer-to-floating-point conversion on a portion of the received bitstream (BS). By this step, lengthy codewords with many leading zeros, which are common in the codebook, are represented in compressed form by the exponent and mantissa fields (EXP, MAN) of the floating-point result (FP). The relevant bits are extracted and used as an index (IX) to address the LUT. This avoids cumbersome bit-oriented logic, while also avoiding the very large LUT that would otherwise be required to represent the same codebook. The entire LUT may thus reside in cache memory (410). In a VLIW processor implementation, decoding of one token is pipelined with the inverse scan and inverse quantisation steps of the preceding token(s).
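The exponent/mantissa trick can be sketched in a few lines; the 32-bit window and the 4-bit mantissa slice below are illustrative assumptions, not the patent's exact field widths.

```python
import struct

def bits_to_lut_index(window, mantissa_bits=4):
    """Map a 32-bit bitstream window (as an unsigned int) to a compact
    LUT index.

    Converting the integer to IEEE-754 single precision normalizes it,
    so the exponent field records the position of the leading 1 bit
    (i.e. the length of the leading-zero run) and the mantissa carries
    the bits that follow it.
    """
    if window == 0:
        return 0  # all-zero window: no codeword start visible yet
    fp_bits = struct.unpack('>I', struct.pack('>f', float(window)))[0]
    exp = (fp_bits >> 23) & 0xFF                       # biased exponent
    man = (fp_bits >> (23 - mantissa_bits)) & ((1 << mantissa_bits) - 1)
    return (exp << mantissa_bits) | man                # index into the LUT
```

In this illustrative layout the table needs only 2^12 = 4096 entries, even though the codewords it distinguishes can be dozens of bits long; that compression is what lets the whole LUT stay resident in cache.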

42 citations

Proceedings Article (DOI)
19 Mar 2008
TL;DR: Simulation results show that, compared to previously known beamforming schemes, this technique significantly improves BER performance in spatio-temporally correlated channels.
Abstract: We propose a new scheme for limited feedback in MIMO systems. We consider transmit beamforming and receiver maximal ratio combining as a base for our work, and propose a novel beamforming codebook to exploit the inherent correlation of the channel. This novel beamforming codebook, unlike the conventional beamforming codebooks, adaptively changes with the channel matrix. Moreover, the adaptive approach is independent of the channel model and can be applied to any general MIMO channel with temporal and spatial correlations. Simulation results show that compared to previously known beamforming schemes, this technique significantly improves the BER performance in spatio-temporally correlated channels.
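One common way such channel-adaptive codebooks are built in the literature is by coloring a fixed base codebook with the channel's correlation statistics. The sketch below shows that idea under stated assumptions; it is an illustrative adaptation rule, not necessarily the scheme proposed in this paper.

```python
import numpy as np

def adapt_codebook(base_codebook, R):
    """Skew a fixed unit-norm codebook toward the channel's dominant
    directions by coloring it with the correlation square root.

    base_codebook: (N, Nt) array whose rows are beamforming vectors
                   (e.g. DFT beams) -- an assumed starting point.
    R:             (Nt, Nt) Hermitian transmit correlation matrix,
                   assumed estimated from recent channel realizations.
    """
    w, V = np.linalg.eigh(R)
    R_half = V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.conj().T
    adapted = base_codebook @ R_half.T        # color each codevector
    return adapted / np.linalg.norm(adapted, axis=1, keepdims=True)
```

Because R changes slowly relative to the fast fading, the codebook can be re-derived at both ends from shared statistics, keeping the feedback limited to the selected index.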

42 citations

Proceedings Article (DOI)
23 Jun 2008
TL;DR: This work proposes a novel and efficient algorithm for jointly aligning a large number of samples without assuming that pixels are independent, and applies it to learn sparse bases for natural images that discount domain deformations, significantly decreasing codebook complexity while maintaining the same generative power.
Abstract: Joint data alignment is often regarded as a data simplification process. This idea is powerful and general, but raises two delicate issues. First, one must make sure that the useful information about the data is preserved by the alignment process. This is especially important when data are affected by non-invertible transformations, such as those originating from continuous domain deformations in a discrete image lattice. We propose a formulation that explicitly avoids this pitfall. Second, one must choose an appropriate measure of data complexity. We show that standard concepts such as entropy might not be optimal for the task, and we propose alternative measures that reflect the regularity of the codebook space. We also propose a novel and efficient algorithm that allows joint alignment of a large number of samples (tens of thousands of image patches), and does not rely on the assumption that pixels are independent. This is done for the case where the data are postulated to live in an affine subspace of the embedding space of the raw data. We apply our scheme to learn sparse bases for natural images that discount domain deformations and hence significantly decrease the complexity of codebooks while maintaining the same generative power.
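A minimal sketch of joint alignment as complexity minimization: each patch greedily picks the transformation that lowers a complexity measure of the whole aligned stack. The nuclear norm is used here as a proxy matching the affine-subspace (low-rank) assumption, and cyclic integer shifts stand in for the transformation model; the paper's actual measures and deformation model differ, so this shows only the general shape of such an algorithm.

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values: a low-rank (affine-subspace) complexity proxy."""
    return np.linalg.svd(X, compute_uv=False).sum()

def joint_align(patches, shifts=(-1, 0, 1), iters=3):
    """Greedy coordinate descent: patches is an (n, h, w) stack; each patch
    in turn tries small cyclic shifts and keeps the one that minimizes the
    complexity of the full stack. The zero shift is always a candidate, so
    the objective never increases."""
    n, h, w = patches.shape
    aligned = patches.astype(float)
    for _ in range(iters):
        for i in range(n):
            base = aligned[i].copy()
            best_cand, best_cost = None, np.inf
            for dy in shifts:
                for dx in shifts:
                    cand = np.roll(np.roll(base, dy, axis=0), dx, axis=1)
                    aligned[i] = cand
                    cost = nuclear_norm(aligned.reshape(n, -1))
                    if cost < best_cost:
                        best_cand, best_cost = cand, cost
            aligned[i] = best_cand
    return aligned
```

This brute-force variant recomputes an SVD per candidate, so it only scales to small stacks; the paper's contribution is precisely an efficient algorithm that avoids such costs for tens of thousands of patches.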

42 citations


Network Information
Related Topics (5)

Topic                      Papers    Citations    Related
Feature extraction         111.8K    2.1M         88%
Wireless network           122.5K    2.1M         88%
Network packet             159.7K    2.2M         87%
Wireless                   133.4K    1.9M         87%
Wireless sensor network    142K      2.4M         86%
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    217
2022    495
2021    237
2020    383
2019    432
2018    364