scispace - formally typeset
Topic

Codebook

About: Codebook is a research topic. Over its lifetime, 8,492 publications have been published on this topic, receiving 115,995 citations.


Papers
Proceedings ArticleDOI
11 May 2003
TL;DR: Presents a codebook design method for quantized versions of maximum ratio transmission, equal gain transmission, and generalized selection diversity with maximum ratio combining at the receiver; systems using the beamforming codebooks are shown to achieve a diversity order equal to the product of the number of transmit and the number of receive antennas.
Abstract: Multiple-input multiple-output (MIMO) wireless systems provide capacity much larger than that of traditional single-input single-output (SISO) wireless systems. Beamforming is a low-complexity technique that increases the receive signal-to-noise ratio (SNR); however, it requires channel knowledge. Since channel knowledge at the transmitter is difficult to obtain in practice, we propose a technique where the receiver designs the beamforming vector and conveys it to the transmitter by sending the label of a vector in a finite set, or codebook, of beamforming vectors. A codebook design method for quantized versions of maximum ratio transmission, equal gain transmission, and generalized selection diversity with maximum ratio combining at the receiver is presented. The design criterion exploits the quantization problem's relationship with Grassmannian line packing. Systems using the beamforming codebooks are shown to have a diversity order equal to the product of the number of transmit and the number of receive antennas. Monte Carlo simulations compare the performance of systems using the new codebooks with that of previously proposed quantized and unquantized schemes.

439 citations
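The limited-feedback scheme described in the abstract reduces, at the receiver, to an argmax search over a shared codebook, after which only the winning index is fed back. A minimal sketch in Python, using a random unit-norm codebook as an illustrative stand-in for a true Grassmannian line packing (all names and sizes here are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_codebook(n_codewords, n_tx):
    """Stand-in for a Grassmannian line packing: random unit-norm
    beamforming vectors. A real design would instead maximize the
    minimum distance between the lines spanned by the codewords."""
    C = (rng.standard_normal((n_codewords, n_tx))
         + 1j * rng.standard_normal((n_codewords, n_tx)))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def select_beamformer(H, codebook):
    """Receiver side: pick the codeword maximizing the effective
    channel gain ||H w||; only its index (log2 N bits) is fed back."""
    gains = np.linalg.norm(H @ codebook.conj().T, axis=0)
    return int(np.argmax(gains))

# 2 receive antennas, 4 transmit antennas, 4-bit feedback (16 codewords)
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
cb = random_codebook(16, 4)
idx = select_beamformer(H, cb)
```

Because both ends store the same codebook, the transmitter recovers the chosen beamforming vector from the index alone, which is what makes the feedback cost a few bits rather than a full complex vector.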

Proceedings ArticleDOI
24 Oct 2004
TL;DR: A new fast algorithm for background modeling and subtraction that can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos.
Abstract: We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.

412 citations
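The per-pixel quantization the abstract describes can be sketched for a single grayscale pixel as follows. The interval-merging rule and the `eps` threshold are simplifications of the paper's codeword matching, which also models color distortion and brightness bounds:

```python
def build_codebook(samples, eps=10.0):
    """Quantize a pixel's training intensities into a small codebook:
    each codeword stores the [min, max] range of values it absorbed.
    A multimodal background (e.g. swaying leaves) yields several codewords."""
    codewords = []  # list of [lo, hi] intensity ranges
    for v in samples:
        for cw in codewords:
            if cw[0] - eps <= v <= cw[1] + eps:
                cw[0] = min(cw[0], v)   # widen the matching codeword
                cw[1] = max(cw[1], v)
                break
        else:
            codewords.append([v, v])    # no match: start a new codeword
    return codewords

def is_background(value, codewords, eps=10.0):
    """A pixel is background if it falls near any learned codeword."""
    return any(cw[0] - eps <= value <= cw[1] + eps for cw in codewords)

train = [100, 102, 98, 200, 201]   # e.g. road pixels plus a periodic highlight
model = build_codebook(train)      # -> two codewords: [98, 102] and [200, 201]
```

The memory cost is a handful of ranges per pixel regardless of sequence length, which is the compression property the abstract refers to.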

Journal ArticleDOI
TL;DR: This exploratory study was conducted to demonstrate a rigorous approach to reaching saturation through two-stage establishment of a codebook used for thematic analysis through inductive analysis and refinement of the coding system.
Abstract: Reaching a saturation point in thematic analysis is important to validity in qualitative studies, yet the process of achieving saturation is often left ambiguous. This lack of information creates uncertainty about when to close recruitment. This exploratory study was conducted to demonstrate a rigorous approach to reaching saturation through two-stage establishment of a codebook used for thematic analysis. Codebook development involved inductive analysis of six interviews, followed by refinement of the coding system through its application to an additional 33 interviews. These findings are discussed in relation to plausible patterns in code occurrence rates and suggested sample sizes for thematic analysis.

382 citations

Journal ArticleDOI
TL;DR: An efficient method is proposed to obtain a good initial codebook that can accelerate the convergence of the generalized Lloyd algorithm and achieve a better local minimum as well.
Abstract: The generalized Lloyd algorithm plays an important role in the design of vector quantizers (VQ) and in feature clustering for pattern recognition. In the VQ context, the algorithm iteratively improves a codebook and converges to a local minimum of the average distortion function. We propose an efficient method for obtaining a good initial codebook that both accelerates the convergence of the generalized Lloyd algorithm and reaches a better local minimum.

374 citations
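A compact sketch of the generalized Lloyd iteration, paired with max-min ("farthest point") seeding as one plausible way to obtain a well-spread initial codebook. The seeding shown here is illustrative only; the paper's actual initialization method differs:

```python
import numpy as np

def init_codebook(data, k, rng):
    """Max-min seeding: start from a random training vector, then
    repeatedly add the vector farthest from all chosen codewords.
    (Illustrative; not the paper's initialization method.)"""
    centers = [data[rng.integers(len(data))]]
    for _ in range(k - 1):
        d2 = np.min([((data - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(data[int(np.argmax(d2))])
    return np.array(centers)

def lloyd(data, codebook, iters=20):
    """Generalized Lloyd algorithm: assign each training vector to its
    nearest codeword, then replace each codeword with the centroid of
    its cell. Average distortion is non-increasing at every step."""
    codebook = codebook.copy()
    for _ in range(iters):
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        for j in range(len(codebook)):
            members = data[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),   # two well-separated clusters
                  rng.normal(5.0, 0.1, (50, 2))])
codebook = lloyd(data, init_codebook(data, 2, rng))
```

With spread-out seeds each cluster starts with its own codeword, so Lloyd converges in a few iterations instead of first having to pull two nearby seeds apart, which is the kind of speedup a good initial codebook buys.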

Journal ArticleDOI
TL;DR: A novel general-purpose BIQA method based on high order statistics aggregation (HOSA) that requires only a small codebook; extensively evaluated on ten image databases with both simulated and realistic image distortions, it shows highly competitive performance compared to the state-of-the-art BIQA methods.
Abstract: Blind image quality assessment (BIQA) research aims to develop a perceptual model that evaluates the quality of distorted images automatically and accurately without access to the non-distorted reference images. State-of-the-art general-purpose BIQA methods can be classified into two categories according to the types of features used. The first uses handcrafted features that rely on the statistical regularities of natural images; these, however, are not suitable for images containing text and artificial graphics. The second uses learning-based features, which invariably require a large codebook or supervised codebook-updating procedures to obtain satisfactory performance; this is time-consuming and impractical. In this paper, we propose a novel general-purpose BIQA method based on high order statistics aggregation (HOSA), requiring only a small codebook. HOSA consists of three steps. First, local normalized image patches are extracted as local features through a regular grid, and a codebook containing 100 codewords is constructed by K-means clustering. In addition to the mean of each cluster, the diagonal covariance and coskewness (i.e., dimension-wise variance and skewness) of clusters are also calculated. Second, each local feature is softly assigned to several nearest clusters, and the differences of high order statistics (mean, variance and skewness) between local features and corresponding clusters are softly aggregated to build the global quality-aware image representation. Finally, support vector regression is adopted to learn the mapping between perceptual features and subjective opinion scores. The proposed method has been extensively evaluated on ten image databases with both simulated and realistic image distortions, and shows highly competitive performance compared to the state-of-the-art BIQA methods.

371 citations
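The soft-aggregation step in the abstract can be sketched as follows. This simplified version pools only mean residuals (the full method additionally aggregates variance and skewness differences), and `r` and `beta` are illustrative parameters, not values from the paper:

```python
import numpy as np

def hosa_sketch(feats, centers, r=2, beta=0.05):
    """Simplified HOSA-style aggregation: each local feature is softly
    assigned to its r nearest codewords, and the weighted residuals to
    each codeword's mean are pooled into one global descriptor.
    (Mean residuals only; variance/skewness terms are omitted.)"""
    n, dim = feats.shape
    k = len(centers)
    d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (n, k)
    agg = np.zeros((k, dim))
    for i in range(n):
        nearest = np.argsort(d2[i])[:r]          # r nearest codewords
        w = np.exp(-beta * d2[i, nearest])       # soft assignment weights
        w /= w.sum()
        for j, wj in zip(nearest, w):
            agg[j] += wj * (feats[i] - centers[j])   # soft residual pooling
    out = agg.ravel()
    return out / (np.linalg.norm(out) + 1e-12)   # L2-normalized descriptor

rng = np.random.default_rng(2)
feats = rng.standard_normal((30, 4))    # toy local features
centers = rng.standard_normal((5, 4))   # toy codebook of 5 codewords
descriptor = hosa_sketch(feats, centers)
```

Because the descriptor length is (codebook size x feature dimension x number of pooled statistics), a 100-word codebook stays compact, which is the practical advantage the abstract claims over large-codebook approaches.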


Network Information
Related Topics (5)
- Feature extraction: 111.8K papers, 2.1M citations (88% related)
- Wireless network: 122.5K papers, 2.1M citations (88% related)
- Network packet: 159.7K papers, 2.2M citations (87% related)
- Wireless: 133.4K papers, 1.9M citations (87% related)
- Wireless sensor network: 142K papers, 2.4M citations (86% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023     217
2022     495
2021     237
2020     383
2019     432
2018     364