scispace - formally typeset
Topic

Codebook

About: Codebook is a research topic. Over the lifetime, 8,492 publications have been published within this topic, receiving 115,995 citations.


Papers
Journal ArticleDOI
TL;DR: Pixel-based classification refines the results of the block-based background subtraction, further classifying pixels as foreground, shadows, or highlights, and the scheme provides high precision and an efficient processing speed that meet the requirements of real-time moving object detection.
Abstract: Moving object detection is an important and fundamental step for intelligent video surveillance systems because it provides a focus of attention for post-processing. A multilayer codebook-based background subtraction (MCBS) model is proposed for video sequences to detect moving objects. Combining the multilayer block-based strategy and adaptive feature extraction from blocks of various sizes, the proposed method can remove most of the nonstationary (dynamic) background and significantly increase the processing efficiency. Moreover, pixel-based classification is adopted for refining the results from the block-based background subtraction, which can further classify pixels as foreground, shadows, and highlights. As a result, the proposed scheme provides high precision and an efficient processing speed that meet the requirements of real-time moving object detection.
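The pixel-level refinement described above can be illustrated with a minimal, single-layer grayscale sketch: each pixel keeps a small codebook of learned background intensities, and an incoming value is labeled background, shadow, highlight, or foreground by comparing it against those codewords. The thresholds and the function below are illustrative assumptions, not the paper's multilayer MCBS model.

```python
def classify_pixel(value, codebook, tol=10, shadow=0.6, highlight=1.3):
    """Classify one pixel against its per-pixel background codebook.

    codebook: learned background intensities for this pixel (hypothetical
    single-layer model; the paper's MCBS uses multilayer blocks as well).
    Returns 'background', 'shadow', 'highlight', or 'foreground'.
    """
    # Exact background match first, checked against every codeword.
    for bg in codebook:
        if abs(value - bg) <= tol:
            return "background"
    # Otherwise look for a darkened/brightened version of some codeword:
    # shadows keep the background's structure but lower its brightness,
    # highlights raise it.
    for bg in codebook:
        ratio = value / bg if bg else float("inf")
        if shadow <= ratio < 1.0:
            return "shadow"
        if 1.0 < ratio <= highlight:
            return "highlight"
    return "foreground"
```

For example, with a learned background intensity of 100, a reading of 70 falls in the shadow band while 200 is labeled foreground.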

99 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new algorithm for both vector quantizer design and clustering analysis as an alternative to the conventional K-means algorithm, which converges to a better locally optimal codebook with an accelerated convergence speed.
Abstract: The K-means algorithm is widely used in vector quantizer (VQ) design and clustering analysis. In the VQ context, this algorithm iteratively updates an initial codebook and converges to a locally optimal codebook under certain conditions. It iteratively satisfies each of the two necessary conditions for an optimal quantizer: the nearest-neighbor condition for the partition and the centroid condition for the codevectors. In this letter, we propose a new algorithm for both vector quantizer design and clustering analysis as an alternative to the conventional K-means algorithm. The algorithm is almost the same as the K-means algorithm except for a modification at the codebook-updating step. It does not satisfy the centroid condition at every iteration, but satisfies it asymptotically as the number of iterations increases. Experimental results show that the algorithm converges to a better locally optimal codebook with an accelerated convergence speed.
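The two necessary conditions the abstract names can be seen in the conventional K-means baseline the letter compares against: alternate the nearest-neighbor partition with the centroid update. The sketch below is that baseline only (with a simple first-k initialization assumed for clarity); the letter's modified, asymptotic update rule is not reproduced here.

```python
import numpy as np

def kmeans_codebook(data, k, iters=50):
    """Conventional K-means VQ design.

    Alternates the two optimality conditions: the nearest-neighbor
    condition (partition) and the centroid condition (codevectors).
    Initialization from the first k vectors is an assumption made
    for determinism, not part of the letter's method.
    """
    codebook = data[:k].astype(float).copy()
    for _ in range(iters):
        # Nearest-neighbor condition: assign each vector to its
        # closest codeword.
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Centroid condition: each codeword becomes the mean of its cell.
        for j in range(k):
            cell = data[assign == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook, assign
```

The letter's algorithm keeps the assignment step but replaces the exact centroid update with one that only satisfies the centroid condition in the limit of many iterations.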

98 citations

Proceedings ArticleDOI
08 Mar 1994
TL;DR: Use of the various compensation algorithms in consort produces a reduction of error rates for SPHINX-II by as much as 40 percent relative to the rate achieved with cepstral mean normalization alone, in both development test sets and in the context of the 1993 ARPA CSR evaluations.
Abstract: This paper describes a series of cepstral-based compensation procedures that render the SPHINX-II system more robust with respect to acoustical environment. The first algorithm, phone-dependent cepstral compensation, is similar in concept to the previously-described MFCDCN method, except that cepstral compensation vectors are selected according to the current phonetic hypothesis, rather than on the basis of SNR or VQ codeword identity. We also describe two procedures to accomplish adaptation of the VQ codebook for new environments, as well as the use of reduced-bandwidth frequency analysis to process telephone-bandwidth speech. Use of the various compensation algorithms in consort produces a reduction of error rates for SPHINX-II by as much as 40 percent relative to the rate achieved with cepstral mean normalization alone, in both development test sets and in the context of the 1993 ARPA CSR evaluations.
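The baseline all of these compensation procedures are measured against is cepstral mean normalization: subtracting the per-utterance mean from every cepstral frame to cancel stationary channel effects. A minimal sketch of that baseline follows; the phone-dependent compensation itself (adding hypothesis-selected correction vectors) is not shown.

```python
import numpy as np

def cepstral_mean_normalization(cepstra):
    """Subtract the per-utterance mean from each cepstral frame.

    cepstra: array of shape (frames, coefficients). Removing the mean
    cancels a stationary convolutional channel, since it appears as an
    additive constant in the cepstral domain.
    """
    cepstra = np.asarray(cepstra, dtype=float)
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```

After normalization, each cepstral coefficient averages to zero over the utterance, which is what makes the 40 percent relative improvement over this baseline a meaningful comparison point.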

98 citations

Patent
Yang Gao1, Huan-Yu Su1
18 Sep 1998

98 citations

Journal ArticleDOI
TL;DR: This paper proposes a heuristic approach to design a hierarchical codebook exploiting beam widening with the multi-RF-chain sub-array (BMW-MS) technique and proposes a metric, termed generalized detection probability (GDP), to evaluate the quality of an arbitrary codeword.
Abstract: In this paper, we study hierarchical codebook design for channel estimation in millimeter-wave (mmWave) communications with a hybrid precoding structure. Due to the limited saturation power of the mmWave power amplifier, we consider the per-antenna power constraint (PAPC). We first propose a metric, termed generalized detection probability (GDP), to evaluate the quality of an arbitrary codeword. This metric not only enables an optimization approach for mmWave codebook design, but also can be used to compare the performance of two different codewords/codebooks. To the best of our knowledge, GDP is the first such metric, particularly for mmWave codebook design. We then propose a heuristic approach to design a hierarchical codebook exploiting beam widening with the multi-RF-chain sub-array (BMW-MS) technique. To obtain crucial parameters of BMW-MS, we provide two solutions, namely, a low-complexity search (LCS) solution to optimize the GDP metric and a closed-form (CF) solution to pursue a flat beam pattern. Performance comparisons show that BMW-MS/LCS and BMW-MS/CF perform very similarly, and both outperform the existing alternatives under the PAPC.
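What a codeword "shapes" here is its beam pattern: the array gain it produces in each direction, which the flat-beam (CF) solution tries to keep uniform over a codeword's angular range. The sketch below computes that pattern for a uniform linear array with half-wavelength spacing; it is a generic illustration under those assumptions and does not reproduce the paper's GDP metric or the BMW-MS construction.

```python
import numpy as np

def array_gain(w, n_angles=181):
    """Beam pattern |a(theta)^H w| of codeword w on a ULA.

    Assumes half-wavelength element spacing, so the steering vector is
    a(theta) = [exp(j*pi*theta*k)] for k = 0..n-1, with theta the
    cosine of the physical angle, swept over [-1, 1].
    """
    n = len(w)
    angles = np.linspace(-1.0, 1.0, n_angles)            # cosine-angle domain
    k = np.arange(n)
    steering = np.exp(1j * np.pi * np.outer(angles, k))  # rows: a(theta)
    return angles, np.abs(steering.conj() @ w)
```

Note that a codeword with equal-magnitude entries, such as w = ones(n)/sqrt(n), satisfies a per-antenna power constraint of the kind the paper considers, and its pattern peaks at broadside.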

98 citations


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 88% related
Wireless network: 122.5K papers, 2.1M citations, 88% related
Network packet: 159.7K papers, 2.2M citations, 87% related
Wireless: 133.4K papers, 1.9M citations, 87% related
Wireless sensor network: 142K papers, 2.4M citations, 86% related
Performance
Metrics
No. of papers in the topic in previous years
Year  Papers
2023  217
2022  495
2021  237
2020  383
2019  432
2018  364