Topic
Codebook
About: Codebook is a research topic. Over its lifetime, 8492 publications have appeared within this topic, receiving 115995 citations.
Papers
06 Apr 2010
TL;DR: In this article, a first precoding matrix W1 is selected from a first codebook comprising sets of rank-specific precoding matrices, and is used to select a second precoding matrix W2 such that the two form a joint precoder specific to a desired rank.
Abstract: A first precoding matrix W1 is selected from a first codebook comprising sets of rank specific precoding matrices. The first codebook is characterized by there being fewer precoding matrices associated with higher ranks than associated with lower ranks, and characterized by precoding matrices associated with ranks above a certain rank all being diagonal matrices. The selected first precoding matrix W1 is used to select a rank-specific second precoding matrix W2 from a second codebook, such that the selected first and second precoding matrices form a joint precoder specific to a desired rank. The second codebook is characterized by differently sized precoding matrices associated with each of N total ranks, in which N is an integer greater than one. Information on the joint precoder is reported to a network node over an uplink transmission channel.
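Conceptually, the receiver's task is a search over the product of the two codebooks. A minimal sketch in Python/NumPy, assuming a log-det capacity proxy as the selection metric and toy codebooks (the standardized codebooks, rank restrictions, and selection criteria in practice differ):

```python
import numpy as np

def select_joint_precoder(H, codebook1, codebook2, noise_var=1.0):
    """Exhaustive search over a two-stage codebook for the joint
    precoder W = W1 @ W2 that maximizes a log-det capacity proxy on
    channel H. Illustrative sketch only; not the patented procedure."""
    best_idx, best_rate = None, -np.inf
    for i, W1 in enumerate(codebook1):
        for j, W2 in enumerate(codebook2):
            He = H @ (W1 @ W2)                    # effective channel
            G = He.conj().T @ He / noise_var
            rate = np.log2(np.linalg.det(np.eye(G.shape[0]) + G)).real
            if rate > best_rate:
                best_rate, best_idx = rate, (i, j)
    return best_idx, best_rate
```

The pair of indices (i, j) is what would be reported to the network node; only the indices, not the matrices, need to travel over the uplink.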
40 citations
TL;DR: A variety of techniques is described to improve the finite-length performance of sparse superposition codes with approximate message passing (AMP) decoding, including an iterative algorithm for SPARC power allocation, guidelines for choosing codebook parameters, and online estimation of a critical decoding parameter in place of precomputation.
Abstract: Sparse superposition codes are a recent class of codes introduced by Barron and Joseph for efficient communication over the AWGN channel. With an appropriate power allocation, these codes have been shown to be asymptotically capacity-achieving with computationally feasible decoding. However, a direct implementation of the capacity-achieving construction does not give good finite length error performance. In this paper, we consider sparse superposition codes with approximate message passing (AMP) decoding, and describe a variety of techniques to improve their finite length performance. These include an iterative algorithm for SPARC power allocation, guidelines for choosing codebook parameters, and estimating a critical decoding parameter online instead of precomputation. We also show how partial outer codes can be used in conjunction with AMP decoding to obtain a steep waterfall in the error performance curves. We compare the error performance of AMP-decoded sparse superposition codes with coded modulation using LDPC codes from the WiMAX standard.
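The encoding side of a sparse superposition code is easy to sketch: the codeword is A @ beta for a design matrix A and a message vector beta with one nonzero per section. The sketch below assumes a flat power allocation across sections; the paper's iterative algorithm instead assigns section powers adaptively:

```python
import numpy as np

def sparc_encode(bits, A, L, M, total_power):
    """Encode bits into a sparse superposition codeword x = A @ beta.
    beta has L sections of M entries; each section encodes log2(M)
    bits via the position of its single nonzero entry. Flat power
    allocation is used here for simplicity."""
    n = A.shape[0]
    k = int(np.log2(M))                      # bits per section
    assert len(bits) == L * k
    beta = np.zeros(L * M)
    coeff = np.sqrt(n * total_power / L)     # equal power per section
    for l in range(L):
        chunk = bits[l * k:(l + 1) * k]
        idx = int("".join(map(str, chunk)), 2)  # section entry from bits
        beta[l * M + idx] = coeff
    return A @ beta, beta
```

Decoding (the hard part, which AMP addresses) must recover the L nonzero positions from a noisy observation of x.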
40 citations
27 Aug 2005
TL;DR: This paper introduces a Particle Swarm Optimization (PSO) clustering method to build a high-quality codebook for image compression, and initializes the global best particle with the result of the LBG algorithm to speed the convergence of PSO.
Abstract: VQ coding is a powerful technique in digital image compression. Conventional methods such as the classic LBG algorithm tend to generate locally optimal codebooks. In this paper, we introduce a Particle Swarm Optimization (PSO) clustering method to build a high-quality codebook for image compression. We also use the result of the LBG algorithm to initialize the global best particle, which speeds the convergence of PSO. Both the image encoding and decoding processes are simulated in our experiments. Results show that the algorithm is reliable and that the reconstructed images are of higher quality than images reconstructed by other methods.
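For reference, the LBG baseline the paper improves on can be sketched as follows (initialization here simply takes the first k training vectors; the classic algorithm uses codeword splitting):

```python
import numpy as np

def lbg_codebook(vectors, k, iters=100, tol=1e-9):
    """Generalized Lloyd (LBG) codebook design: alternate nearest-
    codeword assignment and centroid update until the codebook stops
    moving. LBG converges only to a locally optimal codebook, which
    is the limitation the paper's PSO step tries to overcome."""
    codebook = vectors[:k].astype(float).copy()
    for _ in range(iters):
        # squared distance from every training vector to every codeword
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        new = np.array([vectors[assign == i].mean(axis=0)
                        if np.any(assign == i) else codebook[i]
                        for i in range(k)])
        if np.abs(new - codebook).max() < tol:
            return new
        codebook = new
    return codebook
```

The PSO variant in the paper would treat each candidate codebook as a particle and use a result like this one to seed the global best.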
39 citations
TL;DR: The authors develop three new methods of assigning indices to a vector quantization codebook, formulating these assignments as labels of the nodes of a full-search progressive transmission tree; the best of these methods gives intermediate signal-to-noise ratios (SNRs) close to those obtained with tree-structured vector quantization, with higher final SNRs.
Abstract: The authors study codeword index assignment to allow for progressive image transmission with fixed-rate full-search vector quantization (VQ). They develop three new methods of assigning indices to a vector quantization codebook and formulate these assignments as labels of the nodes of a full-search progressive transmission tree. The tree is used to design intermediate codewords for the decoder so that full-search VQ has a successive-approximation character, and the binary representation of the path through the tree serves as the progressive transmission code. The tree-design methods applied are the generalized Lloyd algorithm, minimum-cost perfect matching from optimization theory, and principal component partitioning. Empirical results show that the final method gives intermediate signal-to-noise ratios (SNRs) close to those obtained with tree-structured vector quantization, yet with higher final SNRs.
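One of the three methods, principal component partitioning, can be sketched directly: recursively split the codewords along their principal direction, storing the centroid at each node so that any bit-prefix of an index decodes to an intermediate approximation. Function names here are illustrative, not the paper's:

```python
import numpy as np

def progressive_tree(codebook):
    """Label codewords by recursively splitting along the principal
    component of each group. Each node stores the centroid of its
    leaves, so a bit-prefix of an index decodes to an intermediate
    codeword, giving full-search VQ a successive-approximation
    character."""
    def split(idx):
        if len(idx) == 1:
            return {"centroid": codebook[idx[0]], "leaf": idx[0]}
        pts = codebook[idx]
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)     # principal direction
        order = idx[np.argsort((pts - c) @ vt[0])]
        half = len(order) // 2
        return {"centroid": c,
                "left": split(order[:half]),
                "right": split(order[half:])}
    return split(np.arange(len(codebook)))

def decode_prefix(tree, bits):
    """Follow a bit-prefix down the tree; return the best centroid
    reachable with the bits received so far."""
    node = tree
    for b in bits:
        if "leaf" in node:
            break
        node = node["left"] if b == 0 else node["right"]
    return node["centroid"]
```

With zero bits the decoder shows the overall centroid; each further bit halves the candidate set until a true codeword is reached.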
39 citations
05 Sep 1997
TL;DR: If the same parameter is repeatedly used in an unvoiced frame, which is inherently devoid of pitch, a pitch at the frame-length period is produced, creating an unnatural sound; this can be prevented by avoiding repeated use of excitation vectors having the same waveform shape.
Abstract: If the same parameter is repeatedly used in an unvoiced frame, which is inherently devoid of pitch, a pitch at the frame-length period is produced, creating an unnatural sound. This can be prevented by avoiding repeated use of excitation vectors having the same waveform shape. To this end, when decoding an encoded speech signal, obtained by waveform-encoding a time-axis speech signal split into preset encoding units, the input data is checked by a CRC and bad frame masking circuit 281, which processes a frame corrupted by an error with bad frame masking, i.e., by repeatedly using the parameters of the immediately preceding frame. If the error-corrupted frame is unvoiced, an unvoiced speech synthesis unit 220 adds noise to an excitation vector from a noise codebook or randomly selects the excitation vector from the noise codebook.
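The masking logic can be sketched as a small decision rule (all parameter names are illustrative, not the codec's actual fields):

```python
import random

def mask_bad_frame(crc_ok, frame_params, prev_params, noise_codebook,
                   rng=random):
    """Sketch of bad-frame masking: on a CRC failure, reuse the
    previous frame's parameters, but if that frame is unvoiced,
    substitute a randomly chosen noise-codebook excitation so the
    repeated waveform does not create a false pitch at the frame
    period."""
    if crc_ok:
        return frame_params
    masked = dict(prev_params)          # repeat the previous frame
    if not masked.get("voiced", True):  # unvoiced: randomize excitation
        masked["excitation"] = rng.choice(noise_codebook)
    return masked
```

Randomizing only the unvoiced case preserves continuity for voiced frames, where repeating the previous pitch parameters is the desired concealment.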
39 citations