
Showing papers on "Codebook published in 2003"


Journal ArticleDOI
TL;DR: A quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter.
Abstract: Transmit beamforming and receive combining are simple methods for exploiting the significant diversity that is available in multiple-input multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamforming vector; both are hard to realize. In this article, a quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter. By using the distribution of the optimal beamforming vector in independent and identically distributed Rayleigh fading matrix channels, the codebook design problem is solved and related to the problem of Grassmannian line packing. The proposed design criterion is flexible enough to allow for side constraints on the codebook vectors. Bounds on the codebook size are derived to guarantee full diversity order. Results on the density of Grassmannian line packings are derived and used to develop bounds on the codebook size given a capacity or SNR loss. Monte Carlo simulations are presented that compare the probability of error for different quantization strategies.
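
A minimal sketch of the receiver-side selection step described above, assuming a generic unit-norm codebook (random here, standing in for a Grassmannian line packing) and hypothetical antenna counts: the receiver evaluates the post-combining SNR gain ||Hw||^2 for every codeword and feeds back only the winning index.

```python
import numpy as np

def select_beamformer(H, codebook):
    """Index of the codeword maximizing ||H w||^2, i.e. the receive SNR gain
    after maximum ratio combining at the receiver."""
    gains = [np.linalg.norm(H @ w) ** 2 for w in codebook]
    return int(np.argmax(gains))

rng = np.random.default_rng(0)
Mt, Mr, B = 4, 2, 4                      # hypothetical: 4 Tx, 2 Rx antennas, 4 feedback bits
codebook = []
for _ in range(2 ** B):                  # placeholder unit-norm codebook
    w = rng.standard_normal(Mt) + 1j * rng.standard_normal(Mt)
    codebook.append(w / np.linalg.norm(w))

H = rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))  # i.i.d. Rayleigh-like channel
idx = select_beamformer(H, codebook)     # only this B-bit label is sent back
print(idx, np.linalg.norm(H @ codebook[idx]) ** 2)
```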

1,542 citations


Journal ArticleDOI
TL;DR: It is shown that good beamformers are good packings of two-dimensional subspaces in a 2t-dimensional real Grassmannian manifold with chordal distance as the metric.
Abstract: We study a multiple-antenna system where the transmitter is equipped with quantized information about instantaneous channel realizations. Assuming that the transmitter uses the quantized information for beamforming, we derive a universal lower bound on the outage probability for any finite set of beamformers. The universal lower bound provides a concise characterization of the gain with each additional bit of feedback information regarding the channel. Using the bound, it is shown that finite information systems approach the perfect information case as (t-1)2^(-B/(t-1)), where B is the number of feedback bits and t is the number of transmit antennas. The geometrical bounding technique, used in the proof of the lower bound, also leads to a design criterion for good beamformers, whose outage performance approaches the lower bound. The design criterion minimizes the maximum inner product between any two beamforming vectors in the beamformer codebook, and is equivalent to the problem of designing unitary space-time codes under certain conditions. Finally, we show that good beamformers are good packings of two-dimensional subspaces in a 2t-dimensional real Grassmannian manifold with chordal distance as the metric.
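
The design criterion quoted above (minimize the maximum inner product between codewords) is easy to evaluate numerically. A hedged sketch, not the authors' code, that scores a candidate beamformer codebook by its worst-case correlation; a smaller score means a better line packing:

```python
import numpy as np

def max_inner_product(codebook):
    """Largest |w_i^H w_j| over all distinct pairs of unit-norm codewords."""
    worst = 0.0
    for i in range(len(codebook)):
        for j in range(i + 1, len(codebook)):
            worst = max(worst, abs(np.vdot(codebook[i], codebook[j])))
    return worst

# Example: compare two random candidate codebooks and keep the better packing.
rng = np.random.default_rng(1)
def random_codebook(t=4, size=16):
    vs = rng.standard_normal((size, t)) + 1j * rng.standard_normal((size, t))
    return [v / np.linalg.norm(v) for v in vs]

cand_a, cand_b = random_codebook(), random_codebook()
best = cand_a if max_inner_product(cand_a) < max_inner_product(cand_b) else cand_b
```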

981 citations


Proceedings ArticleDOI
11 May 2003
TL;DR: A codebook design method for quantized versions of maximum ratio transmission, equal gain transmission, and generalized selection diversity with maximum ratio combining at the receiver is presented, and systems using the beamforming codebooks are shown to have a diversity order equal to the product of the number of transmit and the number of receive antennas.
Abstract: Multiple-input multiple-output (MIMO) wireless systems provide capacity much larger than that provided by traditional single-input single-output (SISO) wireless systems. Beamforming is a low complexity technique that increases the receive signal-to-noise ratio (SNR); however, it requires channel knowledge. Since in practice channel knowledge at the transmitter is difficult to realize, we propose a technique where the receiver designs the beamforming vector and conveys it to the transmitter by sending the label of a vector in a finite set, or codebook, of beamforming vectors. A codebook design method for quantized versions of maximum ratio transmission, equal gain transmission, and generalized selection diversity with maximum ratio combining at the receiver is presented. The codebook design criterion exploits the quantization problem's relationship with Grassmannian line packing. Systems using the beamforming codebooks are shown to have a diversity order equal to the product of the number of transmit and the number of receive antennas. Monte Carlo simulations compare the performance of systems using this new codebook method with that of previously proposed quantized and unquantized systems.

439 citations


Journal ArticleDOI
TL;DR: The paper proposes equal gain transmission (EGT) to provide diversity advantage in MIMO systems experiencing Rayleigh fading and an algorithm to construct a beamforming vector codebook that guarantees full diversity order.
Abstract: Multiple-input multiple-output (MIMO) wireless systems are of interest due to their ability to provide substantial gains in capacity and quality. The paper proposes equal gain transmission (EGT) to provide diversity advantage in MIMO systems experiencing Rayleigh fading. The applications of EGT with selection diversity combining, equal gain combining, and maximum ratio combining are addressed. It is proven that systems using EGT with any of these combining schemes achieve full diversity order when transmitting over a memoryless, flat-fading Rayleigh matrix channel with independent entries. Since, in practice, full channel knowledge at the transmitter is difficult to realize, a quantized version of EGT is proposed. An algorithm to construct a beamforming vector codebook that guarantees full diversity order is presented. Monte Carlo simulation comparisons with various beamforming and combining systems illustrate the performance as a function of quantization.
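
For reference, the unquantized equal gain transmission weights discussed above are straightforward in the single-receive-antenna case: every transmit antenna uses the same amplitude and a phase that co-phases its channel coefficient. A brief sketch under that assumption (the multi-receive-antenna case with combining is more involved):

```python
import numpy as np

def egt_weights(h):
    """Equal gain transmission weights for a 1 x Mt MISO channel h:
    unit-modulus entries whose phases cancel the channel phases."""
    return np.exp(-1j * np.angle(h)) / np.sqrt(len(h))

h = np.array([1 + 1j, -2j, 0.5, -1 + 0.3j])   # example channel realization
w = egt_weights(h)
# effective gain is sum(|h_i|)/sqrt(Mt), real and positive by construction
print(np.abs(h @ w), np.sum(np.abs(h)) / np.sqrt(len(h)))
```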

338 citations


Book
30 Apr 2003
TL;DR: This chapter discusses the evolution of Wireless Communication Systems, and the challenges of designing and implementing ST Codes based on the FDFR Code Design.
Abstract: Preface. Acronyms.
1. Motivation and Context. 1.1 Evolution of Wireless Communication Systems. 1.2 Wireless Propagation Effects. 1.3 Parameters and Classification of Wireless Channels. 1.3.1 Delay Spread and Coherence Bandwidth. 1.3.2 Doppler Spread and Coherence Time. 1.4 Providing, Enabling and Collecting Diversity. 1.4.1 Diversity Provided by Frequency-Selective Channels. 1.4.2 Diversity Provided by Time-Selective Channels. 1.4.3 Diversity Provided by Multi-Antenna Channels. 1.5 Chapter-by-Chapter Organization.
2. Fundamentals of ST Wireless Communications. 2.1 Generic ST System Model. 2.2 ST Coding viz Channel Coding. 2.3 Capacity of ST Channels. 2.3.1 Outage Capacity. 2.3.2 Ergodic Capacity. 2.4 Error Performance of ST Coding. 2.5 Design Criteria for ST Codes. 2.6 Diversity and Rate: Finite SNR viz Asymptotics. 2.7 Classification of ST Codes. 2.8 Closing Comments.
3. Coherent ST Codes for Flat Fading Channels. 3.1 Delay Diversity ST Codes. 3.2 ST Trellis Codes. 3.2.1 Trellis Representation. 3.2.2 TSC ST Trellis Codes. 3.2.3 BBH ST Trellis Codes. 3.2.4 GFK ST Trellis Codes. 3.2.5 Viterbi Decoding of ST Trellis Codes. 3.3 Orthogonal ST Block Codes. 3.3.1 Encoding of OSTBCs. 3.3.2 Linear ML Decoding of OSTBCs. 3.3.3 BER Performance with OSTBCs. 3.3.4 Channel Capacity with OSTBCs. 3.4 Quasi-Orthogonal ST Block Codes. 3.5 ST Linear Complex Field Codes. 3.5.1 Antenna Switching and Linear Precoding. 3.5.2 Designing Linear Precoding Matrices. 3.5.3 Upper-Bound on Coding Gain. 3.5.4 Construction based on Parameterization. 3.5.5 Construction Based on Algebraic Tools. 3.5.6 Decoding ST Linear Complex Field Codes. 3.5.7 Modulus-Preserving STLCFC. 3.6 Linking OSTBC, QO-STBC and STLCFC Designs. 3.6.1 Embedding MP-STLCFC into the Alamouti Code. 3.6.2 Embedding 2 x 2 MP-STLCFCs into OSTBC. 3.6.3 Decoding QO-MP-STLCFC. 3.7 Closing Comments.
4. Layered ST Codes. 4.1 BLAST Designs. 4.1.1 D-BLAST. 4.1.2 V-BLAST. 4.1.3 Rate Performance with BLAST Codes. 4.2 ST Codes Trading Diversity for Rate. 4.2.1 Layered ST Codes with Antenna-Grouping. 4.2.2 Layered High-Rate Codes. 4.3 Full-Diversity Full-Rate ST Codes. 4.3.1 The FDFR Transceiver. 4.3.2 Algebraic FDFR Code Design. 4.3.3 Mutual Information Analysis. 4.3.4 Diversity-Rate-Performance Trade-offs. 4.4 Numerical Examples. 4.5 Closing Comments.
5. Sphere Decoding and (Near-) Optimal MIMO Demodulation. 5.1 Sphere Decoding Algorithm. 5.1.1 Selecting a Finite Search Radius. 5.1.2 Initializing with Unconstrained LS. 5.1.3 Searching within the Fixed-Radius Sphere. 5.2 Average Complexity of SDA in Practice. 5.3 SDA Improvements. 5.3.1 SDA with Detection Ordering and Nulling-Cancelling. 5.3.2 Schnorr-Euchner Variate of SDA. 5.3.3 SDA with Increasing Radius Search. 5.3.4 Simulated Comparisons. 5.4 Reduced-Complexity IRS-SDA. 5.5 Soft Decision Sphere Decoding. 5.5.1 List Sphere Decoding (LSD). 5.5.2 Soft SDA using Hard SDAs. 5.6 Closing Comments.
6. Non-Coherent and Differential ST Codes for Flat Fading Channels. 6.1 Non-Coherent ST Codes. 6.1.1 Search-Based Designs. 6.1.2 Training-Based Designs. 6.2 Differential ST Codes. 6.2.1 Scalar Differential Codes. 6.2.2 Differential Unitary ST Codes. 6.2.3 Differential Alamouti Codes. 6.2.4 Differential OSTBCs. 6.2.5 Cayley Differential Unitary ST Codes. 6.3 Closing Comments.
7. ST Codes for Frequency-Selective Fading Channels: Single-Carrier Systems. 7.1 System Model and Performance Limits. 7.1.1 Flat-Fading Equivalence and Diversity. 7.1.2 Rate Outage Probability. 7.2 ST Trellis Codes. 7.2.1 Generalized Delay Diversity. 7.2.2 Search-Based STTC Construction. 7.2.3 Numerical Examples. 7.3 ST Block Codes. 7.3.1 Block Coding with Two Transmit-Antennas. 7.3.2 Receiver Processing. 7.3.3 ML Decoding based on the Viterbi Algorithm. 7.3.4 Turbo Equalization. 7.3.5 Multi-Antenna Extensions. 7.3.6 OSTBC Properties. 7.3.7 Numerical Examples. 7.4 Closing Comments.
8. ST Codes for Frequency-Selective Fading Channels: Multi-Carrier Systems. 8.1 The General MIMO OFDM Framework. 8.1.1 OFDM Basics. 8.1.2 MIMO OFDM. 8.1.3 STF Framework. 8.2 ST and SF Coded MIMO OFDM. 8.3 STF Coded OFDM. 8.3.1 Subcarrier Grouping. 8.3.2 GSTF Block Codes. 8.3.3 GSTF Trellis Codes. 8.3.4 Numerical Examples. 8.4 Digital Phase Sweeping and Block Circular Delay. 8.5 Full-Diversity Full-Rate MIMO OFDM. 8.5.1 Encoders and Decoders. 8.5.2 Diversity and Rate Analysis. 8.5.3 Numerical Examples. 8.6 Closing Comments.
9. ST Codes for Time-Varying Channels. 9.1 Time-Varying Channels. 9.1.1 Channel Models. 9.1.2 Time-Frequency Duality. 9.1.3 Doppler Diversity. 9.2 Space-Time-Doppler Block Codes. 9.2.1 Duality-Based STDO Codes. 9.2.2 Phase Sweeping Design. 9.3 Space-Time-Doppler FDFR Codes. 9.4 Space-Time-Doppler Trellis Codes. 9.4.1 Design Criterion. 9.4.2 Smart-Greedy Codes. 9.5 Numerical Examples. 9.6 Space-Time-Doppler Differential Codes. 9.6.1 Inner Codec. 9.6.2 Outer Differential Codec. 9.7 ST Codes for Doubly-Selective Channels. 9.7.1 Numerical Examples. 9.8 Closing Comments.
10. Joint Galois-Field and Linear Complex-Field ST Codes. 10.1 GF-LCF ST Codes. 10.1.1 Separate versus Joint GF-LCF ST Coding. 10.1.2 Performance Analysis. 10.1.3 Turbo Decoding. 10.2 GF-LCF ST Layered Codes. 10.2.1 GF-LCF ST FDFR Codes: QPSK Signalling. 10.2.2 GF-LCF ST FDFR Codes: QAM Signalling. 10.2.3 Performance Analysis. 10.2.4 GF-LCF FDFR versus GF-Coded V-BLAST. 10.2.5 Numerical Examples. 10.3 GF-LCF Coded MIMO OFDM. 10.3.1 Joint GF-LCF Coding and Decoding. 10.3.2 Numerical Examples. 10.4 Closing Comments.
11. MIMO Channel Estimation and Synchronization. 11.1 Preamble-Based Channel Estimation. 11.2 Optimal Training-Based Channel Estimation. 11.2.1 ZP-Based Block Transmissions. 11.2.2 CP-Based Block Transmissions. 11.2.3 Special Cases. 11.2.4 Numerical Examples. 11.3 (Semi-)Blind Channel Estimation. 11.4 Joint Symbol Detection and Channel Estimation. 11.4.1 Decision-Directed Methods. 11.4.2 Kalman Filtering Based Methods. 11.5 Carrier Synchronization. 11.5.1 Hopping Pilot Based CFO Estimation. 11.5.2 Blind CFO Estimation. 11.5.3 Numerical Examples. 11.6 Closing Comments.
12. ST Codes with Partial Channel Knowledge: Statistical CSI. 12.1 Partial CSI Models. 12.1.1 Statistical CSI. 12.2 ST Spreading. 12.2.1 Average Error Performance. 12.2.2 Optimization based on Average SER Bound. 12.2.3 Mean-Feedback. 12.2.4 Covariance-Feedback. 12.2.5 Beamforming Interpretation. 12.3 Combining OSTBC with Beamforming. 12.3.1 Two-Dimensional Coder-Beamformer. 12.4 Numerical Examples. 12.4.1 Performance with Mean-Feedback. 12.4.2 Performance with Covariance-Feedback. 12.5 Adaptive Modulation for Rate Improvement. 12.5.1 Numerical Examples. 12.6 Optimizing Average Capacity. 12.7 Closing Comments.
13. ST Codes With Partial Channel Knowledge: Finite-Rate CSI. 13.1 General Problem Formulation. 13.2 Finite-Rate Beamforming. 13.2.1 Beamformer Selection. 13.2.2 Beamformer Codebook Design. 13.2.3 Quantifying the Power Loss. 13.2.4 Numerical Examples. 13.3 Finite-Rate Precoded Spatial Multiplexing. 13.3.1 Precoder Selection Criteria. 13.3.2 Codebook Construction: Infinite-Rate. 13.3.3 Codebook Construction: Finite-Rate. 13.3.4 Numerical Examples. 13.4 Finite-Rate Precoded OSTBC. 13.4.1 Precoder Selection Criterion. 13.4.2 Codebook Construction: Infinite-Rate. 13.4.3 Codebook Construction: Finite-Rate. 13.4.4 Numerical Examples. 13.5 Capacity Optimization with Finite-Rate Feedback. 13.5.1 Selection Criterion. 13.5.2 Codebook Design. 13.6 Combining Adaptive Modulation with Beamforming. 13.6.1 Mode Selection. 13.6.2 Codebook Design. 13.7 Finite-rate Feedback in MIMO OFDM. 13.8 Closing Comments.
14. ST Codes in the Presence of Interference. 14.1 ST Spreading. 14.1.1 Maximizing the Average SINR. 14.1.2 Minimizing the Average Error Bound. 14.2 Combining STS with OSTBC. 14.2.1 Low-Complexity Receivers. 14.3 Optimal Training with Interference. 14.3.1 LS Channel Estimation. 14.3.2 LMMSE Channel Estimation. 14.4 Numerical Examples. 14.5 Closing Comments.
15. ST Codes for Orthogonal Multiple Access. 15.1 System Model. 15.1.1 Synchronous downlink. 15.1.2 Quasi-synchronous uplink. 15.2 Single-Carrier Systems: STBC-CIBS-CDMA. 15.2.1 CIBS-CDMA for User Separation. 15.2.2 STBC Encoding and Decoding. 15.2.3 Attractive Features of STBC-CIBS-CDMA. 15.2.4 Numerical Examples. 15.3 Multi-Carrier Systems: STF-OFDMA. 15.3.1 OFDMA for User Separation. 15.3.2 STF Block Codes. 15.3.3 Attractive Features of STF-OFDMA. 15.3.4 Numerical Examples. 15.4 Closing Comments.
References. Index.

280 citations


Proceedings ArticleDOI
01 Dec 2003
TL;DR: A precoder codebook design method for maximizing the average effective channel power is shown to relate to chordal distance Grassmannian subspace packing and results show this technique outperforms antenna subset selection spatial multiplexing.
Abstract: Spatial multiplexing multiple-input multiple-output (MIMO) wireless systems are of both theoretical and practical importance because they can achieve high spectral efficiencies by demultiplexing the incoming bit stream into multiple substreams. It has been shown that sending fewer substreams than the number of transmit antennas by linear precoding can provide improved error rate performance. Methods for designing linear precoders using perfect channel knowledge have previously been proposed. In many wireless systems, the assumption of complete channel knowledge is unrealistic because of the lack of forward and reverse channel reciprocity. To overcome this difficulty, we propose a precoding scheme that does not require transmit channel knowledge. The precoder is designed at the receiver and conveyed to the transmitter using a limited number of bits. The limited feedback represents an index within a finite set, or codebook, of precoding matrices. The receiver selects one of these codebook matrices using a modified version of a previously proposed full channel knowledge precoder selection criterion. A precoder codebook design method for maximizing the average effective channel power is shown to relate to chordal distance Grassmannian subspace packing. Simulation results show this technique outperforms antenna subset selection spatial multiplexing.
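
The chordal distance underlying the subspace-packing criterion above has a simple closed form when the precoders have orthonormal columns. A short sketch, assuming Mt x M matrices F with orthonormal columns (my illustration, not the paper's code):

```python
import numpy as np

def chordal_distance(F1, F2):
    """Chordal distance between the column spaces of F1 and F2 (orthonormal
    columns assumed): d^2 = M - ||F1^H F2||_F^2, the sum of squared sines
    of the principal angles between the two subspaces."""
    M = F1.shape[1]
    gram = F1.conj().T @ F2
    return np.sqrt(max(M - np.linalg.norm(gram, 'fro') ** 2, 0.0))

def packing_quality(codebook):
    """Minimum pairwise chordal distance; a Grassmannian subspace packing
    tries to make this as large as possible."""
    return min(chordal_distance(codebook[i], codebook[j])
               for i in range(len(codebook)) for j in range(i + 1, len(codebook)))
```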

157 citations


Journal ArticleDOI
TL;DR: A new and fast encoding algorithm for vector quantization is presented that makes full use of two characteristics of a vector: the sum and the variance.
Abstract: In this paper, a new and fast encoding algorithm for vector quantization is presented. This algorithm makes full use of two characteristics of a vector: the sum and the variance. A vector is separated into two subvectors: one is composed of the first half of the vector components and the other consists of the remaining vector components. Three inequalities based on the sums and variances of a vector and its two subvectors are introduced to reject codewords that cannot be the nearest codeword, thereby saving a great deal of computational time while introducing no extra distortion compared to the conventional full search algorithm. The simulation results show that the proposed algorithm is faster than the equal-average nearest neighbor search (ENNS), the improved ENNS, the equal-average equal-variance nearest neighbor search (EENNS) and the improved EENNS algorithms. Compared with the improved EENNS algorithm, the proposed algorithm reduces the computational time and the number of distortion calculations by 2.4% to 6% and 20.5% to 26.8%, respectively. The average improvements in computational time and the number of distortion calculations are 4% and 24.6%, respectively, for codebook sizes of 128 to 1024.
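
The codeword-rejection idea above can be illustrated with the standard mean/variance bound d(x, c) >= k(m_x - m_c)^2 + (v_x - v_c)^2, where m is the component mean and v is the norm of the mean-removed vector. This hedged sketch uses that single bound rather than the paper's three subvector-based inequalities, but, like them, it rejects codewords without changing the full-search result:

```python
import numpy as np

def precompute(codebook):
    """Per-codeword mean and deviation norm (codebook: 2-D numpy array)."""
    means = codebook.mean(axis=1)
    devs = np.linalg.norm(codebook - means[:, None], axis=1)
    return means, devs

def nearest_codeword(x, codebook, means, devs):
    k = codebook.shape[1]
    mx, vx = x.mean(), np.linalg.norm(x - x.mean())
    best, best_d = -1, np.inf
    for i in range(len(codebook)):
        # lower bound on ||x - c_i||^2; skip the full distortion if it is already too large
        lb = k * (mx - means[i]) ** 2 + (vx - devs[i]) ** 2
        if lb >= best_d:
            continue
        d = float(np.sum((x - codebook[i]) ** 2))
        if d < best_d:
            best, best_d = i, d
    return best, best_d
```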

109 citations


01 Jan 2003
TL;DR: A performance evaluation methodology called Perturbation Detection Rate (PDR) analysis is introduced for measuring the performance of background subtraction algorithms; it has some advantages over the commonly used Receiver Operating Characteristic (ROC) analysis.
Abstract: We introduce a performance evaluation methodology called Perturbation Detection Rate (PDR) analysis for measuring the performance of background subtraction (BGS) algorithms. It has some advantages over the commonly used Receiver Operating Characteristic (ROC) analysis. Specifically, it does not require foreground targets or knowledge of foreground distributions. It measures the sensitivity of a BGS algorithm in detecting low contrast targets against the background as a function of contrast, also depending on how well the model captures mixed (moving) background events. We compare four algorithms having similarities and differences. Three are from [2, 3, 5], while the fourth, called Codebook BGS, was recently developed. The latter algorithm quantizes sample background values at each pixel into codebooks, which represent a compressed form of the background model for a long image sequence.
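
A stripped-down illustration of the per-pixel codebook idea behind Codebook BGS (mine, not the authors'): each pixel keeps a few codewords recording the intensity range it has exhibited, and a new value is foreground if it matches none of them within a tolerance. The full method also models color distortion and brightness bounds, omitted here.

```python
def matches(value, codeword, tol=10.0):
    lo, hi = codeword
    return lo - tol <= value <= hi + tol

def train_pixel(samples, tol=10.0):
    """Build the per-pixel codebook from a background-only training sequence."""
    codewords = []
    for v in samples:
        for i, cw in enumerate(codewords):
            if matches(v, cw, tol):
                codewords[i] = (min(cw[0], v), max(cw[1], v))  # absorb into this codeword
                break
        else:
            codewords.append((v, v))                           # unseen value: new codeword
    return codewords

def classify_pixel(value, codewords, tol=10.0):
    """Foreground if the value matches no background codeword."""
    return 'background' if any(matches(value, cw, tol) for cw in codewords) else 'foreground'
```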

91 citations


Proceedings ArticleDOI
25 Mar 2003
TL;DR: A locally adaptive partitioning algorithm is introduced that performs comparably in this application to a more expensive globally optimal one that employs dynamic programming.
Abstract: High dimensional source vectors, such as those that occur in hyperspectral imagery, are partitioned into a number of subvectors of different length and then each subvector is vector quantized (VQ) individually with an appropriate codebook. A locally adaptive partitioning algorithm is introduced that performs comparably in this application to a more expensive globally optimal one that employs dynamic programming. The VQ indices are entropy coded and used to condition the lossless or near-lossless coding of the residual error. Motivated by the need for maintaining uniform quality across all vector components, a percentage maximum absolute error distortion measure is employed. Experiments on the lossless and near-lossless compression of NASA AVIRIS images are presented. A key advantage of the approach is the use of independent small VQ codebooks that allow fast encoding and decoding.

85 citations


Patent
30 May 2003
TL;DR: In this paper, a method and system for multi-rate lattice vector quantization of a source vector x representing a frame from a source signal to be used, for example, in digital transmission and storage systems is presented.
Abstract: The present invention relates to a method and system for multi-rate lattice vector quantization of a source vector x representing a frame from a source signal to be used, for example, in digital transmission and storage systems. The multi-rate lattice quantization encoding method comprises the steps of associating to x a lattice point y in an unbounded lattice Λ; verifying whether y is included in a base codebook C derived from the lattice Λ; if so, indexing y in C so as to yield quantization indices; if not, extending the base codebook using, for example, a Voronoi-based extension method, yielding an extended codebook; associating to y a codevector c from the extended codebook; and indexing y in the extended codebook. The extension technique makes it possible to obtain higher bit rate codebooks from the base codebook than the quantization methods and systems of the prior art.

79 citations


Journal ArticleDOI
TL;DR: Learning-based algorithms for image restoration and blind image restoration are proposed, utilizing priors that are learned from similar images; the principal component analysis and VQ-nearest neighbor approaches are utilized.
Abstract: Learning-based algorithms for image restoration and blind image restoration are proposed. Such algorithms deviate from the traditional approaches in this area, by utilizing priors that are learned from similar images. Original images and their degraded versions by the known degradation operator (restoration problem) are utilized for designing the VQ codebooks. The codevectors are designed using the blurred images. For each such vector, the high frequency information obtained from the original images is also available. During restoration, the high frequency information of a given degraded image is estimated from its low frequency information based on the codebooks. For the blind restoration problem, a number of codebooks are designed corresponding to various versions of the blurring function. Given a noisy and blurred image, one of the codebooks is chosen based on a similarity measure, therefore providing the identification of the blur. To make the restoration process computationally efficient, the principal component analysis (PCA) and VQ-nearest neighbor approaches are utilized. Simulation results are presented to demonstrate the effectiveness of the proposed algorithms.
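
A compact sketch of the restoration lookup implied above, under the assumption that training has already produced paired codebooks: low-frequency codevectors computed from blurred training images and, for each, the high-frequency detail from the corresponding original image. At run time a degraded patch selects its nearest low-frequency codevector and receives the paired detail:

```python
import numpy as np

def restore_patch(low_feat, low_codebook, high_codebook):
    """Nearest-neighbor lookup in the low-frequency codebook; the paired
    high-frequency codevector is the estimate of the missing detail."""
    d = np.sum((low_codebook - low_feat) ** 2, axis=1)
    k = int(np.argmin(d))
    return high_codebook[k]

# For blind restoration, the same lookup would be run against several codebooks
# (one per candidate blur), and the codebook giving the smallest matching
# distance identifies the blur, as described in the abstract.
```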

Proceedings ArticleDOI
13 Oct 2003
TL;DR: A quantized precoding scheme is proposed where the receiver sends back a fixed number of bits to the transmitter, and this bit pattern corresponds to an index within a finite set of precoding matrices.
Abstract: Multiple-input multiple-output (MIMO) spatial multiplexing wireless systems achieve high spectral efficiencies by demultiplexing the incoming bit stream into multiple substreams. Spatial multiplexing is of practical importance because the multiple substreams can be decoded using linear receivers. Unfortunately, this reduction in complexity degrades the probability of error performance. To overcome this difficulty, the error rate performance of spatial multiplexing systems can be improved by sending fewer substreams than the number of transmit antennas via linear precoding. Criteria have been proposed for designing these precoders when complete channel knowledge is available to the transmitter. The assumption of complete channel knowledge is often unrealistic in many communication systems, such as those with low rate feedback channels. Thus a quantized precoding scheme is proposed where the receiver sends back a fixed number of bits to the transmitter. This bit pattern corresponds to an index within a finite set of precoding matrices. A previously proposed criterion is used to determine which matrix in this precoder codebook to choose. A design method for these codebooks using techniques from Grassmannian subspace packing is presented. Simulation results show this technique outperforms typical antenna selection.

Patent
28 May 2003
TL;DR: In this article, a method for enhancing the picture quality of a video signal is proposed, which comprises the steps of receiving an encoded video signal having a plurality of headers, maintaining a plurality codebooks based upon the differences between a standard definition picture and a high definition picture, and providing a pointer to a particular codebook of the plurality of codebooks when decoding a frame of the video signal.
Abstract: A method of enhancing picture quality of a video signal is disclosed. The method comprises the steps of receiving an encoded video signal having a plurality of headers; maintaining a plurality of codebooks based upon the differences between a standard definition picture and a high definition picture; and providing a pointer to a particular codebook of the plurality of codebooks when decoding a frame of the video signal.

Patent
26 Sep 2003
TL;DR: In this article, a fast search method for searching for an optimum codeword for nearest neighbor vector quantization is proposed, where an upper boundary value and a lower boundary value between which an optimum codeeword will exist in a codebook are calculated using a distortion of a designated element in an input vector and an experimentally determined threshold.
Abstract: A fast search method for searching for an optimum codeword for nearest neighbor vector quantization. An upper boundary value and a lower boundary value between which an optimum codeword will exist in a codebook are calculated using a distortion of a designated element in an input vector and an experimentally determined threshold. Further, a start point and an end point for codebook search are determined using a binary search method from a codebook rearranged in descending order, and a full search scheme is applied only within a search range calculated by the determined start point and end point, thereby determining an optimum codeword for nearest neighbor vector quantization.
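
A hedged sketch of the search-range idea described above: with the codebook sorted by one designated component, a binary search locates where the input's component falls, and a full search is run only between a start and end point around that position. The fixed window width here is a placeholder for the patent's threshold-derived boundaries.

```python
import bisect
import numpy as np

def windowed_search(x, codebook, elem=0, window=8):
    """codebook: 2-D array whose rows are sorted by component `elem` (ascending)."""
    keys = codebook[:, elem].tolist()
    pos = bisect.bisect_left(keys, float(x[elem]))   # binary search for the start point
    lo, hi = max(pos - window, 0), min(pos + window, len(codebook))
    d = np.sum((codebook[lo:hi] - x) ** 2, axis=1)   # full search only inside the range
    return lo + int(np.argmin(d))
```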

Proceedings ArticleDOI
11 May 2003
TL;DR: This novel scheme overcomes an existing difficulty in the IS practice that requires codebook information and shows large IS gains for single parity-check codes and short-length block codes.
Abstract: We introduce an importance sampling scheme for linear block codes with message-passing decoding. This novel scheme overcomes an existing difficulty in the IS practice that requires codebook information. Experiments show large IS gains for single parity-check codes and short-length block codes. For medium-length block codes, IS gains in the order of 10^3 and higher are observed at high signal-to-noise ratio.

Patent
Ye Wang
21 Oct 2003
TL;DR: In this paper, a method and system for error concealment in a bitstream (230) of encoded audio signals, wherein the audio signals include stationary sounds and beat-type sounds, is presented.
Abstract: A method and system for error concealment in a bitstream (230) of encoded audio signals, wherein the audio signals include stationary sounds and beat-type sounds. In the encoder, the audio characteristics of the beat-type sounds are detected in the encoded audio signals and grouped into a plurality of clusters. A codebook (130) including the audio characteristics of the beat-type sounds and the clusters is provided to a decoder (40) to be stored in a buffer (32). The ancillary data (150) in the bitstream (230), including information indicative of the clusters, is provided to the decoder (40) so that the decoder (40) can reconstruct the beat-type sounds based on the ancillary data (150) and the stored codebook (130) if an audio data interval is defective. The codebook (130) is provided to the decoder (40) before streaming starts, but the audio characteristics of the beat-type sounds and the clusters can be obtained by the decoder (40) on the fly.

Patent
24 Oct 2003
TL;DR: In this paper, an apparatus and method for mapping CELP parameters between a source codec and a destination codec is presented, which consists of an LSP mapping module, an adaptive codebook mapping module coupled to the LSP, and a fixed codebook map module coupled with the LP overflow module.
Abstract: An apparatus and method for mapping CELP parameters between a source codec and a destination codec. The apparatus includes an LSP mapping module, an adaptive codebook mapping module coupled to the LSP mapping module, and a fixed codebook mapping module coupled to the LSP mapping module and the adaptive codebook mapping module. The LSP mapping module includes an LP overflow module and an LSP parameter modification module. The adaptive codebook mapping module includes a first pitch gain codebook. The fixed codebook mapping module includes a first target processing module, a pulse search module, a fixed codebook gain estimation module, and a pulse position searching module.

Patent
12 Jun 2003
TL;DR: In this paper, a method and apparatus for encoding or decoding data in accordance with an NB/(N+1)B block code, and a method for determining codebooks for use in such encoding and decoding are presented.
Abstract: A method and apparatus for encoding or decoding data in accordance with an NB/(N+1)B block code, and a method for determining codebooks for use in such encoding or decoding. Some such methods select positive and negative codebooks that are complements of each other, including by eliminating all candidate code words having negative disparity and filtering the remaining candidate code words in automated fashion based on predetermined spectral properties to select a subset of the candidate code words as the code words of the positive codebook. Preferably, all but a small subset of the (N+1)-bit code words (determined by a primary mapping) can be decoded by simple logic circuitry, and the remaining code words (determined by a secondary mapping) can be decoded by other logic circuitry or table lookup.
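
A toy sketch of the selection flow described above (illustrative only, with a longest-run limit standing in for the patent's spectral-property filter): enumerate the (N+1)-bit candidates, drop negative-disparity words, filter the rest, and take the negative codebook as the bitwise complement of the positive one. A real design would additionally prune the result to exactly 2^N code words and define the N-bit mapping.

```python
def disparity(bits):
    """Number of ones minus number of zeros in the bit string."""
    return 2 * bits.count('1') - len(bits)

def longest_run(bits):
    """Length of the longest run of identical bits."""
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def build_codebooks(N, max_run=4):
    width = N + 1
    positive = []
    for w in range(2 ** width):
        bits = format(w, '0{}b'.format(width))
        if disparity(bits) < 0:          # eliminate negative-disparity candidates
            continue
        if longest_run(bits) > max_run:  # stand-in for the spectral filtering step
            continue
        positive.append(bits)
    negative = [''.join('1' if b == '0' else '0' for b in bits) for bits in positive]
    return positive, negative
```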

Patent
12 Aug 2003
TL;DR: In this paper, a topology-preserving mapping of input data to output data is proposed, which includes ordering of neurons in the ordering space according to a given pattern and assigning of codebook objects in the outcome space.
Abstract: A method for data processing, to be run on a data processing device, for the mapping of input data to output data. Data objects are entered as input data and processed by a topology-preserving mapping. The method includes ordering of neurons in the ordering space according to a given pattern, assigning of codebook objects in the outcome space to the neurons, processing of codebook objects according to the topology-preserving mapping by use of data objects of the exploration space, and output of the processed codebook objects as output data. At least a part of the entered data objects are used to determine the order of neurons in the ordering space, and/or are used as data objects of the exploration space.
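
The processing step described above is in the family of self-organizing maps: each entered data object pulls the codebook object of its winning neuron toward it, and neighbors in the ordering space follow more weakly. A hedged one-dimensional sketch (the patent covers more general ordering patterns):

```python
import numpy as np

def fit_codebook(data, n_neurons, epochs=10, lr=0.1, radius=2.0, seed=0):
    """Topology-preserving mapping: neurons ordered on a line, codebook objects
    in the outcome space updated from the exploration-space data objects."""
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_neurons, replace=False)].copy()
    positions = np.arange(n_neurons, dtype=float)        # ordering of neurons on a 1-D line
    for _ in range(epochs):
        for x in data:
            winner = int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
            # neighborhood weight decays with distance in the ordering space
            h = np.exp(-((positions - winner) ** 2) / (2.0 * radius ** 2))
            codebook += lr * h[:, None] * (x - codebook)
    return codebook
```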

Journal ArticleDOI
TL;DR: A new scheme that aims to cut down on the computational cost of the vector quantization (VQ) encoding procedure is proposed and it is shown that the new scheme outperforms all the other schemes proposed so far in speeding up the VQ encoding procedure.
Abstract: A new scheme that aims to cut down on the computational cost of the vector quantization (VQ) encoding procedure is proposed in this paper. In this scheme, the correlation between the codewords in t...

Journal ArticleDOI
TL;DR: An analytical approach through which the neural symbols and corresponding stimulus space of a neuron or neural ensemble can be discovered simultaneously and quantitatively, making few assumptions about the nature of the code or relevant features is discussed.
Abstract: We discuss an analytical approach through which the neural symbols and corresponding stimulus space of a neuron or neural ensemble can be discovered simultaneously and quantitatively, making few assumptions about the nature of the code or relevant features. The basis for this approach is to conceptualize a neural coding scheme as a collection of stimulus-response classes akin to a dictionary or 'codebook', with each class corresponding to a spike pattern 'codeword' and its corresponding stimulus feature in the codebook. The neural codebook is derived by quantizing the neural responses into a small reproduction set, and optimizing the quantization to minimize an information-based distortion function. We apply this approach to the analysis of coding in sensory interneurons of a simple invertebrate sensory system. For a simple sensory characteristic (tuning curve), we demonstrate a case for which the classical definition of tuning does not describe adequately the performance of the cell studied. Considering a more involved sensory operation (sensory discrimination), we also show that, for some cells in this system, a significant amount of information is encoded in patterns of spikes that would not be discovered through analyses based on linear stimulus-response measures.

Patent
12 Mar 2003
TL;DR: In this paper, an apparatus for processing adaptive codebook pitch lag from one CELP-based standard to another one based on the same coding scheme is presented. But the pitch lag selection module is adapted to select the desired pitch lag parameter.
Abstract: An apparatus for processing adaptive codebook pitch lag from one CELP based standard to another CELP based standard. The apparatus has various modules that perform at least the functionality described herein. The apparatus includes a time-base subframe inspection module, which is adapted to associate one or more incoming subframes with an outgoing subframe of a destination codec. The apparatus also has a decision module coupled to the time-base subframe inspection module. The decision module is adapted to determine a desired pitch lag parameter from a plurality of pitch lag parameters among respective two or more incoming subframes. The apparatus has a pitch lag selection module coupled to the decision module. The pitch lag selection module is adapted to select the desired pitch lag parameter.

Patent
Yang Gao, Adil Benyassine, Jes Thyssen, Eyal Shlomot, Huan-Yu Su
21 Apr 2003
TL;DR: In this paper, a speech compression system capable of encoding a speech signal into a bitstream for subsequent decoding to generate synthesized speech is disclosed, where the bitstream comprises a type component and a gain component.
Abstract: A speech compression system capable of encoding a speech signal into a bitstream for subsequent decoding to generate synthesized speech is disclosed. The bitstream comprises a type component and a gain component. The type component is representative of a type classification of a frame of the speech signal that is transmitted. The type component comprises a first type and a second type. The gain component represents an adaptive codebook gain and a fixed codebook gain; it comprises a fixed codebook gain component and an adaptive codebook gain component that are exclusively encoded as separate components of the bitstream as a function of the bit rate when the type classification is the second type.

Patent
28 Mar 2003
TL;DR: In this article, a maximum likelihood sequence estimator (MLSE) sub-receiver has an equalizer device responsive to input data for processing the same to generate equalized data, input data being generated from transmitted data by wireless transmission, and residual channel response to generate an MLSE codebook.
Abstract: A maximum likelihood sequence estimator (MLSE) sub-receiver having an MLSE equalizer device responsive to input data for processing the same to generate equalized data, said input data being generated from transmitted data by wireless transmission, said MLSE equalizer device processing said input data to generate residual channel response, said MLSE equalizer device using a known codebook and said residual channel response to generate an MLSE codebook, in accordance with an embodiment of the invention. The MLSE sub-receiver further includes an MLSE decoder responsive to said equalized data and said MLSE codebook for processing the same to determine maximum likelihood between said equalized data and said MLSE codebook, said MLSE decoder using said maximum likelihood for decoding said equalized data to generate a decoded transmitted data by mitigating the effects of multi-path communication channel due to wireless transmission of said transmitted data.

Journal ArticleDOI
24 Nov 2003
TL;DR: It has been found that the proposed algorithm, fuzzy reinforced learning vector quantisation (FRLVQ), yields an improved quality of codebook design in an image compression application when FRLVQ is used as a pre-process.
Abstract: A new approach to the design of optimised codebooks using vector quantisation (VQ) is presented. A strategy of reinforced learning (RL) is proposed which exploits the advantages offered by fuzzy clustering algorithms, competitive learning and knowledge of training vector and codevector configurations. Results are compared with the performance of the generalised Lloyd algorithm (GLA) and the fuzzy K-means (FKM) algorithm. It has been found that the proposed algorithm, fuzzy reinforced learning vector quantisation (FRLVQ), yields an improved quality of codebook design in an image compression application when FRLVQ is used as a pre-process. The investigations have also indicated that RL is insensitive to the selection of both the initial codebook and a learning rate control parameter, which is the only additional parameter introduced by RL relative to the standard FKM.
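
For context on the main baseline above, the generalised Lloyd algorithm (GLA) alternates nearest-codevector assignment with centroid updates. A compact sketch of that baseline, not of FRLVQ itself:

```python
import numpy as np

def gla(training, K, iters=20, seed=0):
    """Generalised Lloyd algorithm: k-means style codebook design."""
    training = np.asarray(training, dtype=float)
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), K, replace=False)].copy()
    for _ in range(iters):
        d = np.sum((training[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
        labels = np.argmin(d, axis=1)                 # nearest-codevector assignment
        for k in range(K):
            members = training[labels == k]
            if len(members) > 0:                      # keep the old codevector for empty cells
                codebook[k] = members.mean(axis=0)
    return codebook
```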

Journal ArticleDOI
TL;DR: A more effective AVQ system is obtained by combining together the history aid and the locality-based updating, which makes use of the information of previously coded vectors to quantize the current input vector if it is used to update the operational codebook.
Abstract: We propose two techniques that are applicable to any adaptive vector quantization (AVQ) system. The first one is called the locality-based codebook updating: when performing a codebook updating, we update the operational codebook using not only the current input vector but also the codewords at all positions within a selected neighboring area (called the locality), while the operational codebook is organized in a "cache" manner. This technique is rationalized by the high correlation across neighboring vectors, which facilitates a more efficient coding of the indices of the codewords chosen from the codebook. The second technique is called the history aid, which makes use of the information of previously coded vectors to quantize the current input vector if it is used to update the operational codebook. A more effective AVQ system is obtained by combining together the history aid and the locality-based updating. Extensive simulations are carried out to demonstrate the improved results achieved by our AVQ systems. Particularly, when the operational codebook size is relatively small, the improvement over a benchmark AVQ system - the generalized threshold replenishment (GTR) - is drastic. For example, when the size is 32, testing on a nonstationary signal (containing frames from different video sequences, ordered in the concatenating or interleaving format) shows that the combination of history aid and locality-based updating offers more than 4 dB gain over GTR at 0.5 bpp.

Proceedings ArticleDOI
06 Apr 2003
TL;DR: A new output-based method for assessing speech quality and evaluating its performance is proposed, based on comparing the output speech to an artificial reference signal representing the closest match from an appropriately formulated codebook.
Abstract: This paper proposes a new output-based method for assessing speech quality and evaluates its performance. The measure is based on comparing the output speech to an artificial reference signal representing the closest match from an appropriately formulated codebook. The codebook holds a number of optimally clustered speech parameter vectors, extracted from an undistorted clean speech database, and provides a reference for computing objective auditory distance measures for distorted speech. The median minimum distance is used as a measure of the objective auditory distance. The required clustering and matching processes are achieved by using an efficient data mining technique known as the self-organising map. Speech parameters derived from perceptual linear prediction (PLP) and bark spectrum analysis are used to provide speaker independent information as required by an output-based objective approach for speech quality measure.
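
A short sketch of the scoring step described above (my illustration): for each frame of the speech under test, take the minimum distance from its parameter vector to the clean-speech reference codebook, then report the median over frames as the objective auditory distance.

```python
import numpy as np

def median_min_distance(frames, codebook):
    """frames: (n_frames, dim) parameter vectors from the distorted speech;
    codebook: (n_codewords, dim) clustered clean-speech reference vectors."""
    mins = [np.min(np.linalg.norm(codebook - f, axis=1)) for f in frames]
    return float(np.median(mins))
```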

Journal ArticleDOI
TL;DR: It is investigated whether the squared error between an original signal and a phase-distorted signal is a perceptually relevant measure for distortions in the Fourier phase spectrum of periodic signals obtained from speech, and results indicate that for small values the squared error correlates well with the perceptual measures.
Abstract: Based on two well-known auditory models, it is investigated whether the squared error between an original signal and a phase-distorted signal is a perceptually relevant measure for distortions in the Fourier phase spectrum of periodic signals obtained from speech. Both the performance of phase vector quantizers and the direct relationship between the squared error and two perceptual distortion measures are studied. The results indicate that for small values the squared error correlates well to the perceptual measures. However, for large errors, an increase in squared error does not, on average, lead to an increase in the perceptual measures. Empirical rate-perceptual distortion curves and listening tests confirm that, for low to medium codebook sizes, the average perceived distortion does not decrease with increasing codebook size when the squared error is used as encoding criterion.

Journal ArticleDOI
15 Sep 2003
TL;DR: In this article, a universal quantization scheme based on random coding is proposed, which consists of a source-independent random codebook (typically mismatched to the source distribution), followed by optimal entropy coding that is matched to the quantized codeword distribution.
Abstract: We introduce a universal quantization scheme based on random coding, and we analyze its performance. This scheme consists of a source-independent random codebook (typically mismatched to the source distribution), followed by optimal entropy coding that is matched to the quantized codeword distribution. A single-letter formula is derived for the rate achieved by this scheme at a given distortion, in the limit of large codebook dimension. The rate reduction due to entropy coding is quantified, and it is shown that it can be arbitrarily large. In the special case of "almost uniform" codebooks (e.g., an independent and identically distributed (i.i.d.) Gaussian codebook with large variance) and difference distortion measures, a novel connection is drawn between the compression achieved by the present scheme and the performance of "universal" entropy-coded dithered lattice quantizers. This connection generalizes the "half-a-bit" bound on the redundancy of dithered lattice quantizers. Moreover, it demonstrates a strong notion of universality where a single "almost uniform" codebook is near optimal for any source and any difference distortion measure. The proofs are based on the fact that the limiting empirical distribution of the first matching codeword in a random codebook can be precisely identified. This is done using elaborate large deviations techniques, that allow the derivation of a new "almost sure" version of the conditional limit theorem.
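
A toy numerical sketch of the two-stage scheme described above (finite-dimensional and far from the asymptotic regime of the analysis, so purely illustrative): quantize source vectors with a source-independent random codebook, then measure the empirical entropy of the selected indices, which is the rate an ideal entropy coder matched to the quantized codeword distribution would approach.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_codewords, n_vectors = 8, 256, 2000
codebook = rng.normal(0.0, 3.0, size=(n_codewords, dim))   # mismatched i.i.d. Gaussian codebook
source = rng.laplace(0.0, 1.0, size=(n_vectors, dim))      # a source it was not designed for

# nearest-codeword quantization under squared-error distortion
d = np.sum((source[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
idx = np.argmin(d, axis=1)

# empirical entropy of the index stream (bits per source vector)
counts = np.bincount(idx, minlength=n_codewords)
p = counts[counts > 0] / n_vectors
rate = float(-(p * np.log2(p)).sum())
print(rate, np.log2(n_codewords))   # the gap is the saving from entropy coding the indices
```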

Patent
13 Mar 2003
TL;DR: In this article, an efficient method for codebook search, employed in speech coding, uses an optimal pulse-position grouping and a split track arrangement, based on a likelihood estimator, and also disclosed are codecs, mobile voice communication devices, telecommunications equipment and telecommunications methods.
Abstract: An efficient method for codebook search, employed in speech coding, uses an optimal pulse-position grouping and a split track arrangement, based on a likelihood estimator. Also disclosed are codecs, mobile voice communication devices, telecommunications equipment and telecommunications methods.