Showing papers on "Codebook published in 2004"


Proceedings ArticleDOI
24 Oct 2004
TL;DR: A new fast algorithm for background modeling and subtraction that handles scenes containing moving backgrounds or illumination variations (shadows and highlights) and achieves robust detection for compressed videos.
Abstract: We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.
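
The per-pixel quantization step lends itself to a compact implementation. Below is a minimal grayscale sketch of the codebook idea, assuming a plain intensity-threshold match; the full method described above also handles color and shadow/highlight variation, which are omitted here, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def train_codebook(samples, eps=10.0):
    """Quantize the background samples seen at one pixel into a codebook.

    samples: 1-D array of intensities observed over time.
    eps: match threshold; a sample within eps of a codeword updates it,
    otherwise it spawns a new codeword.
    """
    codebook = []  # each codeword: [running mean, count, min, max]
    for v in samples:
        for cw in codebook:
            if abs(v - cw[0]) <= eps:
                cw[0] = (cw[0] * cw[1] + v) / (cw[1] + 1)  # running mean
                cw[1] += 1
                cw[2] = min(cw[2], v)
                cw[3] = max(cw[3], v)
                break
        else:
            codebook.append([v, 1, v, v])
    return codebook

def is_background(value, codebook, eps=10.0):
    """Foreground/background test: background if any codeword matches."""
    return any(abs(value - cw[0]) <= eps for cw in codebook)

# toy pixel alternating between two background modes (periodic-like motion)
rng = np.random.default_rng(0)
history = np.concatenate([rng.normal(50, 2, 100), rng.normal(120, 2, 100)])
cb = train_codebook(history)
print(len(cb), is_background(51, cb), is_background(200, cb))
```

The two intensity modes compress into two codewords, so the long sample history is stored in a few numbers per pixel, which is the memory advantage the abstract describes.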

412 citations


Proceedings ArticleDOI
01 Dec 2004
TL;DR: This work first considers multi-antenna beamformed transmissions through independent and identically distributed (i.i.d.) Rayleigh fading channels, upper-bounds the rate distortion function of the vector source, and lower-bounds the operational rate distortion performance achieved by the generalized Lloyd's algorithm.
Abstract: We deal with the design and performance analysis of transmit-beamformers for multi-input multi-output (MIMO) systems, based on bandwidth-limited information that is fed back from the receiver to the transmitter. By casting the design of transmit-beamforming based on limited-rate feedback as an equivalent sphere vector quantization (SVQ) problem, we first consider multi-antenna beamformed transmissions through independent and identically distributed (i.i.d.) Rayleigh fading channels. We upper-bound the rate distortion function of the vector source, and also lower-bound the operational rate distortion performance achieved by the generalized Lloyd's algorithm. A simple and valuable relationship emerges between the theoretical distortion limit and the achievable performance, and the average signal-to-noise ratio (SNR) performance is accurately quantified. Finally, we study beamformer codebook designs for correlated Rayleigh fading channels, and derive a low-complexity codebook design that achieves near optimal performance.
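
As a rough illustration of a generalized Lloyd iteration for beamformer codebooks, the sketch below alternates nearest-codeword assignment (by beamforming gain) with a dominant-eigenvector centroid update over synthetic i.i.d. Rayleigh channels. This is a generic sketch under those assumptions, not the authors' exact quantizer; function and parameter names are illustrative.

```python
import numpy as np

def lloyd_beamformer_codebook(H, num_codewords, iters=30, seed=0):
    """Generalized Lloyd design of a unit-norm beamforming codebook.

    H: (num_samples, nt) training channel vectors.
    Assignment: each h goes to the codeword maximizing |w^H h|^2.
    Update: each codeword becomes the dominant eigenvector of the
    sample correlation matrix of its assigned channels.
    """
    rng = np.random.default_rng(seed)
    nt = H.shape[1]
    W = rng.standard_normal((num_codewords, nt)) \
        + 1j * rng.standard_normal((num_codewords, nt))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(iters):
        gains = np.abs(H @ W.conj().T) ** 2          # (samples, codewords)
        assign = gains.argmax(axis=1)
        for k in range(num_codewords):
            Hk = H[assign == k]
            if len(Hk) == 0:
                continue                             # empty cell: keep codeword
            R = Hk.conj().T @ Hk                     # class correlation matrix
            _, vecs = np.linalg.eigh(R)
            W[k] = vecs[:, -1]                       # dominant eigenvector
    return W

# 4 transmit antennas, 3-bit codebook, synthetic i.i.d. Rayleigh training set
rng = np.random.default_rng(1)
H = (rng.standard_normal((2000, 4)) + 1j * rng.standard_normal((2000, 4))) / np.sqrt(2)
W = lloyd_beamformer_codebook(H, num_codewords=8)
print(np.abs(H @ W.conj().T).max(axis=1).mean())     # average matched gain
```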

158 citations


Patent
Qinghua Li, Xintian E. Lin
10 Sep 2004
TL;DR: In this paper, a column of a beamforming matrix is quantized using a codebook, a Householder reflection is performed on the beamforming matrix to reduce its dimensionality, and the quantizing and Householder reflection steps are recursively repeated on the dimensionality-reduced matrix to obtain a further reduction of dimensionality.
Abstract: Feedback bandwidth may be reduced in a closed loop MIMO system by Householder transformations, vector quantization using codebooks, and down-sampling in the frequency domain. A column of a beamforming matrix is quantized using a codebook, a Householder reflection is performed on the beamforming matrix to reduce the dimensionality of the beamforming matrix, and the quantizing and performing of Householder reflection on the previously dimensionality-reduced beamforming matrix are recursively repeated to obtain a further reduction of dimensionality of the beamforming matrix. These actions are performed for a subset of orthogonal frequency division multiplexing (OFDM) carriers, and quantized column vectors for the subset of OFDM carriers are transmitted.
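
The recursion the claim describes can be outlined compactly. The sketch below is a simplified, hypothetical rendering: quantize the leading column against a vector codebook, apply a Householder reflection that rotates that codeword onto the first axis, discard the first row and column, and repeat on the shrunken matrix. The random codebooks are placeholders, not the patent's.

```python
import numpy as np

def householder(u):
    """Unitary H with H @ u = phi * e1 for a unit-norm complex vector u."""
    e1 = np.zeros_like(u)
    e1[0] = np.exp(1j * np.angle(u[0]))   # phase-align to avoid cancellation
    w = u - e1
    n = np.linalg.norm(w)
    if n < 1e-12:
        return np.eye(len(u), dtype=complex)
    w /= n
    return np.eye(len(u), dtype=complex) - 2.0 * np.outer(w, w.conj())

def quantize_matrix(B, codebooks):
    """Recursively quantize the columns of a beamforming matrix B.

    codebooks: list of unit-norm vector codebooks, one per recursion level
    (the dimension shrinks by one each level). Returns the chosen codeword
    indices -- the feedback payload.
    """
    indices = []
    for cb in codebooks:
        v = B[:, 0]
        idx = int(np.argmax(np.abs(cb @ v.conj())))   # closest codeword
        indices.append(idx)
        H = householder(cb[idx])
        B = (H @ B)[1:, 1:]                           # drop first row/column
        if B.size == 0:
            break
    return indices

# toy example: 4x2 beamforming matrix, random 3-bit codebooks (hypothetical)
rng = np.random.default_rng(0)
def rand_cb(n, size=8):
    C = rng.standard_normal((size, n)) + 1j * rng.standard_normal((size, n))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

Q, _ = np.linalg.qr(rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)))
print(quantize_matrix(Q, [rand_cb(4), rand_cb(3)]))
```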

75 citations


Journal ArticleDOI
TL;DR: A framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented, which is applicable to many other document archives using different scripts.
Abstract: There is an accelerating demand to access the visual content of documents stored in historical and cultural archives. Availability of electronic imaging tools and effective image processing techniques makes it feasible to process the multimedia data in large databases. A framework for content-based retrieval of historical documents in the Ottoman Empire archives is presented. The documents are stored as textual images, which are compressed by constructing a library of symbols occurring in a document, and the symbols in the original image are then replaced with pointers into the codebook to obtain a compressed representation of the image. The features in wavelet and spatial domains, based on angular and distance span of shapes, are used to extract the symbols. To enable content-based retrieval in the historical archives, a query is specified as a rectangular region in an input image and the same symbol-extraction process is applied to the query region. The queries are processed on the codebook of documents and the query images are identified in the resulting documents using the pointers in the textual images. The query process does not require decompression of images. The new content-based retrieval framework is also applicable to many other document archives using different scripts.
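
A toy version of the symbol-library idea, assuming the symbols have already been segmented into equal-size bitmaps; the wavelet/spatial features the paper uses are replaced here by a plain pixel-mismatch test, so this only illustrates the pointer-into-codebook structure.

```python
import numpy as np

def build_library(symbols, max_dist=0.05):
    """Greedy symbol-library construction for textual image compression.

    symbols: list of equally sized binary bitmaps (already segmented).
    Each symbol is replaced by a pointer into the library; a new library
    entry is created only when no existing entry is close enough.
    Returns (library, pointers).
    """
    library, pointers = [], []
    for s in symbols:
        best, best_d = -1, np.inf
        for i, ref in enumerate(library):
            d = np.mean(s != ref)          # fraction of mismatched pixels
            if d < best_d:
                best, best_d = i, d
        if best_d <= max_dist:
            pointers.append(best)          # reuse an existing symbol
        else:
            library.append(s)              # register a new symbol
            pointers.append(len(library) - 1)
    return library, pointers

# toy example: three occurrences of two distinct 8x8 "glyphs"
rng = np.random.default_rng(0)
a, b = rng.integers(0, 2, (8, 8)), rng.integers(0, 2, (8, 8))
lib, ptrs = build_library([a, b, a])
print(len(lib), ptrs)   # 2 [0, 1, 0]
```

Queries then run directly on the library and pointer lists, which is why retrieval needs no decompression.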

68 citations


PatentDOI
TL;DR: In this paper, sound source separation using convolutional mixing independent component analysis based on a priori knowledge of the target sound source is disclosed, where a vector quantization codebook of vectors representing typical sound source patterns is used to determine whether proper separation has occurred.
Abstract: Sound source separation, without permutation, using convolutional mixing independent component analysis based on a priori knowledge of the target sound source is disclosed. The target sound source can be a human speaker. The reconstruction filters used in the sound source separation take into account the a priori knowledge of the target sound source, such as an estimate of the spectrum of the target sound source. The filters may be generally constructed based on a speech recognition system. Matching the words of the dictionary of the speech recognition system to a reconstructed signal indicates whether proper separation has occurred. More specifically, the filters may be constructed based on a vector quantization codebook of vectors representing typical sound source patterns. Matching the vectors of the codebook to a reconstructed signal indicates whether proper separation has occurred. The vectors may be linear prediction vectors, among others.

62 citations


Patent
Ada S. Y. Poon
23 Jun 2004
TL;DR: In this article, stations in an N×N multiple-input-multiple-output (MIMO) wireless network search codewords in a codebook to determine which codeword is closest to a desired pre-coding matrix on a Grassmann manifold.
Abstract: Stations in an N×N multiple-input-multiple-output (MIMO) wireless network search codewords in a codebook to determine which codeword is closest to a desired pre-coding matrix on a Grassmann manifold. An index or indices corresponding to the codeword is transmitted from a receiver to a transmitter to identify a codeword to be used for transmit beamforming.
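
A sketch of the codeword search, using the chordal distance as a stand-in for distance on the Grassmann manifold (the patent text above does not fix a particular metric, so this choice is an assumption); the random codebook is a placeholder.

```python
import numpy as np

def chordal_dist(A, B):
    """Chordal distance between the subspaces spanned by the orthonormal
    columns of A and B: sqrt(p - ||A^H B||_F^2) for p columns."""
    p = A.shape[1]
    return np.sqrt(max(0.0, p - np.linalg.norm(A.conj().T @ B, 'fro') ** 2))

def nearest_codeword(codebook, V):
    """Index of the codebook matrix closest to the desired precoder V;
    only this index needs to be fed back to the transmitter."""
    return int(np.argmin([chordal_dist(C, V) for C in codebook]))

# hypothetical 4-bit codebook of 4x2 precoding matrices
rng = np.random.default_rng(0)
def rand_orthonormal(n, p):
    A = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
    return np.linalg.qr(A)[0]

codebook = [rand_orthonormal(4, 2) for _ in range(16)]
V = rand_orthonormal(4, 2)                 # desired pre-coding matrix
print(nearest_codeword(codebook, V))       # feedback index
```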

56 citations


Journal ArticleDOI
TL;DR: The proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates and two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image are proposed.
Abstract: We propose a new scheme of designing a vector quantizer for image compression. First, a set of codevectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each code vector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and the difference-coded mean values of the blocks are used to achieve better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme can produce reconstructed images of good quality while achieving compression at low bit rates.
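
As a rough sketch of the first stage only, the code below trains a small codebook with a 1-D self-organizing feature map on mean-removed image blocks. The learning-rate and neighborhood schedules are generic assumptions; the cubic-surface modeling, Huffman coding, and mean difference-coding stages of the scheme are omitted.

```python
import numpy as np

def sofm_codebook(X, k, iters=5000, seed=0):
    """1-D self-organizing feature map used as a VQ codebook trainer.

    X: (n, d) mean-removed training blocks. The winner and its chain
    neighbors move toward each sample; the learning rate and the
    neighborhood radius both shrink over time.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), k, replace=False)].astype(float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        j = np.argmin(((W - x) ** 2).sum(axis=1))       # winning codevector
        lr = 0.5 * (1 - t / iters)                      # decaying rate
        radius = max(1, int(k / 4 * (1 - t / iters)))   # shrinking neighborhood
        for i in range(max(0, j - radius), min(k, j + radius + 1)):
            W[i] += lr * np.exp(-abs(i - j)) * (x - W[i])
    return W

# mean-removed 4x4 blocks from a synthetic training image
rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)
blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)
blocks -= blocks.mean(axis=1, keepdims=True)            # remove block means
codebook = sofm_codebook(blocks, k=32)
print(codebook.shape)   # (32, 16): 32 codevectors of 4x4 blocks
```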

55 citations


Proceedings ArticleDOI
17 May 2004
TL;DR: A practical multi-rate quantization system based on Voronoi extension and derived from the lattice RE₈.
Abstract: We present a new method, called Voronoi extension, for the design of low-complexity multi-rate lattice vector quantization (VQ). With this technique, lattice codebooks of arbitrarily large bit rates can be generated algorithmically and the problem of lattice codebook overload can be bypassed. We describe a practical multi-rate quantization system based on Voronoi extension and derived from the lattice RE₈. This system is applied to the TCX coding model using pitch prediction, so as to extend AMR-WB speech coding at high bit rates (in particular 32 kbit/s).
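
Voronoi coding is easiest to see on the integer lattice, so the sketch below substitutes Z^n for the paper's RE₈ (a deliberate simplification): a lattice point inside the scaled Voronoi region is indexed by its componentwise residues, and the "extension" step grows the rate until the point fits, which is how codebook overload is bypassed.

```python
import numpy as np

def voronoi_encode(k, r):
    """Index a Z^n point inside the Voronoi codebook of rate r bits/dim.

    For Z^n the codebook is the set of integer points in the Voronoi
    region of the scaled lattice (2^r)Z^n, and the index is simply the
    residue of each component mod 2^r.
    """
    return np.mod(k, 2 ** r)

def voronoi_decode(m, r):
    """Pull each residue back into the cell around the origin."""
    M = 2 ** r
    return (m - M * np.round(m / M)).astype(int)

def encode_with_extension(k, r=2):
    """Voronoi extension: grow the codebook until the point is inside it
    (no overload); return the index and the rate actually used."""
    while np.any(voronoi_decode(voronoi_encode(k, r), r) != k):
        r += 1
    return voronoi_encode(k, r), r

k = np.array([5, -3, 0, 9])          # lattice point to be indexed
idx, r = encode_with_extension(k)
print(idx, r, voronoi_decode(idx, r))   # the decode recovers k exactly
```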

55 citations



Patent
Milan Jelinek, Redwan Salami
12 Mar 2004
TL;DR: In this article, a gain quantization method and device for implementation in a technique for coding a sampled sound signal processed, during coding, by successive frames of L samples, wherein each frame is divided into a number of subframes and each subframe comprises a number N of samples, where N < L.
Abstract: The present invention relates to a gain quantization method and device for implementation in a technique for coding a sampled sound signal processed, during coding, by successive frames of L samples, wherein each frame is divided into a number of subframes and each subframe comprises a number N of samples, where N < L.

48 citations


Proceedings ArticleDOI
27 Jun 2004
TL;DR: By choosing the size of the codebook large enough, the rate that is reliably realized by the strategy can be made to approach arbitrarily closely the mutual information between channel input and output induced by the user-chosen input distribution.
Abstract: We present a strategy for the reliable communication of a message, in a variable number of channel uses, over an unknown discrete memoryless channel (DMC). The decoder periodically tests the received sequence and, when it can decode, sends an acknowledgment to the transmitter, which then stops transmitting. By choosing the size of the codebook large enough, the rate that is reliably realized by the strategy can be made to approach arbitrarily closely the mutual information between channel input and output induced by the user-chosen input distribution. The strategy presented can be considered as a generalization to arbitrary unknown DMCs of earlier variable length coding schemes, such as digital fountain codes for binary erasure channels (BECs), and a coding strategy for binary symmetric channels (BSCs) presented by Tchamkerten and Telatar.

01 Jan 2004
TL;DR: This paper derives capacity expressions for perfectly-secure steganographic systems by exploiting the fact that the warden may be passive, or active using a memoryless attack channel, oractive using an arbitrarily varying channel.
Abstract: This paper extends recent results on steganographic capacity. We derive capacity expressions for perfectly-secure steganographic systems. The warden may be passive, or active using a memoryless attack channel, or active using an arbitrarily varying channel. Neither encoder nor decoder know which channel was selected by the warden. In some cases, the steganographic constraint does not result in any capacity loss. To achieve steganographic capacity, encoder and decoder generally need to share a secret codebook.

Patent
10 Jul 2004
TL;DR: In this article, a speech encoder that analyzes and classifies each frame of speech as being periodic-like speech or non-periodic like speech is presented, where the encoder performs a different gain quantization process depending if the speech is periodic or not.
Abstract: A speech encoder that analyzes and classifies each frame of speech as being periodic-like speech or non-periodic like speech where the speech encoder performs a different gain quantization process depending if the speech is periodic or not. If the speech is periodic, the improved speech encoder obtains the pitch gains from the unquantized weighted speech signal and performs a pre-vector quantization of the adaptive codebook gain GP for each subframe of the frame before subframe processing begins and a closed-loop delayed decision vector quantization of the fixed codebook gain GC. If the frame of speech is non-periodic, the speech encoder may use any known method of gain quantization. The result of quantizing gains of periodic speech in this manner results in a reduction of the number of bits required to represent the quantized gain information and for periodic speech, the ability to use the quantized pitch gain for the current subframe to search the fixed codebook for the fixed codebook excitation vector for the current subframe. Alternatively, the new gain quantization process which was used only for periodic signals may be extended to non-periodic signals as well. This second strategy results in a slightly higher bit rate than that for periodic signals that use the new gain quantization strategy, but is still lower than the prior art's bit rate. Yet another alternative is to use the new gain quantization process for all speech signals without distinguishing between periodic and non-periodic signals.

Patent
09 Jul 2004
TL;DR: In this paper, a coding apparatus including a base layer, a speech quality enhancement layer, and a multiplexer is used to filter an input speech signal using linear prediction coding and generate an excitation signal corresponding to the filtered speech signal through a fixed codebook search.
Abstract: A coding apparatus including a base layer, a speech quality enhancement layer, and a multiplexer. The base layer filters an input speech signal using linear prediction coding and generates an excitation signal corresponding to the filtered speech signal through a fixed codebook search and an adaptive codebook search. The speech quality enhancement layer searches a fixed codebook using parameters obtained through the fixed codebook search in the base layer, or searches the fixed codebook using a target signal, which is obtained by removing a contribution of a fixed codebook of the base layer and a signal which is obtained by synthesizing and filtering a previous fixed codebook of the speech quality enhancement layer from a target signal for the fixed codebook search of the base layer. The multiplexer multiplexes signals generated by the base layer and the at least one speech quality enhancement layer.

Patent
09 Jan 2004
TL;DR: In this article, the authors propose a method and apparatus for a voice transcoder that converts a bitstream representing frames of data encoded according to a first voice compression standard to a binary representation of the data using perceptual weighting that uses tuned weighting factors to produce a higher quality decoded voice signal than a comparable tandem transcoding solution.
Abstract: A method and apparatus for a voice transcoder that converts a bitstream representing frames of data encoded according to a first voice compression standard to a bitstream representing frames of data according to a second voice compression standard using perceptual weighting that uses tuned weighting factors, such that the bitstream of a second voice compression standard to produce a higher quality decoded voice signal than a comparable tandem transcoding solution. The method includes pre-computing weighting factors for a perceptual weighting filter optimized to a specific source and destination codec pair, pre-configuring the transcoding strategies, mapping CELP parameters in the CELP parameter space according to the selected coding strategy, performing Linear Prediction analysis if specified by the transcoding strategy, perceptually weighting the speech using with tuned weighting factors, and searching for adaptive codebook and fixed-codebook parameters to obtain a quantized set of destination codec parameters.

Journal ArticleDOI
TL;DR: The proposed wavelet-based adaptive vector quantizer incorporates a distortion-constrained codebook replenishment (DCCR) mechanism to meet a user-defined quality demand in peak signal-to-noise ratio and an iterative fast searching algorithm to find the desired signal quality along an energy-quality curve instead of a traditional rate-distortion curve.
Abstract: The enormous data volume of volumetric medical images (VMI) brings a transmission and storage problem that can be solved by using a compression technique. For the lossy compression of a very long VMI sequence, automatically maintaining the diagnosis features in reconstructed images is essential. The proposed wavelet-based adaptive vector quantizer incorporates a distortion-constrained codebook replenishment (DCCR) mechanism to meet a user-defined quality demand in peak signal-to-noise ratio. Combining a codebook updating strategy and the well-known set partitioning in hierarchical trees (SPIHT) technique, the DCCR mechanism provides an excellent coding gain. Experimental results show that the proposed approach is superior to the pure SPIHT and the JPEG2000 algorithms in terms of coding performance. We also propose an iterative fast searching algorithm to find the desired signal quality along an energy-quality curve instead of a traditional rate-distortion curve. The algorithm performs the quality control quickly, smoothly, and reliably.

Proceedings ArticleDOI
13 Nov 2004
TL;DR: A new piecewise dimensionality reduction technique that is based on Vector Quantization, which generally outperforms PCA and its variants in similarity searches and is improved due to the significantly lower dimensionality of the new representation.
Abstract: Efficiently searching for similarities among time series and discovering interesting patterns is an important and non-trivial problem with applications in many domains. The high dimensionality of the data makes the analysis very challenging. To solve this problem, many dimensionality reduction methods have been proposed. PCA (Piecewise Constant Approximation) and its variants have been shown to be efficient in time series indexing and similarity retrieval. However, in certain applications, too many false alarms introduced by the approximation may reduce the overall performance dramatically. In this paper, we introduce a new piecewise dimensionality reduction technique that is based on Vector Quantization. The new technique, PVQA (Piecewise Vector Quantized Approximation), partitions each sequence into equi-length segments and uses vector quantization to represent each segment by the closest (based on a distance metric) codeword from a codebook of key-sequences. The efficiency of calculations is improved due to the significantly lower dimensionality of the new representation. We demonstrate the utility and efficiency of the proposed technique on real and simulated datasets. By exploiting prior knowledge about the data, the proposed technique generally outperforms PCA and its variants in similarity searches.
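
A compact sketch of the PVQA representation as described: train a codebook of key-segments, then encode each series as a short string of codeword indices. Plain k-means (Lloyd) is used as the codebook trainer here, which is an assumption for illustration rather than the paper's prescribed procedure.

```python
import numpy as np

def train_pvqa_codebook(sequences, seg_len, k, iters=50, seed=0):
    """Train a codebook of key-segments with plain k-means.

    sequences: (n, L) array with L divisible by seg_len.
    Returns a (k, seg_len) codebook of representative segments.
    """
    segs = sequences.reshape(-1, seg_len)
    rng = np.random.default_rng(seed)
    C = segs[rng.choice(len(segs), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((segs[:, None, :] - C[None]) ** 2).sum(-1)   # all distances
        a = d.argmin(1)                                   # nearest codeword
        for j in range(k):
            if np.any(a == j):
                C[j] = segs[a == j].mean(0)               # centroid update
    return C

def pvqa_encode(x, C):
    """Replace each equi-length segment of x with its closest codeword index."""
    segs = x.reshape(-1, C.shape[1])
    return ((segs[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)

rng = np.random.default_rng(1)
data = np.cumsum(rng.standard_normal((200, 64)), axis=1)  # random-walk series
C = train_pvqa_codebook(data, seg_len=8, k=16)
print(pvqa_encode(data[0], C))   # a 64-point series becomes 8 small indices
```

Distances between series can then be lower-bounded from precomputed codeword-to-codeword distances, which is where the speedup over working in the raw dimensionality comes from.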

Patent
Asela Gunawardana
23 Apr 2004
TL;DR: In this article, an adaptation transform is estimated, and it is applied to codewords in the codebooks, rather than to the means themselves, and a codebook is generated for each subspace.
Abstract: The present invention is used to adapt acoustic models, quantized in subspaces, using adaptation training data (such as speaker-dependent training data). The acoustic model is compressed into multi-dimensional subspaces. A codebook is generated for each subspace. An adaptation transform is estimated, and it is applied to codewords in the codebooks, rather than to the means themselves.

01 Jan 2004
TL;DR: Experimental results show that the quantization of zerotree vectors using ACS outperforms, in most cases, its traditionally used Linde-Buzo-Gray (LBG) counterpart.
Abstract: Ant colony system (ACS) is a combinatorial optimization method motivated by the behaviour of real ants. In this paper, we present a novel image coding method based on ACS vector quantization of groups of wavelet coefficients. The generation of the codebook using ACS is facilitated by representing the coefficient vectors in a bidirectional graph, followed by defining a suitable mechanism of depositing pheromone on the edges of the graph. Experimental results show that the quantization of zerotree vectors using ACS outperforms, in most cases, its traditionally used Linde-Buzo-Gray (LBG) counterpart.

Journal ArticleDOI
TL;DR: This paper proposes Karhunen-Loève transform (KLT)-based adaptive classified VQ (CVQ), where the space-filling advantage can be utilized since the Voronoi-region shape is not affected by the KLT.
Abstract: Compared to scalar quantization (SQ), vector quantization (VQ) has memory, space-filling, and shape advantages. If the signal statistics are known, direct vector quantization (DVQ) according to these statistics provides the highest coding efficiency, but requires unmanageable storage requirements if the statistics are time varying. In code-excited linear predictive (CELP) coding, a single "compromise" codebook is trained in the excitation-domain and the space-filling and shape advantages of VQ are utilized in a nonoptimal, average sense. In this paper, we propose Karhunen-Loève transform (KLT)-based adaptive classified VQ (CVQ), where the space-filling advantage can be utilized since the Voronoi-region shape is not affected by the KLT. The memory and shape advantages can be also used, since each codebook is designed based on a narrow class of KLT-domain statistics. We further improve basic KLT-CVQ with companding. The companding utilizes the shape advantage of VQ more efficiently. Our experiments show that KLT-CVQ provides a higher SNR than basic CELP coding, and has a computational complexity similar to DVQ and much lower than CELP. With companding, even single-class KLT-CVQ outperforms CELP, both in terms of SNR and codebook search complexity.

Journal ArticleDOI
TL;DR: A new scheme of codebook initialization in which the competitive learning and code vector splitting are incorporated together to produce a good initial codebook is presented.
Abstract: Codebook initialization usually has a significant effect on the performance of vector quantization algorithms. This letter presents a new scheme of codebook initialization in which the competitive learning and code vector splitting are incorporated together to produce a good initial codebook. Based mainly on the geometrical measurements of the learning tracks of the code vectors, the competitive splitting mechanism shows an ability to appropriately allocate code vectors according to the spatial distribution of the input data and, therefore, tends to give a better initial codebook. Comparisons with other initialization techniques demonstrate the effectiveness of the new scheme.
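
For contrast with the letter's competitive-splitting scheme, here is the classical splitting initialization it builds on: start from the global centroid and repeatedly split every code vector into a perturbed pair, refining with a few Lloyd steps. This sketch splits all vectors uniformly; the selective, learning-track-guided splitting described above is not reproduced.

```python
import numpy as np

def split_initialize(X, k, eps=0.01, lloyd_iters=10):
    """LBG-style codebook initialization by repeated code-vector splitting.

    X: (n, d) training vectors. At each stage every code vector is split
    into a (1+eps)/(1-eps) perturbed pair, then a few Lloyd iterations
    redistribute the code vectors before the next split.
    """
    C = X.mean(axis=0, keepdims=True)          # stage 0: global centroid
    while len(C) < k:
        C = np.concatenate([C * (1 + eps), C * (1 - eps)])[:k]
        for _ in range(lloyd_iters):
            a = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
            for j in range(len(C)):
                if np.any(a == j):
                    C[j] = X[a == j].mean(0)   # centroid condition
    return C

# toy data with four clusters along the diagonal
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2)) + rng.integers(0, 4, (1000, 1)) * 3.0
print(split_initialize(X, k=4).round(2))
```

The letter's point is that *which* vectors get split, and where the new vectors land, should follow the spatial distribution of the data rather than this uniform rule.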

Proceedings ArticleDOI
17 May 2004
TL;DR: The paper proposes a new output-based system for prediction of subjective speech quality, and evaluates its performance, showing that the proposed system is robust against speaker, utterance and distortion variations.
Abstract: The paper proposes a new output-based system for prediction of subjective speech quality, and evaluates its performance. The system is based on computing objective distance measures, such as the median minimum distance, between perceptually-based parameter vectors representing the voiced parts of the speech signal and appropriately matched reference vectors extracted from a pre-formulated codebook. The distance measures are then mapped into equivalent mean opinion scores (MOS) using regression. The codebook of the system is formed by optimally clustering the large number of speech parameter vectors extracted from an undistorted source speech database. The required clustering and matching processes are achieved by using an efficient data mining technique known as the self-organising map. The perceptually-based speech parameters are derived using perceptual linear prediction (PLP) and bark spectrum analyses. Reported evaluation results show that the proposed system is robust against speaker, utterance and distortion variations.

01 Jan 2004
TL;DR: Performance results based on simulation show that MCOQD provides more robust quantizers than MDSQ, and the proposed scheme is based on extending the channel-optimized scalar quantization scheme of Farvardin and Vaishampayan to multiple parallel channels.
Abstract: This paper extends the channel optimized quantization scheme of Farvardin and Vaishampayan [1] to two parallel channels. The extended multiple-channel optimized quantizer design (MCOQD) framework is applied to discrete memoryless channels with erasures. The resultant MCOQD subsumes the multiple description scalar quantizer (MDSQ) design of Vaishampayan [2]. While MDSQ is suited to only on-off channels, MCOQD accounts for both erasure and symbol errors. Performance results based on simulation show that MCOQD provides more robust quantizers than MDSQ. I. INTRODUCTION. Multiple description coding (MDC) is a method of communicating information from a source over two or more channels such that the information received from any subset of channels can be used for source reconstruction, and the reconstruction quality improves with the size of the subset. Most existing MDC schemes (e.g., [2], [3]) consider only on-off channels (i.e., erasure channels), and are not suited to channels with symbol or bit errors. In [4], multiple description scalar quantization is combined with error correcting codes and then applied to channels with bit errors. They reported that the MD codes considered were outperformed by single description codes. In this paper we consider the more general problem of designing an MDC scheme for both erasure errors and symbol errors. The proposed scheme is based on extending the channel-optimized scalar quantization scheme of [1] to multiple parallel channels; hence, we call the new scheme multiple-channel optimized quantizer design (MCOQD). We show in this paper that the multiple description scalar quantizer (MDSQ) design of Vaishampayan [2] is subsumed by MCOQD. MCOQD specializes to MDSQ design when the channel is free of symbol or bit errors. We introduce MCOQD in the next section. As per the usual practice, we gain insights by first studying the two-channel case. In Section 3, the features of MCOQD are compared with MDSQ, in order to determine how they are related. Simulation results are provided in Section 4 for comparing the performance of the two approaches and conclusions are drawn in Section 5. II. MDC WITH MCOQD. Firstly, we set up the MDC framework by reviewing the MDSQ scheme of Vaishampayan [2], a block diagram of which is shown in Fig. 1. The encoder is shown to comprise two parts. A scalar quantizer maps a source sample x to the nearest codeword in a codebook such that the quantization error is minimized. The quantizer codeword index is then mapped to two channel codewords, each to be sent over a separate independent erasure channel. If both channel codewords are received, the central decoder reconstructs the source as x̂₀. If only one codeword is received, a side decoder reconstructs the source as either x̂₁ or x̂₂. The expected quality of x̂₀ …

Journal ArticleDOI
TL;DR: In the proposed procedure, the sequential presentation of training vectors is controlled according to an external, user-defined criterion, and the new training scheme is applied to the problem of codebook design using the neural gas network.

Journal ArticleDOI
TL;DR: While central quantizer cells on a uniform lattice are asymptotically optimal in high dimensions, the present authors have shown that by using nonuniform rather than uniform central quantizer cells, the central-side distortion product in an MDSQ can be reduced by 0.4 dB at asymptotically high rate.
Abstract: The asymptotic analysis of multiple-description vector quantization (MDVQ) with a lattice codebook for sources with smooth probability density functions (pdfs) is considered in this correspondence. Goyal et al. (2002) observed that as the side distortion decreases and the central distortion correspondingly increases, the quantizer cells farther away from the coarse lattice points shrink in a spatially periodic pattern. In this correspondence, two special classes of index assignments are used along strategic groupings of central quantizer cells to derive a straightforward asymptotic analysis, which provides an analytical explanation for the aforementioned observation. MDVQ with a lattice codebook was shown earlier to be asymptotically optimal in high dimensions, with a curious converging property, that the side quantizers achieve the space filling advantage of an n-dimensional sphere instead of an n-dimensional optimal polytope. The analysis presented here explains this behavior readily. While central quantizer cells on a uniform lattice are asymptotically optimal in high dimensions, the present authors have shown that by using nonuniform rather than uniform central quantizer cells, the central-side distortion product in an MDSQ can be reduced by 0.4 dB at asymptotically high rate. The asymptotic analysis derived here partially unifies these previous results in the same framework, though a complete characterization is still beyond reach.

Proceedings ArticleDOI
20 Jun 2004
TL;DR: This paper studies the optimal power distribution scheme and finds that it applies the following principle: more power is allocated to symbols corresponding to better estimates at the receiver, while keeping the average energy constraint satisfied within a period.
Abstract: In communication over time-varying Rayleigh fading channels, adaptive coded modulation for pilot symbol assisted modulation (PSAM) without feedback has been shown to yield significant benefits in terms of achievable rates (M. Medard et al., 2000). This technique adapts transmission rate at the sender to the quality of the channel estimate at the receiver but keeps the mean power constant throughout. In this paper, we show that this adaptive PSAM scheme can be further improved if power and rate are jointly adapted to the quality of the measurement at the receiver. We study the optimal power distribution scheme and find it to apply the following principle: more power is allocated to symbols corresponding to better estimates at the receiver, while maintaining the average energy constraint satisfied within a period. We find that this simple scheme is optimal and performs better than adapting to channel quality using schemes akin to 'water filling'. Our model is a Rayleigh fading channel where time-variance is described by a Gauss-Markov model (M. Medard, 2000). The transmitter periodically sends pilot tones to measure the channel at the receiver. We interleave different codes, while maintaining the power constant over a codebook, and the average power over codebooks satisfying the constraint. Performance is quantified in terms of achievable rates. Our scheme does not require any real time computation or adaptation at the transmitter, and so comes at no extra cost with respect to (M. Medard et al., 2000). When considering causal and non causal estimation strategies at the receiver, considerable improvement was attained without any added complexity.

Book ChapterDOI
16 May 2004
TL;DR: A background maintenance model defined by a finite set of codebook vectors in the spatial-range domain is proposed and the performance of the model is demonstrated and compared to other background maintenance models using a suitable benchmark of video sequences.
Abstract: In this article a background maintenance model defined by a finite set of codebook vectors in the spatial-range domain is proposed. The model represents its current state by a foreground and a background set of codebook vectors. Algorithms that dynamically update these sets by adding and removing codebook vectors are described. This approach is fundamentally different from algorithms that maintain a background representation at the pixel level and continuously update their parameters. The performance of the model is demonstrated and compared to other background maintenance models using a suitable benchmark of video sequences.

Proceedings ArticleDOI
14 Oct 2004
TL;DR: Vector quantization (VQ) is explored for lossless compression of the hyperspectral sounder data: the residual error and the quantization indices are entropy coded, and the iterative codebook generation procedure converges much faster while also leading to a better reconstruction of the sounder data.
Abstract: The compression of three-dimensional hyperspectral sounder data is a challenging task given its unprecedented size and nature. Vector quantization (VQ) is explored for the compression of this hyperspectral sounder data. The high dimensional vectors are partitioned into subvectors to reduce codebook search and storage complexity in coding of the data. The partitions are made by use of statistical properties of the sounder data in the spectral dimension. Moreover, the data is decorrelated at first to make it better suited for vector quantization. Due to the data characteristics, the iterative codebook generation procedure converges much faster and also leads to a better reconstruction of the sounder data. For lossless compression of the hyperspectral sounder data, the residual error and the quantization indices are entropy coded. The independent vector quantizers for different partitions make this scheme practical for compression of the large volume 3D hyperspectral sounder data.
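
The partitioned (split) VQ structure is straightforward to sketch: quantize each spectral partition with its own codebook and keep the residual for lossless entropy coding. The partition boundaries are fixed by hand below, whereas the paper derives them from the spectral statistics of the sounder data; the decorrelation and entropy-coding stages are omitted.

```python
import numpy as np

def kmeans(X, k, iters=25, seed=0):
    """Tiny Lloyd k-means used as a generic codebook trainer."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        a = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(0)
    return C

def split_vq(X, bounds, k):
    """Partition each spectrum into subvectors and quantize independently.

    bounds: channel indices delimiting the spectral partitions.
    Returns per-partition codebooks, indices, and the residual that a
    lossless coder would entropy-code together with the indices.
    """
    books, idxs, residual = [], [], np.empty_like(X, dtype=float)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        sub = X[:, lo:hi]
        C = kmeans(sub, k)
        a = ((sub[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        books.append(C)
        idxs.append(a)
        residual[:, lo:hi] = sub - C[a]
    return books, idxs, residual

rng = np.random.default_rng(0)
spectra = rng.standard_normal((500, 32))        # stand-in for sounder spectra
books, idxs, res = split_vq(spectra, bounds=[0, 8, 20, 32], k=16)
print(res.std())   # residual magnitude left for lossless entropy coding
```

Splitting the dimensions this way is what keeps codebook search and storage manageable for very high-dimensional spectra.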

Proceedings ArticleDOI
24 Oct 2004
TL;DR: This analysis quantifies the impact of the feedback rate and the number of transmit and receive antennas on the performance of the system and derives an upper bound on the average SNR achieved by an optimal codebook designed using such a criterion.
Abstract: Multiple-input multiple-output (MIMO) wireless systems can achieve significant diversity and array gain by using transmit beamforming and receive combining. In the absence of full channel knowledge at the transmitter, a quantized beamforming vector can be made available to the transmitter using a low-rate feedback channel, called limited feedback. A codebook based quantization approach is considered. Based on metrics such as signal-to-noise ratio and outage probability, maximization of the minimum distance between any pair of codewords has already been proposed as a codebook design criterion. An upper bound on the average SNR achieved by an optimal codebook designed using such a criterion is derived as a function of the feedback rate. This analysis quantifies the impact of the feedback rate and the number of transmit and receive antennas on the performance of the system.
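
The max-min-distance criterion is easy to state in code. The sketch below scores a unit-norm vector codebook by its minimum pairwise chordal distance and runs a crude random search over codebooks; this is only a stand-in for the design criterion named above, and the paper's SNR upper bound is not reproduced.

```python
import numpy as np

def min_distance(W):
    """Minimum pairwise chordal distance of a unit-norm vector codebook:
    d(w_i, w_j) = sqrt(1 - |w_i^H w_j|^2)."""
    G = np.abs(W @ W.conj().T) ** 2
    np.fill_diagonal(G, 0.0)                # exclude self-correlation
    return np.sqrt(max(0.0, 1.0 - G.max()))

def random_search_codebook(nt, n_codewords, trials=2000, seed=0):
    """Keep the random codebook with the largest minimum distance --
    a brute-force stand-in for the max-min design criterion."""
    rng = np.random.default_rng(seed)
    best, best_d = None, -1.0
    for _ in range(trials):
        W = rng.standard_normal((n_codewords, nt)) \
            + 1j * rng.standard_normal((n_codewords, nt))
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        d = min_distance(W)
        if d > best_d:
            best, best_d = W, d
    return best, best_d

# 4 transmit antennas, 4-bit feedback codebook
W, d = random_search_codebook(nt=4, n_codewords=16)
print(d)   # minimum pairwise chordal distance of the selected codebook
```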

Patent
Edward A. Pazmino, Tinku Acharya
15 Jan 2004
TL;DR: In this paper, a method of compressing a data set is described in which, in multiple passes, each data signal in the data set is categorized into a category of a predetermined set and, for selected categories of the predetermined set, the data signals for that category are coded using a codebook for the category.
Abstract: Briefly, in accordance with one embodiment of the invention, a method of compressing a data set includes the following. In multiple passes, each data signal in the data set is categorized into a category of a predetermined set, and, for selected categories of the predetermined set, the data signals for that category are coded using a codebook for that category. Briefly, in accordance with another embodiment of the invention, a method of decompressing a compressed data set includes the following. For compressed data signals in the data set in one category of a predetermined set of categories, a signal associated with the particular category is employed for the compressed data signal, and, for selected categories of the predetermined set, the compressed data signals for that category are decoded using a codebook for that category.