scispace - formally typeset

Showing papers on "Codebook published in 2005"


Journal ArticleDOI
TL;DR: A real-time algorithm for foreground-background segmentation is presented that can handle scenes containing moving backgrounds or illumination variations and achieves robust detection for different types of videos.
Abstract: We present a real-time algorithm for foreground-background segmentation. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques. In addition to the basic algorithm, two features improving the algorithm are presented: layered modeling/detection and adaptive codebook updating. For performance evaluation, we have applied perturbation detection rate analysis to four background subtraction algorithms and two videos of different types of scenes.

1,552 citations
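The per-pixel codebook model can be sketched in a few lines. The sketch below is a minimal grayscale simplification with assumed parameters (one intensity mean per codeword and a fixed matching threshold `eps`), not the paper's full color-distortion and brightness test:

```python
import numpy as np

class PixelCodebook:
    """Toy per-pixel codebook: each codeword stores a running mean intensity.

    A sample matches a codeword if it lies within `eps` of the codeword
    mean; unmatched samples spawn new codewords. This is a hypothetical
    simplification of the paper's matching criterion.
    """
    def __init__(self, eps=10.0):
        self.eps = eps
        self.words = []          # list of [mean, count]

    def train(self, sample):
        for w in self.words:
            if abs(sample - w[0]) <= self.eps:
                # running-mean update of the matched codeword
                w[0] = (w[0] * w[1] + sample) / (w[1] + 1)
                w[1] += 1
                return
        self.words.append([float(sample), 1])

    def is_foreground(self, sample):
        # foreground = the sample matches no background codeword
        return all(abs(sample - w[0]) > self.eps for w in self.words)

cb = PixelCodebook(eps=10.0)
for v in [100, 102, 99, 101, 180, 182]:   # bimodal background, e.g. flicker
    cb.train(v)

print(cb.is_foreground(101))   # False: matches the first background mode
print(cb.is_foreground(50))    # True: far from both codewords
```

Because the bimodal training sequence compresses to just two codewords, the model stays compact even over long sequences, which is the memory argument the abstract makes.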


Journal ArticleDOI
TL;DR: This correspondence proposes a quantized precoding system where the optimal precoder is chosen from a finite codebook known to both receiver and transmitter and performs close to optimal unitary precoding with a minimal amount of feedback.
Abstract: Multiple-input multiple-output (MIMO) wireless systems use antenna arrays at both the transmitter and receiver to provide communication links with substantial diversity and capacity. Spatial multiplexing is a common space-time modulation technique for MIMO communication systems where independent information streams are sent over different transmit antennas. Unfortunately, spatial multiplexing is sensitive to ill-conditioning of the channel matrix. Precoding can improve the resilience of spatial multiplexing at the expense of full channel knowledge at the transmitter-which is often not realistic. This correspondence proposes a quantized precoding system where the optimal precoder is chosen from a finite codebook known to both receiver and transmitter. The index of the optimal precoder is conveyed from the receiver to the transmitter over a low-delay feedback link. Criteria are presented for selecting the optimal precoding matrix based on the error rate and mutual information for different receiver designs. Codebook design criteria are proposed for each selection criterion by minimizing a bound on the average distortion assuming a Rayleigh-fading matrix channel. The design criteria are shown to be equivalent to packing subspaces in the Grassmann manifold using the projection two-norm and Fubini-Study distances. Simulation results show that the proposed system outperforms antenna subset selection and performs close to optimal unitary precoding with a minimal amount of feedback.

943 citations
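The selection step the abstract describes (the receiver picks one precoder from a finite shared codebook and feeds back only its index) can be illustrated with a mutual-information criterion. The codebook below is a hypothetical set of random semi-unitary matrices, not a Grassmannian-packed design, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def capacity(H, F, snr=10.0):
    """Mutual information of y = H F s + n with equal-power Gaussian streams."""
    HF = H @ F
    m = HF.shape[1]
    return np.log2(np.linalg.det(np.eye(H.shape[0]) + (snr / m) * HF @ HF.conj().T)).real

def random_semiunitary(n, m):
    """Random n x m matrix with orthonormal columns (toy codebook entry)."""
    A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
    Q, _ = np.linalg.qr(A)
    return Q[:, :m]

# Hypothetical codebook of 4 precoders: 4 Tx antennas, 2 spatial streams
codebook = [random_semiunitary(4, 2) for _ in range(4)]
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))  # 2x4 channel

# Receiver picks the capacity-maximizing entry; only idx (2 bits) is fed back
idx = max(range(len(codebook)), key=lambda i: capacity(H, codebook[i]))
print(idx)
```

With a codebook of size 2^B, the feedback link carries only B bits per channel realization instead of the full channel matrix.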


Proceedings ArticleDOI
17 Oct 2005
TL;DR: It is shown that dense representations outperform equivalent keypoint-based ones on these tasks, and that SVM- or mutual-information-based feature selection starting from a dense codebook further improves performance.
Abstract: Visual codebook based quantization of robust appearance descriptors extracted from local image patches is an effective means of capturing image statistics for texture analysis and scene classification. Codebooks are usually constructed by using a method such as k-means to cluster the descriptor vectors of patches sampled either densely ('textons') or sparsely ('bags of features' based on key-points or salience measures) from a set of training images. This works well for texture analysis in homogeneous images, but the images that arise in natural object recognition tasks have far less uniform statistics. We show that for dense sampling, k-means over-adapts to this, clustering centres almost exclusively around the densest few regions in descriptor space and thus failing to code other informative regions. This gives suboptimal codes that are no better than using randomly selected centres. We describe a scalable acceptance-radius based clusterer that generates better codebooks and study its performance on several image classification tasks. We also show that dense representations outperform equivalent keypoint based ones on these tasks and that SVM or mutual information based feature selection starting from a dense codebook further improves the performance.

817 citations
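The acceptance-radius idea can be sketched as a simple online "leader" clusterer. This is an assumed simplification of the paper's scalable algorithm: a descriptor becomes a new centre only if it is farther than a radius `r` from every existing centre, so centres cannot pile up in the densest regions of descriptor space:

```python
import numpy as np

def radius_clusterer(X, r):
    """Online acceptance-radius clustering (a 'leader'-style sketch).

    Unlike k-means, the number of centres is not fixed in advance; the
    radius constraint spreads centres across descriptor space instead of
    concentrating them in dense regions.
    """
    centres = []
    for x in X:
        if all(np.linalg.norm(x - c) > r for c in centres):
            centres.append(x)
    return np.array(centres)

# A dense clump near the origin plus two outliers: k-means with k = 3 would
# tend to spend several centres on the clump; the radius rule keeps one.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [-4.0, 3.0]])
print(len(radius_clusterer(X, r=1.0)))   # 3
```

Shrinking `r` yields a finer codebook; growing it collapses everything to one centre, which makes the radius a direct knob on codebook size.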


Journal ArticleDOI
TL;DR: This work constructs analytically optimal codebooks meeting the Welch lower bound, and develops an efficient numerical search method based on a generalized Lloyd algorithm that leads to considerable improvement on the achieved I_max over existing alternatives.

Abstract: Consider a codebook containing N unit-norm complex vectors in a K-dimensional space. In a number of applications, the codebook that minimizes the maximal cross-correlation amplitude (I_max) is often desirable. Relying on tools from combinatorial number theory, we construct analytically optimal codebooks meeting, in certain cases, the Welch lower bound. When analytical constructions are not available, we develop an efficient numerical search method based on a generalized Lloyd algorithm, which leads to considerable improvement on the achieved I_max over existing alternatives. We also derive a composite lower bound on the minimum achievable I_max that is effective for any codebook size N.

445 citations
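Both quantities in the abstract are easy to compute directly: I_max is the largest off-diagonal entry of the Gram matrix's magnitude, and the Welch lower bound for N unit-norm vectors in K dimensions is sqrt((N-K)/((N-1)K)). The three-vector frame below is a standard real-valued example that meets the bound exactly:

```python
import numpy as np

def max_xcorr(C):
    """I_max: largest |<c_i, c_j>| over distinct unit-norm columns of C."""
    G = np.abs(C.conj().T @ C)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(N, K):
    """Welch lower bound on I_max for N unit-norm vectors in C^K."""
    return np.sqrt((N - K) / ((N - 1) * K))

# N = 3 unit vectors in K = 2 dimensions at 0, 60, 120 degrees: an
# equiangular frame that achieves the Welch bound.
ang = np.deg2rad([0.0, 60.0, 120.0])
C = np.vstack([np.cos(ang), np.sin(ang)])   # columns are the codewords

print(round(max_xcorr(C), 6))        # 0.5
print(round(welch_bound(3, 2), 6))   # 0.5
```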


Journal ArticleDOI
TL;DR: This paper investigates a limited feedback approach that uses a codebook of precoding matrices known a priori to both the transmitter and receiver and the resulting design is found to relate to the famous applied mathematics problem of subspace packing in the Grassmann manifold.
Abstract: Orthogonal space-time block codes (OSTBCs) are a class of easily decoded space-time codes that achieve full diversity order in Rayleigh fading channels. OSTBCs exist only for certain numbers of transmit antennas and do not provide array gain like diversity techniques that exploit transmit channel information. When channel state information is available at the transmitter, though, precoding the space-time codeword can be used to support different numbers of transmit antennas and to improve array gain. Unfortunately, transmitters in many wireless systems have no knowledge about current channel conditions. This motivates limited feedback precoding methods such as channel quantization or antenna subset selection. This paper investigates a limited feedback approach that uses a codebook of precoding matrices known a priori to both the transmitter and receiver. The receiver chooses a matrix from the codebook based on current channel conditions and conveys the optimal codebook matrix to the transmitter over an error-free, zero-delay feedback channel. A criterion for choosing the optimal precoding matrix in the codebook is proposed that relates directly to minimizing the probability of symbol error of the precoded system. Low average distortion codebooks are derived based on the optimal codeword selection criterion. The resulting design is found to relate to the famous applied mathematics problem of subspace packing in the Grassmann manifold. Codebooks designed by this method are proven to provide full diversity order in Rayleigh fading channels. Monte Carlo simulations show that limited feedback precoding performs better than antenna subset selection.

308 citations


Patent
31 Oct 2005
TL;DR: In this paper, a method for communicating a plurality of data streams between a transmitting device with multiple transmit antennas and a receiving device is disclosed. It comprises determining a set of power weightings, efficiently quantizing them, and providing them to the transmitting device; an additional aspect of the invention is a means of determining the best codebook weights by combining the maximum power and maximum capacity criteria.
Abstract: A method for communicating a plurality of data streams between a transmitting device with multiple transmit antennas and a receiving device is disclosed. The method comprises determining a set of power weightings, efficiently quantizing the power weightings, and providing the set of power weightings to the transmitting device. Another aspect of the invention comprises the transmitter implicitly signaling the number of data streams for which the receiver should feed back information through the amount of feedback requested. An additional aspect of the invention is a means of determining the best codebook weights by combining the maximum power and maximum capacity criteria.

258 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A real-time robust human detection and tracking system for video surveillance that can be used in varying environments, together with an algorithm for optimal design of the codebook.

Abstract: In this paper, we present a real-time robust human detection and tracking system for video surveillance which can be used in varying environments. This system consists of human detection, human tracking, and false object detection. The human detection utilizes background subtraction to segment blobs and uses a codebook to classify human beings apart from other objects. An algorithm for the optimal design of the codebook is proposed. The tracking is performed at two levels: human classification and individual tracking. The color histogram of the human body is used as the appearance model to track individuals. In order to reduce false alarms, algorithms for false object detection are also provided.

156 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: A method for object category detection which integrates a generative model with a discriminative classifier, exploits the strengths of both original methods while minimizing their weaknesses, and outperforms previously reported results.
Abstract: Category detection is a lively area of research. While categorization algorithms tend to agree in using local descriptors, they differ in the choice of the classifier, with some using generative models and others discriminative approaches. This paper presents a method for object category detection which integrates a generative model with a discriminative classifier. For each object category, we generate an appearance codebook, which becomes a common vocabulary for the generative and discriminative methods. Given a query image, the generative part of the algorithm finds a set of hypotheses and estimates their support in location and scale. Then, the discriminative part verifies each hypothesis on the same codebook activations. The new algorithm exploits the strengths of both original methods, minimizing their weaknesses. Experiments on several databases show that our new approach performs better than its building blocks taken separately. Moreover, experiments on two challenging multi-scale databases show that our new algorithm outperforms previously reported results.

144 citations



Journal ArticleDOI
TL;DR: Studies the performance of joint signature-receiver optimization for direct-sequence code-division multiple access (DS-CDMA) with limited feedback, along with a less complex, suboptimal reduced-rank signature optimization scheme in which the user's signature is constrained to lie in a lower-dimensional subspace.
Abstract: We study the performance of joint signature-receiver optimization for direct-sequence code-division multiple access (DS-CDMA) with limited feedback. The receiver for a particular user selects the signature from a signature codebook, and relays the corresponding B index bits to the transmitter over a noiseless channel. We study the performance of a random vector quantization (RVQ) scheme in which the codebook entries are independent and isotropically distributed. Assuming the interfering signatures are independent, and have independent and identically distributed (i.i.d.) elements, we evaluate the received signal-to-interference plus noise ratio (SINR) in the large system limit as the number of users, processing gain, and feedback bits B all tend to infinity with fixed ratios. This SINR is evaluated for both the matched filter and linear minimum mean-squared error (MMSE) receivers. Furthermore, we show that this large system SINR is the maximum that can be achieved over any sequence of codebooks. Numerical results show that with the MMSE receiver, one feedback bit per signature coefficient achieves close to single-user performance. We also consider a less complex and suboptimal reduced-rank signature optimization scheme in which the user's signature is constrained to lie in a lower dimensional subspace. The optimal subspace coefficients are scalar-quantized and relayed to the transmitter. The large system performance of the quantized reduced-rank scheme can be approximated, and numerical results show that it performs in the vicinity of the RVQ bound. Finally, we extend our analysis to the scenario in which a subset of users optimize their signatures in the presence of random interference.

118 citations
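The RVQ setup can be sketched in a few lines. The dimensions, matched-filter SINR model, and interference statistics below are illustrative stand-ins for the paper's large-system analysis, not its exact model:

```python
import numpy as np

rng = np.random.default_rng(1)

# RVQ sketch: B feedback bits index a codebook of 2**B independent,
# isotropically drawn unit-norm signatures; the receiver relays the index
# of the signature with the best matched-filter SINR.
N, B = 8, 3                       # processing gain, feedback bits
codebook = rng.standard_normal((2**B, N))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

S = rng.standard_normal((N, 4)) / np.sqrt(N)   # 4 interfering signatures
noise_var = 0.1

def mf_sinr(s):
    """Matched-filter SINR for a unit-norm signature s (unit desired power)."""
    interference = np.sum((S.T @ s) ** 2)
    return 1.0 / (interference + noise_var)

sinrs = [mf_sinr(s) for s in codebook]
idx = int(np.argmax(sinrs))
print(idx)   # the B-bit index relayed to the transmitter
```

The paper's result is that as N and B grow proportionally, such randomly generated codebooks are asymptotically as good as any codebook sequence.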


Journal ArticleDOI
TL;DR: Two power allocation selection algorithms are presented that optimize the probability of symbol error and capacity, respectively; simulation results show that the proposed limited feedback techniques provide performance close to full channel knowledge power loading.
Abstract: Orthogonal frequency division multiplexing (OFDM) is a practical broadband signaling technique for use in multipath fading channels. Over the past ten years, research has shown that power loading, where the power allocations on the OFDM frequency tones are jointly optimized, can improve error rate or capacity performance. The implementation of power loading, however, is dependent on the presence of complete forward link channel knowledge at the transmitter. In systems using frequency division duplexing (FDD), this assumption is unrealistic. In this paper, we propose power loading for OFDM symbols using a limited number of feedback bits sent from the receiver to the transmitter. The power loading vector is designed at the receiver, which is assumed to have perfect knowledge of the forward link channel, and conveyed back to the transmitter over a limited rate feedback channel. To allow for the vector to be represented by a small number of bits, the power loading vector is restricted to lie in a finite set, or codebook, of power loading vectors. This codebook is designed offline and known a priori to both the transmitter and receiver. We present two power allocation selection algorithms that optimize the probability of symbol error and capacity, respectively. Simulation results show that the proposed limited feedback techniques provide performance close to full channel knowledge power loading.

Journal ArticleDOI
TL;DR: An inversion method is developed that provides a complete description of the possible solutions without excessive constraints, retrieves realistic temporal dynamics of the vocal tract shapes, and ensures that inverse articulatory parameters generate the original formant trajectories with high precision and a realistic sequence of vocal tract shapes.
Abstract: Acoustic-to-articulatory inversion is a difficult problem mainly because of the nonlinearity between the articulatory and acoustic spaces and the nonuniqueness of this relationship. To resolve this problem, we have developed an inversion method that provides a complete description of the possible solutions without excessive constraints and retrieves realistic temporal dynamics of the vocal tract shapes. We present an adaptive sampling algorithm to ensure that the acoustical resolution is almost independent of the region in the articulatory space under consideration. This leads to a codebook that is organized in the form of a hierarchy of hypercubes, and ensures that, within each hypercube, the articulatory-to-acoustic mapping can be approximated by means of a linear transform. The inversion procedure retrieves articulatory vectors corresponding to acoustic entries from the hypercube codebook. A nonlinear smoothing algorithm together with a regularization technique is then used to recover the best articulatory trajectory. The inversion ensures that inverse articulatory parameters generate original formant trajectories with high precision and a realistic sequence of the vocal tract shapes.

Patent
02 Nov 2005
TL;DR: An encoder, decoder, encoding method, and decoding method enabling acquisition of a high-quality decoded signal in scalable encoding of an original signal in first and second layers, even if the second or upper layer section performs low bit-rate encoding.
Abstract: An encoder, decoder, encoding method, and decoding method enabling acquisition of a high-quality decoded signal in scalable encoding of an original signal in first and second layers, even if the second or upper layer section performs low bit-rate encoding. In the encoder, a spectrum residue shape codebook (305) stores candidates of spectrum residue shape vectors, a spectrum residue gain codebook (307) stores candidates of spectrum residue gains, and a spectrum residue shape vector and a spectrum residue gain are sequentially output from the candidates according to the instruction from a search section (306). A multiplier (308) multiplies a candidate spectrum residue shape vector by a candidate spectrum residue gain and outputs the result to a filtering section (303). The filtering section (303) performs filtering using a pitch filter internal state set by a filter state setting section (302), a lag T output by a lag setting section (304), and the gain-adjusted spectrum residue shape vector.

Patent
17 Aug 2005
TL;DR: In this article, a closed-loop transmit precoding between a transmitter and a receiver is proposed, where the receiver determines which precoding rotation matrix from the codebook should be used for each subcarrier that has been received.
Abstract: A method for providing closed-loop transmit precoding between a transmitter and a receiver, includes defining a codebook that includes a set of unitary rotation matrices (202). The receiver determines which precoding rotation matrix from the codebook should be used for each sub-carrier that has been received (204). The receiver sends an index to the transmitter (206), where the transmitter reconstructs the precoding rotation matrix using the index, and precodes the symbols to be transmitted using the precoding rotation matrix (208). An apparatus that employs this closed-loop technique is also described.

Journal ArticleDOI
TL;DR: This work considers a constructive approach for distributed binning in an algebraic framework and employs generalized coset codes constructed in a group-theoretic setting for this approach, and analyzes the performance in terms of distance properties and decoding algorithms.
Abstract: In many multiterminal communication problems, constructions of good source codes involve finding distributed partitions (into bins) of a collection of quantizers associated with a group of source encoders. Further, computationally efficient procedures to index these bins are also required. In this work, we consider a constructive approach for distributed binning in an algebraic framework. Several application scenarios fall under the scope of this paper including the CEO problem, distributed source coding, and n-channel symmetric multiple description source coding with n>2. Specifically, in this exposition we consider the case of two codebooks while focusing on the Gaussian CEO problem with mean squared error reconstruction and with two symmetric observations. This problem deals with distributed encoding of correlated noisy observations of a source into descriptions such that the joint decoder having access to them can reconstruct the source with a fidelity criterion. We employ generalized coset codes constructed in a group-theoretic setting for this approach, and analyze the performance in terms of distance properties and decoding algorithms.

Patent
Nico van Waes, Hua Zhang, Juha Heiskala, Victor Stolpman, Jianzhong Zhang, Tony Reid, Jae Son
19 Aug 2005
TL;DR: In this article, a network entity in a multi-transmit antenna system is capable of providing a full-spatial-rate codebook designed by selecting an underlying partial-spatial-rate codebook designed for a system configured for partial-spatial-rate coding data.
Abstract: A network entity in a multi-transmit antenna system, such as a transmitting or receiving entity, includes a component such as a pre-coder or a receiver, and is configured for full-spatial-rate coding data. The network entity is capable of providing a full-spatial-rate codebook having been designed by selecting an underlying partial-spatial-rate codebook designed for a system configured for partial-spatial-rate coding data. The full-spatial-rate codebook can then be designed in a manner including defining the full-spatial-rate codewords based upon partial-spatial-rate codewords of the partial-spatial rate codebook and basis vectors of a null space of the respective partial-spatial-rate codewords in a multidimensional vector space. The network entity can also be capable of selecting codewords of the codebook in accordance with a sub-space tracking method whereby the codewords of the codebook can be selected in a manner that exploits correlations therebetween.

Proceedings ArticleDOI
01 Dec 2005
TL;DR: This paper proposes a channel adaptive feedback strategy for arbitrary channel distributions, and presents a simple codebook design methodology based on the channel statistics.
Abstract: Quantized multiple-input multiple-output (MIMO) beamforming systems use predesigned codebooks for the quantization of transmit beamforming vectors. The quantized vector, which is conveyed to the transmitter using a low-rate feedback channel, is used for transmission to provide significant diversity and array gain. The codebook for quantization is a function of the channel distribution, and is typically designed for fixed channel distributions. In this paper, we propose a channel adaptive feedback strategy for arbitrary channel distributions, and present a simple codebook design methodology based on the channel statistics. The codebook for quantization is dynamically chosen from a structured set of pre-designed codebooks, called the code set, wherein all codebooks are derived from one mother codebook. Simulations illustrate that the proposed method can improve error rate performance in correlated channels and/or channels with strong line-of-sight components.

Patent
22 Sep 2005
TL;DR: Adaptive control of codebook regeneration in data compression mechanisms is proposed, in which the frequency of codebook updates is controlled based on the performance gains expected from regeneration.
Abstract: Adaptive control of codebook regeneration in data compression mechanisms. In one implementation, the present invention provides a means of controlling the frequency of codebook updates based on expected performance gains resulting from codebook regeneration. The present invention, in one implementation, employs a mechanism that simulates the expected compression performance of a hypothetical updated codebook. A compression module compares the simulated compression performance to the actual performance of the codebook used to compress the data, and updates the codebook if a threshold condition is satisfied.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: A robust split-band narrowband to wideband extension system based on algorithmic enhancements to the codebook mapping technique for high-band parameter estimation and its robustness to input distortions and non-speech input signals is presented.
Abstract: It is well-known that wideband speech (0-7 kHz) provides better quality and intelligibility than narrowband speech (300-3400 Hz), but typically only narrowband speech information is available in current wireless communication systems. Narrowband to wideband extension technology has been recently investigated to artificially generate wideband speech from narrowband speech for better speech quality and intelligibility. This paper presents a robust split-band narrowband to wideband extension system based on algorithmic enhancements to the codebook mapping technique for high-band parameter estimation. Numerical measurements confirm the performance improvements of the codebook mapping process, and informal listening evaluations show the potential of the system and its robustness to input distortions and non-speech input signals.

Proceedings ArticleDOI
31 Aug 2005
TL;DR: The most important aim of the current work is to compare three different clustering methods for generating the grapheme codebook (k-means, Kohonen SOM 1D, and Kohonen SOM 2D), along with a complementary shape representation using normalized bitmaps.
Abstract: An effective method for writer identification and verification is based on assuming that each writer acts as a stochastic generator of ink-trace fragments, or graphemes. The probability distribution of these simple shapes in a given handwriting sample is characteristic for the writer and is computed using a common codebook of graphemes obtained by clustering. In previous studies we used contours to encode the graphemes; in the current paper we explore a complementary shape representation using normalized bitmaps. The most important aim of the current work is to compare three different clustering methods for generating the grapheme codebook: k-means, Kohonen SOM 1D, and Kohonen SOM 2D. Large-scale computational experiments show that the proposed method is robust to the underlying shape representation used (whether contours or normalized bitmaps), to the size of the codebook used (stable performance for sizes from 10^2 to 2.5 × 10^3), and to the clustering method used to generate the codebook (essentially the same performance was obtained for all three clustering methods).
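The codebook-histogram writer descriptor can be sketched as follows. This is an illustrative simplification: toy 2-D "graphemes", a tiny fixed codebook, and a chi-square distance between writers, which is one common histogram distance rather than necessarily the paper's choice:

```python
import numpy as np

def writer_histogram(graphemes, codebook):
    """Assign each grapheme to its nearest codebook entry and return the
    normalized occurrence histogram, the writer's characteristic feature."""
    d = np.linalg.norm(graphemes[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook))
    return hist / hist.sum()

def chi2(p, q, eps=1e-12):
    """Chi-square distance between two grapheme distributions."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Toy shared codebook of three 2-D prototype shapes
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
writer_a = writer_histogram(np.array([[0.1, 0.0], [0.9, 0.1], [0.0, 0.9]]), codebook)
writer_b = writer_histogram(np.array([[0.1, 0.1], [0.2, 0.0], [0.0, 0.8]]), codebook)

# A writer's own distribution is closer to itself than to another writer's
print(chi2(writer_a, writer_a) < chi2(writer_a, writer_b))   # True
```

Identification then reduces to nearest-neighbor search over such histograms, which is why the codebook's exact clustering method matters less than its coverage.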

Patent
24 Jan 2005
TL;DR: In this paper, a method and apparatus for reducing the complexity of linear prediction analysis-by-synthesis (LPAS) speech coders is presented, which includes a multi-tap pitch predictor having various parameters and utilizing an adaptive codebook subdivided into at least a first vector codebook and a second vector code book.
Abstract: A method and apparatus for reducing the complexity of linear prediction analysis-by-synthesis (LPAS) speech coders. The speech coder includes a multi-tap pitch predictor having various parameters and utilizing an adaptive codebook subdivided into at least a first vector codebook and a second vector codebook. The pitch predictor removes certain redundancies in a subject speech signal and vector quantizes the pitch predictor parameters. Further included is a source excitation (fixed) codebook that indicates pulses in the subject speech signal by deriving corresponding vector values. Serial optimization of the adaptive codebook first and then the fixed codebook produces a low complexity LPAS speech coder of the present invention.

Patent
20 May 2005
TL;DR: In this paper, a NxN multiple-input-multiple-output (MIMO) wireless network search codewords in a codebook to determine which codeword is closest to a desired pre-coding matrix on a Grassmann manifold.
Abstract: Stations in a NxN multiple-input-multiple-output (MIMO) wireless network search codewords in a codebook to determine which codeword is closest to a desired pre-coding matrix on a Grassmann manifold. An index or indices corresponding to codeword is transmitted from a receiver to a transmitter to identify a codeword to be used for transmit beamforming.

Patent
30 Nov 2005
TL;DR: In this article, a method of communication of digital messages with improved efficiency through the use of the transfer of difference data between devices was proposed, where the difference data communicated is between different generations of a derived message sequence such as an email thread.
Abstract: A method of communication of digital messages with improved efficiency through the use of the transfer of difference data between devices. In one aspect of the invention, the difference data communicated is between different generations of a derived message sequence such as an email thread. In another aspect of the invention, the messages are encoded by means of a codebook, and the difference data communicated is between different versions of the codebook. In this second aspect of the invention, the codebooks may automatically utilise the difference data to adapt their efficiency, and the codebooks may be automatically customised for specific individuals or groups.

Book ChapterDOI
27 Aug 2005
TL;DR: This paper introduces a Particle Swarm Optimization (PSO) clustering method to build a high-quality codebook for image compression, using the result of the LBG algorithm to initialize the global best particle, which speeds the convergence of PSO.
Abstract: VQ coding is a powerful technique in digital image compression. Conventional methods such as the classic LBG algorithm often converge to a locally optimal codebook. In this paper, we introduce a Particle Swarm Optimization (PSO) clustering method to build a high-quality codebook for image compression. We also use the result of the LBG algorithm to initialize the global best particle, which speeds the convergence of PSO. Both the image encoding and decoding processes are simulated in our experiments. Results show that the algorithm is reliable and that the reconstructed images are of higher quality than images reconstructed by other methods.
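For reference, the LBG / generalized Lloyd baseline that the PSO variant is compared against can be sketched in a few lines; note its dependence on initialization, which is exactly the local-optimum weakness the paper targets:

```python
import numpy as np

def lbg(X, k, iters=20, seed=0):
    """Plain LBG / generalized Lloyd iteration: alternate nearest-codeword
    assignment and centroid update. Converges only to a local optimum."""
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # distances of every training vector to every codeword
        d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                codebook[j] = X[assign == j].mean(axis=0)
    return codebook

# Two well-separated training blobs: the codewords settle on the blob means
X = np.vstack([np.zeros((20, 2)), 10 + np.zeros((20, 2))])
cb = lbg(X, k=2)
print(np.sort(cb[:, 0]))   # approximately [0., 10.]
```

On harder, multimodal data the converged codebook depends strongly on the random initialization, which motivates seeding a global search such as PSO with the LBG result.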

Journal ArticleDOI
TL;DR: A speech watermarking scheme combined with CELP speech coding for speech authentication; the new codebook partition technique produces less distortion, and the statistical detection method guarantees that the error probability can be controlled below a prescribed level.
Abstract: This letter presents a speech watermarking scheme that is combined with CELP (Code Excited Linear Prediction) speech coding for speech authentication. The excitation codebook of CELP is partitioned into three parts and labeled '0', '1' and 'any' according to the private key. The watermark embedding process chooses the codebook whose label is the same as the watermark bit and combines it with the codebook labeled 'any' for CELP coding. A statistical method is employed to detect the watermark, and the watermark length for authentication and the detection threshold are determined by the false alarm probability and missed detection probability. The new codebook partition technique produces less distortion, and the statistical detection method guarantees that the error probability can be controlled below a prescribed level.
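The codebook-partition embedding rule can be sketched as follows. The 8-entry label assignment here is an arbitrary illustration of a key-derived partition, and the majority-vote detector is a simplified stand-in for the paper's statistical test:

```python
# Hypothetical 8-entry excitation codebook partition derived from a
# private key: each index is labeled '0', '1', or 'any'.
labels = ['0', '1', 'any', '0', '1', 'any', '0', '1']

def allowed(bit):
    """Indices the CELP encoder may search when embedding watermark bit `bit`:
    entries labeled with that bit, plus the shared 'any' entries."""
    return [i for i, lab in enumerate(labels) if lab in (bit, 'any')]

def detect(indices):
    """Majority vote over the labeled (non-'any') indices actually used."""
    votes = [labels[i] for i in indices if labels[i] != 'any']
    return max('01', key=votes.count)

# Embed bit '1': the encoder is restricted to '1' / 'any' entries, so the
# transmitted index stream leans toward '1'-labeled codewords.
used = allowed('1')[:4]
print(detect(used))   # prints: 1
```

Keeping the 'any' part available to both watermark bits is what limits the coding distortion: the encoder loses only part of its search space per bit.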

Proceedings ArticleDOI
18 Mar 2005
TL;DR: Improvements in compression of both inter- and intra-frame images by the matching pursuits (MP) algorithm are reported, and lower distortion is achieved on the residual images tested, and also on intra-frames at low bit rates.
Abstract: This paper reports improvements in compression of both inter- and intra-frame images by the matching pursuits (MP) algorithm. For both types of image, applying a 2D wavelet decomposition prior to MP coding is beneficial. The MP algorithm is then applied using various separable 1D codebooks. MERGE coding with precision-limited quantization (PLQ) is used to yield a highly embedded data stream. For inter-frames (residuals), a codebook of only 8 bases with compact footprint is found to give improved fidelity at lower complexity than previous MP methods. Great improvement is also achieved in MP coding of still images (intra-frames). Compared to JPEG 2000, lower distortion is achieved on the residual images tested, and also on intra-frames at low bit rates.

Journal ArticleDOI
TL;DR: A novel information-hiding scheme embeds secrets into the side match vector quantization (SMVQ) compressed code; experimental results show that the proposed scheme outperforms other VQ-based information-hiding schemes in terms of embedding capacity and image quality.
Abstract: To increase the number of embedded secrets and to improve the quality of the stego-image in vector quantization (VQ)-based information hiding, this paper presents a novel information-hiding scheme that embeds secrets into the side match vector quantization (SMVQ) compressed code. First, a host image is partitioned into non-overlapping blocks. For the seed blocks of the image, VQ is adopted without hiding secrets. Then, for each of the residual blocks, SMVQ or VQ is employed according to the smoothness of the block, such that the proper codeword is chosen from the state codebook or the original codebook to compress it. Finally, these compressed codes represent not only the host image but also the secret data. Experimental results show that the performance of the proposed scheme is better than other VQ-based information-hiding schemes in terms of embedding capacity and image quality. Moreover, the compression rate of the proposed scheme is better than that of the compared scheme.
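The general mechanism behind such VQ-based hiding can be illustrated with a deliberately simple rule: restrict the codeword search to the half of the codebook whose index parity matches the secret bit, so the transmitted index itself carries the bit. This is a hedged sketch only; the paper's actual scheme uses SMVQ state codebooks and a smoothness test, not the parity rule shown here:

```python
import numpy as np

def embed_bit(block, codebook, bit):
    """Hide one secret bit by searching only the codewords whose index
    parity equals the bit (illustrative rule, not the paper's SMVQ
    state-codebook construction). Returns the chosen index."""
    candidates = [i for i in range(len(codebook)) if i % 2 == bit]
    dists = [np.sum((codebook[i] - block) ** 2) for i in candidates]
    return candidates[int(np.argmin(dists))]

def extract_bit(index):
    """The receiver recovers the bit directly from the compressed index."""
    return index % 2

rng = np.random.default_rng(1)
cb = rng.standard_normal((16, 4))   # 16 codewords, block dimension 4
blk = rng.standard_normal(4)

for b in (0, 1):
    idx = embed_bit(blk, cb, b)
    assert extract_bit(idx) == b    # the compressed code carries the secret
```

The trade-off the abstract describes follows directly: constraining the search raises distortion slightly, which is why the paper reserves seed blocks (plain VQ, no hiding) and uses SMVQ's smaller state codebooks to keep stego-image quality and compression rate high.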

Journal ArticleDOI
TL;DR: Simulation results show the proposed STBC-MTCM designs significantly outperform the traditional ST-TCM schemes and decoding complexity of the proposed scheme is low because signal orthogonality is exploited to ease data decoding.
Abstract: In this paper, a new technique to design improved high-rate space-time (ST) codes is proposed based on the concept of concatenated ST block code (STBC) and outer trellis-coded modulation (MTCM) encoder constructions. Unlike the conventional rate-lossy STBC-MTCM schemes, the proposed designs produce higher rate ST codes by expanding the codebook of the inner orthogonal STBC. The classic set partitioning concept is adopted to realize the STBC-MTCM designs with large coding gains. The proposed expanded STBC-MTCM designs for the two-, three-, and four-transmitter cases are illustrated. Simulation results show the proposed STBC-MTCM designs significantly outperform the traditional ST-TCM schemes. Furthermore, decoding complexity of the proposed scheme is low because signal orthogonality is exploited to ease data decoding.
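The "signal orthogonality eases decoding" point rests on the structure of orthogonal STBCs. The classic two-transmitter example (Alamouti) makes it concrete: the code matrix satisfies X^H X = (|s1|^2 + |s2|^2) I, so the symbols decouple at the receiver into independent scalar decisions. A minimal check of that property (this illustrates the inner orthogonal STBC only, not the paper's expanded-codebook MTCM concatenation):

```python
import numpy as np

def alamouti(s1, s2):
    """2x2 orthogonal STBC (Alamouti): rows are time slots,
    columns are transmit antennas."""
    return np.array([[s1,               s2],
                     [-np.conj(s2), np.conj(s1)]])

s1, s2 = 1 + 1j, 1 - 1j
X = alamouti(s1, s2)
G = X.conj().T @ X

# Orthogonality: X^H X is a scaled identity, so ML detection splits into
# per-symbol decisions -- the source of the low decoding complexity.
assert np.allclose(G, (abs(s1) ** 2 + abs(s2) ** 2) * np.eye(2))
```

Expanding the inner STBC codebook (as the paper proposes) preserves this per-block orthogonality while the outer trellis with set partitioning supplies the coding gain lost by conventional rate-lossy concatenations.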

Proceedings ArticleDOI
18 Mar 2005
TL;DR: This paper studies the problem of quantization of a source that lives on the complex Grassmann manifold, and the expected distortion for such a quantizer is approximately characterized.
Abstract: This paper studies the problem of quantization of a source that lives on the complex Grassmann manifold. The special structure of the Grassmann manifold and the distortion measures that are defined on it differentiates this problem from the traditional problem of vector quantization in Euclidean spaces. Assuming a uniform source distribution along with a distortion based on chordal distance, codebook design algorithms are mentioned and rate distortion tradeoffs are studied. The expected distortion for such a quantizer is approximately characterized. These results are then applied to the performance analysis of a multiple antenna wireless communication system.
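Quantization on the Grassmann manifold replaces Euclidean distance with a subspace distance. Under the chordal-distance measure the abstract assumes, a quantizer simply maps a subspace (orthonormal-column matrix) to the nearest codebook entry. A minimal sketch with a hypothetical three-entry codebook of lines in C^2 (not a codebook from the paper):

```python
import numpy as np

def chordal_dist(U, V):
    """Chordal distance between the subspaces spanned by the orthonormal
    columns of U and V: sqrt(p - ||U^H V||_F^2) for p-dimensional subspaces."""
    p = U.shape[1]
    return np.sqrt(max(p - np.linalg.norm(U.conj().T @ V, "fro") ** 2, 0.0))

def quantize(U, codebook):
    """Map a subspace to the index of the nearest codebook entry."""
    return int(np.argmin([chordal_dist(U, C) for C in codebook]))

# Toy codebook of 1-D subspaces (lines) in C^2.
cb = [np.array([[1.0], [0.0]], dtype=complex),
      np.array([[0.0], [1.0]], dtype=complex),
      np.array([[1.0], [1.0]], dtype=complex) / np.sqrt(2)]

U = np.array([[0.9], [0.1]], dtype=complex)
U /= np.linalg.norm(U)                 # make the column orthonormal
assert quantize(U, cb) == 0            # nearest to the e1 line
```

In the limited-feedback application the abstract points to, the receiver quantizes the channel's dominant subspace this way and feeds back only the codebook index; the rate-distortion tradeoff studied in the paper bounds the expected chordal distortion as a function of codebook size.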

Patent
01 Apr 2005
TL;DR: In this paper, a speech encoder includes an LPC synthesizer that obtains synthesized speech by filtering an adaptive excitation vector and a stochastic excitation vector, stored in an adaptive codebook and in a stochastic codebook, using LPC coefficients obtained from input speech.
Abstract: A speech encoder includes an LPC synthesizer that obtains synthesized speech by filtering an adaptive excitation vector and a stochastic excitation vector stored in an adaptive codebook and in a stochastic codebook using LPC coefficients obtained from input speech. A gain calculator calculates gains of the adaptive excitation vector and the stochastic excitation vector and searches code of the adaptive excitation vector and code of the stochastic excitation vector by comparing distortions between the input speech and the synthesized speech obtained using the adaptive excitation vector and the stochastic excitation vector. A parameter coder performs predictive coding of gains using the adaptive excitation vector and the stochastic excitation vector corresponding to the codes obtained. The parameter coder comprises a prediction coefficient adjuster that adjusts at least one prediction coefficient used for the predictive coding according to at least one state of at least one previous subframe.
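The role of the prediction coefficient adjuster can be illustrated with a toy gain predictor: gains are predicted from previous subframes, only the residual is quantized, and the predictor's coefficients are damped when the previous subframe's state is unreliable. All names and the damping rule here are hypothetical sketches, not the patent's specification:

```python
def predict_gain(prev_gains, coeffs):
    """Predict the current subframe gain as a weighted sum of previous
    subframe gains; the encoder would quantize only (gain - prediction)."""
    return sum(c * g for c, g in zip(coeffs, prev_gains))

def adjust_coeffs(coeffs, prev_state_reliable, damp=0.5):
    """Sketch of the 'prediction coefficient adjuster': shrink the
    prediction coefficients when the previous subframe's state is
    unreliable, limiting error propagation in the predictive coder."""
    return list(coeffs) if prev_state_reliable else [damp * c for c in coeffs]

coeffs = [0.6, 0.3]
assert adjust_coeffs(coeffs, True) == [0.6, 0.3]     # unchanged when reliable
assert adjust_coeffs(coeffs, False) == [0.3, 0.15]   # damped otherwise
```

Damping the predictor in this way trades a little prediction gain for robustness: a corrupted or atypical previous subframe contributes less to the current gain estimate.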