
Showing papers on "Codebook" published in 2000


Journal ArticleDOI
TL;DR: A new method for compressing inverted indexes is introduced that yields excellent compression, fast decoding, and exploits clustering—the tendency for words to appear relatively frequently in some parts of the collection and infrequently in others.
Abstract: Information retrieval systems contain large volumes of text, and currently have typical sizes into the gigabyte range. Inverted indexes are one important method for providing search facilities into these collections, but unless compressed require a great deal of space. In this paper we introduce a new method for compressing inverted indexes that yields excellent compression, fast decoding, and exploits clustering—the tendency for words to appear relatively frequently in some parts of the collection and infrequently in others. We also describe two other quite separate applications for the same compression method: representing the MTF list positions generated by the Burrows-Wheeler Block Sorting transformations and transmitting the codebook for semi-static block-based minimum-redundancy coding.
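
As a hedged, generic illustration of why clustering helps (this is not the paper's actual coding method), the sketch below stores a postings list as document gaps and encodes each gap with a simple variable-byte code; runs of nearby document numbers produce small gaps and therefore short codes. All names are illustrative.

```python
# Hedged sketch: d-gap + variable-byte coding of a postings list.
# This is NOT the paper's method; it only illustrates why clustering
# (runs of nearby document numbers, hence small gaps) compresses well.

def vbyte_encode(gaps):
    """Encode positive integers, 7 bits per byte, high bit marks the last byte."""
    out = bytearray()
    for g in gaps:
        chunk = []
        while True:
            chunk.append(g & 0x7F)
            g >>= 7
            if g == 0:
                break
        chunk[0] |= 0x80              # mark the least-significant chunk as terminator
        out.extend(reversed(chunk))
    return bytes(out)

def vbyte_decode(data):
    gaps, g = [], 0
    for b in data:
        if b & 0x80:
            gaps.append((g << 7) | (b & 0x7F))
            g = 0
        else:
            g = (g << 7) | b
    return gaps

postings = [3, 4, 5, 6, 150, 151, 152, 900]       # clustered document numbers
gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
code = vbyte_encode(gaps)
assert vbyte_decode(code) == gaps
print(len(code), "bytes for", len(postings), "postings")
```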

193 citations


Proceedings ArticleDOI
01 Jan 2000
TL;DR: This work presents a suboptimal, practical scheme that employs a lattice-structured codebook to reduce complexity and the performance of the proposed scheme is compared to the information-theoretic limit and similar recent proposals.
Abstract: Blind digital watermarking is the communication of information via multimedia host data, where the unmodified host data is not available to the watermark detector. Many watermarking schemes suffer considerably from the remaining host-signal interference. For the additive white Gaussian case, M.H.M. Costa (1983) showed theoretically that interference from the host can be eliminated. However, the proof involves a huge, unstructured, random codebook, which is not feasible in practical systems. We present a suboptimal, practical scheme that employs a lattice-structured codebook to reduce complexity. The performance of the proposed scheme is compared to the information-theoretic limit and similar recent proposals.
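
The paper's lattice-structured scheme is not reproduced here; as a minimal one-dimensional stand-in for the idea, the sketch below embeds one bit per host sample by quantizing it to one of two interleaved scalar lattices and detects the bit from the nearer lattice. The step size DELTA and the omission of any scaling factor or dither sequence are simplifying assumptions.

```python
import numpy as np

# Hedged sketch: embed one bit per host sample by quantizing to one of two
# interleaved scalar lattices (a 1-D stand-in for a lattice-structured codebook).
# The paper's actual scheme (optimized scaling, dithering, etc.) is not shown.

DELTA = 1.0  # quantization step of the coarse lattice

def embed(host, bits):
    host = np.asarray(host, dtype=float)
    offset = np.asarray(bits) * (DELTA / 2.0)      # bit 0 -> lattice Z*DELTA, bit 1 -> shifted by DELTA/2
    return np.round((host - offset) / DELTA) * DELTA + offset

def detect(received):
    received = np.asarray(received, dtype=float)
    r = np.mod(received, DELTA)                    # position inside one lattice cell
    # closer to 0 (mod DELTA) -> bit 0, closer to DELTA/2 -> bit 1
    return (np.abs(r - DELTA / 2.0) < np.minimum(r, DELTA - r)).astype(int)

rng = np.random.default_rng(0)
host = rng.normal(0.0, 10.0, size=16)
bits = rng.integers(0, 2, size=16)
marked = embed(host, bits)
noisy = marked + rng.normal(0.0, 0.05, size=16)    # mild channel noise
assert np.array_equal(detect(noisy), bits)
```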

152 citations


Journal ArticleDOI
TL;DR: A digital image watermarking technique based on vector quantisation (VQ) is presented that uses codeword indices to carry the watermark information and simulation results prove the effectiveness of this technique.
Abstract: A digital image watermarking technique based on vector quantisation (VQ) is presented. This technique uses codeword indices to carry the watermark information. The technique is secret and efficient, and the watermarked image is robust to VQ compression with the same codebook. The simulation results prove the effectiveness of this technique.
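
The letter does not give implementation details; a common way to carry watermark bits in codeword indices, sketched below under that assumption, is to split the codebook into two sub-codebooks and encode each image block with the best codeword from the sub-codebook selected by its watermark bit.

```python
import numpy as np

# Hedged sketch of carrying watermark bits in VQ codeword indices: the codebook
# is split into two halves, and each image block is encoded with the best
# codeword from the half chosen by its watermark bit.  This is an illustration,
# not the letter's exact construction (no secret key handling shown).

def nearest(block, codebook):
    d = np.sum((codebook - block) ** 2, axis=1)
    return int(np.argmin(d))

def embed_vq_watermark(blocks, codebook, bits):
    half = len(codebook) // 2
    subbooks = (codebook[:half], codebook[half:])   # two equal halves
    indices = []
    for block, bit in zip(blocks, bits):
        j = nearest(block, subbooks[bit])
        indices.append(j + bit * half)              # map back to full-codebook index
    return np.array(indices)

def extract_vq_watermark(indices, codebook_size):
    half = codebook_size // 2
    return (np.asarray(indices) >= half).astype(int)

rng = np.random.default_rng(1)
codebook = rng.normal(size=(64, 16))                # 64 codewords for 4x4 blocks
blocks = rng.normal(size=(10, 16))
bits = rng.integers(0, 2, size=10)
idx = embed_vq_watermark(blocks, codebook, bits)
assert np.array_equal(extract_vq_watermark(idx, len(codebook)), bits)
```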

109 citations


Journal ArticleDOI
TL;DR: A new method for segmenting multispectral satellite images using Agglomerative Hierarchical Clustering and Probabilistic Self-Organizing Map is presented.

97 citations


Journal ArticleDOI
TL;DR: A vector-quantisation (VQ)-based watermarking method is presented which utilises the codebook expansion technique and is efficient, provides enhanced security and the watermarked image is robust against the effects of VQ compression.
Abstract: A vector-quantisation (VQ)-based watermarking method is presented which utilises the codebook expansion technique. This method is efficient, provides enhanced security and the watermarked image is robust against the effects of VQ compression. Moreover, the watermark extraction can be performed without the original image. Experimental results are presented which demonstrate the effectiveness of this algorithm.

81 citations


Proceedings ArticleDOI
17 Sep 2000
TL;DR: An algorithm to recover wideband speech from lowpass-bandlimited speech that needs only one single wideband codebook and inherently guarantees the transparency of the system in the base-band is proposed.
Abstract: In this paper we propose an algorithm to recover wideband speech from lowpass-bandlimited speech. The narrowband input signal is classified into a limited number of speech sounds for which the information about the wideband spectral envelope is taken from a pre-trained codebook. For the codebook search algorithm a statistical approach based on a hidden Markov model is used, which takes different features of the bandlimited speech into account, and minimizes a mean squared error criterion. The new algorithm needs only one single wideband codebook and inherently guarantees the transparency of the system in the base-band. The enhanced speech exhibits a significantly larger bandwidth than the input speech without introducing objectionable artifacts.
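
The paper's HMM-based statistical search and error criterion are omitted here; the sketch below shows only the underlying codebook lookup, mapping a narrowband feature vector to a paired wideband spectral envelope by nearest-neighbour search. The codebook contents are random placeholders.

```python
import numpy as np

# Hedged sketch: the paper searches a single pre-trained wideband codebook with
# an HMM-based statistical framework; this only illustrates the basic lookup of
# a wideband spectral-envelope entry from narrowband features.

rng = np.random.default_rng(2)
N_ENTRIES, NB_DIM, WB_DIM = 32, 10, 20
nb_codebook = rng.normal(size=(N_ENTRIES, NB_DIM))   # features of band-limited speech
wb_codebook = rng.normal(size=(N_ENTRIES, WB_DIM))   # paired wideband envelopes

def extend_bandwidth(nb_features):
    """Return the wideband envelope paired with the closest narrowband entry."""
    d = np.sum((nb_codebook - nb_features) ** 2, axis=1)
    return wb_codebook[int(np.argmin(d))]

frame = rng.normal(size=NB_DIM)                      # one frame of narrowband features
wb_envelope = extend_bandwidth(frame)
print(wb_envelope.shape)                             # (20,)
```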

80 citations


Journal ArticleDOI
TL;DR: A new deterministic crossover method based on the pairwise nearest neighbor method is introduced; it shows that high quality codebooks can be obtained within a few minutes instead of the several hours required by previous GA-based methods.

75 citations


Journal ArticleDOI
TL;DR: A new method is presented for reducing the number of distance calculations in the generalized Lloyd algorithm (GLA), a widely used method for constructing a codebook in vector quantization; it utilizes reduced comparison search and calculates distances only to the active code vectors.
Abstract: This paper introduces a new method for reducing the number of distance calculations in the generalized Lloyd algorithm (GLA), which is a widely used method to construct a codebook in vector quantization. Reduced comparison search detects the activity of the code vectors and utilizes it on the classification of the training vectors. For training vectors whose current code vector has not been modified, we calculate distances only to the active code vectors. A large proportion of the distance calculations can be omitted without sacrificing the optimality of the partition. The new method is included in several fast GLA variants reducing their running times over 50% on average.
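
Following the description above, here is a hedged sketch of one GLA pass with the reduced comparison search: training vectors whose current codevector did not move are compared only against the codevectors that did move ("active" ones). Variable names and the stopping behaviour are illustrative.

```python
import numpy as np

# Hedged sketch of one GLA (k-means) pass with reduced comparison search:
# vectors whose current codevector did not move keep their stored distance and
# are compared only against codevectors that moved in the previous pass.

def gla_pass(train, codebook, assign, best_d2, active):
    full = np.arange(len(codebook))
    act = np.flatnonzero(active)
    for i, x in enumerate(train):
        if active[assign[i]]:
            cand = full                       # current codevector moved: full search
            best_d2[i] = np.inf
        else:
            cand = act                        # only moved codevectors can now be closer
        if len(cand) == 0:
            continue
        d2 = np.sum((codebook[cand] - x) ** 2, axis=1)
        j = int(np.argmin(d2))
        if d2[j] < best_d2[i]:
            assign[i], best_d2[i] = cand[j], d2[j]
    new_codebook = codebook.copy()
    for j in range(len(codebook)):
        members = train[assign == j]
        if len(members):
            new_codebook[j] = members.mean(axis=0)
    moved = np.any(new_codebook != codebook, axis=1)   # activity for the next pass
    return new_codebook, assign, best_d2, moved

rng = np.random.default_rng(3)
train = rng.normal(size=(200, 4))
codebook = train[rng.choice(200, 8, replace=False)].copy()
assign = np.zeros(200, dtype=int)
best_d2 = np.full(200, np.inf)
active = np.ones(8, dtype=bool)                        # first pass: everything is active
for _ in range(10):
    codebook, assign, best_d2, active = gla_pass(train, codebook, assign, best_d2, active)
```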

69 citations


01 Jan 2000
TL;DR: This work compares the performance of different clustering algorithms and the influence of the codebook size, to find out which method provides the best clustering result and whether the difference in quality contributes to an improvement in the recognition accuracy of the system.
Abstract: In speaker identification, we match a given (unknown) speaker to the set of known speakers in a database. The database is constructed from the speech samples of each known speaker. Feature vectors are extracted from the samples by short-term spectral analysis, and processed further by vector quantization for locating the clusters in the feature space. We study the role of vector quantization in the speaker identification system. We compare the performance of different clustering algorithms, and the influence of the codebook size. We want to find out which method provides the best clustering result, and whether the difference in quality contributes to an improvement in the recognition accuracy of the system.
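
A minimal sketch of the identification step implied above, assuming each known speaker is represented by a trained codebook: the unknown utterance is assigned to the speaker whose codebook yields the lowest average quantization distortion. The codebooks here are random placeholders.

```python
import numpy as np

# Hedged sketch of VQ-based speaker identification: pick the speaker whose
# codebook quantizes the test feature vectors with the lowest average distortion.

def avg_distortion(features, codebook):
    # distance from every feature vector to its nearest codevector
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

def identify(features, speaker_codebooks):
    scores = {name: avg_distortion(features, cb) for name, cb in speaker_codebooks.items()}
    return min(scores, key=scores.get)

rng = np.random.default_rng(4)
# Placeholder codebooks (64 codevectors of 12-dim features per speaker).
speaker_codebooks = {f"spk{k}": rng.normal(loc=3 * k, size=(64, 12)) for k in range(3)}
test_features = rng.normal(loc=3.0, size=(100, 12))    # should match spk1
print(identify(test_features, speaker_codebooks))
```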

62 citations


Journal ArticleDOI
TL;DR: A new fast approach to the nearest codeword search using a single kick-out condition is proposed, with which a considerable saving of the central processing unit time needed to encode a data set (using a given codebook) can be achieved.
Abstract: A new fast approach to the nearest codeword search using a single kick-out condition is proposed. The nearest codeword found by the proposed approach is identical to the one found by the full search, although the processing time is much shorter. The principle is to bypass those codewords which satisfy the proposed kick-out condition without the actual (and time-consuming) computation of the distortions from the bypassed codewords to the query vector. Due to the efficiency and simplicity of the proposed condition, a considerable saving of the central processing unit time needed to encode a data set (using a given codebook) can be achieved. Moreover, the memory requirement is low. Comparisons with some previous works are included to show these two benefits.
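
The paper's specific kick-out condition is not reproduced here; the sketch below uses the classical mean-based bound, sum((x - c)^2) >= k * (mean(x) - mean(c))^2, as a stand-in. Like the described approach, it bypasses codewords without computing their full distortions while remaining equivalent to the full search.

```python
import numpy as np

# Hedged sketch of a kick-out style, full-search-equivalent codeword search.
# The paper's single kick-out condition is not reproduced; this uses the
# classical mean-based bound  sum((x-c)^2) >= k*(mean(x)-mean(c))^2
# to skip codewords that provably cannot beat the current best match.

def nearest_codeword_kickout(x, codebook, cb_means):
    k = len(x)
    x_mean = x.mean()
    best_j, best_d2 = -1, np.inf
    skipped = 0
    for j, c in enumerate(codebook):
        if k * (cb_means[j] - x_mean) ** 2 >= best_d2:   # kick-out: cannot be closer
            skipped += 1
            continue
        d2 = np.sum((c - x) ** 2)
        if d2 < best_d2:
            best_j, best_d2 = j, d2
    return best_j, best_d2, skipped

rng = np.random.default_rng(5)
codebook = rng.normal(size=(256, 16))
cb_means = codebook.mean(axis=1)                         # precomputed once per codebook
x = rng.normal(size=16)
j, d2, skipped = nearest_codeword_kickout(x, codebook, cb_means)
assert j == int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))   # identical to full search
print(f"best index {j}, skipped {skipped} of {len(codebook)} codewords")
```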

61 citations


Journal ArticleDOI
TL;DR: This paper describes a spectral index (SI)-based multiple subcodebook algorithm (MSCA) for lossy hyperspectral data compression and shows that both the codebook generation time (CGT) and coding time (CT) can be improved by a factor of around n at almost no loss of fidelity.
Abstract: This paper describes a spectral index (SI)-based multiple subcodebook algorithm (MSCA) for lossy hyperspectral data compression. The scene of a hyperspectral dataset to be compressed is delimited into n regions by segmenting its SI image. The spectra in each region have similar spectral characteristics. The dataset is then separated into n subsets, corresponding to the n regions. While keeping the total number of codevectors the same (i.e. the same compression ratio), not just a single codebook, but n smaller and more efficient subcodebooks are generated. Each subcodebook is used to compress the spectra in the corresponding region. With the MSCA, both the codebook generation time (CGT) and coding time (CT) can be improved by a factor of around n at almost no loss of fidelity. Four segmentation methods for delimiting the scene of the data cube were studied. Three hyperspectral vector quantization data compression systems that use the improved techniques were simulated and tested. The simulation results show that the CGT could be reduced by more than three orders of magnitude, while the quality of the codebooks remained good. The overall processing speed of the compression systems could be improved by a factor of around 1000 at an average fidelity penalty of 1.0 dB.

Proceedings ArticleDOI
28 Mar 2000
TL;DR: It is observed that MDLVQ, in the form introduced by Servetto et al. (1999), is inherently optimized for the central decoder; i.e., for zero probability of a lost description.
Abstract: Multiple description lattice vector quantization (MDLVQ) is a technique for two-channel multiple description coding. We observe that MDLVQ, in the form introduced by Servetto et al. (1999), is inherently optimized for the central decoder; i.e., for zero probability of a lost description. With a nonzero probability of description loss, performance is improved by modifying the encoding rule (using nearest neighbors with respect to "multiple description distance") and by perturbing the lattice codebook. The perturbation maintains many symmetries and hence does not significantly affect encoding or decoding complexity. An extension to more than two descriptions with attractive decoding properties is outlined.

Patent
25 Aug 2000
TL;DR: In this article, a multistage vector list quantizer comprises a first stage quantizer to select candidate first stage codewords from a plurality of first stage codewords, a reference table memory storing a set of second stage codewords for each first stage codeword, and a second stage codebook constructor to generate a reduced complexity second stage codebook.
Abstract: According to one embodiment of the invention, a multistage vector list quantizer comprises a first stage quantizer to select candidate first stage codewords from a plurality of first stage codewords, a reference table memory storing a set of second stage codewords for each first stage codeword, and a second stage codebook constructor to generate a reduced complexity second stage codebook that is the union of sets corresponding to the candidate first stage codewords selected by the first stage quantizer.

Proceedings Article
Kåre Jean Jensen1, Søren Riis1
01 Jan 2000
TL;DR: An improved input coding method for a text-to-phoneme (TTP) neural network model for speaker independent speech recognition systems that is jointly optimized with the TTP model, ensuring that the coding is optimal in terms of overall performance.
Abstract: This paper describes an improved input coding method for a text-to-phoneme (TTP) neural network model for speaker independent speech recognition systems. The codebook is self-organizing and is jointly optimized with the TTP model, ensuring that the coding is optimal in terms of overall performance. The codebook is based on a set of single layer neural networks with shared weights. Experiments show that performance is increased compared to the NETTalk and NETSpeak models.

Patent
22 Nov 2000
TL;DR: In this article, a vector quantization method for encoding image data using codebooks was proposed, where each image vector of the image data is encoded by determining a codevector within the first codebook that best approximates it.
Abstract: The present method relates to a method for encoding image data using vector quantization. According to the invention, a small first codebook is determined. Each image vector of the image data is then encoded by determining a codevector within the first codebook that best approximates the image vector within the image data. A first index map is generated by replacing each image vector with an index indicative of the codevector's location within the first codebook. Then difference data are evaluated based on the original image data and the encoded image data. Each error vector of the difference data is then encoded using another small codebook. In another index map the error vectors are then replaced with an index indicative of the codevector's location within the other codebook. Evaluation of the error based on the difference data and the encoded difference data provides new difference data which is used to evaluate the fidelity of the approximation process performed for compression. The steps of encoding of the difference data are repeated until a control error of the difference data is smaller than a given threshold.
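
A hedged sketch of the multi-pass encoding loop described above: each pass quantizes the remaining difference data with a small codebook, stores an index map, and passes repeat until the control error falls below a threshold. The codebooks and the error measure here are placeholders, not the patent's construction.

```python
import numpy as np

# Hedged sketch of residual, multi-pass VQ encoding with a threshold-controlled
# stop, as outlined above.  Codebooks are illustrative placeholders.

def encode_pass(vectors, codebook):
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    return idx, vectors - codebook[idx]              # index map and residual (difference data)

def residual_vq_encode(vectors, codebooks, threshold):
    index_maps, residual = [], vectors
    for cb in codebooks:
        idx, residual = encode_pass(residual, cb)
        index_maps.append(idx)
        if np.mean(residual ** 2) < threshold:       # control error small enough: stop
            break
    return index_maps

def residual_vq_decode(index_maps, codebooks):
    out = 0.0
    for idx, cb in zip(index_maps, codebooks):
        out = out + cb[idx]
    return out

rng = np.random.default_rng(6)
vectors = rng.normal(size=(500, 16))
codebooks = [rng.normal(size=(16, 16)) for _ in range(4)]   # small codebook per pass
maps = residual_vq_encode(vectors, codebooks, threshold=1e-3)
recon = residual_vq_decode(maps, codebooks)
print(len(maps), "passes, MSE:", float(np.mean((vectors - recon) ** 2)))
```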

Patent
24 Nov 2000
TL;DR: In this article, a vector quantization method was proposed to encode a hyper-spectral image datacube using a temporary codebook having a small number, n, of codevectors.
Abstract: The present invention relates to a method of encoding a hyper-spectral image datacube using vector quantization. According to the invention, a temporary codebook having a small number, n, of codevectors is generated from the datacube. The datacube is processed using the temporary codebook to form n clusters (subsets) of vectors. A codevector corresponds to a cluster and is the centre of gravity for the cluster. In the compression process, vectors in each cluster are encoded by the corresponding codevector. Then the reconstruction fidelity of the encoded cluster is evaluated. When the fidelity of an encoded cluster is better than a predetermined fidelity, the codevector relating to that cluster is stored in a final codebook and the vectors in the cluster are expressed with the index (address) of the codevector in the final codebook. When the fidelity of an encoded cluster is not suitable, the cluster is reencoded with a new temporary codebook generated from this cluster, and the same process is repeated. The compression process is recursively implemented until all clusters are processed.

Journal ArticleDOI
TL;DR: It is shown how a few fixed vectors, designed from a set of training images by a clustering algorithm, accelerate the search for the domain blocks and improve both the rate-distortion performance and the decoding speed of a pure fractal coder when they are used as a supplementary vector quantization codebook.
Abstract: In fractal image compression, the code is an efficient binary representation of a contractive mapping whose unique fixed point approximates the original image. The mapping is typically composed of affine transformations, each approximating a block of the image by another block (called domain block) selected from the same image. The search for a suitable domain block is time-consuming. Moreover, the rate-distortion performance of most fractal image coders is not satisfactory. We show how a few fixed vectors, designed from a set of training images by a clustering algorithm, accelerate the search for the domain blocks and improve both the rate-distortion performance and the decoding speed of a pure fractal coder when they are used as a supplementary vector quantization codebook. We implemented two quadtree-based schemes: a fast top-down heuristic technique and one optimized with a Lagrange multiplier method. For the 8 bits per pixel (bpp) luminance part of the 512×512 Lena image, our best scheme achieved a peak signal-to-noise ratio of 32.50 dB at 0.25 bpp.

Journal ArticleDOI
TL;DR: A hierarchical three-sided side match finite-state vector quantization (HTSMVQ) method is proposed that can make the state codebook size as small as possible (the size is reduced to one if the prediction performs perfectly) and regularly refreshes the codewords to alleviate the error propagation of side match.
Abstract: Several low bit-rate still-image compression methods have been presented, such as SPIHT, hybrid VQ, and the Wu-Chen (see Proc. IEEE ICASSP, 1997) method. In particular, the image "Lena" can be compressed using less than 0.15 bpp at 31.4 dB or higher. These methods exercise analysis techniques (wavelet or subband) before distributing the bit rate to each piece of an image, thus the dilemma between bit rate and distortion can be solved. In this paper, we propose a simple but comparable method that adopts the technique of side match VQ only. Side match vector quantization (SMVQ) is an effective VQ coding scheme at low bit rates. The conventional (two-sided) side match VQ utilizes the codeword information of two neighboring blocks to predict the state codebook of an input vector. We propose a hierarchical three-sided side match finite-state vector quantization (HTSMVQ) method that can: (1) make the state codebook size as small as possible (the size is reduced to one if the prediction can perform perfectly); (2) improve the prediction quality for edge blocks; and (3) regularly refresh the codewords to alleviate the error propagation of side match. In the simulation results, the image "Lena" can be coded with a PSNR of 34.682 dB at 0.25 bpp. This is better than SPIHT, EZW, FSSQ and hybrid VQ with 34.1, 33.17, 33.1, and 33.7 dB, respectively. At a bit rate lower than 0.15 bpp, only the enhanced version of EZW performs better than our method, by about 0.14 dB.

Journal ArticleDOI
TL;DR: It is shown that the use of a variable scale factor which is a function of the iteration number offers faster convergence than the modified K-means algorithm with a fixed scale factor, without affecting the optimality of the codebook.
Abstract: Previously a modified K-means algorithm for vector quantization design has been proposed where the codevector updating step is as follows: new codevector = current codevector + scale factor × (new centroid − current codevector). This algorithm uses a fixed value for the scale factor. In this paper, we propose the use of a variable scale factor which is a function of the iteration number. For the vector quantization of image data, we show that it offers faster convergence than the modified K-means algorithm with a fixed scale factor, without affecting the optimality of the codebook.
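
A minimal sketch of the update rule quoted above, with an illustrative iteration-dependent scale factor; the paper's actual schedule for the scale factor is not reproduced here.

```python
import numpy as np

# Hedged sketch of the modified K-means update
#   new = current + s(t) * (centroid - current)
# with an illustrative, iteration-dependent scale factor s(t).

def scale_factor(iteration, s_max=1.8, s_min=1.0, decay=0.2):
    """Illustrative schedule: start over-relaxed, approach the plain centroid update."""
    return s_min + (s_max - s_min) * np.exp(-decay * iteration)

def update_codebook(train, codebook, iteration):
    d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    s = scale_factor(iteration)
    new_cb = codebook.copy()
    for j in range(len(codebook)):
        members = train[assign == j]
        if len(members):
            centroid = members.mean(axis=0)
            new_cb[j] = codebook[j] + s * (centroid - codebook[j])
    return new_cb

rng = np.random.default_rng(7)
train = rng.normal(size=(400, 2))
codebook = train[rng.choice(400, 8, replace=False)].copy()
for t in range(15):
    codebook = update_codebook(train, codebook, t)
```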

Patent
19 Apr 2000
TL;DR: In this article, the Huffman codebook selection section includes a code length calculation section for calculating the code length which would result from a Huffman encoding operation of each group of data using each Huffman codebook.
Abstract: An encoder of the present invention includes: a number G of storage sections (G is an integer equal to or greater than 1) for storing a number G of groups of data; a Huffman codebook selection section for selecting one of a number H of Huffman codebooks (H is an integer equal to or greater than 1) for each of the groups of data stored in the respective storage sections, each of the Huffman codebooks having a codebook number; a number G of Huffman encoding sections, each of the Huffman encoding sections Huffman-encoding a corresponding one of the G groups of data by using one of the Huffman codebooks which is selected by the Huffman codebook selection section for the one group of data; and a codebook number encoding section for encoding the codebook number of each Huffman codebook selected by the Huffman codebook selection section. The Huffman codebook selection section includes a code length calculation section for calculating a code length which would result from a Huffman encoding operation of each of the G groups of data using each Huffman codebook, and a control section for selecting one of the Huffman codebooks which is suitable for the group of data based on the code length calculated by the code length calculation section. When the Huffman codebook selected is an unsigned codebook, a number of bits required for sign information has previously been added to the code length calculated by the code length calculation section.
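
A hedged sketch of the selection rule described above: compute the code length each candidate Huffman codebook would produce for a group of data, add sign bits for unsigned codebooks, and keep the cheapest codebook. The codeword-length tables below are toy placeholders, not any real standard's codebooks.

```python
# Hedged sketch of codebook selection by computed code length, with the extra
# sign bits for unsigned codebooks described above.  Tables are toy placeholders.

def code_length(group, codebook):
    """Total bits to encode `group` with `codebook`.
    codebook = (length_table indexed by |value|, is_unsigned); an unsigned book
    pays one explicit sign bit for every nonzero value."""
    lengths, is_unsigned = codebook
    bits = sum(lengths[abs(v)] for v in group)
    if is_unsigned:
        bits += sum(1 for v in group if v != 0)
    return bits

def select_codebook(group, codebooks):
    costs = [code_length(group, cb) for cb in codebooks]
    best = min(range(len(codebooks)), key=costs.__getitem__)
    return best, costs[best]

# Two toy codebooks over magnitudes 0..3: a signed one and an unsigned one.
signed_book = ({0: 1, 1: 3, 2: 3, 3: 4}, False)
unsigned_book = ({0: 1, 1: 2, 2: 3, 3: 3}, True)
group = [0, 1, -1, 2, 0, 3, -2]
print(select_codebook(group, [signed_book, unsigned_book]))
```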

Patent
18 Apr 2000
TL;DR: In this article, an encoder of the present invention includes: G storage sections for storing G groups of data; a selection section for selecting one of H Huffman codebooks having codebook numbers for each of the groups; G encoding sections Huffman-encoding the G groups by using the selected Huffman codesbook; and an encoding section for encoding the codebook number of each Huffman codedbook selected.
Abstract: An encoder of the present invention includes: G storage sections for storing G groups of data; a selection section for selecting one of H Huffman codebooks having codebook numbers for each of the groups of data; G encoding sections Huffman-encoding the G groups of data by using the selected Huffman codebook; and an encoding section for encoding the codebook number of each Huffman codebook selected. The selection section includes a calculation section for calculating a code length and a control section for selecting one of the Huffman codebooks. When the Huffman codebook selected is an unsigned codebook, a number of bits required for sign information has previously been added to the calculated code length.

Patent
14 Jun 2000
TL;DR: In this paper, a fast codebook search method for finding an optimal codebook from a group of Huffman codebooks was proposed for MPEG-compliant audio encoding, wherein the method was especially suited for MPEG compliant audio encoding.
Abstract: A fast codebook search method for finding an optimal Huffman codebook from a group of Huffman codebooks, wherein the method is especially suited for MPEG-compliant audio encoding. In order to select an optimal codebook from among candidate codebooks for a given sub-region, a bit difference table is created, which for any given data pair contains a bit difference value. The bit difference value is the difference between the number of bits needed for a given data pair (or quadruple) in a first candidate codebook and a second candidate codebook [N bits−M bits]. By summing all such bit difference values for the data samples in a given sub-region, a quick determination can be made as to which codebook would encode the sub-region using the fewest bits (based on the size and/or sign of the sum(s)). For sub-regions having three candidate codebooks, two bit difference sums are calculated. For an implementation of the MPEG-1 Layer III Audio Encoding standard, only 20 bit difference tables are required in order to cover every possible combination of codebook candidates.

Journal ArticleDOI
TL;DR: A simple improvement to PDS based on principal components analysis (PCA), which rotates the codebook without altering the interpoint distances is described, which can be used to improve many fast encoding algorithms.
Abstract: Partial distance search (PDS) is a method of reducing the amount of computation required for vector quantization encoding. The method is simple and general enough to be incorporated into many fast encoding algorithms. This paper describes a simple improvement to PDS based on principal components analysis (PCA), which rotates the codebook without altering the interpoint distances. Like PDS, this new method can be used to improve many fast encoding algorithms. The algorithm decreases the decoding time of PDS by as much as 44%, and decreases the decoding time of k-d trees by as much as 66% on common vector quantization benchmarks.
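
A hedged sketch of the idea: the partial distance test is exact, and rotating both codebook and query with the same PCA-derived orthogonal matrix leaves distances unchanged while concentrating variance in the leading dimensions, so the test can reject codewords earlier. This is an illustration, not the paper's implementation.

```python
import numpy as np

# Hedged sketch: partial distance search (PDS) on a PCA-rotated codebook.
# Rotating codebook and query by the same orthogonal matrix preserves all
# distances but front-loads the variance, helping PDS terminate earlier.

def pca_rotation(codebook):
    centered = codebook - codebook.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, ::-1]                        # columns sorted by decreasing variance

def pds_nearest(x, codebook):
    """Full-search-equivalent nearest codeword using the partial distance test."""
    best_j, best_d2 = -1, np.inf
    for j, c in enumerate(codebook):
        d2 = 0.0
        for xi, ci in zip(x, c):
            d2 += (xi - ci) ** 2
            if d2 >= best_d2:                      # partial distance already too large
                break
        else:
            best_j, best_d2 = j, d2
    return best_j

rng = np.random.default_rng(8)
codebook = rng.normal(size=(128, 16)) @ np.diag(np.linspace(0.1, 3.0, 16))
R = pca_rotation(codebook)
rot_codebook = codebook @ R                        # rotate once, offline
x = rng.normal(size=16) @ np.diag(np.linspace(0.1, 3.0, 16))
j = pds_nearest(x @ R, rot_codebook)               # rotate the query the same way
assert j == int(np.argmin(np.sum((codebook - x) ** 2, axis=1)))
```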

Proceedings ArticleDOI
28 Mar 2000
TL;DR: Thin client compression (TCC) is described, a novel codec for screendumps and sequences of such images that exploits both local and global redundancy as well as interframe redundancy and achieves the best end-to-end latency over low bandwidth connections.
Abstract: In this paper, we describe thin client compression (TCC), a novel codec for screendumps and sequences of such images that exploits both local and global redundancy as well as interframe redundancy. Our method extends textual image compression to non-bilevel images, and uses three piecewise-constant models to separately code bilevel marks, non-bilevel marks, and the residue. It also speeds up pattern matching and substitution by exploiting the absence of noise in synthetic images, and shares its codebook across images. Our method compresses a series of test images 2.6 to 8.2 times better than state-of-the-art methods. Its speed is adequate for interactive logins, and it achieves the best end-to-end latency over low bandwidth connections.

01 Jan 2000
TL;DR: The K-tree algorithm scales up to larger data sets than TSVQ, produces codebooks with somewhat higher distortion rates, but facilitates greater control over the properties of the resulting codebooks.
Abstract: We describe a clustering algorithm for the design of height balanced trees for vector quantisation. The algorithm is a hybrid of the B-tree and the k-means clustering procedure. K-tree supports on-line dynamic tree construction. The properties of the resulting search tree and clustering codebook are comparable to that of codebooks obtained by TSVQ, the commonly used recursive k-means algorithm for constructing vector quantization search trees. The K-tree algorithm scales up to larger data sets than TSVQ, produces codebooks with somewhat higher distortion rates, but facilitates greater control over the properties of the resulting codebooks. We demonstrate the properties and performance of K-tree and compare it with TSVQ and with k-means.

Proceedings ArticleDOI
11 Dec 2000
TL;DR: K-tree as mentioned in this paper is a hybrid of the B-tree and the k-means clustering procedure for vector quantization, which is a clustering algorithm for the design of height balanced trees.
Abstract: We describe a clustering algorithm for the design of height balanced trees for vector quantisation. The algorithm is a hybrid of the B-tree and the k-means clustering procedure. K-tree supports on-line dynamic tree construction. The properties of the resulting search tree and clustering codebook are comparable to that of codebooks obtained by TSVQ, the commonly used recursive k-means algorithm for constructing vector quantization search trees. The K-tree algorithm scales up to larger data sets than TSVQ, produces codebooks with somewhat higher distortion rates, but facilitates greater control over the properties of the resulting codebooks. We demonstrate the properties and performance of K-tree and compare it with TSVQ and with k-means.

Proceedings ArticleDOI
17 Sep 2000
TL;DR: A computationally efficient, high quality, vector quantization scheme based on a parametric probability density function (PDF) is developed for encoding speech line spectral frequencies (LSF), which provides 2-3 bits gain over conventional MSVQ schemes.
Abstract: A computationally efficient, high quality, vector quantization scheme based on a parametric probability density function (PDF) is developed for encoding speech line spectral frequencies (LSF). For this purpose, speech LSFs are modeled as i.i.d. realizations of a multivariate normal mixture density. The mixture model parameters are efficiently estimated from the training data using the expectation maximization (EM) algorithm. The estimated density is suitably quantized using transform coding and bit-allocation techniques for both fixed rate and variable rate systems. Source encoding using the resultant codebook involves no searches and its computational complexity is minimal and independent of the rate of the system. Experimental results show that the proposed scheme provides 2-3 bits gain over conventional MSVQ schemes. The proposed memoryless quantizer is enhanced to form a quantizer with memory. The quantizer with memory provides transparent quality speech at 20 bits/frame.

Journal ArticleDOI
TL;DR: An embedded image compression scheme using the discrete multiwavelet transform (DMWT) and a new coding scheme which combines scalar quantization and 2×2 vector quantization (VQ) is proposed, which is shown to have a better performance than the current schemes.
Abstract: An embedded image compression scheme using the discrete multiwavelet transform (DMWT) is proposed. The proposed coding scheme is based on a new prefilter design for DMWT and a new embedded coding algorithm which combines scalar quantization and 2×2 vector quantization (VQ). A new algorithm for embedded VQ codebook generation is proposed, which is shown to have a better performance than the current schemes. The performance of the proposed compression scheme is comparable to that of the SPIHT algorithm.

Journal ArticleDOI
TL;DR: An adaptive VQ (AVQ) scheme is proposed, based on a one-dimensional codebook structure where codevectors are overlapped and linearly shifted, which makes it easier to implement in hardware than any existing AVQ method.
Abstract: A discrete semi-periodic signal can be described as x(n) = x(n+T+ΔT) + Δx, ∀n, where T is the fundamental period, ΔT represents a random period variation, and Δx is an amplitude variation. Discrete ECG signals are treated as semi-periodic, where T and Δx are associated with the heart beat rate and the baseline drift, respectively. These two factors cause coding inefficiency for ECG signal compression using vector quantisation (VQ). First, the periodic characteristic of ECG signals creates data redundancy among codevectors in a traditional two-dimensional codebook. Secondly, the fixed codevectors in traditional VQ result in low adaptability to signal variations. To solve these two problems simultaneously, an adaptive VQ (AVQ) scheme is proposed, based on a one-dimensional (1D) codebook structure, where codevectors are overlapped and linearly shifted. To further enhance the coding performance, the Δx term is extracted and encoded separately, before 1D-AVQ is applied. The data in the first 3 min of all 48 ECG records from the MIT-BIH arrhythmia database are used as the test signals, and no codebook training is carried out in advance. The compressed data rate is 265.2±92.3 bits/s at 10.0±4.1% PRD. No codebook storage or transmission is required. Only a very small codebook storage space is needed temporarily during the coding process. In addition, the linearly shifted nature of the codevectors makes it easier to implement in hardware than any existing AVQ method.
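
A minimal sketch of the one-dimensional codebook structure described above, assuming the codebook is a single 1-D buffer whose overlapping, linearly shifted windows act as codevectors; the adaptive codebook update and the separate baseline (Δx) coding are omitted.

```python
import numpy as np

# Hedged sketch of a 1-D codebook with overlapped, linearly shifted codevectors:
# the "index" of a codevector is just its shift position in the buffer.
# Buffer contents and block sizes are illustrative placeholders.

def encode_block(block, buffer_1d):
    k = len(block)
    # all overlapping windows of length k (each window is one codevector)
    windows = np.lib.stride_tricks.sliding_window_view(buffer_1d, k)
    d2 = np.sum((windows - block) ** 2, axis=1)
    shift = int(np.argmin(d2))
    return shift, windows[shift]

rng = np.random.default_rng(9)
buffer_1d = np.sin(np.linspace(0, 8 * np.pi, 256))       # placeholder 1-D codebook
block = np.sin(np.linspace(0.3, 0.3 + np.pi / 4, 16))    # one input block of 16 samples
shift, codevector = encode_block(block, buffer_1d)
print("best shift:", shift, "distortion:", float(np.sum((codevector - block) ** 2)))
```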

Patent
05 Jan 2000
TL;DR: In this article, the bit error probability influences the codebooks through the calculation of channel transition probabilities for all combinations of codewords (90) receivable from the channel (26), given all combinations of codewords (90) transmittable through the channel (26).
Abstract: A communication system (20) employs fixed rate channel-optimized, trellis-coded quantization (COTCQ) at a plurality of diverse encoding bit rates. COTCQ is performed through a COTCQ encoder (40) and COTCQ decoder (54). The COTCQ encoder and decoder (40,54) each include a codebook table (62) having at least one codebook (64) for each encoding bit rate. Each codebook (64) is configured in response to the bit error probability of the channel (26) through which the communication system (20) communicates. The bit error probability influences codebooks through the calculation of channel transition probabilities for all combinations of codewords (90) receivable from the channel (26) given all combinations of codewords (90) transmittable through the channel (26). Channel transition probabilities are responsive to base channel transition probabilities and the Hamming distances between indices for codewords within subsets of the transmittable and receivable codewords.