
Showing papers on "List decoding" published in 1990


Patent
Dan S. Bloomberg1, Robert F. Tow1
31 Jul 1990
TL;DR: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding, as mentioned in this paper.
Abstract: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding. This error detection may be linked to or compared against the error statistics from an alternative decoding process, such as the binary image processing techniques that are described herein, to increase the reliability of the decoding that is obtained.

286 citations


Patent
31 Jul 1990
TL;DR: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding, as mentioned in this paper.
Abstract: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding. This error detection may be linked to or compared against the error statistics from an alternative decoding process, such as the binary image processing techniques that are described herein, to increase the reliability of the decoding that is obtained.

207 citations


Journal ArticleDOI
TL;DR: Comparing encoding and decoding schemes requires one first to look into information and coding theory; this article discusses problems and possible solutions in encoding information.
Abstract: Comparing encoding and decoding schemes requires one first to look into information and coding theory. This article discusses problems and possible solutions in encoding information.

147 citations


Journal ArticleDOI
Jehoshua Bruck1, Moni Naor1
TL;DR: The problem of maximum-likelihood decoding of linear block codes is known to be hard; it is shown that the problem remains hard even if the code is known in advance and can be preprocessed for as long as desired in order to devise a decoding algorithm.
Abstract: The problem of maximum-likelihood decoding of linear block codes is known to be hard. It is shown that the problem remains hard even if the code is known in advance and can be preprocessed for as long as desired in order to devise a decoding algorithm. The hardness is based on the fact that the existence of a polynomial-time algorithm would imply that the polynomial hierarchy collapses. Thus, some linear block codes probably do not have an efficient decoder. The proof is based on results in complexity theory that relate uniform and nonuniform complexity classes.
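
To make the cost concrete, here is a minimal brute-force maximum-likelihood decoder in Python (an illustrative sketch, not the paper's construction): it enumerates all 2^k codewords of a binary linear [n, k] code, which is exactly the exponential search that the hardness results suggest cannot, in general, be avoided.

```python
# Brute-force maximum-likelihood decoding of a binary linear [n, k] code:
# enumerate all 2**k codewords and return the one closest, in Hamming
# distance, to the received word. The generator matrix below is illustrative.
from itertools import product

def ml_decode(G, received):
    """G: k generator rows (each a list of n bits); received: list of n bits."""
    n = len(G[0])
    best, best_dist = None, n + 1
    for msg in product([0, 1], repeat=len(G)):     # 2**k candidate messages
        cw = [sum(m * g[j] for m, g in zip(msg, G)) % 2 for j in range(n)]
        dist = sum(c != r for c, r in zip(cw, received))
        if dist < best_dist:
            best, best_dist = cw, dist
    return best, best_dist

# Example: a [4, 2] code; a word with one flipped bit is pulled back.
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
print(ml_decode(G, [1, 0, 0, 1]))   # -> ([1, 0, 1, 1], 1)
```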

132 citations


Journal ArticleDOI
TL;DR: An algebraic decoding algorithm for the 1/2-rate (32, 16, 8) quadratic residue (QR) code is found, and it is expected that the algebraic approach developed here and by M. Elia (1987) also applies to longer QR codes and to other BCH-type codes that are not fully decoded by the standard BCH decoding algorithm.
Abstract: An algebraic decoding algorithm for the 1/2-rate (32, 16, 8) quadratic residue (QR) code is found. The key idea of this algorithm is to find the error-locator polynomial by a systematic use of the Newton identities associated with the code syndromes. The techniques developed extend the algebraic decoding algorithm found recently for the (32, 16, 8) QR code. It is expected that the algebraic approach developed here and by M. Elia (1987) also applies to longer QR codes and to other BCH-type codes that are not fully decoded by the standard BCH decoding algorithm.
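
For context, the Newton identities referred to relate the power-sum syndromes of the error locators to the coefficients of the error-locator polynomial; a standard textbook statement (not quoted from the paper) is:

```latex
% Error locators X_1, \dots, X_t, syndromes S_k = \sum_{i=1}^{t} X_i^k, and
% error-locator polynomial
% \sigma(x) = \prod_{i=1}^{t} (1 - X_i x) = 1 + \sigma_1 x + \cdots + \sigma_t x^t:
S_k + \sigma_1 S_{k-1} + \cdots + \sigma_{k-1} S_1 + k\,\sigma_k = 0, \qquad 1 \le k \le t,
S_k + \sigma_1 S_{k-1} + \cdots + \sigma_t S_{k-t} = 0, \qquad k > t.
```

Solving these identities for the coefficients of sigma(x) from the known syndromes is what yields the error-locator polynomial.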

72 citations


Journal ArticleDOI
TL;DR: Information set decoding (ISD), as discussed by the authors, is an algorithm for decoding any linear code; expressions for its complexity that are logarithmically exact for virtually all codes are presented, covering complete minimum distance decoding and bounded hard-decision decoding.
Abstract: Information set decoding is an algorithm for decoding any linear code. Expressions for the complexity of the procedure that are logarithmically exact for virtually all codes are presented. The expressions cover the cases of complete minimum distance decoding and bounded hard-decision decoding, as well as the important case of bounded soft-decision decoding. It is demonstrated that these results are vastly better than those for the trivial algorithms of searching through all codewords or through all syndromes, and are significantly better than those for any other general algorithm currently known. For codes over large symbol fields, the procedure tends towards a complexity that is subexponential in the symbol size.
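
A minimal Prange-style sketch of the information-set-decoding idea (the [7, 4] Hamming example and trial budget are illustrative; the paper analyzes complexity rather than giving code): repeatedly pick k coordinates at random, solve for the unique codeword that agrees with the received word there, and accept it if the implied error pattern is light enough.

```python
# Information set decoding, Prange style, for a binary [n, k] code.
import random

def solve_gf2(A, b):
    """Solve A x = b over GF(2); A is a k x k bit matrix. None if singular."""
    k = len(A)
    M = [row[:] + [bb] for row, bb in zip(A, b)]        # augmented matrix
    for col in range(k):                                # Gauss-Jordan elimination
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[r][k] for r in range(k)]

def isd_decode(G, received, t, trials=1000):
    k, n = len(G), len(G[0])
    for _ in range(trials):
        I = random.sample(range(n), k)                  # candidate information set
        A = [[G[i][j] for i in range(k)] for j in I]    # restriction of G, transposed
        m = solve_gf2(A, [received[j] for j in I])
        if m is None:                                   # columns not independent
            continue
        cw = [sum(mi * G[i][j] for i, mi in enumerate(m)) % 2 for j in range(n)]
        if sum(c != r for c, r in zip(cw, received)) <= t:
            return cw
    return None

# Example: [7, 4] Hamming code; a single flipped bit is corrected.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(isd_decode(G, [0, 0, 1, 1, 0, 1, 0], t=1))   # -> [1, 0, 1, 1, 0, 1, 0]
```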

66 citations


Journal ArticleDOI
TL;DR: Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed.
Abstract: The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by a block interleaver and an inner rate-1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them.

60 citations


Journal ArticleDOI
TL;DR: A modified step-by-step complete decoding algorithm of this Golay code is introduced which needs fewer shift operations than Kasami's error-trapping decoder.
Abstract: An algebraic decoding method for triple-error-correcting binary BCH codes, applicable to complete decoding of the (23,12,7) Golay code, was proposed by M. Elia (see ibid., vol.IT-33, p.150-1, 1987). A modified step-by-step complete decoding algorithm for this Golay code is introduced which needs fewer shift operations than Kasami's error-trapping decoder. Based on the algorithm, a high-speed hardware decoder for this code is proposed.

24 citations


Journal ArticleDOI
Mario Blaum1, Jehoshua Bruck1
TL;DR: A decoding algorithm, based on Venn diagrams, for decoding the (23, 12, 7) Golay code is presented and is based on the design properties of the parity sets of the code.
Abstract: A decoding algorithm, based on Venn diagrams, for decoding the (23, 12, 7) Golay code is presented. The decoding algorithm is based on the design properties of the parity sets of the code. As for other decoding algorithms for the Golay code, decoding can be easily done by hand.

19 citations


Journal ArticleDOI
TL;DR: It is shown that error-erasure decoding for a cyclic code allows the correction of a combination of t errors and r erasures when 2t + r < d, where d is the minimum distance of the code.
Abstract: It is shown that error-erasure decoding for a cyclic code allows the correction of a combination of t errors and r erasures when 2t + r < d, where d is the minimum distance of the code.
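
As a quick check of this bound (the standard errors-and-erasures condition; the Golay example is illustrative):

```latex
% For the (23, 12, 7) Golay code, d = 7, so t = 2 errors together with
% r = 2 erasures are correctable:
2t + r = 2 \cdot 2 + 2 = 6 < 7 = d .
% With no erasures (r = 0) this reduces to the familiar
% t \le \lfloor (d - 1)/2 \rfloor = 3.
```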

10 citations


Journal ArticleDOI
01 Mar 1990
TL;DR: The Euclidean algorithm is used to evaluate F(x) directly, without going through the intermediate steps of solving for the error-locator and error-evaluator polynomials, making the scheme suitable for implementation in very-large-scale integrated circuits.
Abstract: In the paper, by considering a Reed-Solomon (RS) code to be a special case of a redundant residue polynomial code, a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the redundant residue polynomial code suggested by Shiozaki and Nishida [1]. This decoding scheme can be realised readily on VLSI chips.

Journal ArticleDOI
Ron M. Roth1, A. Lempel
TL;DR: A decoding procedure for Reed-Solomon codes is presented, based on a representation of the parity-check matrix by circulant blocks; it inherits both the (relatively low) time complexity of the Berlekamp-Massey algorithm and the hardware simplicity characteristic of Blahut's algorithm.
Abstract: The Fourier transform technique is used to analyze and construct several families of double-circulant codes. The minimum distance of the resulting codes is lower-bounded by 2√r, and they can easily be decoded employing the standard BCH decoding algorithm or the majority-logic decoder of Reed-Muller codes. A decoding procedure for Reed-Solomon codes is presented, based on a representation of the parity-check matrix by circulant blocks. The decoding procedure inherits both the (relatively low) time complexity of the Berlekamp-Massey algorithm and the hardware simplicity characteristic of Blahut's algorithm. The procedure makes use of the encoding circuit together with a reduced version of Blahut's decoder.

Proceedings ArticleDOI
30 Sep 1990
TL;DR: The use of erasure insertion techniques can result in increased coding gains in frequency-hop spread-spectrum communication systems and Viterbi's ratio threshold test is used with Reed-Solomon codes to determine which code symbols should be erased before decoding.
Abstract: The use of erasure insertion techniques can result in increased coding gains in frequency-hop spread-spectrum communication systems. Viterbi's ratio threshold test is used with Reed-Solomon codes to determine which code symbols should be erased before decoding. The system performance in partial-band and multiple-access interference environments is analyzed and compared to errors-only decoding and to errors-and-erasures decoding with perfect side information. When interference is strong, large coding gains are observed and error probabilities are reduced by several orders of magnitude.
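
A minimal sketch of the ratio-threshold idea (the threshold value and the per-symbol demodulator model are illustrative assumptions, not the paper's parameters): a symbol is erased when the strongest demodulator output does not sufficiently dominate the runner-up.

```python
# Viterbi's ratio-threshold test for erasure insertion: keep the hard
# decision only if the largest matched-filter output exceeds the
# second largest by at least the factor theta; otherwise erase.
def ratio_threshold(outputs, theta=2.0):
    """outputs: per-candidate demodulator energies for one M-ary symbol.
    Returns (symbol_index, erased)."""
    ranked = sorted(range(len(outputs)), key=lambda i: outputs[i], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    return best, outputs[best] < theta * outputs[runner_up]

print(ratio_threshold([9.0, 1.2, 0.8, 1.1]))   # clear winner -> (0, False)
print(ratio_threshold([2.1, 1.9, 0.5, 0.4]))   # ambiguous    -> (0, True)
```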

Proceedings ArticleDOI
16 Apr 1990
TL;DR: It is shown that the reliability performance of Reed-Muller and other majority logic decodable codes can be substantially improved at the expense of a very small reduction in throughput.
Abstract: Side information provided by sets of orthogonal check sums in a majority logic decoder for block codes is used in a type-I hybrid ARQ (automatic repeat request) error control scheme. The side information is obtained through a simple modification of the majority logic decoder. It is shown that the reliability performance of Reed-Muller and other majority logic decodable codes can be substantially improved at the expense of a very small reduction in throughput. For raw channel bit error rates up to 10^-2, the reliability of the modified decoder is many orders of magnitude greater than that of the unmodified decoder. The simplicity of the decoding circuit permits implementation in systems with very high data rates.
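
A minimal sketch of how such side information can drive the repeat request (the margin rule is an illustrative assumption, not the paper's exact modification): the margin of the majority vote over the orthogonal check sums doubles as a reliability indicator.

```python
# Majority-logic decoding of one bit with ARQ side information: estimate
# the bit by majority vote over its orthogonal check sums, and flag the
# block for retransmission when the vote is a near-tie.
def majority_with_side_info(check_sums, margin=2):
    """check_sums: 0/1 values of the orthogonal check sums on one bit.
    Returns (bit_estimate, retransmit_requested)."""
    ones = sum(check_sums)
    zeros = len(check_sums) - ones
    bit = 1 if ones > zeros else 0
    return bit, abs(ones - zeros) < margin     # weak vote -> repeat request

print(majority_with_side_info([1, 1, 1, 0]))   # confident -> (1, False)
print(majority_with_side_info([1, 0, 1, 0]))   # near-tie  -> (0, True)
```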

Journal ArticleDOI
TL;DR: Classes of systematic codes correcting burst asymmetric or unidirectional errors are proposed, which require fewer check bits than ordinary burst error correcting codes.
Abstract: Classes of systematic codes correcting burst asymmetric or unidirectional errors are proposed. These codes require fewer check bits than ordinary burst error correcting codes. Decoding algorithms for the proposed codes are also presented. Encoding and decoding of the codes are very easy.

Proceedings ArticleDOI
Evangelos Eleftheriou1, R. Cideciyan1
02 Dec 1990
TL;DR: Quaternary codes for improving the reliability of baseband data transmission over noisy partial-response channels are proposed and exhibit spectral nulls at the frequencies where the channel transfer function has zeros, together with a significant increase in minimum Euclidean distance between allowed channel output sequences.
Abstract: Quaternary codes for improving the reliability of baseband data transmission over noisy partial-response channels are proposed. These codes offer the spectral shaping properties of line codes, i.e. they exhibit spectral nulls at the frequencies where the channel transfer function has zeros, together with a significant increase in minimum Euclidean distance between allowed channel output sequences. Simple encoders and decoders for selected quaternary codes with a spectral null at DC are given for the dicode channel. The receiver employs soft-decision maximum-likelihood sequence estimation on the combined channel and FSTD (finite-state transition diagram) trellis followed by block decoding. The code design avoids long runs of identical symbols and limits the path memory of the Viterbi detector by eliminating all undesired sequences. Simulation results on the performance of 5B4Q, 8B6Q, and 9B6Q codes and their power spectral density are presented.

Journal ArticleDOI
TL;DR: A nonconstructive proof of the existence of a good error-erasure-decoding algorithm is presented, including a decoding algorithm that is an extension of the Blokh-Zyablov decoding algorithm for product codes.
Abstract: The decoding of unequal error protection product codes, which are a combination of linear unequal error protection (UEP) codes and product codes, is addressed. A nonconstructive proof of the existence of a good error-erasure-decoding algorithm is presented; however, obtaining the decoding procedure is still an open research problem. A particular subclass of UEP product codes is considered, including a decoding algorithm that is an extension of the Blokh-Zyablov decoding algorithm for product codes. For this particular subclass the decoding problem is solved.

Proceedings ArticleDOI
30 Sep 1990
TL;DR: The idea of code combining is applied to slotted frequency-hopping spread-spectrum random-access (FH/SSRA) networks and it is shown that significant utilization/delay gains are obtained by using code combining techniques.
Abstract: The idea of code combining is applied to slotted frequency-hopping spread-spectrum random-access (FH/SSRA) networks. M-ary symbol modulation schemes with hard-decision decoding are examined. Both cases of decoding, with and without side-information, are considered. In the case of decoding without side-information two decoding methods are examined: in the first (ideal decoding) the decoder uses a large number of decoding trials, while in the second (practical decoding) it uses a smaller number of trials, based on combining only the last two available copies of a packet. Asymptotic results are derived as the code length goes to infinity. It is shown that significant utilization/delay gains are obtained by using code combining techniques. A 20% performance improvement is observed for systems in which side-information is available at the decoder and the number of frequency slots is small. In this case, higher utilization along with robust system operation is obtained without increased system complexity (the only additional requirement is memory for one codeword).

Journal ArticleDOI
TL;DR: A simplified trellis decoding algorithm, in which the hard decision output of a bit with an envelope sample greater than the threshold value is accepted as correct, is presented; the results show that the trellis decoding algorithm improves BER performance.
Abstract: Trellis decoding of linear block codes in a Rayleigh fading channel is discussed. Two methods for calculating metric values for each bit in a received block are considered: the values are calculated from the received signal envelope sample and from the demodulator output. Bit error rate (BER) performances of hard decision and trellis decoding are compared using Hamming (7, 4) and Golay (24, 12) codes in computer simulations and laboratory experiments. A simplified trellis decoding algorithm, in which the hard decision output of a bit with an envelope sample greater than the threshold value is accepted as correct, is presented. Laboratory experimental results for trellis decoding in combination with Gaussian minimum-shift-keying (GMSK) modulation and frequency detection are shown. The effect of n-bit A/D conversion in signal envelope sampling is investigated experimentally. The results show that the trellis decoding algorithm improves BER performance.
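
A minimal sketch of the simplification described (threshold and data are illustrative assumptions): bits whose envelope samples exceed the threshold keep their hard decisions, and only the remaining positions are left to the trellis search.

```python
# Split a received block into bits fixed by strong envelope samples and
# bits left for the trellis decoder to resolve.
def classify_bits(hard_bits, envelopes, threshold=0.8):
    """Returns ({position: fixed bit}, [positions left to the trellis])."""
    fixed = {i: b for i, (b, e) in enumerate(zip(hard_bits, envelopes))
             if e > threshold}
    free = [i for i in range(len(hard_bits)) if i not in fixed]
    return fixed, free

print(classify_bits([1, 0, 1, 1], [1.3, 0.2, 0.9, 0.4]))
# -> ({0: 1, 2: 1}, [1, 3])
```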

28 Feb 1990
TL;DR: Based on the analysis and simulation results, the difference in error performance between optimum decoding of the overall multi-level modulation code and suboptimum multi-stage decoding of the code is very small, a fraction of a dB.
Abstract: The multi-level method is a powerful technique for constructing bandwidth-efficient modulation codes. It allows modulation codes to be constructed systematically, with arbitrarily large minimum squared Euclidean distance, from component codes in conjunction with a proper bits-to-signal mapping. If the component codes are chosen properly, the resulting modulation code not only has good minimum squared Euclidean distance but is also rich in structural properties such as linear structure, phase invariance, and trellis structure. A modulation code with linear structure has an invariant distance distribution. Phase invariance is useful in resolving carrier-phase ambiguity and ensuring rapid carrier-phase resynchronization after a temporary loss of synchronization. If the component codes have trellis structure, the resulting multi-level modulation code also has trellis structure, which allows decoding with the soft-decision Viterbi decoding algorithm. Furthermore, the multi-level structure allows the code to be decoded with multi-stage decoding, which reduces decoding complexity. Multi-stage decoding is not optimum even when the decoding of each component is optimum. Based on the analysis and simulation results, the difference in error performance between optimum decoding of the overall multi-level modulation code and suboptimum multi-stage decoding is very small, a fraction of a dB.

Journal ArticleDOI
TL;DR: An attempt is made to present an overview of error control coding techniques by suitably classifying them and drawing a qualitative comparison between them, in order to provide a design tool for the researcher.
Abstract: Error control coding has gained significant importance in most digital communication systems over the past few decades. As a result, a large variety of coding and decoding techniques, suitable for different applications and requirements, have emerged. In this paper, we attempt to present an overview of error control coding techniques by suitably classifying them and drawing a qualitative comparison between them, in order to provide a design tool for the researcher. Some simulated and practically achieved performance results of some popular codes are included to emphasize their utility and suitability. We also present some important results of our studies on concatenated coding schemes, which have become very important recently. The structure of codes, algorithms, hard- and soft-decision decoding, and their complexities are discussed, along with parameters like bandwidth, data rate, types of errors, BER vs SNR, coding gain, and buffer requirements ...

Patent
18 Aug 1990
TL;DR: In this paper, a shift register is provided in an address generating circuit to decide a new address without a special calculation when decoding is not finished, which can reduce the decoding time by using a simple bit shift.
Abstract: PURPOSE: To reduce the decoding time by providing a shift register in an address generating circuit, so as to decide a new address without a special calculation when the decoding is not finished. CONSTITUTION: A shift register is provided in an address generating circuit 2. A decoding table memory 3 is accessed depending on an address generated by the address generating circuit 2, and finished/unfinished decoding is discriminated depending on a read decoding end flag; when the decoding is finished, the decoded data is outputted to a data generating circuit 5. When the decoding is not finished, the address generating circuit 2 sets the valid bit number of the inputted succeeding address into the low-order bits of the shift register, and code bits from a line 6 are inputted sequentially into the low-order bits by the shift bit number and shifted to the left to generate a new address. Thus, when the decoding is not finished, a new address is generated by a simple bit shift only, without a special calculation between the succeeding address information and the input code bits, so the decoding time is reduced. COPYRIGHT: (C)1992,JPO&Japio
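
A minimal sketch of the general table-driven mechanism (the toy prefix code and table layout are illustrative assumptions, not the patent's design): each table entry either completes a symbol or directs the decoder to shift further code bits into the next address.

```python
# Table-driven variable-length decoding: an entry is either
# ('done', symbol) or ('more', next_address); unfinished entries are
# resolved by shifting in further code bits, which is the step the
# patent speeds up with a shift register.
# Toy prefix code: 0 -> 'a', 10 -> 'b', 11 -> 'c' (1-bit lookups).
TABLE = {
    (0, 0): ('done', 'a'),
    (0, 1): ('more', 1),
    (1, 0): ('done', 'b'),
    (1, 1): ('done', 'c'),
}

def decode(bits):
    out, addr = [], 0
    for b in bits:                  # shift one code bit into the address
        kind, val = TABLE[(addr, b)]
        if kind == 'done':
            out.append(val)
            addr = 0                # restart at the root address
        else:
            addr = val              # unfinished: val is the next address
    return ''.join(out)

print(decode([1, 0, 0, 1, 1]))      # -> 'bac'
```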

Journal ArticleDOI
TL;DR: It is shown by computer simulation that the decoding error rate is improved over traditional Viterbi decoding with equal implementation complexity.
Abstract: Viterbi decoding and sequential decoding are well known as maximum or approximately maximum likelihood decoding methods for convolutional codes. The decoding error rates of these methods have been examined in the past, but those evaluations assume a memoryless channel. By contrast, this paper proposes several maximum likelihood decoding methods for convolutional codes on channel models with memory, such as the Gilbert model. In decoding method (I), the conditional probability of the transmitted sequence given the received sequence is determined for each state of the channel; using the result as the metric, the state of the channel is handled in the same way as the code trellis, and decoding proceeds by suitably selecting the metric. In decoding method (II), the state of the channel is estimated, to suppress the growth of computational complexity with the number of channel states. These methods are applied to Viterbi decoding. It is shown by computer simulation that the decoding error rate is improved over traditional Viterbi decoding with equal implementation complexity.
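
One standard way to write the branch metric for such a joint code/channel trellis (a textbook formulation under the Gilbert model, not necessarily the paper's exact notation):

```latex
% A trellis state is a pair (code state, channel state s_i \in \{G, B\}).
% Each branch carries the metric
\lambda_i = -\log P(r_i \mid c_i, s_i) - \log P(s_{i+1} \mid s_i),
% so the Viterbi search jointly maximizes the likelihood over code paths
% and channel-state sequences.
```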

Proceedings ArticleDOI
12 Aug 1990
TL;DR: A method for decoding RS (Reed-Solomon) codes which uses fixed decoding time, independent of the number of errors present in messages, is presented; it results in improvements in signal-to-noise ratio and error correction capability and saves bandwidth.
Abstract: A method for decoding RS (Reed-Solomon) codes which uses fixed decoding time, independent of the number of errors present in messages, is presented. The method is much simpler than existing decoding algorithms for RS codes. However, for large values of p (p >= 0.2), the performance degrades significantly, because at large noise levels the improvement obtained by error correction codes is less significant and is not necessarily worth the calculations and improved error correction techniques. This technique has not only resulted in improvements in signal-to-noise ratio and error correction capability but also saves bandwidth. This saving was achieved by shortening the block length of the RS code, omitting information symbols that do not reduce its minimum distance; any shortened RS code is therefore also a maximum distance separable code. The use of neural network approaches improved the speed of computation, simplicity, and robustness, in addition to the improvements in error correction and signal-to-noise ratio.


01 Feb 1990
TL;DR: Experimental data indicate that Chase's Rank Decoding algorithm, when used with simple parity check codes, provides coding gains of 2.0 to 4.0 dB, which is above average for a soft decision decoding algorithm.
Abstract: It is well known that the use of channel state information can improve decoding reliability, because estimates of channel noise can be used to help identify which received symbols are most likely to be in error. Any technique which uses channel noise information to improve decoding is called a soft decision decoding algorithm. Discarding channel state information in the decoding process requires an increase in the transmitter power needed to achieve the same decoding error probability as when channel state information is used; the difference can be as much as 2 dB. Much contemporary research in error control coding attempts to design soft decision algorithms and to evaluate the improvement in code performance which they provide. Experimental data indicate that Chase's Rank Decoding algorithm, when used with simple parity check codes, provides coding gains of 2.0 to 4.0 dB. Keywords: Decoding; Soft decision; Coding gain; Chase; Parity checks.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: It is shown that significant performance advantages can be achieved by combining conventional maximum-likelihood decoding with the context within the speech signal to make global decisions.
Abstract: Research on a technique for improving speech coder performance in the presence of errors is presented. It is shown that significant performance advantages can be achieved by combining conventional maximum-likelihood decoding with the context within the speech signal to make global decisions. The hidden Markov model (HMM) is integrated into a combined channel and speech decoder. A theoretical development of a global maximum-likelihood decoder is presented. Results of testing in channel errors are presented, demonstrating enhanced error performance.