
Showing papers on "Sequential decoding published in 1990"


Patent
Dan S. Bloomberg1, Robert F. Tow1
31 Jul 1990
TL;DR: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding as mentioned in this paper.
Abstract: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding. This error detection may be linked to or compared against the error statistics from an alternative decoding process, such as the binary image processing techniques that are described herein, to increase the reliability of the decoding that is obtained.

286 citations


Journal ArticleDOI
S.J. Simmons1
TL;DR: A breadth-first trellis decoding algorithm is introduced for application to sequence estimation in digital data transmission and is shown to exhibit an error-rate versus average-computational-complexity behavior that is much superior to the Viterbi algorithm and also improves on the M-algorithm.
Abstract: A breadth-first trellis decoding algorithm is introduced for application to sequence estimation in digital data transmission. The high degree of inherent parallelism makes a parallel-processing implementation attractive. The algorithm is shown to exhibit an error-rate versus average-computational-complexity behavior that is much superior to the Viterbi algorithm and also improves on the M-algorithm. The decoding algorithm maintains a variable number of paths as its computation adapts to the channel noise actually encountered. Buffering of received samples is required to support this. Bounds that are evaluated by trellis search are produced for the error event rate and average number of survivors. Performance is evaluated with conventional binary convolutional codes over both binary-symmetric-channel (BSC) and additive-white-Gaussian-noise (AWGN) channels. Performance is also found for multilevel AM and phase-shift-keying (PSK) codes and simple intersymbol interference responses over an AWGN channel. At lower signal-to-noise ratios, Monte Carlo simulations are used to improve on the bounds and to investigate decoder dynamics. >

282 citations
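The bounded-survivor, breadth-first search this entry describes can be sketched in a few lines. The toy decoder below is an illustration, not the paper's algorithm: it decodes a rate-1/2, K=3 convolutional code (generators 7 and 5 octal, a standard textbook choice) over a hard-decision channel, keeping only the M best partial paths at each depth, as in the M-algorithm the paper compares against.

```python
# Toy breadth-first M-algorithm decoder for a rate-1/2, K=3
# convolutional code (generators 7 and 5 octal). Illustrative only.

def encode_step(state, bit):
    """Return (next_state, output bit pair) for generators 7=111, 5=101."""
    reg = (bit << 2) | state                    # newest bit, then 2 state bits
    g = lambda mask: bin(reg & mask).count("1") & 1
    return reg >> 1, (g(0b111), g(0b101))

def m_algorithm(received, M=4):
    """Breadth-first search keeping the M best paths by Hamming metric."""
    paths = [(0, 0, [])]                        # (metric, state, decoded bits)
    for r in received:                          # r is one received bit pair
        nxt = []
        for metric, state, bits in paths:
            for b in (0, 1):
                s2, out = encode_step(state, b)
                d = (out[0] != r[0]) + (out[1] != r[1])
                nxt.append((metric + d, s2, bits + [b]))
        nxt.sort(key=lambda p: p[0])
        paths = nxt[:M]                         # prune to M survivors
    return paths[0][2]

msg = [1, 0, 1, 1, 0, 0]                        # last K-1 = 2 bits flush the encoder
state, tx = 0, []
for b in msg:
    state, out = encode_step(state, b)
    tx.append(out)
rx = [list(p) for p in tx]
rx[2][0] ^= 1                                   # inject one channel bit error
print(m_algorithm(rx))                          # → [1, 0, 1, 1, 0, 0]
```

Unlike the Viterbi algorithm, paths are pruned by rank rather than merged by state, so the work per step is governed by M rather than by the number of trellis states.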


Patent
31 Jul 1990
TL;DR: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding as mentioned in this paper.
Abstract: Weighted and unweighted convolution filtering processes are provided for decoding bitmap image space representations of self-clocking glyph shape codes and for tracking the number and locations of the ambiguities or "errors" that are encountered during the decoding. This error detection may be linked to or compared against the error statistics from an alternative decoding process, such as the binary image processing techniques that are described herein, to increase the reliability of the decoding that is obtained.

207 citations


Journal ArticleDOI
TL;DR: The unequal error protection capabilities of convolutional codes belonging to the family of rate-compatible punctured convolutional codes (RCPC codes) are studied and a number of examples are provided to show that it is possible to accommodate widely different error protection levels within short information blocks.
Abstract: The unequal error protection capabilities of convolutional codes belonging to the family of rate-compatible punctured convolutional codes (RCPC codes) are studied. The performance of these codes is analyzed and simulated for flat fading Rice and Rayleigh channels with differentially coherent four-phase modulation (4-DPSK). To mitigate the effect of fading, interleavers are designed for these unequal error protection codes, with the interleaving performed over one or two blocks of 256 channel bits. These codes are decoded by means of the Viterbi algorithm using both soft symbol decisions and channel state information. For reference, the performance of these codes on a Gaussian channel with coherent binary phase-shift keying (2-CPSK) is presented. A number of examples are provided to show that it is possible to accommodate widely different error protection levels within short information blocks. Unequal error protection codes for a subband speech coder are studied in detail. A detailed study of the effect of the code and channel parameters such as the encoder memory, the code rate, interleaver depth, fading bandwidth, and the contrasting performance of hard and soft decisions on the received symbols is provided. >

200 citations


Journal ArticleDOI
TL;DR: Comparing encoding and decoding schemes requires one first to look into information and coding theory; this article discusses problems and possible solutions in encoding information.
Abstract: To compare encoding and decoding schemes requires one to first look into information and coding theory. This article discusses problems and possible solutions in encoding information. >

147 citations


Journal ArticleDOI
Jehoshua Bruck1, Moni Naor1
TL;DR: The problem of maximum-likelihood decoding of linear block codes is known to be hard, and it is shown that the problem remains hard even if the code is known in advance and can be preprocessed for as long as desired in order to devise a decoding algorithm.
Abstract: The problem of maximum-likelihood decoding of linear block codes is known to be hard. The fact that the problem remains hard even if the code is known in advance, and can be preprocessed for as long as desired in order to devise a decoding algorithm, is shown. The hardness is based on the fact that the existence of a polynomial-time algorithm implies that the polynomial hierarchy collapses. Thus, some linear block codes probably do not have an efficient decoder. The proof is based on results in complexity theory that relate uniform and nonuniform complexity classes. >

132 citations


Journal ArticleDOI
TL;DR: Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm, and results are achieved comparable to those obtained using the much more complicated optimal receiver.
Abstract: The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access (DS/SSMA) is considered. A modification of R.M. Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if its Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver. >

129 citations
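The stack algorithm mentioned here can be sketched as a best-first search driven by a Fano-style metric that rewards agreement with the received bits and penalizes disagreement. The toy below assumes a binary symmetric channel and a rate-1/2, K=3 code (generators 7 and 5 octal); the paper's DS/SSMA metric is more involved, and all names and parameters here are illustrative assumptions.

```python
import heapq, math

# Toy stack-algorithm sequential decoder with a Fano-style metric for a
# rate-1/2, K=3 convolutional code over a binary symmetric channel.

def encode_step(state, bit):
    reg = (bit << 2) | state
    g = lambda mask: bin(reg & mask).count("1") & 1
    return reg >> 1, (g(0b111), g(0b101))       # generators 7 and 5 (octal)

def stack_decode(received, p=0.05, rate=0.5):
    good = math.log2(2 * (1 - p)) - rate        # per-bit metric on agreement
    bad = math.log2(2 * p) - rate               # per-bit metric on disagreement
    stack = [(0.0, 0, 0, ())]                   # (-metric, depth, state, bits)
    while stack:
        neg, depth, state, bits = heapq.heappop(stack)
        if depth == len(received):              # best path reached full length
            return list(bits)
        r = received[depth]
        for b in (0, 1):
            s2, out = encode_step(state, b)
            m = -neg + sum(good if o == rr else bad for o, rr in zip(out, r))
            heapq.heappush(stack, (-m, depth + 1, s2, bits + (b,)))

msg = [1, 0, 1, 1, 0, 0]                        # last two bits flush the encoder
state, tx = 0, []
for b in msg:
    state, out = encode_step(state, b)
    tx.append(out)
tx[2] = (tx[2][0] ^ 1, tx[2][1])                # one channel bit error
print(stack_decode(tx))                         # → [1, 0, 1, 1, 0, 0]
```

The heap always extends the currently best partial path, so the amount of computation adapts to the channel noise — the behavior that distinguishes sequential decoding from fixed-effort Viterbi decoding.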


Journal ArticleDOI
TL;DR: The weight spectra of high-rate punctured convolutional codes are evaluated under the hypothesis of a low-rate structure, an interpretation that yields results slightly different from those obtained when weight spectra are evaluated assuming a true high-rate structure for punctured codes.
Abstract: The weight spectra of high-rate punctured convolutional codes are evaluated under the hypothesis of a low-rate structure. This interpretation yields results slightly different from those obtained when weight spectra are evaluated assuming a true high-rate structure for punctured codes. The search for long memory punctured codes is extended by providing new punctured codes of rates 4/5, 5/6, 6/7, and 7/8 with memories ranging from 9 to 19. >

88 citations
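Puncturing, the mechanism behind these high-rate codes, deletes selected output bits of a low-rate mother code according to a periodic pattern; the receiver reinserts the deleted positions as erasures so the low-rate decoder can be reused, which is exactly why the low-rate-structure viewpoint above makes sense. A minimal sketch (the rate-3/4 pattern below is a common textbook choice, not necessarily one of the paper's codes):

```python
# Toy puncturing of a rate-1/2 mother code up to rate 3/4; the pattern is
# a common textbook choice. Punctured positions come back as erasures
# (None), so an unchanged rate-1/2 decoder can process the stream.

PATTERN = [(1, 1), (0, 1), (1, 0)]              # keep-flags per output pair

def puncture(pairs):
    out = []
    for i, (c0, c1) in enumerate(pairs):
        k0, k1 = PATTERN[i % len(PATTERN)]
        if k0: out.append(c0)
        if k1: out.append(c1)
    return out

def depuncture(bits):
    """Reinsert erasures so a rate-1/2 decoder can run unchanged."""
    pairs, it, i = [], iter(bits), 0
    while True:
        k0, k1 = PATTERN[i % len(PATTERN)]
        try:
            c0 = next(it) if k0 else None
            c1 = next(it) if k1 else None
        except StopIteration:
            break
        pairs.append((c0, c1))
        i += 1
    return pairs

coded = [(1, 1), (0, 1), (1, 0), (0, 0)]        # four mother-code output pairs
sent = puncture(coded)                          # 6 of 8 bits transmitted
print(sent)                                     # → [1, 1, 1, 1, 0, 0]
print(depuncture(sent))                         # → [(1, 1), (None, 1), (1, None), (0, 0)]
```

Over one pattern period, 3 information bits produce 4 transmitted bits, giving the rate-3/4 code while the trellis (and thus the weight-spectrum analysis) remains that of the rate-1/2 mother code.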


Journal ArticleDOI
TL;DR: An algebraic decoding algorithm for the 1/2-rate (32, 16, 8) quadratic residue (QR) code is found and is expected that the algebraic approach developed here and by M. Elia (1987) applies also to longer QR codes and other BCH-type codes that are not fully decoded by the standard BCH decoding algorithm.
Abstract: An algebraic decoding algorithm for the 1/2-rate (32, 16, 8) quadratic residue (QR) code is found. The key idea of this algorithm is to find the error locator polynomial by a systematic use of the Newton identities associated with the code syndromes. The techniques developed extend the algebraic decoding algorithm found recently for the (32, 16, 8) QR code. It is expected that the algebraic approach developed here and by M. Elia (1987) applies also to longer QR codes and other BCH-type codes that are not fully decoded by the standard BCH decoding algorithm. >

72 citations


Journal ArticleDOI
TL;DR: Information set decoding is an algorithm for decoding any linear code; complexity expressions that are logarithmically exact for virtually all codes are presented, covering the cases of complete minimum-distance decoding and bounded hard-decision decoding.
Abstract: Information set decoding is an algorithm for decoding any linear code. Expressions for the complexity of the procedure that are logarithmically exact for virtually all codes are presented. The expressions cover the cases of complete minimum distance decoding and bounded hard-decision decoding, as well as the important case of bounded soft-decision decoding. It is demonstrated that these results are vastly better than those for the trivial algorithms of searching through all codewords or through all syndromes, and are significantly better than those for any other general algorithm currently known. For codes over large symbol fields, the procedure tends towards a complexity that is subexponential in the symbol size. >

66 citations
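The procedure being analyzed can be sketched compactly: guess k coordinates that are error-free, invert the generator matrix restricted to those columns, re-encode, and keep the codeword nearest the received word. The naive toy version below, on the [7,4] Hamming code, is illustrative only — the paper's contribution is the complexity analysis, not this brute-force form.

```python
import random

# Toy information-set decoder for the [7,4] Hamming code: guess k
# positions, invert the restricted generator matrix, re-encode, keep the
# nearest codeword found. Illustrative only.

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
k, n = 4, 7

def solve_gf2(A, b):
    """Gauss-Jordan solve of A x = b over GF(2); None if A is singular."""
    m = len(b)
    M = [row[:] + [bb] for row, bb in zip(A, b)]
    for col in range(m):
        piv = next((r for r in range(col, m) if M[r][col]), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col and M[r][col]:
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[r][m] for r in range(m)]

def isd_decode(r, trials=200):
    best, bestd = None, n + 1
    for _ in range(trials):
        I = random.sample(range(n), k)          # candidate information set
        A = [[G[row][i] for i in I] for row in range(k)]
        msg = solve_gf2([list(c) for c in zip(*A)], [r[i] for i in I])
        if msg is None:                         # not an information set
            continue
        c = [sum(msg[j] * G[j][i] for j in range(k)) & 1 for i in range(n)]
        d = sum(ci != ri for ci, ri in zip(c, r))
        if d < bestd:
            best, bestd = c, d
    return best

random.seed(7)
m = [1, 0, 1, 1]
c = [sum(m[j] * G[j][i] for j in range(k)) & 1 for i in range(n)]
r = c[:]
r[5] ^= 1                                       # one channel error
print(isd_decode(r))                            # → [1, 0, 1, 1, 0, 1, 0]
```

A trial succeeds when the sampled set both forms an information set and avoids every error position; the complexity expressions in the paper quantify how many trials that takes for general codes.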


Journal ArticleDOI
TL;DR: Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed.
Abstract: The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them. >

Patent
30 Jul 1990
TL;DR: In this paper, a convolutional decoder which assigns bit metrics to at least one bit of a symbol in a multilevel system is proposed, which uses soft-decision Viterbi decoding with channel state information.
Abstract: A convolutional decoder which assigns bit metrics to at least one bit of a symbol in a multilevel system. This decoder uses soft-decision Viterbi decoding with channel state information of a convolutionally-encoded communication transmitted using multilevel modulation.

Proceedings ArticleDOI
02 Dec 1990
TL;DR: An architecture model for area-efficient implementation of the Viterbi algorithm and favorable results are presented for trellises of de Bruijn graphs and matched-spectral-null (MSN) trellis codes.
Abstract: An architecture model for area-efficient implementation of the Viterbi algorithm is described. The authors present a systematic way of partitioning and scheduling N trellis states into P add-compare-selects (N>P), which are connected by a fixed-interconnection or a multistage-interconnection network. The proposed architecture allows pipelining to increase the throughput rate even when the channel has memory or intersymbol interference. Design strategies of path metric storage are also discussed. Favorable results are presented for trellises of de Bruijn graphs and matched-spectral-null (MSN) trellis codes. >

Proceedings ArticleDOI
02 Dec 1990
TL;DR: It is demonstrated that the minimized method can be implemented very efficiently by a systolic architecture, on a chip design which achieves 600-Mb/s decoding speed per chip, for a K=3 convolutional code.
Abstract: The Viterbi algorithm is a common application of dynamic programming in communications. Since it has a nonlinear feedback loop, this loop is the bottleneck in high-data-rate implementations. It is shown that asymptotically the loop no longer has to be processed recursively, i.e. there is no feedback (resulting in negligible performance loss). This can be exploited to derive a purely feedforward method for Viterbi decoding, called the minimized method. It is demonstrated that the minimized method can be implemented very efficiently by a systolic architecture. This is shown on a chip design which achieves 600-Mb/s decoding speed per chip, for a K=3 convolutional code. By designing one cascadable module (chip), any speed up can be achieved simply by linearly adding modules to the implementation. >

Journal ArticleDOI
TL;DR: A modified step-by-step complete decoding algorithm of this Golay code is introduced which needs fewer shift operations than Kasami's error-trapping decoder.
Abstract: An algebraic decoding method for triple-error-correcting binary BCH codes applicable to complete decoding of the (23,12,7) Golay code has been proved by M. Elia (see ibid., vol.IT-33, p.150-1, 1987). A modified step-by-step complete decoding algorithm of this Golay code is introduced which needs fewer shift operations than Kasami's error-trapping decoder. Based on the algorithm, a high-speed hardware decoder of this code is proposed. >

Proceedings Article
01 Jan 1990
TL;DR: Computer simulations confirm that the pre-Viterbi-decoding maximal ratio combining method has better performance than other methods in Rician fading channels and QPSK with high coding-rate Viterbi decoding can be an attractive candidate for mobile satellite systems as well as TC8PSK.
Abstract: Diversity combining methods for convolutional coded and soft-decision Viterbi decoded channels in mobile satellite communication systems are evaluated. Computer simulations confirm that the pre-Viterbi-decoding maximal ratio combining method has better performance than other methods in Rician fading channels. Bit error probability performances derived from the analysis model using the probability density function of Rician fading and the bit error probability performance of Viterbi decoding in AWGN channels agree with the simulation results. This diversity method is applied to trellis coded 8PSK modulation and coherent detection/differential detection, and their performances are compared with conventional QPSK modulation and coherent detection with high coding-rate and high coding-gain Viterbi decoding. In consequence, QPSK with high coding-rate Viterbi decoding can be an attractive candidate for mobile satellite systems as well as TC8PSK.

Journal ArticleDOI
Mario Blaum1, Jehoshua Bruck1
TL;DR: A decoding algorithm, based on Venn diagrams, for decoding the (23, 12, 7) Golay code is presented and is based on the design properties of the parity sets of the code.
Abstract: A decoding algorithm, based on Venn diagrams, for decoding the (23, 12, 7) Golay code is presented. The decoding algorithm is based on the design properties of the parity sets of the code. As for other decoding algorithms for the Golay code, decoding can be easily done by hand. >

Journal ArticleDOI
TL;DR: A dual-mode burst-error-correcting algorithm that combines maximum-likelihood decoding with a burst detection scheme is presented and proves to be more powerful than known adaptive burst decoding schemes.
Abstract: A dual-mode burst-error-correcting algorithm that combines maximum-likelihood decoding with a burst detection scheme is presented. The decoder nominally operates as a Viterbi decoder and switches to time diversity error recovery whenever an uncorrectable error pattern is identified. It is demonstrated that the new scheme outperforms interleaving strategies under the constraint of a fixed overall decoding delay. It also proves to be more powerful than known adaptive burst decoding schemes, such as the Gallager burst finding scheme. As the new method can be used with soft decision decoding, it is mainly intended for use on random-error channels affected by occasional severe bursts. >

Proceedings ArticleDOI
02 Dec 1990
TL;DR: A novel technique for frame synchronization that takes advantage of the information buried in the preamble preceding the actual data frame is described, which is extremely efficient when used in conjunction with a convolutional code and Viterbi-decoding.
Abstract: A novel technique for frame synchronization that takes advantage of the information buried in the preamble preceding the actual data frame is described. This technique can not only be applied to uncoded data, but it is extremely efficient when used in conjunction with a convolutional code and Viterbi decoding. In this case the error-correcting capabilities of the code affect not only the actual data but also the synchronizer (coded synchronizer). A class of convolutional codes that allows the application of the coded synchronizer is presented. The gain of the method may be up to 4 dB. >

Journal ArticleDOI
TL;DR: The optimal diversity level for minimum error probability of uncoded systems and the diversity level for minimizing the sequential decoder computational load are derived and shown to be different, with the latter requiring a higher order of diversity.
Abstract: Sequential decoding of long-constraint convolutional codes is shown to be a feasible technique for digital data telemetry over realistic marine acoustic channels. A computational bound for sequential decoding over a fading dispersive channel is derived for hardlimiting and quantizing decoders. The results indicate that a minimum of 8 dB of bit SNR (signal-to-noise ratio) is required for sequential decoder operation. Simulations indicate that 14-dB bit SNR results in simple and feasible implementations. Diversity methods for coded transmissions over Rayleigh fading channels are examined. The optimal diversity level for minimum error probability of uncoded systems and the diversity level for minimizing the sequential decoder computational load are derived and shown to be different, with the latter requiring a higher order of diversity. Performance differences between fixed-diversity and optimal-diversity systems are presented. >

Journal ArticleDOI
TL;DR: An asymptotically tight analytic upper bound on the bit error probability performance is developed under the assumption of using the Viterbi decoder with perfect fading amplitude measurement and tightness of the bound is examined by means of computer simulation.
Abstract: Consideration is given to the bit error probability performance of rate 1/2 convolutional codes in conjunction with quaternary phase shift keying (QPSK) modulation and maximum-likelihood Viterbi decoding on fully interleaved Rician fading channels. Applying the generating function union bounding approach, an asymptotically tight analytic upper bound on the bit error probability performance is developed under the assumption of using the Viterbi decoder with perfect fading amplitude measurement. Bit error probability performance of constraint length K=3-7 codes with QPSK is numerically evaluated using the developed bound. Tightness of the bound is examined by means of computer simulation. The influence of perfect amplitude measurement on the performance of the Viterbi decoder is observed. A performance comparison with rate 1/2 codes with binary phase shift keying (BPSK) is provided. >

Dissertation
01 Jan 1990
TL;DR: Error bounds, algorithms, and techniques for evaluating the performance of convolutional codes on the Additive White Gaussian Noise (AWGN) channel are presented and an upper bound on the loss caused by truncating survivors in a Viterbi decoder leads to estimates of minimum practical truncation lengths.
Abstract: This thesis contains error bounds, algorithms, and techniques for evaluating the performance of convolutional codes on the Additive White Gaussian Noise (AWGN) channel. Convolutional encoders are analyzed using simple binary operations in order to determine the longest possible "zero-run" output and if "catastrophic error propagation" may occur. Methods and algorithms are presented for computing the weight enumerator and other generating functions, associated with convolutional codes, which are used to upper-bound maximum-likelihood (i.e., Viterbi) decoder error rates on memoryless channels. In particular, the complete path enumerator T(D, L, I) is obtained for the memory 6, rate 1/2, NASA standard code. A new, direct technique yields the corresponding bit-error generating function. These procedures may be used to count paths between nodes in a finite directed graph or to calculate transfer functions in circuits and networks modelled by signal flow graphs. A modified Viterbi decoding algorithm is used to obtain numbers for error bound computations. New bounds and approximations for maximum-likelihood convolutional decoder first-event, bit, and symbol error rates are derived, the latter one for concatenated coding system analysis. Berlekamp's tangential union bound for maximum-likelihood, block decoder word error probability on the AWGN channel is adapted for convolutional codes. Approximations to bit and symbol error rates are obtained that remain within 0.2 dB of simulation results at low signal-to-noise ratios, where many convolutional codes operate but the standard bounds are useless. An upper bound on the loss caused by truncating survivors in a Viterbi decoder leads to estimates of minimum practical truncation lengths. Lastly, the power loss due to quantizing received (demodulated) symbols from the AWGN channel is studied. 
Effective schemes are described for uniform channel symbol quantization, branch metric calculations, and path metric renormalization in Viterbi decoders.

Journal ArticleDOI
TL;DR: Efficient VLSI array implementations of the VT, a transform algorithm for maximum-likelihood decoding derived from trellis coding and Viterbi decoding processes, have been developed.
Abstract: Implementation of the Viterbi decoding algorithm has attracted a great deal of interest in many applications, but the excessive hardware/time consumption caused by the dynamic and backtracking decoding procedures makes it difficult to design efficient VLSI circuits for practical applications. A transform algorithm for maximum-likelihood decoding is derived from trellis coding and Viterbi decoding processes. Dynamic trellis search operations are paralleled and well formulated into a set of simple matrix operations referred to as the Viterbi transform (VT). Based on the VT, the excessive memory accesses and complicated data transfer scheme demanded by the trellis search are eliminated. Efficient VLSI array implementations of the VT have been developed. Long constraint length codes can be decoded by combining the processors as the building blocks. >
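The reformulation described here rests on the observation that one trellis step of the Viterbi algorithm is a matrix-vector product in the (min, +) semiring, so the dynamic search becomes a chain of regular matrix operations. A minimal sketch of that idea (survivor/traceback bookkeeping omitted; the 2-state trellis and branch metrics are invented for illustration):

```python
import math

# One Viterbi trellis step as a matrix-vector product in the (min, +)
# semiring; survivor/traceback bookkeeping is omitted. The 2-state
# trellis and branch metrics below are invented for illustration.

INF = math.inf

def minplus_step(metrics, B):
    """new_metric[j] = min over states i of metrics[i] + B[i][j]."""
    n = len(B)
    return [min(metrics[i] + B[i][j] for i in range(n)) for j in range(n)]

B1 = [[0.0, 2.0],                               # branch metrics, step 1
      [3.0, 1.0]]                               # (INF would mark missing edges)
B2 = [[1.0, 0.0],                               # branch metrics, step 2
      [0.0, 4.0]]

m = [0.0, INF]                                  # start in state 0
for B in (B1, B2):
    m = minplus_step(m, B)
print(m)                                        # → [1.0, 0.0]
```

Because each step is now a uniform matrix operation, the computation maps naturally onto the VLSI array structures the abstract describes, without the irregular memory accesses of an explicit trellis search.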

Journal ArticleDOI
TL;DR: It is shown that error-erasure decoding for a cyclic code allows the correction of a combination of t errors and r erasures when 2t + r is less than the minimum distance of the code.
Abstract: It is shown that error-erasure decoding for a cyclic code allows the correction of a combination of t errors and r erasures when 2t + r < d, where d is the minimum distance of the code. >
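The condition can be illustrated by brute force on a small code: with minimum distance d, any combination of t errors and r erasures satisfying 2t + r < d leaves a unique nearest codeword on the unerased positions. A sketch using a [7,4] Hamming code (d = 3), so r = 2 erasures with t = 0 errors are correctable; everything here is illustrative, and real decoders are algebraic rather than exhaustive.

```python
from itertools import product

# Brute-force error-and-erasure decoding on a [7,4] Hamming code (d = 3):
# with t = 0 errors and r = 2 erasures, 2t + r = 2 < 3, so the codeword
# is still uniquely determined by the unerased positions.

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

codebook = [[sum(m[j] * G[j][i] for j in range(4)) & 1 for i in range(7)]
            for m in product((0, 1), repeat=4)]

def decode(r):
    """r marks erasures as None; mismatches count only on unerased spots."""
    return min(codebook,
               key=lambda c: sum(x is not None and x != y for x, y in zip(r, c)))

c = [sum(b * g for b, g in zip([1, 0, 1, 1], col)) & 1 for col in zip(*G)]
r = c[:]
r[1] = r[4] = None                              # two erasures, no errors
print(decode(r))                                # → [1, 0, 1, 1, 0, 1, 0]
```

Two distinct codewords would have to agree on every unerased position to cause ambiguity, which would make their Hamming distance at most r = 2 < d; that is exactly why the bound guarantees uniqueness.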

Journal ArticleDOI
01 Mar 1990
TL;DR: The Euclidean algorithm is used to evaluate F(x) directly, without going through the intermediate steps of solving for the error-locator and error-evaluator polynomials, making the scheme suitable for implementation in very-large-scale integrated circuits.
Abstract: In the paper, by considering a Reed-Solomon (RS) code to be a special case of a redundant residue polynomial code, a fast transform decoding algorithm to correct both errors and erasures is presented. This decoding scheme is an improvement of the decoding algorithm for the redundant residue polynomial code suggested by Shiozaki and Nishida [1]. This decoding scheme can be realised readily on VLSI chips.

Journal ArticleDOI
Ron M. Roth1, A. Lempel
TL;DR: A decoding procedure for Reed-Solomon codes is presented, based on a representation of the parity-check matrix by circulant blocks that inherits both the (relatively low) time complexity of the Berlekamp-Massey algorithm and the hardware simplicity characteristic of Blahut's algorithm.
Abstract: The Fourier transform technique is used to analyze and construct several families of double-circulant codes. The minimum distance of the resulting codes is lower-bounded by 2√r, and the codes can be decoded easily by employing the standard BCH decoding algorithm or the majority-logic decoder of Reed-Muller codes. A decoding procedure for Reed-Solomon codes is presented, based on a representation of the parity-check matrix by circulant blocks. The decoding procedure inherits both the (relatively low) time complexity of the Berlekamp-Massey algorithm and the hardware simplicity characteristic of Blahut's algorithm. The procedure makes use of the encoding circuit together with a reduced version of Blahut's decoder. >

Proceedings ArticleDOI
30 Sep 1990
TL;DR: The use of erasure insertion techniques can result in increased coding gains in frequency-hop spread-spectrum communication systems and Viterbi's ratio threshold test is used with Reed-Solomon codes to determine which code symbols should be erased before decoding.
Abstract: The use of erasure insertion techniques can result in increased coding gains in frequency-hop spread-spectrum communication systems. Viterbi's ratio threshold test is used with Reed-Solomon codes to determine which code symbols should be erased before decoding. The system performance in partial-band and multiple-access interference environments is analyzed and compared to errors-only decoding and to errors-and-erasures decoding with perfect side information. When interference is strong, large coding gains are observed and error probabilities are reduced by several orders of magnitude. >
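The erasure-insertion step can be sketched simply: a symbol decision is kept only when the best demodulator statistic dominates the runner-up by a threshold factor, and is otherwise erased before Reed-Solomon decoding. This is a hedged illustration of the ratio-threshold idea; the function name, threshold value, and statistics below are assumptions, not the paper's exact test.

```python
# Hypothetical ratio-threshold erasure insertion: keep the hard decision
# only if the best statistic dominates the runner-up by a factor theta.

def ratio_threshold(stats, theta=2.0):
    """Return (decided symbol index, erased?) from demodulator outputs."""
    order = sorted(range(len(stats)), key=stats.__getitem__, reverse=True)
    best, runner = stats[order[0]], stats[order[1]]
    return order[0], best < theta * runner      # erase when not dominant

print(ratio_threshold([0.9, 0.2, 0.1, 0.15]))   # → (0, False): keep symbol
print(ratio_threshold([0.5, 0.4, 0.1, 0.2]))    # → (0, True): erase symbol
```

Erasing unreliable symbols pays off because a Reed-Solomon code of minimum distance d corrects any t errors and r erasures with 2t + r < d — an erasure costs half as much redundancy as an undetected error.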

Proceedings ArticleDOI
02 Dec 1990
TL;DR: Computer simulations confirm that the pre-Viterbi-decoding maximal-ratio combining method has better performance than other methods in Rician fading channels and QPSK with high-coding-rate Viterbi decoding can be an attractive candidate for mobile satellite systems.
Abstract: Diversity combining methods for convolutional coded and soft-decision Viterbi decoded channels in mobile satellite communication systems are evaluated. Computer simulations confirm that the pre-Viterbi-decoding maximal-ratio combining method has better performance than other methods in Rician fading channels. Bit error probability performances derived from the analysis model using the probability density function of Rician fading and the bit error probability performance of Viterbi decoding in additive white Gaussian noise (AWGN) channels are shown to agree with the simulation results. The diversity method is applied to trellis-coded 8PSK modulation and coherent detection/differential detection, and their performances are compared with conventional QPSK modulation and coherent detection with high-coding-rate and high-coding-gain Viterbi decoding. QPSK with high-coding-rate Viterbi decoding can be an attractive candidate for mobile satellite systems as well as trellis-coded 8PSK. >

Proceedings ArticleDOI
17 Jun 1990
TL;DR: A neural network decoder designed to provide a constant output of decoded data for long-constraint-length convolutional codes (K⩾11) is presented and Decoder strategy is discussed along with toggling strategy.
Abstract: A neural network decoder designed to provide a constant output of decoded data for long-constraint-length convolutional codes (K⩾11) is presented. With only local connections between neurons and digital EX-OR cells, direct hardware implementation in a VLSI ASIC (application-specific integrated circuit) is feasible. Decoder strategy is discussed along with toggling strategy. Architectural modifications to decode other code rates are also discussed.

01 Jul 1990
TL;DR: Two classes of multi-level (n, k, d) block codes over GF(q) with block length n, number of information symbols k, and minimum distance d_min ≥ d are presented.
Abstract: Set partitioning is applied to multi-dimensional signal spaces over GF(q), particularly GF^(q-1)(q) and GF^q(q), to show how to construct both multi-level block codes and multi-level trellis codes over GF(q). Two classes of multi-level (n, k, d) block codes over GF(q) with block length n, number of information symbols k, and minimum distance d_min ≥ d are presented. These two classes of codes use Reed-Solomon codes as component codes. They can be easily decoded as block length q-1 Reed-Solomon codes or block length q or q+1 extended Reed-Solomon codes using multi-stage decoding. Many of these codes have larger distances than comparable q-ary block codes. Low-rate q-ary convolutional codes, word-error-correcting convolutional codes, and binary-to-q-ary convolutional codes can also be used to construct multi-level trellis codes over GF(q) or binary-to-q-ary trellis codes, some of which have better performance than the above block codes. All of the new codes have simple decoding algorithms based on hard-decision multi-stage decoding.