
Showing papers on "Sequential decoding published in 1978"


Journal ArticleDOI
TL;DR: It is shown that soft decision maximum likelihood decoding of any (n,k) linear block code over GF(q) can be accomplished using the Viterbi algorithm applied to a trellis with no more than q^{(n-k)} states.
Abstract: It is shown that soft decision maximum likelihood decoding of any (n,k) linear block code over GF(q) can be accomplished using the Viterbi algorithm applied to a trellis with no more than q^{(n-k)} states. For cyclic codes, the trellis is periodic. When this technique is applied to the decoding of product codes, the number of states in the trellis can be much fewer than q^{n-k} . For a binary (n,n - 1) single parity check code, the Viterbi algorithm is equivalent to the Wagner decoding algorithm.
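The Wagner algorithm mentioned above amounts to taking hard decisions on the soft inputs and, if the single parity check fails, flipping the least reliable position. A minimal sketch, assuming log-likelihood-ratio inputs with the convention that a nonnegative LLR means bit 0:

```python
def wagner_decode(llr):
    # Hard decisions from soft inputs; sign convention assumed: llr >= 0 -> bit 0
    bits = [0 if l >= 0 else 1 for l in llr]
    # Single (even) parity check: if overall parity is odd, flip the
    # least reliable bit, i.e. the one with smallest |llr|
    if sum(bits) % 2 != 0:
        i = min(range(len(llr)), key=lambda j: abs(llr[j]))
        bits[i] ^= 1
    return bits
```

For an (n, n-1) single parity check code this single flip yields the maximum likelihood codeword, which is why it coincides with the two-state Viterbi trellis noted in the abstract.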

612 citations


Journal ArticleDOI
TL;DR: A new class of codes in signal space is presented, and their error and spectral properties are investigated, and power spectral density curves show that this type of coding does not increase the transmitted signal bandwidth.
Abstract: A new class of codes in signal space is presented, and their error and spectral properties are investigated. A constant-amplitude continuous-phase signal carries a coded sequence of linear-phase changes; the possible signal phases form a cylindrical trellis in phase and time. Simple codes using 4-16 phases, together with a Viterbi algorithm decoder, allow transmitter power savings of 2-4 dB over binary phase-shift keying in a narrower bandwidth. A method is given to compute the free distance, and the error rates of all the useful codes are given. A software-instrumented decoder is tested on a simulated Gaussian channel to determine multiple error patterns. The error parameter R_{o} is computed for a somewhat more general class of codes and is shown to increase rapidly when more phases are employed. Finally, power spectral density curves are presented for several codes, which show that this type of coding does not increase the transmitted signal bandwidth.

144 citations


Journal ArticleDOI
TL;DR: It is shown that Reed-Solomon codes can be decoded by using a fast Fourier transform (FFT) algorithm over finite fields GF(F_{n}) , where F_{n} is a Fermat prime, and continued fractions.
Abstract: It is shown that Reed-Solomon (RS) codes can be decoded by using a fast Fourier transform (FFT) algorithm over finite fields GF(F_{n}) , where F_{n} is a Fermat prime, and continued fractions. This new transform decoding method is simpler than the standard method for RS codes. The computing time of this new decoding algorithm in software can be faster than the standard decoding method for RS codes.
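The transform involved is a number-theoretic analogue of the FFT: a DFT whose arithmetic is done modulo a Fermat prime. A toy sketch over GF(257) = GF(F_3), using direct O(n^2) evaluation rather than a fast butterfly, with the root of unity chosen for a small example (my choice, not from the paper):

```python
P = 257  # the Fermat prime F_3 = 2^(2^3) + 1

def transform(a, g):
    # Direct DFT over GF(P): A[i] = sum_j a[j] * g^(i*j) mod P,
    # where g is an element of multiplicative order len(a)
    n = len(a)
    return [sum(a[j] * pow(g, i * j, P) for j in range(n)) % P for i in range(n)]

def inverse_transform(A, g):
    n = len(A)
    n_inv = pow(n, P - 2, P)   # n^-1 mod P by Fermat's little theorem
    g_inv = pow(g, P - 2, P)   # g^-1 mod P
    return [(x * n_inv) % P for x in transform(A, g_inv)]
```

With g = 241, an element of order 4 in GF(257), a length-4 sequence round-trips through the forward and inverse transforms; since all arithmetic is exact modular arithmetic, there is no round-off error, which is part of the appeal over complex-valued FFTs.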

41 citations


Journal ArticleDOI
TL;DR: It is proved that code construction for sequential decoding should maximize column-distance growth and free distance in order to guarantee fast decoding, a minimum erasure probability, and a low undetected error probability.
Abstract: A new analysis of the computational effort and the error probability of sequential decoding is presented, which is based entirely on the distance properties of a particular convolutional code and employs no random-coding arguments. An upper bound on the computational distribution P(C_{t}>N_{t}) for a specific time-invariant code is derived, which decreases exponentially with the column distance of the code. It is proved that rapid column-distance growth minimizes the decoding effort and therefore also the probability of decoding failure or erasure. In an analogous way, the undetected error probability of sequential decoding with a particular fixed code is proved to decrease exponentially with the free distance and to increase linearly with the number of minimum free-weight codewords. This analysis proves that code construction for sequential decoding should maximize column-distance growth and free distance in order to guarantee fast decoding, a minimum erasure probability, and a low undetected error probability.

36 citations


Journal ArticleDOI
TL;DR: A state-space approach to the syndrome decoding of binary rate k/n convolutional codes is described, which can be exploited to obtain a reduction in the exponent of growth of the decoder hardware.
Abstract: A state-space approach to the syndrome decoding of binary rate k/n convolutional codes is described. State-space symmetries of a certain class of codes can be exploited to obtain a reduction in the exponent of growth of the decoder hardware. Aside from these savings it is felt that the state-space formalism developed has some unique intrinsic value.

30 citations


Journal ArticleDOI
Stiffler1
TL;DR: A new decoding technique is presented for correcting errors due to bit-oriented hardware failures in parallel, random-access memories and it is shown that the resulting decoder compares favorably, both in complexity and in decoding delay, with currently implemented bit-switching techniques used for the same purpose.
Abstract: A new decoding technique is presented for correcting errors due to bit-oriented hardware failures in parallel, random-access memories. It is shown that the resulting decoder compares favorably, both in complexity and in decoding delay, with currently implemented bit-switching techniques used for the same purpose.

18 citations


Journal ArticleDOI
A. Acampora1
TL;DR: Viterbi decoding of binary convolutional codes on bandlimited channels exhibiting intersymbol interference is considered, and a maximum likelihood sequence estimator algorithm is derived that can provide the power saving associated with low rate (highly redundant) codes without suffering the noise enhancement of linear equalization techniques.
Abstract: Viterbi decoding of binary convolutional codes on bandlimited channels exhibiting intersymbol interference is considered, and a maximum likelihood sequence estimator algorithm is derived. This algorithm might be applied either to increase the allowable data rate for a fixed power transmitter or to reduce the required power for a fixed data rate. Upper and lower bounds on the bit error rate performance of several codes are found for selected values of the ratio of information rate to channel bandwidth, and results are compared against both conventional equalization techniques and the Shannon capacity limit. Results indicate that this algorithm can provide the power saving associated with low rate (highly redundant) codes without suffering the noise enhancement of linear equalization techniques.
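The maximum likelihood sequence estimation idea can be illustrated by a minimal Viterbi search over a two-tap intersymbol-interference channel; the tap values and BPSK alphabet below are illustrative assumptions, not the paper's setup, and coding and ISI are not combined here as they are in the paper:

```python
# Minimal MLSE Viterbi sketch: channel output y[k] = h0*x[k] + h1*x[k-1],
# BPSK symbols x[k] in {-1, +1}; trellis state = previous symbol.
def mlse(received, h=(1.0, 0.5)):
    states = (-1, 1)
    cost = {s: 0.0 for s in states}    # unknown start symbol, zero initial cost
    paths = {s: [] for s in states}
    for r in received:
        new_cost, new_paths = {}, {}
        for s in states:               # s = current symbol (new state)
            best = None
            for p in states:           # p = previous symbol (old state)
                y = h[0] * s + h[1] * p          # noiseless channel output
                c = cost[p] + (r - y) ** 2       # squared-error branch metric
                if best is None or c < best[0]:
                    best = (c, paths[p] + [s])
            new_cost[s], new_paths[s] = best
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]
```

The squared-error metric makes this a maximum likelihood estimator under additive white Gaussian noise, which is the channel model used in the paper's bounds.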

15 citations



Journal ArticleDOI
I. Richer1
TL;DR: Simulations show that satisfactory decoder performance can be obtained over a range of RFI parameters.
Abstract: An interleaver/de-interleaver can be used to improve the performance of a Viterbi decoder in the presence of bursty noise. This note describes the design and a method for testing block interleavers which can be used to combat the effects of periodic pulsed RFI. Simulations show that satisfactory decoder performance can be obtained over a range of RFI parameters.
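A block interleaver of the kind described writes symbols into a rows x cols array by rows and reads them out by columns, so that a burst of up to `rows` consecutive channel errors is separated by `cols` positions after de-interleaving. A generic sketch, not the note's specific design:

```python
def interleave(data, rows, cols):
    # Write row-wise into a rows x cols block, read out column-wise
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # Reading columns of a rows x cols block is writing rows of a cols x rows one
    return interleave(data, cols, rows)
```

Consecutive transmitted symbols sit `cols` apart in the original order, so a noise burst that is shorter than the column height never corrupts two symbols of the same decoder-input neighbourhood.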

15 citations


Journal ArticleDOI
TL;DR: A scheme is presented for decoding linear \delta -decodable codes for the two-user noisy adder channel that exploits the linearity of the codes and corrects all patterns of \lfloor (\delta - 1)/2 \rfloor or fewer transmission errors.
Abstract: A scheme is presented for decoding linear \delta -decodable codes for the two-user noisy adder channel that exploits the linearity of the codes and corrects all patterns of \lfloor (\delta - 1)/2 \rfloor or fewer transmission errors.

12 citations


Journal ArticleDOI
TL;DR: A decoding algorithm for a class of multiple error-correcting arithmetic residue codes can be made significantly more efficient.
Abstract: A decoding algorithm for a class of multiple error-correcting arithmetic residue codes can be made significantly more efficient.

Journal ArticleDOI
01 Feb 1978
TL;DR: The minimum-distance decoding algorithm proposed in the paper uses a sequential decoding approach to avoid an exponential growth in complexity with increasing constraint length, and also utilises the distance and structural properties of convolutional codes to considerably reduce the amount of tree searching needed to find the minimum- distance path.
Abstract: Minimum-distance decoding of convolutional codes has generally been considered impractical for other than relatively short constraint length codes, because of the exponential growth in complexity with increasing constraint length. The minimum-distance decoding algorithm proposed in the paper, however, uses a sequential decoding approach to avoid an exponential growth in complexity with increasing constraint length, and also utilises the distance and structural properties of convolutional codes to considerably reduce the amount of tree searching needed to find the minimum-distance path. In this way the algorithm achieves a complexity that does not grow exponentially with increasing constraint length, and is efficient for both long and short constraint length codes. The algorithm consists of two main processes. Firstly, a direct-mapping scheme, which automatically finds the minimum-distance path in a single mapping operation, is used to eliminate the need for all short back-up tree searches. Secondly, when a longer back-up search is required, an efficient tree-searching scheme is used to minimise the required search effort. The paper describes the complete algorithm and its theoretical basis, and examples of its operation are given.
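A sequential tree search of the general kind described can be sketched as a best-first search that orders partial paths by Hamming distance, so the first full-length path retrieved is a minimum-distance path. The rate-1/2, constraint-length-3 code with octal generators (7, 5) is my choice of example, and this sketch omits the paper's direct-mapping and back-up search refinements:

```python
import heapq

def encode_step(state, bit):
    # Rate-1/2 convolutional code, constraint length 3, octal generators (7, 5)
    s1, s0 = (state >> 1) & 1, state & 1
    c1 = bit ^ s1 ^ s0              # generator 111 (7 octal)
    c2 = bit ^ s0                   # generator 101 (5 octal)
    return (c1, c2), ((bit << 1) | s1)

def stack_decode(received):
    # Best-first search of the code tree ordered by Hamming distance;
    # costs never decrease along a path, so the first full-length path
    # popped from the heap has minimum distance to `received`.
    goal = len(received)
    heap = [(0, 0, 0, ())]          # (distance, depth, encoder state, bits)
    while heap:
        dist, depth, state, bits = heapq.heappop(heap)
        if depth == goal:
            return list(bits), dist
        for b in (0, 1):
            (c1, c2), ns = encode_step(state, b)
            r1, r2 = received[depth]
            d = dist + (c1 ^ r1) + (c2 ^ r2)
            heapq.heappush(heap, (d, depth + 1, ns, bits + (b,)))
```

Unlike the Viterbi algorithm, the work done here depends on the noise: on a clean channel the search walks straight down the correct path, which is the behaviour the paper's complexity analysis exploits.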

01 Sep 1978
TL;DR: A method of decoding convolutional codes is described in which the decoding algorithm looks for an information-vector correction sequence to be added to the inverse of the received data vector sequence, in contrast with the usual method, which tries to estimate the information vector sequence directly.
Abstract: This report describes a method of decoding convolutional codes in which the decoding algorithm looks for an information-vector correction sequence that must be added to the inverse of the received data vector sequence. This is in contrast with the usual method, which tries to estimate the information vector sequence directly. Both methods are equally complex for a maximum likelihood (ML) decoder. In sequential decoding, with hard as well as with soft decisions, symmetries of the code can be exploited to reduce the number of computations and hence the erasure probability. This is illustrated by simulation results. The method is related to [1,2,3], where Schalkwijk et al. use state-space symmetries of the syndrome former to obtain a reduction in the exponential rate of growth of the hardware of a Viterbi-like syndrome decoder. This research was partly supported by the Netherlands Organization for the Advancement of Pure Research (Z.W.O.).

Journal ArticleDOI
TL;DR: An upper bound is given on the probability of error for list decoding, in which the receiver lists L messages rather than one after receiving a transmission.

Journal ArticleDOI
W-H. Ng1
01 Jun 1978
TL;DR: The new decoding approach presented in this paper could open a new direction toward overcoming the existing difficulties of using long convolutional codes.
Abstract: In sequential decoding, buffer overflow is often caused by the Paretian computational distribution problem. This troublesome distribution arises from the sequential decoding algorithm, not from the basic properties of convolutional codes. In this paper, we show that this problem can be removed by using a bidirectional search. In Section 2, we develop a bidirectional tree structure that is applicable to any given convolutional code. In Section 3, we explain how a decoding sequence can recover the correct path after accepting errors, and discuss the factors that affect the length of recovery decoding error bursts. In Section 4, we first introduce four new properties of convolutional codes; we then describe the general bidirectional search procedure and explain why and how this search eliminates recovery decoding errors. Examples are given for general reference to clarify the considerations involved in using the proposed scheme. Since the performance of a system using long codes depends on the decoding speed and the buffer capability of the decoder, the new decoding approach presented in this paper could open a new direction toward overcoming the existing difficulties of using long convolutional codes.


Journal ArticleDOI
01 Dec 1978
TL;DR: A simple transparent proof of Berlekamp's algorithm that uses continued fraction approximations to implement BCH and RS codes is given.
Abstract: It was shown recently that BCH and RS codes can be implemented by Berlekamp's algorithm using continued fraction approximations. A simple transparent proof of Berlekamp's algorithm that uses such a development is given in this paper.
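Continued-fraction approximation is the engine of that development. The convergent recurrence h_i = q_i h_{i-1} + h_{i-2}, k_i = q_i k_{i-1} + k_{i-2} is the same one applied to polynomials over GF(q) in the decoding context; a rational-number sketch of the recurrence (an integer analogue, not the decoding algorithm itself):

```python
from fractions import Fraction

def convergents(x, n):
    # First n continued-fraction convergents h_i / k_i of a positive rational x
    out, a = [], x
    h0, h1, k0, k1 = 0, 1, 1, 0     # standard seed values for the recurrence
    for _ in range(n):
        q = int(a)                  # next partial quotient (floor)
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        out.append(Fraction(h1, k1))
        frac = a - q
        if frac == 0:               # expansion terminates for rationals
            break
        a = 1 / frac
    return out
```

In Reed-Solomon decoding the analogous polynomial convergents deliver the error-locator polynomial, which is why the continued-fraction view yields a short proof of Berlekamp's algorithm.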

Journal ArticleDOI
TL;DR: In a previous correspondence, a decoding procedure using continued fractions, applicable to a wide class of algebraic codes including Goppa codes, was presented; the efficiency of that method is significantly increased here.
Abstract: In a previous correspondence, a decoding procedure which uses continued fractions and which is applicable to a wide class of algebraic codes including Goppa codes was presented. The efficiency of this method is significantly increased.

01 Aug 1978
TL;DR: A decoding procedure for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system are described in this paper.
Abstract: A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, and an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation by using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microsecs. The syndrome calculation is performed by hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed by using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated by using the Chien search. Finally, the error values are computed by using Forney's method.
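The Chien search step amounts to evaluating the error-locator polynomial at every nonzero field element and recording the roots. A sketch over the prime field GF(31) for simplicity (the decoder described works over GF(2^5), which has the same structure but different arithmetic):

```python
P = 31  # prime field used for illustration only

def chien_search(sigma):
    # sigma: error-locator polynomial coefficients, ascending order.
    # Return every nonzero field element x with sigma(x) = 0.
    roots = []
    for x in range(1, P):
        acc = 0
        for c in reversed(sigma):   # Horner evaluation mod P
            acc = (acc * x + c) % P
        if acc == 0:
            roots.append(x)
    return roots
```

In hardware the evaluation is done incrementally, multiplying each coefficient register by a fixed power of the primitive element per clock rather than re-evaluating from scratch as above.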


Journal ArticleDOI
TL;DR: A mathematical theory of sequential operation of communication channels is developed, and the coding theorem for the capacity using variable-length codes is established by first proving the information-stability theorem using a generalized ergodic theorem.
Abstract: We develop a mathematical theory of sequential operation of communication channels, and prove the coding theorem for the capacity using variable-length codes. The mathematical theory consists in constructing the sequential transfer probability function of a continuous-time channel relative to a stopping rule. It probabilistically describes the sequential output behavior of the channel in response to a given input sequence of functions. The mathematical definition of the capacity using variable-length codes is then given in terms of this probability function. The coding theorem for the capacity is established by first proving the information-stability theorem using a generalized ergodic theorem. It is shown that for some common channels the capacity using variable-length codes is numerically equal to the ordinary capacity using fixed-length codes.

Journal ArticleDOI
TL;DR: The class of convolutional operators is introduced and these operators are used to enumerate codes that are equivalent for use on a symmetric memoryless channel.
Abstract: The properties of convolutional codes and their automorphisms are thoroughly investigated. The structure of the automorphism group of a code is related to some easily obtained properties of a minimal encoder of the code. The class of convolutional operators is introduced and these operators are used to enumerate codes that are equivalent for use on a symmetric memoryless channel.

Journal ArticleDOI
TL;DR: Four novel decoders are considered that come close to achieving minimum-distance decoding of convolutional codes, but without requiring nearly as much storage or nearly as many operations per received data symbol as does the Viterbi-algorithm decoder.
Abstract: The paper considers four novel decoders that come close to achieving minimum-distance decoding of convolutional codes, but without requiring nearly as much storage or nearly as many operations per received data symbol as does the Viterbi-algorithm decoder. The methods of operation of the decoders are first described and the results of computer simulation tests are then presented, comparing the tolerances of the decoders to additive white Gaussian noise with that of a Viterbi-algorithm decoder. Four different rate-1/2 binary convolutional codes and three different distance measures are used in the tests.

Journal ArticleDOI
TL;DR: The recently introduced notion of a miracle octad generator can be viewed as a means of identifying a codeword of weight eight of the extended binary Golay code, given five of its nonzero positions.
Abstract: The recently introduced notion of a miracle octad generator can be viewed as a means of identifying a codeword of weight eight of the extended binary Golay code, given five of its nonzero positions. It is shown that this fact can be used as the basis of a new decoding algorithm for this code which decodes all the information positions simultaneously. The performance of this algorithm, as well as methods for its implementation, are considered.

Journal ArticleDOI
TL;DR: Soft decision decoding has been successfully applied to convolutional and block-coding schemes, and the latter suggests some alternative applications; particular attention is paid to transmission codes.
Abstract: Soft decision decoding has been successfully applied to convolutional and block-coding schemes. The latter suggests some alternative applications, with particular attention being paid to transmission codes. Alternate mark inversion (a.m.i.) is used as an example, where an effective signal/noise ratio improvement of up to 2.8 dB can be obtained.

Journal ArticleDOI
TL;DR: An algorithm is presented which can be applied to codes decodable by the Reed-Massey algorithm and which utilises orthogonal and non-orthogonal check sums with a resulting improved performance.
Abstract: An algorithm is presented which can be applied to codes decodable by the Reed-Massey algorithm and which utilises orthogonal and non-orthogonal check sums with a resulting improved performance. The algorithm is applied to a well-known class of convolutional threshold codes with a subsequent improvement in the nonbounded error-correcting capability.


15 Oct 1978
TL;DR: An algorithm was developed which optimally decodes a block code for minimum probability of symbol error in an iterative manner; it approaches the optimum estimate after only a fraction of the parity check equations have been used.
Abstract: An algorithm was developed which optimally decodes a block code for minimum probability of symbol error in an iterative manner. The initial estimate is made by looking at each bit independently and is improved by considering the bits related to it through the parity check equations. The dependent bits are considered in order of increasing probability of error. Since the computation proceeds in a systematic way, with the bits having the greatest effect being used first, the algorithm approaches the optimum estimate after only a fraction of the parity check equations have been used.
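The reliability-update step can be illustrated, for a single parity check, by the standard tanh-rule extrinsic update in which the information a check gives bit i is computed from the other bits' log-likelihood ratios. This is a modern commonplace formulation of the idea, not necessarily the report's exact computation:

```python
import math

def check_update(llrs, i):
    # Extrinsic LLR for bit i from one even-parity check over all bits:
    # 2 * atanh( prod_{j != i} tanh(llr_j / 2) )
    t = 1.0
    for j, l in enumerate(llrs):
        if j != i:
            t *= math.tanh(l / 2.0)
    return 2.0 * math.atanh(t)
```

Moderately reliable neighbours yield a weaker but same-sign extrinsic value, matching the intuition that each parity check refines the independent per-bit estimate, and a single unreliable or disagreeing neighbour weakens or flips it.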

Journal ArticleDOI
TL;DR: In this paper, the Viterbi decoding technique has been extended to time-varying convolutional codes; the problem is presented as a discrete-time, time-varying regulator control problem, and dynamic programming is utilized to explain the decoding process.
Abstract: The Viterbi decoding technique has been extended to time-varying convolutional codes. The problem has been presented as a discrete-time, time-varying regulator control problem, and dynamic programming has been utilized to explain the decoding process.