
Showing papers on "List decoding published in 1994"


Journal ArticleDOI
27 Jun 1994
TL;DR: A novel approach to soft decision decoding for binary linear block codes is presented that achieves a desired error performance progressively in a number of stages; decoding is terminated at the stage where either near-optimum error performance or a desired level of error performance is achieved.
Abstract: Presents a novel approach to soft decision decoding for binary linear block codes. The basic idea is to achieve a desired error performance progressively in a number of stages. For each decoding stage, the error performance is tightly bounded and the decoding is terminated at the stage where either near-optimum error performance or a desired level of error performance is achieved. As a result, more flexibility in the tradeoff between performance and decoding complexity is provided. The decoding is based on the reordering of the received symbols according to their reliability measure. The statistics of the noise after ordering are evaluated. Based on these statistics, two monotonic properties which dictate the reprocessing strategy are derived. Each codeword is decoded in two steps: (1) hard-decision decoding based on reliability information and (2) reprocessing of the hard-decision-decoded codeword in successive stages until the desired performance is achieved. The reprocessing is based on the monotonic properties of the ordering and is carried out using a cost function. A new resource test tightly related to the reprocessing strategy is introduced to reduce the number of computations at each reprocessing stage. For short codes of lengths N ≤ 32 or medium codes with 32 …
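The reliability-ordering and reprocessing steps described above can be sketched at toy scale. Below is a minimal order-1 ordered-statistics decoder for the (7,4) Hamming code; the generator matrix, function names, and the correlation cost function are illustrative assumptions, not the authors' exact construction (which also includes the resource test and stage-wise error bounds):

```python
import numpy as np

# Systematic generator matrix of the (7,4) Hamming code (illustrative choice).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])

def osd_decode(r, G):
    """Decode a real received vector r (BPSK mapping: bit 0 -> +1, bit 1 -> -1)."""
    k, n = G.shape
    perm = np.argsort(-np.abs(r))              # sort positions by reliability
    Gp, cols, row = G[:, perm].copy(), [], 0
    for c in range(n):                         # Gaussian elimination: make the k
        piv = next((i for i in range(row, k) if Gp[i, c]), None)   # most reliable
        if piv is None:                        # *independent* columns an identity
            continue                           # (the most-reliable basis, MRB)
        Gp[[row, piv]] = Gp[[piv, row]]
        for i in range(k):
            if i != row and Gp[i, c]:
                Gp[i] ^= Gp[row]
        cols.append(c)
        row += 1
        if row == k:
            break
    hard = (r[perm] < 0).astype(int)
    base = hard[cols]                          # hard decisions on the MRB
    # Reprocessing: order-0 candidate plus all single-bit flips of the MRB.
    patterns = [np.zeros(k, dtype=int)] + list(np.eye(k, dtype=int))
    best, best_metric = None, -np.inf
    for e in patterns:
        cand = (base ^ e) @ Gp % 2             # re-encode the candidate
        metric = np.sum((1 - 2 * cand) * r[perm])   # correlation cost function
        if metric > best_metric:
            best, best_metric = cand, metric
    out = np.empty(n, dtype=int)
    out[perm] = best
    return out
```

Because a single low-reliability error rarely lands in the most-reliable basis, the order-0 re-encoding already corrects it, and the order-1 flips handle many residual patterns.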

636 citations


Proceedings ArticleDOI
28 Nov 1994
TL;DR: A new iterative decoding algorithm for (block) product codes based on soft decoding and soft-decision output of the component codes is described; it is attractive for digital transmission systems requiring powerful coding schemes with a high code rate.
Abstract: A new iterative decoding algorithm for (block) product codes based on soft decoding and soft-decision output of the component codes is described in the paper. Monte Carlo simulations of the bit error rate (BER) after decoding with this new algorithm for different product codes indicate coding gains of up to 8 dB. This new coding scheme is attractive for digital transmission systems requiring powerful coding schemes with a high code rate (R > 0.8). In the paper the authors compare their coding scheme with one of the best known coding schemes, the "turbo codes", in terms of BER performance.
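As a much-simplified illustration of iterating over the rows and columns of a product code, here is a hard-decision sketch using a (7,4) Hamming component code. The paper's algorithm instead exchanges soft-decision information between component decoders, so this toy version (matrices and names are assumptions) only conveys the structure:

```python
import numpy as np

# (7,4) Hamming generator and parity-check matrices (illustrative component code).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def fix_word(w):
    """Single-error correction by syndrome lookup."""
    s = H @ w % 2
    if s.any():
        j = int(np.where((H.T == s).all(axis=1))[0][0])
        w = w.copy()
        w[j] ^= 1
    return w

def encode_product(msg):                 # msg: 4x4 information bits
    rows = msg @ G % 2                   # encode each row    -> 4x7
    return (rows.T @ G % 2).T            # encode each column -> 7x7

def iterate_decode(arr, iterations=2):
    arr = arr.copy()
    for _ in range(iterations):          # alternate row and column decoding
        for i in range(7):
            arr[i] = fix_word(arr[i])
        for j in range(7):
            arr[:, j] = fix_word(arr[:, j])
    return arr
```

Even this hard-decision version corrects error patterns that exceed the component code's capability, as long as the errors spread across rows and columns; the soft-in/soft-out version improves on it by several dB.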

380 citations


Journal ArticleDOI
G. Poltyrev1
TL;DR: Bounds on the error probability of maximum likelihood decoding of a binary linear code are considered, and the author shows that the bound considered for the binary symmetric channel coincides asymptotically with the random coding bound.
Abstract: Bounds on the error probability of maximum likelihood decoding of a binary linear code are considered. The bounds derived use the weight spectrum of the code, and they are tighter than the conventional union bound in the case of large noise in the channel. The bounds derived are applied to a code with an average spectrum, and the result is compared to the random coding exponent. The author shows that the bound considered for the binary symmetric channel coincides asymptotically with the random coding bound. For the AWGN channel the author shows that Berlekamp's (1980) tangential bound can be improved, but even this improved bound does not coincide with the random coding bound, although it can be very close to it.

255 citations


Journal ArticleDOI
TL;DR: Computer simulation results indicate, for some signal-to-noise ratios (SNR), that the proposed soft decoding algorithm requires less average complexity than the other two algorithms, while its performance is always superior to theirs.
Abstract: A new soft decoding algorithm for linear block codes is proposed. The decoding algorithm works with any algebraic decoder and its performance is strictly the same as that of maximum-likelihood decoding (MLD). Since our decoding algorithm generates sets of different candidate codewords corresponding to the received sequence, its decoding complexity depends on the received sequence. We compare our decoding algorithm with Chase (1972) algorithm 2 and the Tanaka-Kakigahara (1983) algorithm, in which a similar method for generating candidate codewords is used. Computer simulation results indicate, for some signal-to-noise ratios (SNR), that our decoding algorithm requires less average complexity than the other two algorithms, while its performance is always superior to theirs.
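The candidate-generation idea shared with the Chase algorithm can be sketched as follows: perturb the least reliable hard decisions, run an algebraic decoder on each test pattern, and keep the candidate with the best correlation metric. The (7,4) Hamming component decoder and all names below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

# Parity-check matrix of a (7,4) Hamming code (illustrative algebraic decoder).
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 1, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def algebraic_decode(w):
    """Hard-decision single-error correction by syndrome lookup."""
    s = H @ w % 2
    if s.any():
        j = int(np.where((H.T == s).all(axis=1))[0][0])
        w = w.copy()
        w[j] ^= 1
    return w

def chase2(r, t=3):
    """Chase-style soft decoding: perturb the t least reliable hard decisions,
    run the algebraic decoder on each test pattern, keep the best candidate."""
    hard = (r < 0).astype(int)
    weak = np.argsort(np.abs(r))[:t]           # t least reliable positions
    best, best_metric = None, -np.inf
    for pattern in range(1 << t):
        test = hard.copy()
        for b in range(t):
            if (pattern >> b) & 1:
                test[weak[b]] ^= 1
        cand = algebraic_decode(test)
        metric = np.sum((1 - 2 * cand) * r)    # correlation with the received r
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```

With two low-reliability errors, the test pattern that flips both of them yields the transmitted codeword, which then wins the correlation comparison.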

162 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: The simple soft-output Viterbi algorithm (SOVA) meets all the requirements for iterative decoding if an a priori term is added, and surprisingly good performance is achieved for the Gaussian and Rayleigh channels.
Abstract: Iterative decoding of two-dimensional systematic convolutional codes has been termed "turbo"-(de)coding. It is shown that the simple soft-output Viterbi algorithm (SOVA) meets all the requirements for iterative decoding if an a priori term is added. With simple 4- and 16-state codes, surprisingly good performance is achieved for the Gaussian and Rayleigh channels, with a very small degradation relative to the more complicated MAP algorithm.

103 citations


Patent
Jae Moon Jo1, Je Chang Jeon1
16 Dec 1994
TL;DR: In this paper, an adaptive variable-length coding/decoding method performs optimal variable-length coding and decoding depending on the intra/inter mode, quantization step size, and current zigzag scanning position; a plurality of variable-length coding tables having different patterns of a regular region and an escape region are set according to the statistical characteristics of the run-level data.
Abstract: An adaptive variable-length coding/decoding method performs optimal variable-length coding and decoding depending on an intra-mode/inter-mode condition, the quantization step size and the current zigzag scanning position. A plurality of variable-length coding tables having different patterns of a regular region and an escape region, according to the statistical characteristics of the run-level data, are set. One of the variable-length coding tables is selected according to mode, quantization step size and scanning position, and the orthogonal transform coefficients are variable-length-coded according to the selected table.

95 citations


Proceedings Article
01 Jan 1994
TL;DR: A soft-in/soft-out algorithm which estimates the a posteriori probabilities for each transmitted bit is investigated and has the advantage of providing the APP for each decoded bit.
Abstract: A soft-in/soft-out algorithm which estimates the a posteriori probabilities (APP) for each transmitted bit is investigated. The soft outputs can be used at the next decoding stage, which could be an outer code or another iteration in an iterative decoding process. This algorithm is estimated to have approximately four times the complexity of the Viterbi algorithm and has the advantage of providing the APP for each decoded bit.

66 citations


Journal ArticleDOI
TL;DR: In this paper, a general decoding method for linear codes is investigated for cyclic codes, and a new family of codes is given for which the decoding needs only O(n²) operations.
Abstract: A general decoding method for linear codes is investigated for cyclic codes. The decoding consists of solving two systems of linear equations. All but four binary cyclic codes of length less than 63 can thus be decoded up to their actual distance. A new family of codes is given for which the decoding needs only O(n²) operations.

66 citations


Journal ArticleDOI
TL;DR: An efficient decoding algorithm for algebraic-geometric codes, including those from Hermitian curves, and a proof of d_min ≥ d* without directly using the Riemann-Roch theorem are presented.
Abstract: An efficient decoding algorithm for algebraic-geometric codes is presented. For codes from a large class of irreducible plane curves, including Hermitian curves, it can correct up to [(d*-1)/2] errors, where d* is the designed minimum distance. With it we also obtain a proof of d_min ≥ d* without directly using the Riemann-Roch theorem. The algorithm consists of Gaussian elimination on a specially arranged syndrome matrix, followed by a novel majority voting scheme. A fast implementation incorporating block Hankel matrix techniques is obtained whose worst-case running time is O(mn²), where m is the degree of the curve. Applications of our techniques to decoding other algebraic-geometric codes, to decoding BCH codes to actual minimum distance, and to two-dimensional shift register synthesis are also presented.

55 citations


Journal ArticleDOI
TL;DR: It is shown that this reduced-complexity, suboptimal decoding strategy performs nearly as well as maximum-likelihood decoding.
Abstract: This work considers coded M-ary phase-shift keying (MPSK) schemes with noncoherent detection. A class of block codes called module-phase codes is described. The algebraic framework used for describing these codes relies on elements from module theory, which are discussed along with a method for constructing such codes for noncoherent detection. It is shown that differential encoding may be viewed as a specific code from a particular class of module-phase codes. Two classes of codes that achieve significant coding gain with respect to coherent detection of uncoded MPSK are presented. In the first class of module-phase codes, the coding gain is achieved at the expense of bandwidth expansion. In the second class, however, the coding gain is achieved at the expense of signal constellation expansion without expanding bandwidth. Finally, an integrated demodulation/decoding technique based on a modification of information set decoding is presented. It is shown that this reduced-complexity, suboptimal decoding strategy performs nearly as well as maximum-likelihood decoding.

55 citations


Patent
16 Dec 1994
TL;DR: In this article, a coding device has encoder used to generate code word information in response to data and reordering unit has run count reordering sub-unit designed to arrange code words in coding sequence and binary digit arrangement sub unit for aggregating variable-length code words with fixed length alternation and for submitting these fixed-length words in sequence required by decoding device.
Abstract: FIELD: coding devices designed for data-compacting systems incorporating a decoding device for decoding information generated by the coding device. SUBSTANCE: the coding device has an encoder used to generate code word information in response to data. The coding device also incorporates a reordering unit generating a stream of coded data in response to code word information arriving from the encoder. The reordering unit has a run-count reordering sub-unit designed to arrange code words in coding sequence and a binary digit arrangement sub-unit for aggregating variable-length code words with fixed-length alternation and for submitting these fixed-length words in the sequence required by the decoding device. EFFECT: provision for precise recovery of original data. 121 cl, 33 dwg

Proceedings ArticleDOI
27 Jun 1994
TL;DR: A maximum likelihood decoding algorithm for binary VLEC codes over the binary symmetric channel (BSC) is derived, and it is shown that this algorithm achieves significant coding gain over the α-prompt decoding introduced by Hartnett et al. (1990).
Abstract: A different viewpoint on variable-length error-correcting (VLEC) codes is presented, as compared to that found in the literature. Consequently, a maximum likelihood decoding algorithm for binary VLEC codes over the binary symmetric channel (BSC) is derived. It is shown that this algorithm achieves significant coding gain over the α-prompt decoding introduced by Hartnett et al. (1990), at the expense of increased complexity.

Journal ArticleDOI
01 Sep 1994
TL;DR: Here, some new, more general properties are found for the syndromes of the subclass of binary QR codes of length n = 8m + 1, and the new theorems needed to decode this subclass of the QR codes are obtained and proved.
Abstract: Algebraic approaches to the decoding of the quadratic residue (QR) codes were studied recently. In Reed et al. (1992), a decoding algorithm was given for the (41, 21, 9) binary QR code. Here, some new, more general properties are found for the syndromes of the subclass of binary QR codes of length n = 8m + 1. Using these properties, the new theorems needed to decode this subclass of the QR codes are obtained and proved. As an example of the application of these theorems, a new algebraic decoding algorithm for the (73, 37, 13) binary QR code is presented.

Journal ArticleDOI
TL;DR: A computationally efficient soft-decision Reed-Solomon (RS) decoding algorithm that makes no assumptions on the d-1 erasures and is useful for soft-decision decoding techniques such as generalized-minimum-distance decoding.
Abstract: We develop a computationally efficient soft-decision Reed-Solomon (RS) decoding algorithm. Our new algorithm has direct applications to soft-decision decoding procedures such as the generalized-minimum-distance decoding algorithm. The innovation is not the particular soft-decision technique itself but a new errors-and-erasures RS decoding algorithm that works with different sets of erasures, where the size of each set of erasures is reduced by one at each iteration (i.e., generalized-minimum-distance decoding). The result of each iteration is an error-and-erasure locator polynomial (or locator polynomials) that any RS decoder would produce. Since the new soft-decision RS decoding algorithm makes no assumptions on the d-1 erasures, it is useful for soft-decision decoding techniques such as generalized-minimum-distance decoding.

01 Jan 1994
TL;DR: Improved lower bounds for linear and for nonlinear codes for list decoding are derived, and it is conjectured that the two bounds are identical.
Abstract: Elias (6) derived upper and lower bounds on the sizes of error-correcting codes for list decoding. The asymptotic values of his lower bounds for linear codes and for nonlinear codes are separated. We derive improved lower bounds for linear and for nonlinear codes. We conjecture that our two bounds are identical; however, we were able to verify this only for small lists.

Journal ArticleDOI
TL;DR: A family of codes is presented that can correct two-dimensional clusters of errors of size b1 × b2, using simple parity-check codes, and simple encoding and decoding algorithms are presented.
Abstract: The authors present a family of codes that can correct two-dimensional clusters of errors of size b1 × b2, using simple parity-check codes. The new codes are n1 × n2 array codes, where n1 ≥ 2b1b2 - b1, n2 ≥ 2b1b2, b1 divides n1 and b2 divides n2. Simple encoding and decoding algorithms are presented.

Proceedings ArticleDOI
28 Nov 1994
TL;DR: It is shown that the weight distribution of the iterated product of single-parity-check (SPC) codes asymptotically approaches that of random coding if the smallest code length in the product approaches infinity.
Abstract: We consider the iterated product of single-parity-check (SPC) codes. We show that their weight distribution asymptotically approaches that of random coding if the smallest code length in the product approaches infinity. According to a specific criterion, the best choice consists of taking all SPC codes of equal length. Estimates of the weight distribution obtained by simulation show that even moderately long codes have a weight distribution close to that obtained in the average by random coding. We also discuss decoding of these codes by iterated replication decoding and report results of its simulation.
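The weight-distribution claim can be checked exhaustively at toy size. The sketch below (the function name and array construction are illustrative) enumerates the product of two single-parity-check codes and tallies codeword weights:

```python
import itertools
from collections import Counter

def spc_product_weights(k1, k2):
    """Exhaustive weight distribution of the product of a (k1+1, k1) and a
    (k2+1, k2) single-parity-check code: every row and every column of the
    (k1+1) x (k2+1) codeword array has even parity."""
    counts = Counter()
    for bits in itertools.product([0, 1], repeat=k1 * k2):
        arr = [[0] * (k2 + 1) for _ in range(k1 + 1)]
        for i in range(k1):
            for j in range(k2):
                arr[i][j] = bits[i * k2 + j]
            arr[i][k2] = sum(arr[i][:k2]) % 2                    # row parity
        for j in range(k2 + 1):
            arr[k1][j] = sum(arr[i][j] for i in range(k1)) % 2   # column parity
        counts[sum(map(sum, arr))] += 1
    return dict(counts)
```

For two (3, 2) SPC codes this gives 16 codewords with minimum nonzero weight 4, consistent with the product-code minimum distance d = 2 × 2.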

Proceedings ArticleDOI
27 Jun 1994
TL;DR: Improved simultaneous exponential upper bounds to error and erasure probabilities are obtained and the lower bound to the zero undetected-error capacity for discrete memoryless channels is improved.
Abstract: A decoder that has the option of not decoding and thus declaring an erasure is called an errors and erasures decoder. We obtain improved simultaneous exponential upper bounds to error and erasure probabilities. We also improve the lower bound to the zero undetected-error capacity for discrete memoryless channels.

Patent
Glen G. Langdon1, Ahmad Zandi1
03 Jan 1994
TL;DR: An efficient, fast-decoding, order-preserving, easily implementable, length-based (L-based) arithmetic coding method, apparatus, and manufacture for an m-ary alphabet {1, ..., i, ..., m} is provided, combining recursive division of intervals on a number line into sub-intervals whose lengths are proportional to symbol probability and which are ordered in lexical order, with the constraint that probabilities be estimated as negative powers of two.
Abstract: An efficient, fast-decoding, order-preserving, easily implementable, length-based (L-based) arithmetic coding method, apparatus, and manufacture for an m-ary alphabet {1, . . . , i, . . . , m} is provided. A coding method in accordance with the invention combines recursive division of intervals on a number line into sub-intervals whose lengths are proportional to symbol probability and which are ordered in lexical order with the constraint that probabilities be estimated as negative powers of two (1/2, 1/4, 1/8, etc.). As a consequence, the advantageous preservation of lexical order and computational efficiency are both realized. Also, a coding system in accordance with the invention is simple to implement, and high speed operation is achieved, because shifts take the place of multiplications. A coding apparatus in accordance with the invention preferably includes either a single decoding table to achieve fast decoding, or two decoding tables to achieve fast decoding as well as order preservation. The decoding process can conveniently be performed by constructing a decoding table for the C register. The C register is initialized with the leading bits of the codestring. The decoded symbol is the symbol i, i being the greatest integer that makes the C-register value greater than or equal to P(i).
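A sketch of the codeword/interval correspondence under the power-of-two constraint, using exact fractions. The function names and the table-free decoder are illustrative assumptions (the patent uses one or two decoding tables), and the sketch assumes each cumulative probability is exactly representable in -log2(p_i) bits, which holds, e.g., for probabilities in non-increasing order:

```python
from fractions import Fraction

def build_code(probs):
    """Order-preserving prefix code for symbols in lexical order, with
    probabilities restricted to negative powers of two.  Symbol i gets the
    binary expansion of its cumulative probability P(i), -log2(p_i) bits long."""
    code, starts, cum = [], [], Fraction(0)
    for p in probs:
        length = p.denominator.bit_length() - 1        # -log2(p) for p = 2^-L
        v = cum * 2 ** length
        assert v.denominator == 1, "P(i) must fit in -log2(p_i) bits"
        code.append(format(int(v), '0{}b'.format(length)))
        starts.append(cum)
        cum += p
    assert cum == 1
    return code, starts

def decode(bitstream, probs):
    """Sketch of the C-register rule: the decoded symbol is the greatest i
    whose cumulative start P(i) does not exceed the C-register value."""
    code, starts = build_code(probs)
    width = max(len(c) for c in code)
    out, pos = [], 0
    while pos < len(bitstream):
        window = bitstream[pos:pos + width].ljust(width, '0')
        C = Fraction(int(window, 2), 2 ** width)       # leading bits as a value
        i = max(j for j in range(len(probs)) if starts[j] <= C)
        out.append(i)
        pos += len(code[i])                            # consume that codeword
    return out
```

Because comparisons and shifts replace multiplications, decoding reduces to comparing the C-register against the cumulative probabilities, which is what makes the scheme fast and order-preserving.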

Journal ArticleDOI
TL;DR: A new crossover operator is defined that exploits domain-specific information and compare it with uniform and two-point crossover and achieves bit-error-probabilities as low as 0.00183 for a [104,52] code with a low signal-to-noise ratio.
Abstract: Soft-decision decoding is an NP-hard problem of great interest to developers of communication systems. We show that this problem is equivalent to the problem of optimizing Walsh polynomials. We present genetic algorithms for soft-decision decoding of binary linear block codes and compare the performance with various other decoding algorithms including the currently developed A* algorithm. Simulation results show that our algorithms achieve bit-error-probabilities as low as 0.00183 for a [104,52] code with a low signal-to-noise ratio of 2.5 dB, exploring only 22,400 codewords, whereas the search space contains 4.5 × 10^15 codewords. We define a new crossover operator that exploits domain-specific information and compare it with uniform and two-point crossover.
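A toy version of such a genetic decoder can be sketched on a (7,4) Hamming code. The population size, uniform crossover, and correlation fitness below are generic illustrative choices, not the authors' domain-specific crossover operator or their [104,52] setting:

```python
import numpy as np

# Systematic (7,4) Hamming generator matrix (small stand-in for the [104,52]
# code used in the paper).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
rng = np.random.default_rng(0)

def fitness(msgs, r):
    """Correlation of the encoded candidates with the received real vector."""
    cw = msgs @ G % 2
    return (1 - 2 * cw) @ r

def ga_decode(r, pop_size=20, generations=30, p_mut=0.05):
    k = G.shape[0]
    pop = rng.integers(0, 2, size=(pop_size, k))
    pop[0] = (r[:k] < 0).astype(int)          # seed: systematic hard decision
    for _ in range(generations):
        pop = pop[np.argsort(-fitness(pop, r))]
        elite, parents = pop[:2].copy(), pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - 2:
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.integers(0, 2, size=k).astype(bool)
            child = np.where(mask, a, b)                   # uniform crossover
            child ^= (rng.random(k) < p_mut).astype(int)   # mutation
            children.append(child)
        pop = np.vstack([elite] + children)                # elitist replacement
    best = pop[np.argmax(fitness(pop, r))]
    return best @ G % 2
```

Seeding the population with the hard-decision information bits and keeping an elite pair guarantees the search never returns a candidate worse than the hard decision.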

Journal ArticleDOI
TL;DR: It is shown that, from a polynomial ideal point of view, the decoding problems of cyclic codes are closely related to the monic generators of certain polynomial ideals.
Abstract: This paper provides two theorems for decoding all types of cyclic codes. It is shown that from a polynomial ideal point of view, the decoding problems of cyclic codes are closely related to the monic generators of certain polynomial ideals. This conclusion is also generalized to the decoding problems of algebraic geometry codes.

Journal ArticleDOI
01 Apr 1994
TL;DR: It is shown that uniquely decodable CCMA schemes permit the multiple access function to be combined with that of forward error correction, and a low complexity maximum likelihood decoding technique is presented.
Abstract: It is highly desirable to use simple and effective multiple access coding and decoding techniques capable of both the multiple access function and error control. The collaborative coding multiple access (CCMA) techniques potentially permit efficient simultaneous transmission by several users sharing a common channel, without subdivision in time, frequency or orthogonal codes. The authors investigate the performance of uniquely decodable CCMA schemes employing hard-decision and maximum likelihood decoding techniques. A low-complexity maximum likelihood decoding technique is presented. The reliability performance of various coding schemes employing these decoding techniques is evaluated in the presence of AWGN. The simulation results are presented in the form of symbol and codeword error rates as a function of signal-to-noise ratio. It is shown that uniquely decodable CCMA schemes permit the multiple access function to be combined with that of forward error correction.

Patent
22 Apr 1994
TL;DR: In this article, the Viterbi algorithm is used to decode a convolutionally encoded data stream at high speed and with high efficiency, and several modifications to the fully analog system are described, including an analog/digital hybrid design.
Abstract: An artificial neural network (ANN) decoding system decodes a convolutionally-encoded data stream at high speed and with high efficiency. The ANN decoding system implements the Viterbi algorithm and is significantly faster than comparable digital-only designs due to its fully parallel architecture. Several modifications to the fully analog system are described, including an analog/digital hybrid design that results in an extremely fast and efficient Viterbi decoding system. A complexity analysis shows that the modified ANN decoding system is much simpler and easier to implement than its fully digital counterpart. The structure of the ANN decoding system of the invention provides a natural fit for VLSI implementation. Simulation results show that the performance of the ANN decoding system exactly matches that of an ideal Viterbi decoding system.

Journal ArticleDOI
TL;DR: A systematic design of a trellis-based maximum-likelihood soft-decision decoder for linear block codes is presented, and computational gains of up to a factor of 6 are achieved for long high-rate codes over the well-known trellis decoder of Wolf (1978).
Abstract: A systematic design of a trellis-based maximum-likelihood soft-decision decoder for linear block codes is presented. The essence of the decoder is to apply an efficient search algorithm for the error pattern on a reduced trellis representation of a certain coset. Unlike other efficient decoding algorithms, the proposed decoder is systematically designed for long codes as well as for short codes. Computational gains of up to a factor of 6 are achieved for long high-rate codes over the well-known trellis decoder of Wolf (1978). Efficient decoders are also obtained for short and moderate length codes.

Journal ArticleDOI
TL;DR: It is shown that the proposed two-stage suboptimum decoding scheme provides an excellent trade-off between the error performance and decoding complexity for codes of moderate and long block length.
Abstract: To decode a long block code with a large minimum distance by maximum likelihood decoding is practically impossible because the decoding complexity is simply enormous. However, if a code can be decomposed into constituent codes with smaller dimensions and simpler structure, it is possible to devise a practical and yet efficient scheme to decode the code. This paper investigates a class of decomposable codes, their distance and structural properties. It is shown that this class includes several classes of well-known and efficient codes as subclasses. Several methods for constructing decomposable codes or decomposing codes are presented. A two-stage (soft-decision or hard-decision) decoding scheme for decomposable codes, their translates or unions of translates is devised, and its error performance is analyzed for an AWGN channel. The two-stage soft-decision decoding is suboptimum. Error performances of some specific decomposable codes based on the proposed two-stage soft-decision decoding are evaluated. It is shown that the proposed two-stage suboptimum decoding scheme provides an excellent trade-off between the error performance and decoding complexity for codes of moderate and long block length.

Journal ArticleDOI
TL;DR: An upper bound on the effective error coefficient of a two-level code with two-stage decoding is presented and this bound provides a guideline for constructing two- level codes to achieve a good trade-off between the error performance and decoding complexity.
Abstract: An upper bound on the effective error coefficient of a two-level code with two-stage decoding is presented. This bound provides a guideline for constructing two-level codes to achieve a good trade-off between the error performance and decoding complexity. Based on this bound, good two-level decompositions of some Reed-Muller codes for two-stage decoding are found. Simulation results on the error performances of some Reed-Muller codes of lengths up to 64 with two-stage soft-decision suboptimum decoding based on their two-level decompositions are given.

Journal ArticleDOI
01 Oct 1994
TL;DR: Simulation results showed that soft decision trellis decoding could give 2 dB and 2.5 dB coding gains relative to hard decision, and that the performance of reduced search decoding could approximate that of the Viterbi method with reductions in computation of one to two orders of magnitude.
Abstract: Soft decision decoding of Reed-Solomon codes has been implemented by using trellis decoding methods. Trellis decoding schemes make it possible to incorporate both hard and soft decision methods easily. To establish maximum likelihood performance, the Viterbi decoding algorithm has been used. To reduce the decoder complexity caused by full search Viterbi decoding, reduced search methods have been tried, and a computationally efficient reduced search algorithm is suggested. Importance sampling simulation techniques have been used to reduce simulation time. The simulation results for the (15, 13) and the (15, 11) Reed-Solomon code showed that soft decision trellis decoding could give 2 dB and 2.5 dB coding gains relative to hard decision, respectively, and that the performance of reduced search decoding could approximate that of the Viterbi method with reductions in computation of one to two orders of magnitude.

Journal ArticleDOI
TL;DR: An erasure-free sequential decoding algorithm for trellis codes, called the buffer looking algorithm (BLA), is introduced and it is shown by analysis and simulation that continuous sequential decoding using this scheme has a high probability of resynchronizing successfully.
Abstract: An erasure-free sequential decoding algorithm for trellis codes, called the buffer looking algorithm (BLA), is introduced. Several versions of the algorithm can be obtained by choosing certain parameters and selecting a resynchronization scheme. These can be categorized as block decoding or continuous decoding, depending on the resynchronization scheme. Block decoding is guaranteed to resynchronize at the beginning of each block, but suffers some rate loss when the block length is relatively short. The performance of a typical block decoding scheme is analyzed, and we show that significant coding gains over Viterbi decoding can be achieved with much less computational effort. A resynchronization scheme is proposed for continuous sequential decoding. It is shown by analysis and simulation that continuous sequential decoding using this scheme has a high probability of resynchronizing successfully. This new resynchronization scheme solves the rate loss problem resulting from block decoding. The channel cutoff rate, demodulator quantization, and the tail's influence on performance are also discussed. Although this paper considers only the decoding of trellis codes, the algorithm can also be applied to the decoding of convolutional codes.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: This paper shows that if these trellises are scored according to the complexity of implementing the Viterbi decoding algorithm on them, there is a uniquely optimal one, viz. the "Wolf (1978)-Massey (1978)-Muder (1988)" trellis.
Abstract: A given linear block code can be represented by many different trellises. In this paper, we show that if these trellises are scored according to the complexity of implementing the Viterbi decoding algorithm on them, there is a uniquely optimal one, viz. the "Wolf (1978)-Massey (1978)-Muder (1988)" (WMM) trellis. We also introduce "minimal-span" generator matrices, which permit easy construction of WMM trellises.
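The span-based view can be sketched as follows: greedily XOR rows that share a span start or span end until all starts and all ends are distinct (giving a minimal-span generator matrix), then read off the trellis state profile from how many row spans cross each time boundary. The greedy reduction and the function names are illustrative assumptions:

```python
import numpy as np

def span(row):
    """(first, last) index of the nonzero entries of a generator row."""
    nz = np.flatnonzero(row)
    return nz[0], nz[-1]

def minimal_span_form(G):
    """Greedy reduction to a minimal-span generator matrix: while two rows
    share a span start or a span end, XOR them whenever that strictly shrinks
    a span.  At the fixed point all starts and all ends are distinct."""
    G = G.copy() % 2
    changed = True
    while changed:
        changed = False
        for i in range(len(G)):
            for j in range(len(G)):
                if i == j:
                    continue
                si, ei = span(G[i])
                sj, ej = span(G[j])
                if si == sj or ei == ej:
                    cand = G[i] ^ G[j]
                    cs, ce = span(cand)
                    if ce - cs < ei - si:      # strictly shorter span for row i
                        G[i] = cand
                        changed = True
    return G

def state_profile(G):
    """log2 of the trellis state-space size at each time boundary: the number
    of generator rows whose span crosses that boundary.  Applied to a
    minimal-span matrix this gives the profile of the minimal trellis."""
    k, n = G.shape
    spans = [span(r) for r in G]
    return [sum(1 for a, b in spans if a < t <= b) for t in range(n + 1)]
```

For the (7,4) Hamming code this yields spans with four distinct starts and four distinct ends, and the familiar 8-state minimal trellis profile.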

Patent
27 Dec 1994
TL;DR: In this article, a method for decoding coded data is executed to use a decoder composed of plural decoding units, those units being parallel processors, where coded data are divided into a first and a second coded areas.
Abstract: A method for decoding coded data uses a decoder composed of plural decoding units, the units being parallel processors. The coded data is divided into first and second coded areas. A first decoding unit starts to decode the coded data from the head of the first coded area, that is, from the head of the overall coded data. A second decoding unit starts the decoding process from the head of the second coded area, that is, from some location within the code sequence. If a conflict occurs in the decoded data, the decoding device repeats a trial-and-error operation, restarting the decoding process from a new location close to the head of the second coded area. When the decoding process of the first decoding unit reaches the second coded area, if a right answer is found in the decoded result given by the second decoding unit, it is picked up as the proper result.