
Showing papers on "List decoding published in 1993"


Journal ArticleDOI
TL;DR: This decoding procedure is a generalization of Peterson's decoding procedure for the BCH codes and can be used to correct any (d*-1)/2 or fewer errors with complexity O(n^3), where d* is the designed minimum distance of the algebraic-geometric code and n is the code length.
Abstract: A simple decoding procedure for algebraic-geometric codes C_Omega(D,G) is presented. This decoding procedure is a generalization of Peterson's decoding procedure for the BCH codes. It can be used to correct any (d*-1)/2 or fewer errors with complexity O(n^3), where d* is the designed minimum distance of the algebraic-geometric code and n is the code length.

248 citations
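Peterson's procedure, which this paper generalizes, reduces decoding to solving linear equations in the syndromes. As a concrete illustration, here is a minimal sketch (an illustrative toy, not the algebraic-geometric decoder of the paper) of Peterson decoding for the double-error-correcting binary (15, 7) BCH code over GF(16):

```python
# Peterson decoding sketch for the binary (15, 7) BCH code (t = 2).
# GF(16) is generated by the primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
a = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = a
    LOG[a] = i
    a <<= 1
    if a & 0x10:
        a ^= 0b10011            # reduce modulo x^4 + x + 1

def gmul(x, y):                 # multiply in GF(16)
    return 0 if x == 0 or y == 0 else EXP[LOG[x] + LOG[y]]

def gdiv(x, y):                 # divide in GF(16), y != 0
    return 0 if x == 0 else EXP[(LOG[x] - LOG[y]) % 15]

def syndrome(r, k):
    # S_k = r(alpha^k); vanishes on codewords since alpha^1..alpha^4
    # are roots of the generator polynomial (designed distance 5).
    s = 0
    for p in range(15):
        if r[p]:
            s ^= EXP[(k * p) % 15]
    return s

def peterson_decode(r):
    """Return the set of error positions, assuming at most 2 errors."""
    S1, S3 = syndrome(r, 1), syndrome(r, 3)
    if S1 == 0:
        return set()                         # error-free (for <= 2 errors)
    num = gmul(gmul(S1, S1), S1) ^ S3        # S1^3 + S3
    if num == 0:
        return {LOG[S1]}                     # single error at position log(S1)
    # Two errors: locator sigma(x) = x^2 + S1*x + (S1^3 + S3)/S1,
    # whose roots are the locators alpha^position (found by Chien search).
    c = gdiv(num, S1)
    return {i for i in range(15)
            if gmul(EXP[i], EXP[i]) ^ gmul(S1, EXP[i]) ^ c == 0}
```

In the general algebraic-geometric setting the analogous linear system is solved by Gaussian elimination, which is where the O(n^3) complexity comes from.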


Proceedings ArticleDOI
23 May 1993
TL;DR: The decoding of multidimensional product codes, using separable symbol-by-symbol maximum a posteriori filters, and the extension of the concept to concatenated convolutional codes is given and some simulation results are presented.
Abstract: Very efficient signaling in radio channels requires the design of very powerful codes having special structure suitable for practical decoding schemes. Powerful codes are obtained by using simple block codes to construct multidimensional product codes. The decoding of multidimensional product codes, using separable symbol-by-symbol maximum a posteriori filters, is described. Simulation results are presented for three-dimensional product codes constructed with the (16,11) extended Hamming code. The extension of the concept to concatenated convolutional codes is given and some simulation results are presented. Potential applications are briefly discussed.

188 citations


Journal ArticleDOI
TL;DR: In this paper, a generalized Dijkstra's algorithm is employed to search through a trellis for an equivalent code of the transmitted code, guided by an evaluation function f defined to take advantage of the information provided by the received vector and the inherent properties of transmitted code.
Abstract: The authors present a novel and efficient maximum-likelihood soft-decision decoding algorithm for linear block codes. The approach used here converts the decoding problem into a search problem through a graph that is a trellis for an equivalent code of the transmitted code. A generalized Dijkstra's algorithm, which uses a priority-first search strategy, is employed to search through this graph. This search is guided by an evaluation function f defined to take advantage of the information provided by the received vector and the inherent properties of the transmitted code. This function f is used to reduce drastically the search space and to make the decoding effort of this algorithm adaptable to the noise level. For example, simulation results for the (128,64) binary extended BCH code show that, for most of the 35 000 samples tried, the proposed decoding algorithm is fifteen orders of magnitude more efficient in time and in space than that proposed by Wolf (1978). Simulation results for the (104, 52) binary extended quadratic residue code are also given.

145 citations
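The priority-first idea can be made concrete on a toy scale. The sketch below (an illustrative assumption, not the authors' trellis construction or their exact f) runs a best-first search with an admissible heuristic over the information bits of a systematic (7, 4) Hamming code, so the first full path popped is the maximum-likelihood codeword:

```python
import heapq

P = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]       # parity checks on the info bits

def encode(u):
    return list(u) + [u[a] ^ u[b] ^ u[c] for a, b, c in P]

def cost(ri, bit):
    return (ri - (1.0 - 2.0 * bit)) ** 2    # BPSK mapping: 0 -> +1, 1 -> -1

def ml_decode(r):
    """Best-first search; f = g + h with h an optimistic per-position bound."""
    # Admissible heuristic: cheapest cost per position, ignoring the code.
    h_free = [min(cost(ri, 0), cost(ri, 1)) for ri in r]
    heap = [(sum(h_free), ())]
    while heap:
        f, u = heapq.heappop(heap)
        if len(u) == 4:
            return list(u), encode(u)       # first full path popped is ML
        for bit in (0, 1):
            v = u + (bit,)
            g = sum(cost(r[i], v[i]) for i in range(len(v)))
            if len(v) == 4:
                cw = encode(v)              # leaf: f is the exact metric
                f2 = g + sum(cost(r[i], cw[i]) for i in range(4, 7))
            else:
                f2 = g + sum(h_free[len(v):])
            heapq.heappush(heap, (f2, v))
```

Because h never overestimates the remaining cost, the search effort shrinks as the noise decreases, mirroring the noise-adaptive behaviour described in the abstract.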



Proceedings ArticleDOI
29 Nov 1993
TL;DR: An intuitive algorithm by Lodge et al. for iterative decoding of block codes is shown to follow from entropy optimization principles.
Abstract: An intuitive algorithm by Lodge et al. [1992] for iterative decoding of block codes is shown to follow from entropy optimization principles. This approach provides a novel and effective algorithm for the soft-decoding of block codes which have a product structure.

81 citations


Journal ArticleDOI
TL;DR: A Reed-Solomon code decoding algorithm based on Newton's interpolation is presented, which uses a modified Berlekamp-Massey algorithm to perform all necessary generalized-minimum-distance decoding steps in only one run.
Abstract: A Reed-Solomon code decoding algorithm based on Newton's interpolation is presented. This algorithm has as main application fast generalized-minimum-distance decoding of Reed-Solomon codes. It uses a modified Berlekamp-Massey algorithm to perform all necessary generalized-minimum-distance decoding steps in only one run. With a time-domain form of the new decoder the overall asymptotic generalized-minimum-distance decoding complexity becomes O(dn), with n the length and d the distance of the code (including the calculation of all error locations and values). This asymptotic complexity is optimal. Other applications are the possibility of fast decoding of Reed-Solomon codes with adaptive redundancy and a general parallel decoding algorithm with zero delay.

55 citations
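The Berlekamp-Massey core that the paper modifies synthesizes the shortest LFSR generating a given sequence. A compact GF(2) version (the standard textbook form, shown over bits for brevity rather than the Reed-Solomon symbol field the paper works in) looks like this:

```python
def berlekamp_massey(s):
    """Shortest LFSR (length L, connection polynomial C) generating the
    binary sequence s, with C[0] = 1 and the recurrence
    s[n] = XOR over i = 1..L of C[i] & s[n - i]."""
    C, B = [1], [1]        # current and previous connection polynomials
    L, m = 0, 1            # LFSR length, steps since B was last updated
    for n in range(len(s)):
        d = s[n]           # discrepancy between s[n] and the LFSR prediction
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
            continue
        T = C[:]
        C = C + [0] * max(0, len(B) + m - len(C))
        for i, b in enumerate(B):      # C(x) <- C(x) + x^m * B(x)
            C[i + m] ^= b
        if 2 * L <= n:
            L, B, m = n + 1 - L, T, 1  # length change: save old C
        else:
            m += 1
    return L, C
```

The paper's modification runs all generalized-minimum-distance trials within a single pass of this synthesis loop rather than restarting it per trial.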


Proceedings ArticleDOI
17 Jan 1993
TL;DR: A novel maximum-likelihood soft-decision decoding algorithm for linear block codes that is adaptable to the noise level and guided by an evaluation function f to take advantage of the information provided by the received vector and the inherent properties of the transmitted code.
Abstract: In this paper we present a novel maximum-likelihood soft-decision decoding algorithm for linear block codes. The approach used here is to convert the decoding problem into a search problem through a graph which is a trellis for an equivalent code of the transmitted code. Algorithm A* is employed to search through this graph. This search is guided by an evaluation function f defined to take advantage of the information provided by the received vector and the inherent properties of the transmitted code. This function f is used to drastically reduce the search space and to make the decoding efforts of this decoding algorithm adaptable to the noise level. Simulation results for the (104, 52) binary extended quadratic residue code and the (128,64) binary extended BCH code are given.

43 citations


Journal ArticleDOI
TL;DR: The error performance of some of these codes, based on both one-stage optimum decoding and multistage suboptimum decoding, has been simulated; the results show that these codes achieve good error performance with small decoding complexity.
Abstract: The multilevel coding technique is used for constructing multilevel trellis M-ary phase-shift-keying (MPSK) modulation codes for the Rayleigh fading channel. In the construction of a code, all the factors which affect the code performance and its decoding complexity are considered. The error performance of some of these codes based on both one-stage optimum decoding and multistage suboptimum decoding has been simulated. The simulation results show that these codes achieve good error performance with small decoding complexity.

40 citations


Proceedings ArticleDOI
29 Nov 1993
TL;DR: This work considers recursive convolutional coding as a means for constructing codes whose distance distribution is close to that obtained in the average by random coding, hence whose performance is expected to closely approach the channel capacity.
Abstract: We consider recursive convolutional coding as a means for constructing codes whose distance distribution is close to that obtained in the average by random coding, hence whose performance is expected to closely approach the channel capacity. We especially consider convolutional codes where the encoder register taps are such that it generates maximum-length sequences. Two algorithms for decoding these codes are discussed. Since both involve implementation difficulties, we propose to generate such codes by means similar to turbo-codes which make their decoding easy.

40 citations


Journal ArticleDOI
TL;DR: The computation and simulation results for these codes show that with multistage decoding, significant coding gains can be achieved with large reduction in decoding complexity.
Abstract: Multistage decoding of multilevel block M-ary phase-shift keying (MPSK) modulation codes for the additive white Gaussian noise (AWGN) channel is investigated. Several types of multistage decoding, including a suboptimum soft-decision decoding scheme, are devised and analyzed. Upper bounds on the probability of an incorrect decoding of a code are derived for the proposed multistage decoding schemes. Error probabilities of some specific multilevel block 8-PSK modulation codes are evaluated and simulated. The computation and simulation results for these codes show that with multistage decoding, significant coding gains can be achieved with a large reduction in decoding complexity. In one example, it is shown that the difference in performance between the proposed suboptimum multistage soft-decision decoding and the single-stage optimum decoding is small, only a fraction of a dB loss in SNR at a block error probability of 10^-6.

38 citations


Journal ArticleDOI
TL;DR: The conjecture of Rujan on error-correcting codes is proven and errors in decoding of signals transmitted through noisy channels assume the smallest values when signals are decoded at a particular finite temperature.
Abstract: The conjecture of Rujan on error-correcting codes is proven. Errors in decoding of signals transmitted through noisy channels assume the smallest values when signals are decoded at a particular finite temperature. This finite-temperature decoding is compared with the conventional maximum-likelihood decoding, which corresponds to the T = 0 case. The method of gauge transformation in spin glass theory is useful in this argument.
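The distinction the paper draws can be seen numerically: zero-temperature (maximum-likelihood sequence) decoding picks the single most probable codeword, while finite-temperature decoding at the matched noise level amounts to symbol-wise MAP marginalization. Below is a toy comparison on the (7, 4) Hamming code over an AWGN channel (our illustrative choice of code and channel, not the paper's spin-glass formalism):

```python
from math import exp

G = [[1, 0, 0, 0, 1, 1, 0],      # systematic (7,4) Hamming generator rows
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

CODEWORDS = [
    [sum(((m >> j) & 1) * G[j][i] for j in range(4)) % 2 for i in range(7)]
    for m in range(16)
]

def decode_both(r, sigma2=0.5):
    """Sequence-ML (T -> 0) versus bitwise MAP (finite temperature)."""
    best, marg = None, [[0.0, 0.0] for _ in range(7)]
    for cw in CODEWORDS:
        s = [1.0 - 2.0 * b for b in cw]                  # BPSK mapping
        w = exp(-sum((ri - si) ** 2 for ri, si in zip(r, s)) / (2 * sigma2))
        if best is None or w > best[0]:
            best = (w, cw)                               # sequence-ML choice
        for i, b in enumerate(cw):
            marg[i][b] += w                              # posterior marginals
    ml = best[1]
    finite_t = [int(m1 > m0) for m0, m1 in marg]         # bitwise MAP choice
    return ml, finite_t
```

The two rules agree at high SNR but can differ bit-by-bit at low SNR, where marginalization minimizes the bit error rate.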

Journal ArticleDOI
TL;DR: Bidirectional multiple-path tree searching algorithms for the decoding of convolutional codes are presented and it is shown that the bidirectional exploration considerably reduces the bit error propagation due to correct path loss.
Abstract: Bidirectional multiple-path tree searching algorithms for the decoding of convolutional codes are presented. These suboptimal decoding algorithms use a multiple-path breadth-first bidirectional tree exploration procedure and long-memory convolutional codes. It is shown that, compared to the usual M-algorithm, the bidirectional exploration considerably reduces the bit error propagation due to correct path loss. Computer simulations using rate-1/2 codes over binary symmetric channels are used to analyze the effect of the number of path extensions, code memory, and frame length on the bit error probability. The results show that at a bit error probability of 10^-5, coding gains on the order of 2 dB over the M-algorithm and 1 dB over a Viterbi decoder of equivalent complexity can be achieved.
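The M-algorithm that serves as the baseline here keeps only the M best paths at each depth of the code tree. A minimal unidirectional sketch for the standard rate-1/2, memory-2 (7, 5) code (hard decisions, forward direction only; the paper's bidirectional exploration and long-memory codes are not reproduced):

```python
def conv_encode(bits):
    """Rate-1/2 feedforward encoder, g1 = 1+D+D^2, g2 = 1+D^2 ((7,5) octal)."""
    m0 = m1 = 0
    out = []
    for b in bits:
        out += [b ^ m0 ^ m1, b ^ m1]
        m0, m1 = b, m0
    return out

def m_algorithm_decode(r, nbits, M=4):
    """Breadth-first tree search keeping the M lowest-metric paths."""
    paths = [(0, [], (0, 0))]                 # (Hamming metric, bits, state)
    for t in range(nbits):
        extended = []
        for metric, bits, (m0, m1) in paths:
            for b in (0, 1):
                o0, o1 = b ^ m0 ^ m1, b ^ m1
                err = (o0 != r[2 * t]) + (o1 != r[2 * t + 1])
                extended.append((metric + err, bits + [b], (b, m0)))
        extended.sort(key=lambda p: p[0])     # correct-path loss occurs when
        paths = extended[:M]                  # the true path falls below rank M
    return paths[0][1]
```

Once the correct path is discarded, a unidirectional decoder propagates errors for many branches; decoding the same frame from both ends and merging is precisely what limits that propagation in the paper.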

Journal ArticleDOI
17 Jan 1993
TL;DR: It is found, by means of extensive computer simulations as well as a heuristic argument, that the advantage of the BSD appears as a substantial decrease of the computational variability of the sequential decoding.
Abstract: The main drawback of sequential decoding is the variability of its decoding effort which could cause decoding erasures. We propose and analyze an efficient bidirectional sequential decoding (BSD) technique to alleviate this drawback. In the proposed BSD, two decoders are used; one is called a forward decoder (FD), and is used to search the tree from the forward direction; while the other is called a backward decoder (BD), and is used for the backward search of the tree. Forward decoding and backward decoding are performed simultaneously, and stop whenever the FD and BD merge at a common encoder state somewhere in the tree. The relationships between backward coding and forward coding are examined in detail. Good rate 1/2 convolutional codes, with memory m ranging from 2 to 25, suitable for bidirectional decoding found through extensive computer search, are provided. These codes possess the same distance properties from both forward and backward directions. It is found, by means of extensive computer simulations as well as a heuristic argument, that the advantage of the BSD appears as a substantial decrease of the computational variability of the sequential decoding. Our findings suggest that the Pareto exponent of unidirectional sequential decoding (USD) can be practically doubled by using BSD.

Journal ArticleDOI
TL;DR: Decoding methods for error patterns of bounded weight are described, and it is demonstrated that these methods offer a favorable combination of performance and complexity.
Abstract: We discuss minimum distance decoding of convolutional codes. The relevant distance functions are defined, and the set of correctable error patterns is described by a sequence of weight constraints. Decoding methods for error patterns of bounded weight are described, and it is demonstrated that these methods offer a favorable combination of performance and complexity. Exact values and upper bounds on the error probability are calculated from finite state models of the decoding process.

Patent
Mitsuyoshi Suzuki1
27 Apr 1993
TL;DR: An image coding/decoding apparatus intended for efficient processing by sharing members in coding, local decoding, and decoding processing is described in this article, where either of the two functions can be selected for execution in synchronization with the processing timing for each block.
Abstract: An image coding/decoding apparatus intended for efficient processing by sharing members in coding, local decoding, and decoding processing. Processes such as DCT and IDCT, zigzag scan conversion and inversion, or quantization and inverse quantization performed in coding, local decoding, and decoding processing are similar to each other. DCT/IDCT, zigzag scan conversion/inversion, and quantization/inverse quantization are provided where either of the two functions can be selected for execution in synchronization with the processing timing for each block. Since the time required for one process for data for each block is very short, overall processing is not affected even if the members are used in a time division manner. By sharing processing, the hardware scale can be made small and by using a data bus in a time division manner, an external data bus can also be eliminated.

Journal ArticleDOI
TL;DR: A new encoding/decoding technique for Hamming codes, based on generalised array codes (GACs), is proposed, which allows the design of Hamming codes with minimal trellises.
Abstract: A new encoding/decoding technique for Hamming codes, based on generalised array codes (GACs) is proposed. The proposed technique allows the design of Hamming codes with minimal trellises. An example is given for the (7, 4, 3) Hamming code, but the proposed technique is applicable to all existing Hamming codes. The trellis structure of such codes provides low complexity soft maximum likelihood decoding.
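For the (7, 4, 3) Hamming code mentioned here, even the classical syndrome decoder is tiny; the sketch below uses the arrangement in which the syndrome directly reads out the error position (a standard construction, independent of the GAC trellis design in the paper):

```python
# Parity-check matrix H whose i-th column (1-indexed) is binary i,
# so the syndrome of a single-bit error spells out its position.
H = [[(i >> k) & 1 for i in range(1, 8)] for k in range(3)]

def encode(d):
    """Place 4 data bits at positions 3, 5, 6, 7; fill parities 1, 2, 4."""
    w = [0] * 7
    w[2], w[4], w[5], w[6] = d
    w[0] = w[2] ^ w[4] ^ w[6]      # parity 1 covers positions 1, 3, 5, 7
    w[1] = w[2] ^ w[5] ^ w[6]      # parity 2 covers positions 2, 3, 6, 7
    w[3] = w[4] ^ w[5] ^ w[6]      # parity 4 covers positions 4, 5, 6, 7
    return w

def correct(word):
    s = [sum(h & w for h, w in zip(row, word)) % 2 for row in H]
    pos = s[0] + 2 * s[1] + 4 * s[2]         # 0 means no single-bit error
    w = list(word)
    if pos:
        w[pos - 1] ^= 1
    return w
```

The minimal-trellis construction in the paper replaces this hard-decision table logic with a soft-decision maximum-likelihood search of comparable complexity.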

Journal ArticleDOI
TL;DR: It is shown for coded 8-PSK that additional coding gains up to 2 dB are achieved by these means at the cost of modest complexity, but the decoding delay is increased.
Abstract: Multilevel codes give an impressive asymptotic coding gain measured in dB. However, when classical multistage decoding is used, this gain can only be partly realized at bit error rates (BER) around 10^-5. It is shown how multistage decoding can be improved for this BER range. We use interleaving between the levels, pass reliability information between the different stages by applying "soft-output" decoders, and apply re-iterated decoding. In the latter case, lower levels are decoded again using the results of the higher levels. It is shown for coded 8-PSK that additional coding gains of up to 2 dB are achieved by these means at the cost of modest complexity, but the decoding delay is increased.

Journal ArticleDOI
TL;DR: A decoding algorithm is presented for the ternary (11, 6, 5) Golay code and relies on solving the Newton identities associated with the syndromes of the code.
Abstract: A decoding algorithm is presented for the ternary (11, 6, 5) Golay code. This algebraic decoding algorithm relies on solving the Newton identities associated with the syndromes of the code. However, to complete the decoding, some explicit values of Zech's logarithms need to be established.

Journal ArticleDOI
William E. Ryan, P. Conoval1
TL;DR: A practical definition of an erasure is presented and a technique is presented for evaluating the performance of Reed-Solomon codes for both types of decoding on an interleaved burst error channel such as is seen in digital magnetic tape recording.
Abstract: A practical definition of an erasure is presented and leads to new expressions for correct and incorrect decoding probabilities for Reed-Solomon codes, assuming an incomplete decoder. The main benefit of this approach is that, in contrast with the usual approach, one is able to analytically demonstrate the performance improvement provided by errors-and-erasures decoding relative to errors-only decoding. A technique is presented for evaluating the performance of Reed-Solomon codes for both types of decoding on an interleaved burst error channel such as is seen in digital magnetic tape recording. Several illustrative examples are included.
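For a bounded-distance decoder on a memoryless symbol channel, the comparison described here reduces to summing a multinomial over the decodable (error, erasure) region. A sketch under those assumptions (the channel model and parameters are illustrative, not the paper's interleaved tape-recording channel):

```python
from math import comb

def p_decodable(n, d, p_err, p_ers, errors_only=False):
    """P(success) for bounded-distance decoding of a length-n RS code with
    minimum distance d: success iff 2e + f <= d - 1 (errors-and-erasures),
    or 2(e + f) <= d - 1 when erased symbols must be handled as errors."""
    p_ok = 1.0 - p_err - p_ers
    total = 0.0
    for e in range(n + 1):                   # e symbol errors
        for f in range(n - e + 1):           # f symbol erasures
            bound = 2 * (e + f) if errors_only else 2 * e + f
            if bound <= d - 1:
                total += (comb(n, e) * comb(n - e, f)
                          * p_err ** e * p_ers ** f * p_ok ** (n - e - f))
    return total
```

The errors-only region is contained in the errors-and-erasures region, which is what makes the improvement provable analytically, echoing the abstract's point.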

Proceedings ArticleDOI
29 Nov 1993
TL;DR: A new coding design with large coding gains and complexity-reduced decoders is proposed, based on a multilevel coding construction with rate-compatible punctured convolutional codes as building blocks and a new sub-optimal decoding where only one decoder using Viterbi algorithm is drafted.
Abstract: A new coding design with large coding gains and complexity-reduced decoders is proposed. The coding strategy is based on a multilevel coding construction with rate-compatible punctured convolutional codes as building blocks. A new sub-optimal decoding is proposed where only one decoder using the Viterbi algorithm is drafted. Typical example codes for quadrature amplitude shift keying (QASK) constellations are constructed and shown to outperform Ungerboeck's codes, but with higher decoding delay. Computer simulation is also performed to verify the results.

Proceedings ArticleDOI
23 May 1993
TL;DR: Based on a feedback decoding scheme, trellis codes of large constraint length are proposed which can be easily decoded and attain additional coding gain over conventional trellis codes at similar decoder complexity, but at a larger decoding delay.
Abstract: It has previously been shown that with trellis-coded modulation it is possible to achieve a significant coding gain of 3 to 6 dB over uncoded modulation without bandwidth expansion. However, the decoding effort grows rapidly with the coding gain. A different approach is given by the author. Based on a feedback decoding scheme, trellis codes of large constraint length are proposed which can be easily decoded. At similar decoder complexity, but at a larger decoding delay, they attain additional coding gain over conventional trellis codes.

Proceedings ArticleDOI
20 Oct 1993
TL;DR: The authors present a new systematic approach to parameter selection and apply this approach to the design optimization of a decoding system for a concatenated coding scheme that is better than that of the well known standard code with 64 states for moderate BER at equivalent implementation cost.
Abstract: Due to the advances in VLSI technology complete digital communication systems can today be implemented on single application specific VLSI circuits. The optimum choice of implementation parameters, such as signal wordlengths, is a critical design task since poor parameter choices can lead to costly designs. On the other hand, the high number of parameters to be selected span a large search space that is very difficult to handle. The authors present a new systematic approach to parameter selection and apply this approach to the design optimization of a decoding system for a concatenated coding scheme. Two convolutional codes are concatenated and both are decoded by soft decision decoding. This is facilitated by means of soft output decoding of the inner code. The performance of the scheme is better than that of the well known standard code with 64 states for moderate BER at equivalent implementation cost. The proposed coding scheme is thus an attractive alternative whenever high bit error rate performance is a prerequisite, e.g. for digital HDTV transmission.

Journal ArticleDOI
TL;DR: In this paper, a method for determining the effective number of neighbours for staged decoding of block coded modulation, for both coded 8-PSK and 16-QAM, was presented.
Abstract: A method is presented for determining the effective number of neighbours for staged decoding of block coded modulation, for both coded 8-PSK and 16-QAM. The results are compared with ML decoding for 8-PSK, and it is shown that if the second-row code has Hamming distance greater than 2, the number of neighbours is greater in staged decoding by a large factor, causing a degradation in coding gain approaching 1 dB.

Proceedings ArticleDOI
R. Cox1, C.-E. Sundberg1
18 May 1993
TL;DR: Three robust adaptive stopping rules are constructed and evaluated in blockwise transmission to save the overhead of a known tail, and a comparison to previously known algorithms is presented.
Abstract: These algorithms are used in blockwise transmission to save the overhead of a known tail. The basic ideas are: (1) continue conventional seamless continuous Viterbi decoding beyond the block boundary by recording and repeating the received block of (soft) symbols; (2) start the decoding process in all states; and (3) end the decoding process either adaptively or with a fixed length. Three robust adaptive stopping rules are constructed and evaluated. Simulation results and a comparison to previously known algorithms are presented.

Patent
20 Sep 1993
TL;DR: A coding method and a decoding method and device are proposed that variable-length code and decode quantized data more efficiently than before.
Abstract: PURPOSE: To realize coding/decoding methods capable of variable-length coding and decoding quantized data more efficiently than heretofore, and to realize a decoding method and device for use with these coding/decoding methods and devices. CONSTITUTION: Input data is variable-length coded using a coding table selected, on the basis of coding efficiency, from among a plurality of prepared variable-length coding tables 23C and 23D. This improves variable-length coding efficiency compared with fixing a single variable-length coding table. Thus, for the same generated information quantity as with a single fixed table, data quantized with a smaller quantization step size can be processed, so the quality of the information transmitted as coded data is improved. COPYRIGHT: (C)1994,JPO&Japio

Proceedings ArticleDOI
17 Jan 1993
TL;DR: This study deals with UEP-BCM based on RS and BCH codes, and proposes a multilevel coded modulation scheme using an unequal error protection code which can be considered as an extension of the IH-scheme.
Abstract: Recently, various schemes of coding and modulation have been proposed as efficient methods to improve the performance of digital communication systems. One interesting approach was presented by Imai and Hirakawa in the early stage of research on coded modulation. In their scheme (the IH-scheme), component codes having different error protection capabilities are employed with stepwise (multistage) decoding. The component codes are designed on the basis of Hamming distance, but nonuniformity is introduced by letting their respective error protection capabilities differ. We propose a multilevel coded modulation scheme using an unequal error protection (UEP) code, which can be considered as an extension of the IH-scheme. Instead of using several error-correcting codes as in the IH-scheme, we use a block or trellis code which has unequal error protection capability. To obtain a large coding gain from the UEP code, the codeword is mapped into channel symbols by using finite memories in the scheme. Figure 1 shows the encoding and decoding block diagram of 3-level coding (i.e., 8-PSK etc.). In the figure, each 'I' is a memory unit, that is, a shift register of length n/3, where n is the code length of the block code C. BCM based on the same structure for two levels has been proposed by M. C. Lin [1], who calls it block coded modulation with interblock memory. We studied this structure from the viewpoint of applying unequal error protection codes. We describe the minimum squared Euclidean distances of UEP-BCM and UEP-TCM by using the separations, i.e., the measures of the error protection capability of a UEP code. Our scheme can be considered as a generalization of Lin's scheme. Although the error performance of UEP-BCM is described from the viewpoint of UEP, ordinary error-correcting codes having uniform error protection capability can also be applied to UEP-BCM.
UEP-BCM provides an attractive coded scheme when it can be implemented easily by using algebraic decoding. This study deals with UEP-BCM based on RS and BCH codes.

Proceedings ArticleDOI
17 Jan 1993
TL;DR: The encoded bits in 1/2-rate coding are generated at twice the input information rate, but it is possible to find an encoding operation that relates blocks of input information bits to equal-length blocks of encoded bits.
Abstract: Table look-up based decoding schemes for convolutionally encoded data have been designed for both block coding [5] and convolutional coding [6, 2]. However, in the latter case most of the work was done for systematic codes. Nonsystematic codes offer better error-correcting capability than systematic codes if more than one constraint length of received blocks is considered [7]. Our approach to table look-up based decoding of nonsystematic convolutional codes was introduced in [1, 3]. A 1/2-rate convolutional encoder is characterized by two generator sequences g^(j) = (g_0^(j), g_1^(j), ..., g_v^(j)), j = 1, 2, where v is the constraint length of the code, i.e., the number of memory elements in a minimal realization of the convolutional code [4]. The output constraint length is defined as 2(v + 1) and is equal to the number of encoded bits affected by a single input information bit [7]. An input information sequence u is encoded into two encoded output sequences v^(j), j = 1, 2, using v = uG, where v = (v_0^(1), v_0^(2), v_1^(1), v_1^(2), ...) is the composite encoded sequence, also called a codeword, obtained by multiplexing the two encoded sequences, and G is the semi-infinite code generator matrix [?]. The encoded bits in 1/2-rate coding are generated at twice the input information rate. However, it is possible to find an encoding operation that relates blocks of input information bits to equal-length blocks of encoded bits [1, 3]. Proposition: For 1/2-rate convolutional coding with constraint length v, there exists a correspondence between equal-length blocks of input information bits and encoded bits; the length of these corresponding blocks is 2v bits. (For proof, see [1, 3].) We can formalize this relationship as follows. Let [u]_{i, i+2v-1} = (u_i, u_{i+1}, ..., u_{i+2v-1}) be a 2v-bit block of the input information sequence, and let [v]_{2i, 2i+2v-1} = (v_{2i}, v_{2i+1}, ..., v_{2i+2v-1}) be the 2v-bit block of the corresponding encoded sequence.
Given the generator sequences g^(j), j = 1, 2, for a 1/2-rate convolutional code with constraint length v, we define the reduced encoding matrix as
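The encoding operation v = uG described above is ordinary binary convolution with the two generator sequences, with the two output streams multiplexed. A minimal sketch (using the common example generators g(1) = (1,1,1) and g(2) = (1,0,1), our assumption for illustration, not necessarily the paper's code):

```python
g1, g2 = (1, 1, 1), (1, 0, 1)     # example generator sequences, v = 2

def encode(u):
    """v_i^(j) = XOR over k of g_k^(j) * u_(i-k); outputs multiplexed."""
    out = []
    for i in range(len(u)):
        v1 = v2 = 0
        for k in range(len(g1)):
            if i - k >= 0:        # convolution with zero initial state
                v1 ^= g1[k] & u[i - k]
                v2 ^= g2[k] & u[i - k]
        out += [v1, v2]
    return out
```

A table look-up decoder as described in the paper would tabulate the correspondence, established by the proposition, between 2v-bit input blocks and 2v-bit encoded blocks.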


Proceedings ArticleDOI
29 Nov 1993
TL;DR: A list output VA is defined using the reliability information of the SOVA to generate a list-SOVA that has a lower complexity than the LVA for a long list size, and a low-complexity SOVA that forms a short list output of the LVA-calculated output symbol reliability values, the soft-LVA.
Abstract: Improvements in the performance of a concatenated coding system that uses the Viterbi algorithm (VA) (inner decoder) can be obtained when an indicator of the reliability of the VA decision is delivered to the outer stage of processing. Two different approaches are considered. (1) The VA is extended with a soft output (SOVA) unit that calculates reliability values for the decoded output information symbols. (2) The VA provides a list of the L best estimates of the transmitted data sequence, the list Viterbi decoding algorithm (LVA). We define a list output VA using the reliability information of the SOVA to generate a list-SOVA that has a lower complexity than the LVA for a long list size. We also introduce a low-complexity SOVA that forms a short list output of the LVA-calculated output symbol reliability values, the soft-LVA. A new implementation of the iterative serial version of the LVA is also presented.

Proceedings ArticleDOI
14 Sep 1993
TL;DR: This method can provide a sorted set of survivor metrics during M-algorithm decoding of binary linear block codes using only a linear number of comparisons in the worst case, a significant improvement over comparison-based sorting, which requires O(M log2 M) comparisons.
Abstract: We introduce a very efficient (linear-time) method for finding the best metrics from among a number of contenders during breadth-first decoding of block or convolutional codes. This method can provide a sorted set of survivor metrics during M-algorithm decoding of binary linear block codes using only a linear number of comparisons in the worst case. This is a significant improvement over comparison-based sorting, which requires O(M log2 M) comparisons. A similar improvement is attained over Hadian and Sobel's comparison-based selection method (where the best metrics are found without regard to order). The method is also readily applicable to the decoding of convolutional codes.
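The complexity gap the authors exploit is the classic one between selection and sorting. A generic quickselect sketch (average-case linear time; this is not the authors' metric-specific method, which achieves its bound in the worst case):

```python
import random

def partition(a, lo, hi):
    """Lomuto partition around a random pivot; returns the pivot's index."""
    p = random.randrange(lo, hi + 1)
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def best_m(metrics, M):
    """Return the M smallest metrics (unordered) in O(n) average time,
    versus O(n log n) for sorting the whole contender list."""
    a = list(metrics)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p == M - 1:
            break
        if p < M - 1:
            lo = p + 1          # the M-th smallest lies to the right
        else:
            hi = p - 1          # it lies to the left
    return a[:M]
```

In an M-algorithm survivor-pruning step, only membership in the best-M set matters, so full sorting of the extended path list is unnecessary.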