
Showing papers on "Sequential decoding published in 1997"


Proceedings ArticleDOI
04 May 1997
TL;DR: In this article, the authors presented randomized constructions of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity.
Abstract: We present randomized constructions of linear-time encodable and decodable codes that can transmit over lossy channels at rates extremely close to capacity. The encoding and decoding algorithms for these codes have fast and simple software implementations. Partial implementations of our algorithms are faster by orders of magnitude than the best software implementations of any previous algorithm for this problem. We expect these codes will be extremely useful for applications such as real-time audio and video transmission over the Internet, where lossy channels are common and fast decoding is a requirement. Despite the simplicity of the algorithms, their design and analysis are mathematically intricate. The design requires the careful choice of a random irregular bipartite graph, where the structure of the irregular graph is extremely important. We model the progress of the decoding algorithm by a set of differential equations. The solution to these equations can then be expressed as polynomials in one variable with coefficients determined by the graph structure. Based on these polynomials, we design a graph structure that guarantees successful decoding with high probability.
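The polynomial condition the abstract alludes to can be checked numerically. A minimal sketch, assuming the standard edge-degree-distribution formulation for iterative erasure decoding (the condition δ·λ(1−ρ(1−x)) < x for all x in (0, δ]); the (3,6)-regular distributions below are illustrative, not the paper's optimized irregular ones:

```python
# Sketch: successful-decoding condition for iterative erasure decoding on a
# bipartite graph, expressed through edge-degree polynomials lam and rho.
# Coefficients are illustrative assumptions, not taken from the paper.

def poly(coeffs, x):
    """Evaluate sum_i coeffs[i] * x**(i+1) (edge-perspective degree polynomial)."""
    return sum(c * x ** (i + 1) for i, c in enumerate(coeffs))

def decodes(lam, rho, delta, steps=1000):
    """Check delta * lam(1 - rho(1 - x)) < x on a grid of x in (0, delta]."""
    for k in range(1, steps + 1):
        x = delta * k / steps
        if delta * poly(lam, 1 - poly(rho, 1 - x)) >= x:
            return False
    return True

# Regular (3,6) graph: lam(x) = x^2, rho(x) = x^5 (edge perspective).
lam = [0.0, 1.0]
rho = [0.0, 0.0, 0.0, 0.0, 1.0]
print(decodes(lam, rho, 0.40))  # True: below the (3,6) erasure threshold (~0.429)
print(decodes(lam, rho, 0.45))  # False: above it, decoding stalls
```

Optimizing the coefficients of `lam` and `rho` subject to this condition is what yields the capacity-approaching irregular graphs.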

872 citations


Journal ArticleDOI
TL;DR: This letter describes the SISO APP module that updates the APPs corresponding to the input and output bits of a code, and shows how to embed it into an iterative decoder for a new hybrid concatenation of three codes, to fully exploit the benefits of the proposed SISO APP module.
Abstract: Concatenated coding schemes consist of the combination of two or more simple constituent encoders and interleavers. The parallel concatenation known as "turbo code" has been shown to yield remarkable coding gains close to theoretical limits, yet admitting a relatively simple iterative decoding technique. The recently proposed serial concatenation of interleaved codes may offer superior performance to that of turbo codes. In both coding schemes, the core of the iterative decoding structure is a soft-input soft-output (SISO) a posteriori probability (APP) module. In this letter, we describe the SISO APP module that updates the APPs corresponding to the input and output bits of a code, and show how to embed it into an iterative decoder for a new hybrid concatenation of three codes, to fully exploit the benefits of the proposed SISO APP module.

609 citations


Patent
14 Apr 1997
TL;DR: In this paper, a parallel concatenated convolutional coding scheme utilizes tail-biting nonrecursive systematic convolutional codes; the associated decoder iteratively uses circular maximum a posteriori decoding to produce hard and soft decision outputs for short messages.
Abstract: A parallel concatenated convolutional coding scheme utilizes tail-biting nonrecursive systematic convolutional codes. The associated decoder iteratively utilizes circular maximum a posteriori decoding to produce hard and soft decision outputs. This encoding/decoding system results in improved error-correction performance for short messages.

192 citations


Proceedings ArticleDOI
07 Jul 1997
TL;DR: A stack decoding algorithm is described and the hypothesis scoring method and the heuristics used in the algorithm are presented, and a simplified model to moderate the sparse data problem and to speed up the decoding process is introduced.
Abstract: Decoding algorithm is a crucial part in statistical machine translation. We describe a stack decoding algorithm in this paper. We present the hypothesis scoring method and the heuristics used in our algorithm. We report several techniques deployed to improve the performance of the decoder. We also introduce a simplified model to moderate the sparse data problem and to speed up the decoding process. We evaluate and compare these techniques/models in our statistical machine translation system.
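The stack search described above can be sketched as a best-first search over partial hypotheses kept on a priority queue. The toy phrase options and log-probability scores below are illustrative assumptions, not the paper's translation model or heuristics:

```python
# Sketch: stack (best-first) decoding. Partial hypotheses are expanded in
# order of accumulated score; with nonnegative costs (negated log-probs),
# the first completed hypothesis popped is the best one.
import heapq

def stack_decode(src, options):
    """options[word] -> list of (translation, logprob). Returns (hyp, logprob)."""
    heap = [(0.0, 0, ())]          # (negated score, words covered, partial hyp)
    while heap:
        neg, n, hyp = heapq.heappop(heap)
        if n == len(src):          # all source words covered: done
            return list(hyp), -neg
        for trans, lp in options[src[n]]:
            heapq.heappush(heap, (neg - lp, n + 1, hyp + (trans,)))

# Hypothetical phrase table for illustration only.
options = {
    "la": [("the", -0.1), ("it", -1.2)],
    "casa": [("house", -0.2), ("home", -0.9)],
}
hyp, score = stack_decode(["la", "casa"], options)
print(hyp)  # ['the', 'house']
```

A real decoder would add reordering, beam pruning, and a future-cost heuristic as the paper discusses; this shows only the stack mechanism itself.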

168 citations


Journal ArticleDOI
TL;DR: An adaptive decoding algorithm for convolutional codes, a modification of the Viterbi algorithm (VA), which yields nearly the same error performance as the VA while requiring a substantially smaller average number of computations.
Abstract: In this paper, an adaptive decoding algorithm for convolutional codes, which is a modification of the Viterbi algorithm (VA), is presented. For a given code, the proposed algorithm yields nearly the same error performance as the VA while requiring a substantially smaller average number of computations. Unlike most of the other suboptimum algorithms, this algorithm is self-synchronizing. If the transmitted path is discarded, the adaptive Viterbi algorithm (AVA) can recover the state corresponding to the transmitted path after a few trellis depths. Using computer simulations over hard and soft 3-bit quantized additive white Gaussian noise channels, it is shown that codes with a constraint length K up to 11 can be used to improve the bit-error performance over the VA with K=7 while maintaining a similar average number of computations. Although a small variability of the computational effort is present with our algorithm, this variability is exponentially distributed, leading to a modest size of the input buffer and, hence, a small probability of overflow.
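The adaptive idea can be sketched in miniature: run Viterbi but keep only survivors whose path metric is within a threshold T of the current best. The rate-1/2, K=3 code with generators (7,5) octal, the hard-decision Hamming metric, and T=2 are illustrative assumptions; the paper works with soft metrics and much larger constraint lengths:

```python
# Sketch: Viterbi decoding with adaptive survivor pruning.
G = (0b111, 0b101)   # generators of a rate-1/2, K=3 convolutional code

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") & 1 for g in G]   # parity per generator
        state = reg >> 1
    return out

def adaptive_viterbi(rx, n_bits, T=2):
    surv = {0: (0, [])}                     # state -> (metric, decoded bits)
    for t in range(n_bits):
        nxt = {}
        for state, (m, path) in surv.items():
            for b in (0, 1):
                reg = (b << 2) | state
                exp = [bin(reg & g).count("1") & 1 for g in G]
                dm = sum(e != r for e, r in zip(exp, rx[2*t:2*t+2]))
                s2 = reg >> 1
                cand = (m + dm, path + [b])
                if s2 not in nxt or cand[0] < nxt[s2][0]:
                    nxt[s2] = cand
        best = min(m for m, _ in nxt.values())
        surv = {s: v for s, v in nxt.items() if v[0] <= best + T}  # adaptive pruning
    return min(surv.values())[1]

msg = [1, 0, 1, 1, 0, 0]     # last two zeros flush the encoder
rx = encode(msg)
rx[3] ^= 1                    # one channel bit error
print(adaptive_viterbi(rx, len(msg)) == msg)  # True: error corrected
```

Only states near the best metric survive each step, which is where the reduced average computation comes from; T trades complexity against how close to VA performance one gets.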

103 citations


Journal ArticleDOI
TL;DR: Efficient code-search maximum-likelihood decoding algorithms, based on reliability information, are presented for binary linear block codes, applicable to codes of relatively large size.
Abstract: Efficient code-search maximum-likelihood decoding algorithms, based on reliability information, are presented for binary linear block codes. The codewords examined are obtained via encoding. The information set utilized for encoding comprises the positions of those columns of a generator matrix G of the code which, for a given received sequence, constitute the most reliable basis for the column space of G. Substantially reduced computational complexity of decoding is achieved by exploiting the ordering of the positions within this information set. The search procedures do not require memory; the codeword to be examined is constructed from the previously examined codeword according to a fixed rule. Consequently, the search algorithms are applicable to codes of relatively large size. They are also conveniently modifiable to achieve efficient nearly optimum decoding of particularly large codes.
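Finding the "most reliable basis" can be sketched as follows: sort positions by reliability |LLR|, then greedily keep the most reliable positions whose columns of G are linearly independent over GF(2). The tiny 3x6 generator matrix and LLR values below are illustrative assumptions:

```python
# Sketch: selecting the most reliable information set (basis of G's column
# space) via Gaussian elimination over GF(2), columns packed as ints.
def most_reliable_basis(cols, llr, k):
    """cols[i] = column i of G as an int (bit j = row j); returns k positions."""
    order = sorted(range(len(llr)), key=lambda i: -abs(llr[i]))
    pivots, chosen = {}, []
    for i in order:
        v = cols[i]
        for bit in reversed(range(max(cols).bit_length())):
            if not (v >> bit) & 1:
                continue
            if bit in pivots:
                v ^= pivots[bit]        # reduce against existing pivot
            else:
                pivots[bit] = v         # new pivot: column is independent
                chosen.append(i)
                break
        if len(chosen) == k:
            break
    return sorted(chosen)

cols = [1, 2, 4, 3, 5, 6]               # columns of an illustrative 3x6 G
llr = [0.1, -0.3, 2.0, -1.5, 0.9, 1.2]  # hypothetical channel reliabilities
print(most_reliable_basis(cols, llr, 3))  # [2, 3, 5]
```

Re-encoding from these k reliable positions produces the candidate codewords the search procedure then steps through.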

88 citations


Journal Article
TL;DR: This work presents rational rate k/n punctured convolutional codes with good performance that improve the free distance and/or weight spectra over previously reported codes with the same parameters.
Abstract: We present rational rate k/n punctured convolutional codes (n up to 8; k = 1, ..., n-1; and constraint length ν up to 8) with good performance. Many of these codes improve the free distance and/or weight spectra over previously reported codes with the same parameters. The tabulated codes are found by an exhaustive (or a random) search.

66 citations


Journal ArticleDOI
TL;DR: For a synchronous system, the coded performance of the projection receiver metric is shown to be superior to the decorrelator even though they are equally complex, and the theoretical degradation relative to the single user bound is derived.
Abstract: We consider a CDMA system with error-control coding. Optimal joint decoding is prohibitively complex. Instead, we propose a sequential approach for handling multiple-access interference and error-control decoding. Error-control decoding is implemented via single-user soft-input decoders utilizing metrics generated by linear algebraic multiuser metric generators. The decorrelator, and a new scheme termed the projection receiver, are utilized as metric generators. For a synchronous system, the coded performance of the projection receiver metric is shown to be superior to the decorrelator even though they are equally complex. Also, the theoretical degradation relative to the single user bound is derived.

63 citations


Journal ArticleDOI
TL;DR: The authors show that the Shannon capacity limit for the additive white Gaussian noise (AWGN) channel can be approached within 0.27 dB at a bit error rate (BER) of 10^-5 by applying long but simple Hamming codes as component codes to an iterative turbo-decoding scheme.
Abstract: The authors show that the Shannon capacity limit for the additive white Gaussian noise (AWGN) channel can be approached within 0.27 dB at a bit error rate (BER) of 10^-5 by applying long but simple Hamming codes as component codes to an iterative turbo-decoding scheme. In general, the complexity of soft-in/soft-out decoding of binary block codes is rather high. However, the application of a neurocomputer in combination with a parallelization of the decoding rule facilitates an implementation of the decoding algorithm in the logarithmic domain which requires only matrix additions and multiplications. But the storage requirement might still be quite high depending on the interleavers used.
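The hard-decision core of a Hamming component code is simple to illustrate. A minimal sketch of Hamming(7,4) syndrome decoding, which corrects any single bit error (the paper's component decoders are soft-in/soft-out; this shows only the underlying algebraic step):

```python
# Sketch: Hamming(7,4) syndrome decoding. With the parity-check columns
# chosen as the binary numbers 1..7, the syndrome directly names the
# errored position (1-indexed); syndrome 0 means no correctable error.
H_cols = [i + 1 for i in range(7)]

def syndrome_decode(word):
    s = 0
    for bit, col in zip(word, H_cols):
        if bit:
            s ^= col
    if s:                       # nonzero syndrome: flip the indicated bit
        word = word[:]
        word[s - 1] ^= 1
    return word

codeword = [1, 1, 1, 0, 0, 0, 0]   # columns 1^2^3 = 0, so this is valid
rx = codeword[:]
rx[4] ^= 1                          # single channel error
print(syndrome_decode(rx) == codeword)  # True
```

Because every Hamming code is this cheap to decode algebraically, long high-rate Hamming codes remain attractive components even inside an iterative scheme.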

60 citations


Journal ArticleDOI
29 Jun 1997
TL;DR: A detailed analysis of generalized minimum distance (GMD) decoding algorithms for Euclidean space codes is presented, and it is proved that although the decoding regions are polyhedral, they are essentially always nonconvex.
Abstract: We present a detailed analysis of generalized minimum distance (GMD) decoding algorithms for Euclidean space codes. In particular, we completely characterize GMD decoding regions in terms of receiver front-end properties. This characterization is used to show that GMD decoding regions have intricate geometry. We prove that although these decoding regions are polyhedral, they are essentially always nonconvex. We furthermore show that conventional performance parameters, such as error-correction radius and effective error coefficient, do not capture the essential geometric features of a GMD decoding region, and thus do not provide a meaningful measure of performance. As an alternative, probabilistic estimates of, and upper bounds upon, the performance of GMD decoding are developed. Furthermore, extensive simulation results, for both low-dimensional and high-dimensional sphere-packings, are presented. These simulations show that multilevel codes in conjunction with multistage GMD decoding provide significant coding gains at a very low complexity. Simulated performance, in both cases, is in remarkably close agreement with our probabilistic approximations.

57 citations


Journal ArticleDOI
TL;DR: A new soft-decision maximum-likelihood decoding algorithm is proposed, which generates a set of candidate codewords using hard-decision bounded-distance decoding; the decoding time complexity is reduced without degradation of the performance.
Abstract: A new soft-decision maximum-likelihood decoding algorithm is proposed, which generates a set of candidate codewords using hard-decision bounded-distance decoding. By improving the generating method of input vectors for the bounded-distance decoding due to Kaneko et al. (see ibid., vol.40, no.3, p.320-27, 1994), the decoding time complexity is reduced without degradation of the performance. The space complexity is dependent on the bounded-distance decoding.
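The candidate-generation step common to this family of decoders (Chase/Kaneko-style) can be sketched: form test patterns by flipping every subset of the p least reliable positions of the hard decision, then hand each pattern to the bounded-distance decoder. The LLR values below are illustrative, and the bounded-distance decoder itself is omitted:

```python
# Sketch: generating input vectors for bounded-distance decoding by
# perturbing the least reliable hard-decision positions.
from itertools import combinations

def test_patterns(llr, p):
    hard = [1 if x < 0 else 0 for x in llr]                 # hard decisions
    weak = sorted(range(len(llr)), key=lambda i: abs(llr[i]))[:p]
    for r in range(p + 1):
        for subset in combinations(weak, r):
            pat = hard[:]
            for i in subset:
                pat[i] ^= 1                                  # flip weak bits
            yield pat

llr = [1.8, -0.2, 0.1, -2.5, 0.7]    # hypothetical channel reliabilities
pats = list(test_patterns(llr, 2))
print(len(pats))  # 4: the hard decision plus 3 perturbations
```

The proposed algorithm's improvement lies in choosing these input vectors more cleverly than exhaustive subsets, so that fewer bounded-distance decodings reach the ML codeword.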

Book ChapterDOI
TL;DR: The implementation of such complete reconstruction for convolutional codes of any rate is presented, for both noiseless and noisy communications, and yields a new characterization of convolutionAL codes along with some properties.
Abstract: Assuming that only the coded sequence produced by a convolutional encoder is available, is it possible to recover all the parameters defining this encoder? The implementation of such complete reconstruction for convolutional codes of any rate is presented in this paper, for both noiseless and noisy communications. Each time, several different solutions, i.e., alleged reconstructed encoders, are obtained, contrary to the expected uniqueness. Their study yields a new characterization of convolutional codes along with some properties.

Journal ArticleDOI
TL;DR: In this letter, the average maximum-likelihood performance of the three ways of co-decoding turbo codes is defined and evaluated.
Abstract: In this letter we define and evaluate the average maximum-likelihood performance of the three ways of co-decoding turbo codes. In all cases the information sequence is split into blocks of N bits (N being the length of the interleaver used by the turbo code), that are encoded by the first constituent encoder and, after interleaving, by the second encoder. In the first operation mode, both constituent encoders work in a continuous fashion, whereas in the second, at the end of each block, a suitably chosen sequence of bits is appended to the information block in order to terminate the trellises of both constituent codes. In the third mode, the operation is similar to the second, but, instead of trellis termination, both constituent encoders are simply reset.

Dissertation
29 May 1997
TL;DR: It has been found that the SOVA turbo code decoding algorithm, as described in the literature, did not perform as well as the published results and modifications to the decoding algorithm are suggested.
Abstract: Evaluation of soft output decoding for turbo codes is presented. Coding theory related to this research is studied, including convolutional encoding and Viterbi decoding. Recursive systematic convolutional (RSC) codes and nonuniform interleavers commonly used in turbo code encoder design are analyzed. Fundamentals such as reliability estimation, log-likelihood algebra, and soft channel outputs for soft output Viterbi algorithm (SOVA) turbo code decoding are examined. The modified Viterbi metric that incorporates a-priori information used for SOVA decoding is derived. A low memory implementation of the SOVA decoder is shown. The iterative SOVA turbo code decoding algorithm is described with illustrative examples. The performance of turbo codes is evaluated through computer simulation. It has been found that the SOVA turbo code decoding algorithm, as described in the literature, did not perform as well as the published results. Modifications to the decoding algorithm are suggested. The simulated turbo code performance results shown after these modifications more closely match current published research work.

Proceedings ArticleDOI
04 May 1997
TL;DR: For a coherent and a noncoherent RAKE-receiver structure the optimum symbol-by-symbol maximum a posteriori (MAP) decoding rules considering the a priori information about the systematic bits of the codewords if available are given.
Abstract: In the uplink of the CDMA IS-95(A) system decoding of a serial concatenated coding scheme has to be performed, where the inner code comprises M-ary orthogonal modulation with Hadamard (Walsh) spreading codes. For a coherent and a noncoherent RAKE-receiver structure we give the optimum symbol-by-symbol maximum a posteriori (MAP) decoding rules considering the a priori information about the systematic bits of the codewords if available. These algorithms are necessary for iterative decoding of the whole system. We present simulation results which demonstrate the gain of the modified receivers.

Proceedings ArticleDOI
08 Jun 1997
TL;DR: This work gives the optimum symbol-by-symbol maximum a posteriori (MAP) decoding rules for the inner block code in a coherent and noncoherent receiver design for a direct sequence CDMA system like in the uplink of the standard IS-95(A).
Abstract: Iterative decoding is applied to code division multiple access (CDMA) systems with two-stage serial concatenated channel coding. For a direct sequence (DS) CDMA system like in the uplink of the standard IS-95(A) using M-ary orthogonal modulation (inner code) and an outer convolutional code with interleaving we give the optimum symbol-by-symbol maximum a posteriori (MAP) decoding rules for the inner block code in a coherent and noncoherent receiver design. Further suggestions for suboptimum MAP decoders with lower complexity are made. The inner decoding rule is extended to use a priori information for the systematic bits, which is delivered from the outer decoding stage. These algorithms are essential for iterative decoding of the system. Simulation results show a decoding gain of about 0.6 dB for a bit error rate (BER) of 10^-3 by replacing the maximum likelihood (ML) decoder for the inner code by a MAP decoder. A total gain of about 1.2 dB is achieved by the use of iterative decoding after only three to five iterations.

Patent
23 Dec 1997
TL;DR: In this paper, a coding and decoding system using CRC check bits is described, in which symbol interleaving is performed after coding by an outer code of a concatenated code, and coding by an inner code is performed after CRC check bits are added.
Abstract: A coding and decoding system which uses CRC check bits is disclosed. When a coding apparatus performs coding, symbol interleaving is performed after coding by an outer code of a concatenated code, and coding by an inner code is performed after CRC check bits are added. Then, upon decoding by a decoding apparatus, error detection using the CRC check bits is performed after decoding of the inner code. After symbol deinterleaving is performed, decoding of the outer code by erasure decoding or error correction is performed depending upon the number of symbols included in a frame in which an error has been detected.
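The CRC step in such a scheme can be sketched: append check bits before inner coding, verify them after inner decoding, and flag failing frames as erasures for the outer decoder. CRC-32 via `zlib` is an illustrative choice here; the patent fixes no particular polynomial:

```python
# Sketch: CRC framing for a concatenated scheme. Frames that fail the check
# after inner decoding are marked as erasures for outer (e.g. Reed-Solomon)
# erasure decoding.
import zlib

def add_crc(frame: bytes) -> bytes:
    """Append a 4-byte CRC-32 tag to the frame before inner encoding."""
    return frame + zlib.crc32(frame).to_bytes(4, "big")

def check_crc(frame: bytes):
    """Return (payload, ok); ok=False means 'erase this frame' downstream."""
    payload, tag = frame[:-4], frame[-4:]
    return payload, zlib.crc32(payload).to_bytes(4, "big") == tag

tx = add_crc(b"outer-coded symbols")
payload, ok = check_crc(tx)               # clean frame passes
bad = bytes([tx[0] ^ 0xFF]) + tx[1:]      # simulate an inner-decoder error
_, ok2 = check_crc(bad)
print(ok, ok2)  # True False
```

Counting CRC failures per frame is what lets the outer decoder choose between erasure decoding and ordinary error correction, as the abstract describes.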

Proceedings ArticleDOI
Qiru Zhou1, Wu Chou
21 Apr 1997
TL;DR: An approach to continuous speech recognition based on a layered self-adjusting decoding graph that utilizes a scaffolding layer to support fast network expansion and releasing and introduces self- adjusting capability in dynamic decoding on a general re-entrant decoding network.
Abstract: In this paper, an approach to continuous speech recognition based on a layered self-adjusting decoding graph is described. It utilizes a scaffolding layer to support fast network expansion and releasing. A two level hashing structure is also described. It introduces self-adjusting capability in dynamic decoding on a general re-entrant decoding network. In stack decoding, the scaffolding layer in the proposed approach enables the decoder to look several layers into the future so that long span inter-word context dependency can be exactly preserved. Experimental results indicate that highly efficient decoding can be achieved with a significant savings on recognition resources.

Proceedings ArticleDOI
29 Jun 1997
TL;DR: Simulations show that two rate-2/3 users of a noiseless binary adder channel can be separated by a 4-state code.
Abstract: An iterative scheme for decoding asynchronous users on a multi-user adder channel is presented. In each iteration step, one user's a posteriori symbol probabilities are computed. The contribution of that user to the sum signal is partially cancelled such that the residual noise is minimized. Simulations show that two rate-2/3 users of a noiseless binary adder channel can be separated by a 4-state code.

Proceedings ArticleDOI
03 Nov 1997
TL;DR: This paper presents some results on the iterative decoding of the concatenated RS/convolutional coding scheme using Viterbi decoding for inner code and Reed-Solomon code for the outer code on the Gaussian channel using Monte Carlo simulation.
Abstract: The concatenated coding scheme using convolutional coding with Viterbi decoding for inner code and Reed-Solomon (RS) code for the outer code is an attractive scheme used in a wide range of digital transmission systems. In this paper, we present some results on the iterative decoding of this concatenated coding scheme. This iterative decoding algorithm combines the iterative decoding algorithm for a block turbo code and the iterative decoding algorithm for a convolutional turbo code. The iterative decoding of the concatenated RS/convolutional coding scheme is based on the soft decoding and the soft decision of the component codes. The performance of the iterative decoding of those concatenated codes has been evaluated on the Gaussian channel using Monte Carlo simulation. Coding gains up to 7.1 dB for a BER (bit error rate) of 10^-5 have been obtained on the Gaussian channel.

Journal ArticleDOI
TL;DR: A novel technique for the equalization of nonminimum phase channels that employs noncausal all-pass filters operating in reversed time and a two-pass decoding strategy is developed, leading to significant improvement in performance with little increase in computational cost.
Abstract: The Viterbi algorithm is the optimum method for detection of a data sequence in the presence of intersymbol interference and additive white Gaussian noise. Since its computational complexity is very large, several simplifications and alternative methods have been proposed, most of which are more effective when dealing with minimum phase channels. We present a novel technique for the equalization of nonminimum phase channels that employs noncausal all-pass filters operating in reversed time. The impulse response of the equalized channel approximates a minimum phase sequence with higher energy concentration at its left-hand end than at the right-hand end. The method can be modified to obtain a desired impulse response with few nonzero samples with only minor variations in noise level, providing significant complexity reduction in the Viterbi algorithm for detection. In addition, a two-pass decoding strategy is developed, leading to significant improvement in performance with little increase in computational cost. Simulation results are included to verify the advantages of the proposed techniques.

Journal ArticleDOI
TL;DR: This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing methodbased on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions.
Abstract: This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

Proceedings ArticleDOI
04 May 1997
TL;DR: This work shows that convolutional coding performs better than turbo coding when frame sizes are below a threshold, and compares the performance of turbo codes with Convolutional codes that incur similar decoding complexity.
Abstract: Parallel concatenated convolutional codes (turbo codes) have been shown to yield very good performance for long block sizes. We consider the applicability of such codes to practical mobile communication scenarios, where block sizes for speech or control functions are typically short. We compare the performance of turbo codes with convolutional codes that incur similar decoding complexity. For representative cases, we show that convolutional coding performs better than turbo coding when frame sizes are below a threshold.

01 Jan 1997
TL;DR: The conventional turbo decoding scheme proposed by Berrou et al. is revised, and alternatives such as maximum-likelihood decoding and iterative decoding using a Joint-MAP are investigated.
Abstract: Decoding with feedback, or iterative decoding, is an efficient, low-complexity means for "information handover" in any parallel or serial concatenated coding scheme and in related applications. Invented twenty years ago, it has recently received considerable attention in the context of "Turbo Codes". Iterative decoding has been justified intuitively, but many questions remain and perhaps will persist. In this paper, we revise the conventional turbo decoding scheme proposed by Berrou et al. and investigate alternatives such as maximum-likelihood decoding and iterative decoding using a "Joint-MAP". The alternatives promise slight performance improvements, but at considerable cost. This attests to the excellence of the original work.

Journal ArticleDOI
TL;DR: It is shown that the ordered decoding scheme combined with diagonal interleaving improves the performance of a product code with reasonably long code length for mobile data communications.
Abstract: We propose a new ordered decoding scheme for a product code for mobile data communications. The ordered decoding scheme determines the order of decoding for both row and column component codewords according to the probability of decoding the component codeword correctly. Component codewords are decoded independently. To randomize burst errors in both row and column codewords, a diagonal interleaving scheme is used for code symbols in the codeword. It is shown that the ordered decoding scheme combined with diagonal interleaving improves the performance of a product code with reasonably long code length for mobile data communications.

Patent
15 Sep 1997
TL;DR: In this paper, an improved coding and decoding process is disclosed to determine the ending state of convolutional decoding using the Viterbi algorithm without the use of tail bits; the parity bit part can be placed between the end of the most important information bit part and the end of the code burst.
Abstract: An improved coding and decoding process is disclosed to determine the ending state of a convolutional decoding using the Viterbi algorithm without the use of tail bits. The most important information bits (12) are first coded with a block code (14). The block code word (14) and the rest of the information bits (11) are then split into several bursts (B-Burst). Each burst is coded with a convolutional code. The most important information bit part of the block code word is then placed in the beginning of the code burst for the convolutional encoder. The parity bit part can be placed between the end of the most important information bit part and the end of the code burst. In the convolutional decoding process for each burst, the ending state is determined by detecting a valid block code word after combining the convolutional decoding processes for all the bursts. The average BER and missed detection rate can be optimized by moving the placement of the parity bit in each message code burst.

Proceedings ArticleDOI
26 Oct 1997
TL;DR: Experimental results for several channel signal-to-noise ratios show that turbo codes achieve much better performance with less decoding complexity than convolutional codes and that similar performance can be achieved at much lower channel SNRs.
Abstract: Compressed images transmitted over noisy channels are extremely sensitive to bit errors. This necessitates the application of error control channel coding to the compressed representation before transmission. This paper presents an image transmission system which takes advantage of the superior performance of turbo codes, an important new class of parallel concatenated codes. Several aspects of the application of turbo codes to image transmission are studied, including comparison to a previous image transmission system using convolutional codes. Experimental results for several channel signal-to-noise ratios show that, in the same SNR range, turbo codes achieve much better performance with less decoding complexity than convolutional codes and that similar performance can be achieved at much lower channel SNRs. Studies also show that the use of feedback from an outer Reed-Solomon code to aid turbo decoding results in further improvement.

Journal ArticleDOI
TL;DR: This work presents two construction methods for Hamming codes and uses techniques from combinatorial optimization to give decoding procedures for graphical codes which turn out to be considerably more efficient than the approach via majority logic decoding proposed by Bredeson and Hakimi.
Abstract: The set of all even subgraphs of a connected graph G on p vertices with q edges forms a binary linear code C = C_E(G) with parameters [q,q-p+1,g], where g is the girth of G. Such codes were studied systematically by Bredeson and Hakimi (1967) and Hakimi and Bredeson (1968) who were concerned with the problems of augmenting C to a larger [q,k,g]-code and of efficiently decoding such augmented graphical codes. We give a new approach to these problems by requiring the augmented codes to be graphical. On one hand, we present two construction methods which turn out to contain the methods proposed by Hakimi and Bredeson as special cases. As we show, this not only gives a better understanding of their construction, it also results in augmenting codes of larger dimension. We look at the case of 1-error-correcting graphical codes in some detail. In particular, we show how to obtain the extended Hamming codes as "purely" graphical codes by our approach. On the other hand, we follow a suggestion of Ntafos and Hakimi (1981) and use techniques from combinatorial optimization to give decoding procedures for graphical codes which turn out to be considerably more efficient than the approach via majority logic decoding proposed by Bredeson and Hakimi. We also consider the decoding problem for the even graphical code based on the complete graph K_2n in more detail: we discuss an efficient hardware implementation of an encoding/decoding scheme for these codes and show that things may be arranged in such a way that one can also correct all adjacent double errors. Finally, we discuss nonlinear graphical codes.

Proceedings ArticleDOI
29 Jun 1997
TL;DR: A scheme for optimum power allocation for the different bits of a turbo code is introduced and it is shown that by employing this scheme performance improvements in the range of 0.5-0.7 dB can be achieved without introducing any complexity or delay at the decoder.
Abstract: A scheme for optimum power allocation for the different bits of a turbo code is introduced. It is shown that by employing this scheme performance improvements in the range of 0.5-0.7 dB can be achieved without introducing any complexity or delay at the decoder.

Proceedings ArticleDOI
09 Jun 1997
TL;DR: Two schemes to reduce the power consumption of the register-exchange soft output Viterbi decoder (SOVD) for turbo code decoding are proposed; one employs the notion of the scarce state transition (SST), which changes the data representation during decoding.
Abstract: Turbo codes, which are new forward error-correcting codes (FEC), represent a prospective coding scheme for future wireless communication. In this paper, we propose two schemes to reduce the power consumption of the register-exchange soft output Viterbi decoder (SOVD) for turbo code decoding. The first scheme employs the notion of the scarce state transition (SST) which changes the data representation during decoding. The second scheme simply changes the bit representation in the survivor memory unit (SMU) of the decoder. Simulation results show that up to 70% reduction in bit transitions in the SMU can be achieved by both schemes when decoding a 16-state turbo code.