Proceedings ArticleDOI

A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain

Patrick Robertson, +2 more
Vol. 2, pp. 1009-1013
TLDR
A log-MAP algorithm is presented that avoids the approximations in the max-log-MAP algorithm and hence is equivalent to the true MAP, but without its major disadvantages, and it is concluded that the three algorithms increase in complexity in the order of their optimality.
Abstract
For estimating the states or outputs of a Markov process, the symbol-by-symbol MAP algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of nonlinear functions and a high number of additions and multiplications. MAP-like algorithms operating in the logarithmic domain presented in the past solve the numerical problem and reduce the computational complexity, but are suboptimal especially at low SNR (a common example is the max-log-MAP because of its use of the max function). A further simplification yields the soft-output Viterbi algorithm (SOVA). We present a log-MAP algorithm that avoids the approximations in the max-log-MAP algorithm and hence is equivalent to the true MAP, but without its major disadvantages. We compare the (log-)MAP, max-log-MAP and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we consider Turbo decoding, where recursive systematic convolutional component codes are decoded with the three algorithms, and we also demonstrate the practical suitability of the log-MAP by including quantization effects. The SOVA is, at a bit error rate of 10^-4, approximately 0.7 dB inferior to the (log-)MAP, with the max-log-MAP lying roughly in between. We also present some complexity comparisons and conclude that the three algorithms increase in complexity in the order of their optimality.
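
The device that makes this possible is the Jacobian logarithm, ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a - b|): the max-log-MAP keeps only the max term, while the log-MAP adds the correction term, in practice usually via a small lookup table. A minimal Python sketch of the idea (illustrative, not the paper's implementation):

    import math

    def max_star(a, b):
        # Jacobian logarithm: ln(e^a + e^b) computed exactly.
        # The log-MAP uses this wherever the max-log-MAP uses max();
        # in hardware the correction term is typically a small lookup table.
        return max(a, b) + math.log1p(math.exp(-abs(a - b)))

    def max_approx(a, b):
        # max-log-MAP: drops the correction term, hence the suboptimality.
        return max(a, b)

    print(max_star(1.0, 0.5))    # 1.4741... (exact)
    print(max_approx(1.0, 0.5))  # 1.0 (error is largest when a and b are close)

The correction term is bounded by ln 2, which is why the max-log-MAP's degradation is modest overall yet most visible at low SNR, where competing path metrics are frequently close.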


Citations
Journal ArticleDOI

Iterative decoding of binary block and convolutional codes

TL;DR: Using log-likelihood algebra, it is shown that any decoder can be used which accepts soft inputs, including a priori values, and delivers soft outputs; the soft output can be split into three terms: the soft channel and a priori inputs, and the extrinsic value.
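
In log-likelihood algebra these three terms are additive, so a component decoder's extrinsic value is simply its soft output minus the channel and a priori inputs. A one-function sketch (variable names are illustrative, not Hagenauer's notation):

    def extrinsic_value(L_out, Lc_y, L_a):
        # Soft output = soft channel input (Lc_y) + a priori input (L_a)
        # + extrinsic value; the extrinsic part is what one component
        # decoder passes to the other in iterative decoding.
        return L_out - Lc_y - L_a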
Journal ArticleDOI

Convergence behavior of iteratively decoded parallel concatenated codes

TL;DR: A code search based on the EXIT chart technique has been performed, yielding new recursive systematic convolutional constituent codes that exhibit turbo cliffs at lower signal-to-noise ratios than attainable with previously known constituent codes.
Journal ArticleDOI

Achieving near-capacity on a multiple-antenna channel

TL;DR: This work provides a simple method to iteratively detect and decode any linear space-time mapping combined with any channel code that can be decoded using so-called "soft" inputs and outputs, and shows that excellent performance at very high data rates can be attained with either of the channel codes considered.
Journal ArticleDOI

Serial concatenation of interleaved codes: performance analysis, design and iterative decoding

TL;DR: In this article, the authors derived upper bounds to the average maximum likelihood bit error probability of serially concatenated block and convolutional codes with interleaver, and derived design guidelines for the outer and inner encoders that maximize the interleaver gain and the asymptotic slope of the error probability curves.
Journal ArticleDOI

Reduced complexity iterative decoding of low-density parity check codes based on belief propagation

TL;DR: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed, greatly reducing the decoding complexity of belief propagation.
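
The canonical simplification of this kind is the min-sum check-node update: each outgoing message takes the minimum magnitude and the product of signs of the other incoming messages, replacing the transcendental functions of exact belief propagation. A generic sketch (representative of the technique, not necessarily the paper's exact formulation):

    import math

    def check_node_min_sum(llrs):
        # Min-sum approximation of the belief-propagation check-node update:
        # out[i] uses only the incoming LLRs other than llrs[i].
        out = []
        for i in range(len(llrs)):
            others = llrs[:i] + llrs[i + 1:]
            sign = 1.0
            for v in others:
                sign *= math.copysign(1.0, v)
            out.append(sign * min(abs(v) for v in others))
        return out

    print(check_node_min_sum([2.0, -0.5, 1.5]))  # [-0.5, 1.5, -0.5]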
References
Journal ArticleDOI

The Viterbi algorithm

TL;DR: This paper gives a tutorial exposition of the Viterbi algorithm and of how it is implemented and analyzed; increasing use of the algorithm in a widening variety of areas is foreseen.
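
At its core the algorithm is dynamic programming over a trellis, keeping one survivor path per state and tracing back at the end. A bare-bones sketch under assumed interfaces (branch_metric is a hypothetical callback, not from the paper):

    def viterbi(n_states, branch_metric, n_steps, start=0):
        # branch_metric(k, i, j): cost of the trellis branch i -> j at step k;
        # it should return float('inf') for branches that do not exist.
        INF = float("inf")
        metric = [INF] * n_states
        metric[start] = 0.0
        survivors = []
        for k in range(n_steps):
            new_metric = [INF] * n_states
            prev = [0] * n_states
            for j in range(n_states):
                for i in range(n_states):
                    m = metric[i] + branch_metric(k, i, j)
                    if m < new_metric[j]:
                        new_metric[j], prev[j] = m, i
            survivors.append(prev)
            metric = new_metric
        state = min(range(n_states), key=lambda s: metric[s])  # best end state
        path = [state]
        for prev in reversed(survivors):  # trace back along survivor pointers
            state = prev[state]
            path.append(state)
        return path[::-1]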
Proceedings ArticleDOI

Near Shannon limit error-correcting coding and decoding: Turbo-codes. 1

TL;DR: In this article, a new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed.
Journal ArticleDOI

Optimal decoding of linear codes for minimizing symbol error rate (Corresp.)

TL;DR: The general problem of estimating the a posteriori probabilities of the states and transitions of a Markov source observed through a discrete memoryless channel is considered and an optimal decoding algorithm is derived.
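
The derived algorithm is what is now called the BCJR or forward-backward algorithm. A minimal probability-domain sketch for a generic Markov source observed through a memoryless channel (array names and the uniform initial distribution are assumptions for illustration):

    import numpy as np

    def forward_backward(trans, emit, obs):
        # trans[i, j]: P(next state j | current state i)
        # emit[j, o]:  P(observation o | state j); obs: observed symbol indices
        n, T = trans.shape[0], len(obs)
        alpha = np.zeros((T, n))
        beta = np.ones((T, n))
        alpha[0] = emit[:, obs[0]] / n                 # uniform initial state
        for k in range(1, T):                          # forward recursion
            alpha[k] = emit[:, obs[k]] * (alpha[k - 1] @ trans)
            alpha[k] /= alpha[k].sum()                 # normalize against underflow
        for k in range(T - 2, -1, -1):                 # backward recursion
            beta[k] = trans @ (emit[:, obs[k + 1]] * beta[k + 1])
            beta[k] /= beta[k].sum()
        post = alpha * beta                            # unnormalized a posteriori probs
        return post / post.sum(axis=1, keepdims=True)  # P(state at k | all obs)

The underflow that these per-step normalizations work around is exactly the numerical-representation problem that the log-domain algorithms compared in the main paper are designed to avoid.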