Proceedings ArticleDOI

Extrinsic Information Memory Reduced Architecture for Non-Binary Turbo Decoder Implementation

11 May 2008 - pp. 539-543
TL;DR: Two methods are presented that can substantially reduce the memory requirements of non-binary turbo decoders through efficient representation of the extrinsic information; for the IEEE 802.16e duo-binary code this decreases the total decoder complexity by 18%.
Abstract: Two methods are presented that can substantially reduce the memory requirements of non-binary turbo decoders by efficient representation of the extrinsic information. In the case of the duo-binary turbo decoder employed by the IEEE 802.16e standard, the extrinsic information can be reduced by about 43%, which decreases the total decoder complexity by 18%. We also show that the proposed algorithm can be implemented with a simple hardware architecture.
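As a rough illustration of how extrinsic information can be represented more compactly, the sketch below stores a block of extrinsic values with a pseudo-floating-point format (one shared exponent plus short sign-magnitude mantissas), which is how the citing works describe this paper's approach; the block size, bit widths and rounding below are illustrative assumptions, not the exact scheme of the paper.

```python
import numpy as np

def pfp_compress(extrinsics, mantissa_bits=4):
    """Pseudo-floating-point compression of a block of extrinsic values:
    one shared exponent (right-shift) plus short mantissas.
    Bit widths are illustrative assumptions, not the paper's choices."""
    ext = np.asarray(extrinsics, dtype=np.int32)
    peak = int(np.max(np.abs(ext))) if ext.size else 0
    limit = (1 << (mantissa_bits - 1)) - 1      # largest mantissa magnitude
    shift = 0
    while (peak >> shift) > limit:              # smallest shift that fits the block
        shift += 1
    mantissas = np.sign(ext) * (np.abs(ext) >> shift)
    return shift, mantissas.astype(np.int8)

def pfp_decompress(shift, mantissas):
    """Rebuild approximate extrinsic values from the compact representation."""
    return mantissas.astype(np.int32) << shift

# Example: eight 8-bit extrinsic values stored as one shift plus 4-bit mantissas.
values = [-97, 14, 63, -5, 120, -128, 7, 0]
shift, mant = pfp_compress(values)
print(shift, mant, pfp_decompress(shift, mant))
```

Storing one exponent per block instead of full-width values is what yields the memory saving; the price is the quantization error visible in the round trip above.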
Citations
Journal ArticleDOI
TL;DR: The non-uniform quantization technique reduces the state metric memory area by about 50% compared with architectures where state metric compression is not performed, at the expense of slightly raising the error-correcting performance floor, while the Walsh-Hadamard-transform-based solution offers a good trade-off between performance loss and memory complexity reduction.
Abstract: This paper proposes to compress state metrics in turbo decoder architectures to reduce the decoder area. Two techniques are proposed: the first is based on non-uniform quantization and the second on the Walsh-Hadamard transform followed by non-uniform quantization. The non-uniform quantization technique reduces the state metric memory area by about 50% compared with architectures where state metric compression is not performed, at the expense of slightly raising the error-correcting performance floor. On the other hand, the Walsh-Hadamard-transform-based solution offers a good trade-off between performance loss and memory complexity reduction, which in the best case reaches a 20% gain with respect to other approaches. Both solutions show lower power consumption than architectures previously proposed to compress state metrics.
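As a concrete (and purely illustrative) reading of the second technique, the sketch below runs eight state metrics through a Walsh-Hadamard transform and then quantizes the transform coefficients non-uniformly, with fine steps near zero and coarse steps for large values; the companding law and all parameters are assumptions, not the cited architecture.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; the length of x must be a power of two."""
    a = np.array(x, dtype=np.float64)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                u, v = a[j], a[j + h]
                a[j], a[j + h] = u + v, u - v
        h *= 2
    return a

def mu_law_quantize(c, xmax=1024.0, mu=255.0, levels=15):
    """Non-uniform quantizer (mu-law companding): fine steps for small
    coefficients, coarse steps for large ones. Parameters are illustrative."""
    mag = np.minimum(np.abs(c), xmax)
    return np.sign(c) * np.round(levels * np.log1p(mu * mag / xmax) / np.log1p(mu))

def mu_law_dequantize(q, xmax=1024.0, mu=255.0, levels=15):
    return np.sign(q) * (xmax / mu) * np.expm1(np.abs(q) * np.log1p(mu) / levels)

# Eight state metrics -> WHT -> non-uniform quantization -> reconstruction.
metrics = np.array([121.0, 118.0, 97.0, 130.0, 125.0, 119.0, 101.0, 128.0])
coeffs = fwht(metrics)
stored = mu_law_quantize(coeffs)                        # what the memory would hold
recon = fwht(mu_law_dequantize(stored)) / len(metrics)  # WHT is its own inverse up to 1/N
print(stored.astype(int))
print(np.round(recon).astype(int))
```

The rationale is that the WHT concentrates most of the energy of strongly correlated state metrics into a few coefficients, so the rest survive coarse quantization with little accuracy loss.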

21 citations


Cites background or methods from "Extrinsic Information Memory Reduce..."

  • ...The works proposed in [11], [14]–[16] are all aimed at reducing the footprint of the -MEM buffers at the expense of reducing the error-correcting capability by about 0....

    [...]

  • ...While the works detailed in [11], [14], [15], and [16] deal with parallelism independent memories, this work, as [12] and [13], concerns parallelism dependent memories....

    [...]

  • ...On the other hand, in [14] the same goal is achieved by using a pseudo-floating point representation, whereas in [15] a technique based on most significant bit (MSB) clipping combined with least significant bit (LSB) drop (at transmitter) and append (at receiver) is proposed (see the sketch after this list)....

    [...]
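The MSB-clipping plus LSB-drop/append technique attributed to [15] in the excerpt above can be sketched as follows: on the write (transmit) side the extrinsic value is saturated, clipping its most significant bits, and its lowest bits are dropped; on the read (receive) side the dropped bits are re-appended as a fixed midpoint pattern. The bit widths and the fill pattern are illustrative assumptions.

```python
def store_extrinsic(value, kept_bits=5, dropped_lsbs=2):
    """Saturate (MSB clipping) and truncate (LSB drop) an extrinsic value
    before it is written; the widths are illustrative assumptions."""
    full = kept_bits + dropped_lsbs
    lo, hi = -(1 << (full - 1)), (1 << (full - 1)) - 1
    clipped = max(lo, min(hi, value))   # MSB clipping (saturation)
    return clipped >> dropped_lsbs      # LSB drop

def load_extrinsic(stored, dropped_lsbs=2):
    """Re-append the dropped LSBs as a midpoint fill on the read side."""
    return (stored << dropped_lsbs) | (1 << (dropped_lsbs - 1))

for v in (-200, -37, 0, 41, 250):
    print(v, "->", load_extrinsic(store_extrinsic(v)))
```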

Proceedings ArticleDOI
24 Oct 2008
TL;DR: A technique for bit-width reduction of the exchanged extrinsic information is presented, and its impact on different implementation architectures is analyzed over two kinds of turbo decoding systems.
Abstract: Soft input soft output (SISO) decoders iteratively exchanging intermediate results (extrinsic information) between themselves lie at the core of turbo decoder architectures. The implementation architecture can be serial, parallel, or network-on-chip (NoC) based. In this paper, we present a technique for bit-width reduction of the exchanged extrinsic information and analyze its impact for different implementation architectures. The methodology is investigated over two kinds of turbo decoding system, both based on the Max-Log-MAP algorithm. The first is a serial concatenated convolutional code (SCCC) decoder and the other is a WiMax (IEEE 802.16e) parallel concatenated convolutional code (PCCC) decoder. For the SCCC decoder, the bit-width of the extrinsic information can be reduced from 8 bits down to 4 without significant bit-error-rate (BER) degradation. For the WiMax case it can be reduced from 8 bits down to 5 with a BER degradation of 0.2 dB.
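As a rough picture of what reducing the exchanged extrinsic information from 8 bits down to 4 or 5 bits means in practice, the helper below maps extrinsic LLRs onto an n-bit signed grid before they are handed to the other SISO decoder and expands them again on the way back; the scale factor is an assumed design parameter, not a value from the paper.

```python
import numpy as np

def quantize_extrinsic(llrs, bits=5, scale=0.25):
    """Map extrinsic LLRs onto a signed 'bits'-wide integer grid.
    'scale' is the LLR value of one quantization step; both the width
    and the scale are illustrative assumptions."""
    lo = -(1 << (bits - 1))
    hi = (1 << (bits - 1)) - 1
    return np.clip(np.round(np.asarray(llrs) / scale), lo, hi).astype(np.int8)

def dequantize_extrinsic(q, scale=0.25):
    """Convert stored integers back to LLRs for the next half-iteration."""
    return q.astype(np.float64) * scale

llrs = np.array([-6.3, -0.8, 0.05, 1.7, 9.4])
q = quantize_extrinsic(llrs, bits=5)      # what is exchanged/stored
print(q, dequantize_extrinsic(q))
```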

12 citations


Cites methods from "Extrinsic Information Memory Reduce..."

  • ...In [15] an algorithm was proposed to decrease the bit width needed to describe the extrinsic information by employing a pseudo-floating point representation....

    [...]

Journal ArticleDOI
19 Mar 2013
TL;DR: In this article, the authors propose to use bit-level and pseudo-floating-point representations of the extrinsic information to reduce the network complexity needed to support double-binary codes.
Abstract: In this work novel results concerning Network-on-Chip-based turbo decoder architectures are presented. Stemming from previous publications, this work concentrates first on improving the throughput by exploiting adaptive-bandwidth-reduction techniques. This technique shows in the best case an improvement of more than 60 Mb/s. Moreover, it is known that double-binary turbo decoders require a larger area than binary ones. This characteristic has the negative effect of increasing the data width of the network nodes. Thus, the second contribution of this work is to reduce the network complexity needed to support double-binary codes by exploiting bit-level and pseudo-floating-point representations of the extrinsic information. These two techniques allow for an area reduction of more than 40% with a performance degradation of about 0.2 dB.
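The "bit-level representation" mentioned here refers to replacing the three symbol-level extrinsic values of a double-binary (duo-binary) code with two bit-level LLRs. Under the usual max-log approximation the conversion can be sketched as below; the symbol labelling and the reconstruction rule are assumptions for illustration, not the exact mapping used by the cited works.

```python
def symbol_to_bit(l01, l10, l11):
    """Symbol-level extrinsics L(ab), given relative to L(00) = 0, reduced to
    two bit-level LLRs with the max-log approximation (3 values -> 2)."""
    la = max(l10, l11) - max(0.0, l01)   # LLR that the first bit A is 1
    lb = max(l01, l11) - max(0.0, l10)   # LLR that the second bit B is 1
    return la, lb

def bit_to_symbol(la, lb):
    """Approximate inverse, treating the two bits as independent:
    L(01) ~ LB, L(10) ~ LA, L(11) ~ LA + LB."""
    return lb, la, la + lb

la, lb = symbol_to_bit(1.2, -0.4, 2.9)
print(la, lb)                  # compact form carried over the network
print(bit_to_symbol(la, lb))   # values re-expanded at the destination node
```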

6 citations

Posted Content
TL;DR: This work concentrates first on improving the throughput by exploiting adaptive-bandwidth-reduction techniques, and second on reducing the network complexity needed to support double-binary codes by exploiting bit-level and pseudo-floating-point representations of the extrinsic information.
Abstract: In this work novel results concerning Network-on-Chip-based turbo decoder architectures are presented. Stemming from previous publications, this work concentrates first on improving the throughput by exploiting adaptive-bandwidth-reduction techniques. This technique shows in the best case an improvement of more than 60 Mb/s. Moreover, it is known that double-binary turbo decoders require a larger area than binary ones. This characteristic has the negative effect of increasing the data width of the network nodes. Thus, the second contribution of this work is to reduce the network complexity needed to support double-binary codes by exploiting bit-level and pseudo-floating-point representations of the extrinsic information. These two techniques allow for an area reduction of more than 40% with a performance degradation of about 0.2 dB.

6 citations


Cites methods from "Extrinsic Information Memory Reduce..."

  • ...The adopted technique of traffic reduction offers in the best case a throughput improvement of more than 60 Mb/s and 40 Mb/s for binary and double-binary codes respectively....

    [...]

Book ChapterDOI
TL;DR: This chapter describes the main architectures proposed in the literature to implement the channel decoders required by the WiMax standard, namely convolutional codes, turbo codes (both block and convolutional), and LDPC codes, and shows a complete design of a convolutional turbo code encoder/decoder system for WiMax.
Abstract: This chapter describes the main architectures proposed in the literature to implement the channel decoders required by the WiMax standard, namely convolutional codes, turbo codes (both block and convolutional) and LDPC. Then it shows a complete design of a convolutional turbo code encoder/decoder system for WiMax.

1 citation


Cites background or methods from "Extrinsic Information Memory Reduce..."

  • ...Further studies have been performed to reduce the extrinsic information bit width by using adaptive quantization [Singh et al., 2008], pseudo-floating point representation [Park et al., 2008] and bit level representation [Kim & Park, 2009]....

    [...]


References
Proceedings Article
01 Jan 1993

7,742 citations

Proceedings ArticleDOI
23 May 1993
TL;DR: In this article, a new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed.
Abstract: A new class of convolutional codes called turbo-codes, whose performance in terms of bit error rate (BER) is close to the Shannon limit, is discussed. The turbo-code encoder is built using a parallel concatenation of two recursive systematic convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
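The parallel concatenation described above can be sketched with a toy rate-1/3 encoder: one recursive systematic convolutional (RSC) encoder processes the information bits in natural order, a second identical RSC encoder processes an interleaved copy, and the codeword is the systematic stream plus the two parity streams. The memory-2 generator polynomials and the interleaver below are arbitrary illustrative choices, not those of any particular standard.

```python
def rsc_encode(bits):
    """Toy memory-2 recursive systematic convolutional encoder
    (feedback 1 + D + D^2, forward 1 + D^2; an illustrative choice).
    Returns only the parity stream; the systematic stream is the input."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2          # recursive feedback bit entering the register
        parity.append(fb ^ s2)    # forward (parity) output
        s1, s2 = fb, s1           # shift the register
    return parity

def turbo_encode(bits, interleaver):
    """Rate-1/3 parallel concatenation: systematic bits plus two parity streams."""
    parity1 = rsc_encode(bits)
    parity2 = rsc_encode([bits[i] for i in interleaver])
    return list(bits), parity1, parity2

info = [1, 0, 1, 1, 0, 0, 1, 0]
pi = [3, 7, 0, 5, 2, 6, 1, 4]     # arbitrary example interleaver
print(turbo_encode(info, pi))
```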

5,963 citations

Journal ArticleDOI
TL;DR: The general problem of estimating the a posteriori probabilities of the states and transitions of a Markov source observed through a discrete memoryless channel is considered and an optimal decoding algorithm is derived.
Abstract: The general problem of estimating the a posteriori probabilities of the states and transitions of a Markov source observed through a discrete memoryless channel is considered. The decoding of linear block and convolutional codes to minimize symbol error probability is shown to be a special case of this problem. An optimal decoding algorithm is derived.
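The computation behind this formulation, the a posteriori probability of each state given the whole received sequence, is the forward-backward recursion sketched below for a toy two-state Markov source; the transition and channel probabilities are arbitrary example numbers, not taken from the paper.

```python
import numpy as np

def forward_backward(trans, emit, obs, init):
    """Posterior state probabilities P(state_t | all observations) for a
    Markov source observed through a discrete memoryless channel."""
    T, S = len(obs), trans.shape[0]
    alpha = np.zeros((T, S))          # forward metrics
    beta = np.zeros((T, S))           # backward metrics
    alpha[0] = init * emit[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Toy two-state source observed through a noisy binary channel.
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])        # P(next state | current state)
emit = np.array([[0.8, 0.2],
                 [0.3, 0.7]])         # P(observed symbol | state)
obs = [0, 0, 1, 1, 0]
print(forward_backward(trans, emit, obs, init=np.array([0.5, 0.5])))
```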

4,830 citations

Journal ArticleDOI
TL;DR: It is shown that quaternary codes can be advantageous, both from performance and complexity standpoints, but that higher-order codes may not bring further improvement.
Abstract: The authors consider the use of non-binary convolutional codes in turbo coding. It is shown that quaternary codes can be advantageous, both from performance and complexity standpoints, but that higher-order codes may not bring further improvement.

134 citations