Journal ArticleDOI

Unequal Error Protection of Memories in LDPC Decoders

01 Oct 2015-IEEE Transactions on Computers (IEEE)-Vol. 64, Iss: 10, pp 2981-2993
TL;DR: The devised UEP method is divided into four adjustable levels, each offering a different degree of protection, and shows an unmatched level of protection from errors at a small complexity and energy cost.
Abstract: Memories are among the most critical components of many systems: due to exposure to energetic particles, fabrication defects and aging, they are subject to various kinds of permanent and transient errors. In this scenario, unequal error protection (UEP) techniques have been proposed in the past to encode stored information, making it possible to detect and possibly recover from errors during load operations, while offering different levels of protection to partitions of codewords according to their importance. Low-density parity-check (LDPC) codes are used in many communication standards to encode the transmitted information: at reception, LDPC decoders heavily rely on memories to store and correct the received information. To ensure efficient and reliable decoding, protecting the memories used in LDPC decoders is of primary importance. In this paper we present a study on how to efficiently design UEP techniques for LDPC decoder memories. The devised UEP method is divided into four adjustable levels, each offering a different degree of protection. The full UEP, along with simplified versions, has been implemented within an existing decoder, and its area occupation and power consumption have been evaluated. Comparison with the literature on the subject shows an unmatched level of protection from errors at a small complexity and energy cost.

Summary (5 min read)

Introduction

  • Memories are particularly critical devices that are subject to various types of faults.
  • This paper focuses on providing error resilience to LDPC decoders, ensuring correct functionality even in the presence of permanent and transient memory error conditions under which current decoders cannot work.
  • As the authors show throughout the paper, the proposed techniques are particularly suitable for LDPC decoders and for applications that use narrow memories with frequent accesses and complex address patterns.
  • Comparison with the state of the art is performed in Section VIII and conclusions are drawn in Section IX.

II. LDPC DECODING

  • LDPC codes are characterized by a binary parity check matrix H [3] with M rows and N columns.
  • In the following the authors focus on the layered scheduling technique, which has been shown to perform better, nearly doubling the convergence speed of the decoding process with respect to two-phase scheduling.
  • Let us denote with λ[c] the LLR of symbol c; the bit LLR λk[c] is initialized, for column k in H, to the corresponding received soft value.
  • Out of the several approximations present in the literature, the authors have considered the Self-Corrected Min-Sum (SCMS) [14], as it combines ease of implementation with negligible BER performance loss.
  • Please observe that while Rlk and Qlk[c] are updated only once per iteration, and are thus endowed with the iteration indicator i, λk[c] is updated multiple times during each iteration, and the superscripts "old" and "new" are consequently used to differentiate the values before and after each update.
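
For reference, the following is a standard formulation of the layered min-sum update consistent with the notation above; it is a sketch, not the paper's exact equations. Here N(l) denotes the set of columns participating in parity check l (an assumed notation), and the SCMS variant [14] additionally erases (sets to zero) any Qlk[c] whose sign flips between consecutive iterations.

    Q_{lk}^{(i)}[c] = \lambda_k^{\mathrm{old}}[c] - R_{lk}^{(i-1)}
    R_{lk}^{(i)} = \prod_{k' \in N(l) \setminus k} \operatorname{sign}\left(Q_{lk'}^{(i)}[c]\right) \cdot \min_{k' \in N(l) \setminus k} \left|Q_{lk'}^{(i)}[c]\right|
    \lambda_k^{\mathrm{new}}[c] = Q_{lk}^{(i)}[c] + R_{lk}^{(i)}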

III. PREVIOUS WORK

  • Several memory-protection techniques and algorithms have been presented in the specialized literature over the years.
  • The concept of unequal error protection of memories has been proposed for the first time in [7]: codewords are subdivided into slots, to which different degrees of protection are applied.
  • From a practical point of view, it has then been studied effectively for wireless transmission and storage of images, where a certain degree of unreliability can be tolerated [9], [20], [21].
  • The work in [26] builds and updates a list of cells that are probably faulty.
  • The work presented in [6] applies separate protection techniques to the different functional blocks of an existing LDPC decoder architecture, according to their level of exposure to failures and importance for the correct operations of the system.

IV. ERROR ANALYSIS

  • Almost all practical implementations of LDPC decoders make use of memories to store LLRs between iterations or, in the case of multi-core decoders, to exchange information between processing elements.
  • In the following subsections the authors analyze the effect of errors on different bits of the metrics stored in LDPC decoder memories, and how these errors influence the decoding performance as different parameters vary.
  • Fig. 2 plots the FER for three different error bits and three Imax values.
  • While the effect of errors is almost the same with Imax=10 and Imax=15, the degradation caused by erroneous bits is more pronounced when Imax=20: in fact, the authors observe a larger gap between the "no error" curve and the MSB7 curve with respect to the two other cases, while the MSB3 curve is proportionally shifted.
  • Fig. 4 shows that rate and error injection variations do not scale proportionally, as already observed with changes in code size: high-rate codes are more sensitive to faults.
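
As a concrete reading of this error model, the sketch below flips each stored LLR bit independently with a given probability on every load, one common way to emulate transient memory errors in simulation. The 7-bit width, bit ordering and function names are illustrative assumptions, not the authors' simulator.

    /* Transient-error injector sketch: each bit of a stored LLR flips
       independently with probability p_err on every load. */
    #include <stdint.h>
    #include <stdlib.h>

    #define LLR_BITS 7   /* assumed quantization: 1 sign + 6 magnitude bits */

    static uint8_t inject_transient(uint8_t llr, double p_err)
    {
        for (int b = 0; b < LLR_BITS; b++)
            if ((double)rand() / RAND_MAX < p_err)
                llr ^= (uint8_t)(1u << b);   /* flip bit b */
        return llr;
    }

A permanent (stuck-at) fault can be emulated by forcing a bit instead: llr |= 1u << b for stuck-at-1, or llr &= ~(1u << b) for stuck-at-0.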

V. UNEQUAL ERROR PROTECTION

  • The analysis on the impact of the different memory errors carried out in Section IV highlighted that not all errors on the LLR bits have the same influence on the FER of LDPC decoders.
  • The choice of the number and type of error protection techniques is subject to a suitable tradeoff.
  • On one side, the UEP should be granted sufficient granularity to act effectively upon errors with different impacts on the decoding capability.
  • On the other side, since the number of techniques should be kept small to save area, execution time and decoder complexity, bits of similar significance should be protected with the same technique.
  • After having analyzed different tradeoff alternatives, the authors have opted for a UEP subdivided into four levels of possible error protection.

A. Level 1 - towards full recovery

  • The highest level of protection is applied to bits whose reliability is mandatory for correct decoding, i.e. the sign bit and possibly the magnitude MSBs.
  • Errors on sign bits and on bits representing a large part of the total dynamic range will consequently have catastrophic effects on the decoding; sign changes and sudden increments or decrements in LLRs may cause an avalanche of metrics to evolve in misleading directions.
  • To provide a high level of reliability and recovery, the authors' choice has been to triple the bits falling within Level 1 during write operations: at load time, a majority voter selects the most probable value (a minimal sketch follows).
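
A minimal sketch of this Level 1 scheme, assuming the protected bits are packed into one byte; the bitwise majority vote corrects any single-copy error per bit position.

    /* Level 1 sketch: the protected bits are stored three times; at
       load time a bitwise majority vote keeps, for each bit position,
       the value on which at least two copies agree. */
    #include <stdint.h>

    typedef struct { uint8_t copy[3]; } level1_t;

    static level1_t level1_store(uint8_t bits)
    {
        level1_t m = { { bits, bits, bits } };   /* triplicate on write */
        return m;
    }

    static uint8_t level1_load(level1_t m)
    {
        /* majority(a,b,c) = ab | ac | bc, evaluated bitwise */
        return (m.copy[0] & m.copy[1]) |
               (m.copy[0] & m.copy[2]) |
               (m.copy[1] & m.copy[2]);
    }

Tripling n protected bits adds 2n memory bits per word, which is where the memory overhead figures quoted in Section V-E come from.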

B. Level 2 - tentative recovery from critical errors

  • An extensive simulation campaign has been performed to observe the characteristics of λk^old[c] and λk^new[c] that are most sensitive to memory errors and that can lead to unsuccessful decoding.
  • The authors have chosen to add a parity bit to the Level 2 bits; in case of discrepancy during load operations, the pattern recognition system is activated.
  • If λk^new[c] matches the critical bit pattern, recovery is possible by observing how λk^new[c] varies with changes of the Level 2 bits in λk^old[c].
  • The distinctive bit pattern depends on the total quantization of the LLRs and on the position of the wrong bit, while the Level 2 bits must be chosen with care to obtain maximum effectiveness: thus, every case must be analyzed separately.
  • Level 2 cannot give the same level of protection as Level 1, but it identifies and recovers a very good percentage of the errors that have been observed to be the main cause of LDPC decoder performance loss.
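
The summary leaves the critical pattern and the recovery rule unspecified, since both depend on the code and on the LLR quantization. The sketch below therefore shows only the detection flow, with placeholders standing in for the design-specific parts.

    /* Level 2 sketch: a parity bit over the Level 2 bits triggers a
       pattern check on the freshly computed LLR; the pattern test and
       the recovery rule are design-specific placeholders. */
    #include <stdbool.h>
    #include <stdint.h>

    static uint8_t parity1(uint8_t v)        /* XOR-reduce to one bit */
    {
        v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
        return v & 1u;
    }

    /* Placeholder: the real pattern must be derived per design. */
    static bool matches_critical_pattern(uint8_t llr_new)
    {
        (void)llr_new;
        return false;
    }

    /* Placeholder: adjusts the Level 2 bits of llr_old according to
       how llr_new reacts to them. */
    static uint8_t recover_level2(uint8_t llr_old) { return llr_old; }

    static uint8_t level2_load(uint8_t llr_old, uint8_t l2_mask,
                               uint8_t stored_parity, uint8_t llr_new)
    {
        if (parity1(llr_old & l2_mask) != stored_parity &&
            matches_critical_pattern(llr_new))
            return recover_level2(llr_old);  /* tentative recovery */
        return llr_old;                      /* no recoverable error detected */
    }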

C. Level 3 - bounding of error impact

  • The authors have chosen to apply the same concept to the third level of protection, designed for bits of medium-to-low significance.
  • A parity bit is added to the protected bits during write operations.
  • When the LLR is loaded, parity is recomputed and in case of discrepancy the contribution of all the bits falling within Level 3 is nulled or reduced.
  • Level 3 protection does not allow recovery from errors, but reduces their impact by decreasing the LLR magnitude, which in turn induces a conservative behavior in the decoder (a minimal sketch follows).
  • This method cannot be applied to bits expressing large percentages of the total dynamic range, since the resulting change in LLR magnitude would be too large and would itself cause errors.
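
A minimal sketch of the Level 3 behavior, assuming the Level 3 bits sit in the magnitude part of a sign-magnitude LLR; the mask is an illustrative choice, and the stored parity is computed over the same masked bits at write time.

    /* Level 3 sketch: parity over the Level 3 bits; on mismatch their
       contribution is nulled, shrinking the LLR magnitude instead of
       attempting a correction. */
    #include <stdint.h>

    #define L3_MASK 0x0Cu   /* assumed: Level 3 covers magnitude bits 2-3 */

    static uint8_t parity1(uint8_t v)        /* XOR-reduce to one bit */
    {
        v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
        return v & 1u;
    }

    static uint8_t level3_load(uint8_t llr, uint8_t stored_parity)
    {
        if (parity1(llr & L3_MASK) != stored_parity)
            llr &= (uint8_t)~L3_MASK;   /* bound the error: reduce magnitude */
        return llr;
    }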

D. Level 4 - no protection zone

  • As shown in Section IV, errors on the least significant bits seldom affect the overall decoding performance.
  • For Level 4, the authors' choice for this set of low-importance bits has been to leave them unprotected, since occasional errors on these bits are unlikely to cause any impacting degradation.

E. UEP - full design

  • The partition of memory bits among the four levels of the proposed UEP has been carried out through extensive simulations.
  • As mentioned in Section V-B, the pattern recognition and error recovery involved in Level 2 protection must be evaluated according to the LLR quantization and to the number and position of bits assigned to Level 2.
  • This pattern is observed either when a single error is introduced in MSB2 or MSB3 of λk^old[c], or when both MSB2 and MSB3 are incorrect: this characteristic is exploited to recover from the error.
  • The authors do not mean that these bits do not contribute to the decoding; rather, as observed in their tests, occasional error events on these bits do not affect the overall performance.
  • The additional memory cost of the complete UEP is 57.1% (four extra bits per 7-bit LLR), i.e. the same as applying Level 1 to MSB1-2, but in this case shielding five bits from errors instead of two: the impact on the decoder architecture and the additional logic required are discussed in Section VI.

F. Remarks on Burst Errors

  • With current integration densities, the problem of burst or multi-cell errors has gathered increased interest [17], [32].
  • It is possible to greatly limit the impact of burst errors by scrambling the bits of LLRs before storage and rearranging them at load time.
  • Scrambling is a technique widely used in communications and data storage, where the probability of burst errors is larger than the probability of single errors, as it allows, under certain conditions, avoiding long error-correcting codes to recover from burst errors [32].
  • By interleaving the bits belonging to the same level with those from other levels, multiple errors are spread over the different protection techniques and can still be handled (a minimal sketch follows this list).
  • In fact, the sparse structure of the parity check matrix acts as an interleaver and does not require loading consecutive LLRs.
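
One way to realize this scrambling, sketched under the assumption of 8-bit LLRs grouped eight at a time, is a bit-matrix transpose: after scrambling, physically adjacent cells hold bits of different LLRs, so a burst corrupts at most one bit (and hence one protection level) per LLR. The transpose is its own inverse, so the same routine descrambles at load time.

    /* Burst-mitigation sketch: transpose the bits of W consecutive
       LLRs before storage, spreading adjacent-cell errors over
       different LLRs and protection levels. */
    #include <stdint.h>

    #define W 8   /* number of 8-bit LLRs interleaved together */

    static void scramble(const uint8_t in[W], uint8_t out[W])
    {
        for (int b = 0; b < W; b++) {                /* stored word b ... */
            out[b] = 0;
            for (int w = 0; w < W; w++)              /* ... collects bit b of LLR w */
                out[b] |= (uint8_t)(((in[w] >> b) & 1u) << w);
        }
    }
    /* Descrambling is the same transpose applied again (involution). */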

G. Additional schemes

  • Such a high degree of confidence is not always necessary.
  • This is achieved by including only some of the previous protection levels, and it has the advantage of guaranteeing the required lower level of protection with a reduced overhead.
  • Table I summarizes the previously designed UEP case study together with the two new ones, with details given on the protection of λk[c].
  • UEPfull refers to the case study detailed previously in this section.
  • The second scheme UEPsim1 applies Level 2 protection to MSB1 and MSB2, while MSB3-7 are left in Level 4.

VI. DECODER ARCHITECTURE

A. Architecture

  • The complete UEPfull architecture is shown in Fig. 7; UEPsim1 and UEPsim2 can be derived from it by considering only the employed levels.
  • Three datapaths are necessary to implement the Level 2 operations: the parity comparison is performed on the Level 2 bits, and if an error occurred, each datapath receives a different version of the LLR (λk[c]1, λk[c]2 and λk[c]3).
  • This version of the architecture has been named AREF and has been used as the reference in the following comparisons.
  • The difference in power consumption increments (24.9% and 23.3%) is even smaller, since the Level 1 memory bits in Asim2 contribute to a higher percentage of the total power consumption.
  • To prove that the devised UEP does not influence the performance of the decoder in terms of throughput and maximum frequency, Table III reports the delay introduced by each UEP level and by the whole architecture for different target frequencies in 90 nm CMOS technology.

VII. UEP PERFORMANCE

  • This section presents the performance evaluation of the proposed UEP under the same conditions as in Section IV and Section V, showing the impact of each level of UEP as described in UEPfull.
  • The decoders in [31] and [30] present similarities with many other state-of-the-art decoders (serial core, min-sum-based layered decoding, partial parallelism, shared or dedicated memories, either high-throughput or flexible design).
  • A relevant improvement can be noticed for all the AFPI values except the largest (i.e. AFPI=248.9), while no degradation is observed for the smallest AFPI=0.09 case.
  • The complete UEPfull has been employed to obtain the curves in Fig. 12.
  • Let us now evaluate another characteristic of the proposed protection technique: its behavior under stuck-at bit errors.

VIII. COMPARISON

  • The resilient LDPC decoder designed in [6] protects both memories and logic from errors.
  • A dedicated 9-bit RAM is used to store λk[c] values, and it is protected with MSB1 tripling (+22% memory increment), while the initial LLRs received from the channel are stored in a 6-bit RAM and protected with MSB1 duplication and puncturing in case of discrepancy (+23% memory increment).
  • On the other hand, the proposed UEP targets much more degraded environments, since total error protection is achieved in the presence of AFPI values four orders of magnitude greater than those in [6]; moreover, this work tackles permanent errors as well, together with burst errors, both neglected in [6].
  • The work in [32] can potentially reach performance similar to this work’s, but at a much higher complexity cost.
  • The statistical error correction scheme devised in [25] is built around a concept different from the proposed UEP: voltage overscaling is introduced in the decoder to save energy, and the performance loss brought by the timing errors caused by this technique must be compensated.

IX. CONCLUSION

  • This paper proposes a novel Unequal Error Protection technique for memories used in LDPC decoders.
  • It is divided into four levels, which can be adjusted and applied according to the decoder parameters and the desired degree of protection.
  • A complete design is presented, together with results for two other alternative schemes, showing a high level of resilience to transient and permanent errors, both single-bit and multi-bit.
  • The design of a hardware architecture implementing the UEP is proposed and applied to an existing LDPC decoder to evaluate area and power consumption overheads.
  • Comparison with the state of the art shows superior error resiliency even at comparable complexity overheads.


Citations
Journal ArticleDOI
TL;DR: The results of this manuscript confirm that the proposed multi-level burst error correcting UEP codes reduce the hardware overhead with no significant degradation in storage protection as potential storage application for approximate computing systems.
Abstract: Processing at the nanometric scales presents unique challenges that may require new computational paradigms such as approximate computing. In this paper a novel approach to memory protection using an unequal protection code (UEP) is proposed; this approach is in synergy with approximate (or inexact) computing. Multi-level burst error correcting UEP codes are analyzed. These codes improve over previously presented two-level burst error correcting UEP codes, because they utilize different conditions and criteria in the code partitions and decoder construction. An analysis by which multiple partitions can be selected to reduce the expected error magnitude, is provided. The area and power consumption of the parallel decoders closely depend on the desired code function. Simulation shows that the area and power consumption of the parallel error pattern generator are proportional to the partition length; the gate depth however is not strongly related to the partition length. The results of this manuscript confirm that the proposed multi-level burst error correcting UEP codes reduce the hardware overhead with no significant degradation in storage protection as potential storage application for approximate computing systems.

3 citations


Cites background from "Unequal Error Protection of Memorie..."

  • ...However, the proposed UEP code is suitable for cases in which the delay of the decoders is not critical (the time criticality is often relaxed in most big data applications suitable for approximate computing [21]); it is not applicable to serial memory systems (the interested reader should refer to [22] for uneven protection of this type of memory)....

    [...]

01 Jan 2016

1 citation


Cites methods from "Unequal Error Protection of Memorie..."

  • ...In WiMAX LDPC decoders, the memory accesses in one LDPC decoding iteration can reach up to 32,800 [103]....

    [...]

Book ChapterDOI
08 Jul 2016
TL;DR: A new architecture of UEP-LDPC encoder based on the method of Richardson and Urbanke in image transmission and a novel two-stage dynamic programming algorithm to perform matrix triangulation instead of traditional approach are presented.
Abstract: Irregular low-density parity-check (LDPC) codes can provide an unequal error protection (UEP) capability naturally through their unequal degree distribution. In this paper we propose a new architecture for a UEP-LDPC encoder based on the method of Richardson and Urbanke, applied to image transmission. In order to reduce processing complexity and hardware consumption, we also present a novel two-stage dynamic programming algorithm to perform matrix triangulation instead of the traditional approach. Experimental results show that the optimized architecture and algorithm can provide high UEP capability and reduce encoding complexity significantly.

1 citation

Journal ArticleDOI
TL;DR: This paper carefully investigates how to choose code rates of precodes and degree distributions of EWF codes for different information lengths and proposes a coding scheme that can achieve superior performance compared to the previous scheme for small and moderate information lengths.
Abstract: Expanding window fountain (EWF) codes, which can provide an unequal erasure protection property, are used as an efficient application-layer forward error correction solution for scalable multimedia data transmission over packet networks. Similar to Raptor codes, precoded EWF codes can provide linear coding complexity. However, the precoded EWF codes in the previous literature achieve good performance only when the information length is large. In this paper, we carefully investigate how to choose code rates of precodes and degree distributions of EWF codes for different information lengths. Our proposed precoded EWF coding scheme can achieve superior performance compared to the previous scheme for small and moderate information lengths. Simulation results for the scalable video coding extension of the H.264/AVC standard show that, compared with the previous scheme, our proposed scheme requires a smaller reception overhead to recover the base layer.

1 citation

01 Jan 2015
TL;DR: This dissertation provides a novel information theoretic approach to analyze and develop robust system design for iterative information processing algorithms running on noisy hardware and proposes new theory-guided methods that guarantee reliable performance under hardware errors of varied characteristics.
Abstract: Author(s): Huang, Chu-Hsiang | Advisor(s): Dolecek, Lara | Abstract: In traditional information processing systems, inference algorithms are designed to collect and process information subject to noisy transmission. It is saliently assumed that the inference algorithms themselves have error-free implementations. However, with the scaling of process technologies and the increase in process variations, nano-devices will be inherently unreliable. Producing reliable decisions in systems with unreliable components thus becomes an important and challenging problem. In this dissertation, we provide a novel information theoretic approach to analyze and develop robust system design for iterative information processing algorithms running on noisy hardware. We characterize the fundamental performance limits of the systems under the joint effect of communication/environment noise and hardware noise. Based on this analysis, we then propose new theory-guided methods that guarantee reliable performance under hardware errors of varied characteristics. The proposed methods successfully explore the inherent robustness of the information processing algorithms and leverage the error-tolerance of the considered applications to minimize the overhead introduced in robust system design. We investigate a wide range of iterative information processing systems implemented on noisy hardware via the proposed information theoretic approach. Starting from iterative message passing decoders, we study different decoder implementations including finite-precision and infinite precision decoders subject to various types of hardware errors. We identify the performance-critical components in the iterative decoders via a theoretical analysis and develop robust system designs to assign computation units with different error characteristics to different components in the decoder. Then, we apply the proposed analysis and design methodology to general inference problems on probabilistic graphical models and develop robust implementations of the general belief propagation algorithms by noise cancellation based on averaging. For certain applications with error-tolerance, e.g. image processing and classification based on machine learning, we propose theory-guided adaptive coding schemes inspired by approximate computing to correct errors without additional hardware redundancy. The redundant free codes have the same performance as the traditional codes. Our algorithm-guided approach offers up to 100x reduction in the error rates relative to the nominal system designs.

Cites background from "Unequal Error Protection of Memorie..."

  • ...Important topics including density evolution, equivalent noise modeling, and unequal error protection were studied in [12, 55, 74]....

    [...]

References
Book
01 Jan 1963
TL;DR: A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described and the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length.
Abstract: A low-density parity-check code is a code specified by a parity-check matrix with the following properties: each column contains a small fixed number j ≥ 3 of 1's and each row contains a small fixed number k > j of 1's. The typical minimum distance of these codes increases linearly with block length for a fixed rate and fixed j. When used with maximum likelihood decoding on a sufficiently quiet binary-input symmetric channel, the typical probability of decoding error decreases exponentially with block length for a fixed rate and fixed j. A simple but nonoptimum decoding scheme operating directly from the channel a posteriori probabilities is described. Both the equipment complexity and the data-handling capacity in bits per second of this decoder increase approximately linearly with block length. For j > 3 and a sufficiently low rate, the probability of error using this decoder on a binary symmetric channel is shown to decrease at least exponentially with a root of the block length. Some experimental results show that the actual probability of decoding error is much smaller than this theoretical bound.

11,592 citations


"Unequal Error Protection of Memorie..." refers background in this paper

  • ...Low Density Parity Check (LDPC) codes [3] are block error correcting codes employed in a variety of communication....

    [...]

  • ...LDPC codes are characterized by a binary parity check matrix H [3] with M rows and N columns....

    [...]

Journal ArticleDOI
TL;DR: A general method of constructing error correcting binary group codes is obtained and an example is worked out to illustrate the method of construction.
Abstract: A general method of constructing error-correcting binary group codes is obtained. A binary group code with n places, k of which are information places, is called an (n,k) code. An explicit method of constructing t-error-correcting (n,k) codes is given for n = 2^m − 1 and k = 2^m − 1 − R(m,t) ≥ 2^m − 1 − mt, where R(m,t) is a function of m and t which cannot exceed mt. An example is worked out to illustrate the method of construction.

1,246 citations


"Unequal Error Protection of Memorie..." refers methods in this paper

  • ...Many types of codes have been experimented with, from simple Hamming and BCH [16] codes to more complex turbo codes and LDPC codes themselves [17], [18], [19]....

    [...]


Journal ArticleDOI
TL;DR: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed, which greatly simplifies the decoding complexity of belief propagation.
Abstract: Two simplified versions of the belief propagation algorithm for fast iterative decoding of low-density parity check codes on the additive white Gaussian noise channel are proposed. Both versions are implemented with real additions only, which greatly simplifies the decoding complexity of belief propagation in which products of probabilities have to be computed. Also, these two algorithms do not require any knowledge about the channel characteristics. Both algorithms yield a good performance-complexity trade-off and can be efficiently implemented in software as well as in hardware, with possibly quantized received values.

1,039 citations


"Unequal Error Protection of Memorie..." refers methods in this paper

  • ...5 shows the FER degradation due to P(e) = 0.0005 for a code decoded with both NMS and SCMS, and the inherent resilience of SCMS can be easily noted....

    [...]

  • ...The decoding performance of the SCMS approximation is intrinsically more resistant to hardware errors than more common approximations of the BP algorithm like the Normalized-Min-Sum (NMS) [29]....

    [...]

Proceedings ArticleDOI
D.E. Hocevar1
06 Dec 2004
TL;DR: The previously devised irregular partitioned permutation LDPC codes have a construction that easily accommodates a layered decoding and it is shown that the decoding performance is improved by a factor of two in the number of iterations required.
Abstract: We apply layered belief propagation decoding to our previously devised irregular partitioned permutation LDPC codes. These codes have a construction that easily accommodates layered decoding, and we show that the decoding performance is improved by a factor of two in the number of iterations required. We show how our previous flexible decoding architecture can be adapted to facilitate layered decoding. This results in a significant reduction in the number of memory bits and memory instances required, in the range of 45-50%. The faster decoding speed means the decoder logic can also be reduced by nearly 50% to achieve the same throughput and error performance. In total, the overall decoder architecture can be reduced by nearly 50%.

628 citations


"Unequal Error Protection of Memorie..." refers background in this paper

  • ...In layered decoders the H matrix is subdivided into sets of consecutive, non-communicating parity-check constraints, called layers: these can be decoded in sequence, with the extrinsic information being propagated from one layer to the following ones [12]....

    [...]

Frequently Asked Questions (1)
Q1. What have the authors contributed in "Unequal error protection of memories in ldpc decoders" ?

Memories are one of the most critical components of many systems: due to exposure to energetic particles, fabrication defects and aging they are subject to various kinds of permanent and transient errors. In this paper the authors present a study on how to efficiently design UEP techniques for LDPC decoder memories.