Showing papers on "BCH code published in 2013"


Proceedings ArticleDOI
12 Feb 2013
TL;DR: A stronger ECC alternative can be used in NAND flash memory to retain its reliability in the face of continuous cost reduction, and the relatively small increase in response time delay is acceptable to mainstream application users, considering the large gains in SSD capacity, reliability, and price.
Abstract: Conventional error correction codes (ECCs), such as the commonly used BCH code, have become increasingly inadequate for solid state drives (SSDs) as the capacity of NAND flash memory continues to increase and its reliability continues to degrade. It is highly desirable to deploy a much more powerful ECC, such as a low-density parity-check (LDPC) code, to significantly improve the reliability of SSDs. Although LDPC codes have had success in commercial hard disk drives, fully exploiting their error correction capability in SSDs demands unconventional fine-grained flash memory sensing, leading to increased memory read latency. To address this important but largely unexplored issue, this paper presents three techniques to mitigate the LDPC-induced response time delay so that SSDs can benefit from the strong error correction capability to the full extent. We quantitatively evaluate these techniques by carrying out trace-based SSD simulations with runtime characterization of NAND flash memory reliability and LDPC code decoding. Our study, based on intensive experiments, shows that these techniques, used in an integrated way in SSDs, can reduce the worst-case system read response time delay from over 100% down to below 20%. With our proposed techniques, a strong ECC alternative can be used in NAND flash memory to retain its reliability in the face of continuous cost reduction, and the relatively small increase in response time delay is acceptable to mainstream application users, considering the large gains in SSD capacity, reliability, and price.

223 citations
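
The latency-mitigation idea lends itself to a compact illustration. Below is a hedged Python sketch of progressive soft sensing in the spirit of the paper; read_page and ldpc_decode are hypothetical stand-ins, not interfaces from the paper.

```python
# Hedged sketch: pay the extra LDPC sensing latency only when needed by
# escalating read precision after a decoding failure. `read_page` and
# `ldpc_decode` are hypothetical stand-ins, not APIs from the paper.
def read_and_decode(page, read_page, ldpc_decode, precisions=(1, 2, 3)):
    """Try LDPC decoding at increasing soft-read precision (bits per read)."""
    for bits in precisions:
        llrs = read_page(page, precision_bits=bits)  # coarse -> fine sensing
        ok, data = ldpc_decode(llrs)
        if ok:                         # decoding converged: stop sensing early
            return data
    raise IOError("page uncorrectable even at the finest sensing precision")
```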


Proceedings ArticleDOI
01 Dec 2013
TL;DR: Analysis and simulation of the iterative HDD of tightly-braided block codes with BCH component codes for high-speed optical communication show that these codes are competitive with the best schemes based on HDD.
Abstract: Designing error-correcting codes for optical communication is challenging mainly because of the high data rates (e.g., 100 Gbps) required and the expectation of low latency, low overhead (e.g., 7% redundancy), and large coding gain (e.g., >9dB). Although soft-decision decoding (SDD) of low-density parity-check (LDPC) codes is an active area of research, the mainstay of optical transport systems is still the iterative hard-decision decoding (HDD) of generalized product codes with algebraic syndrome decoding of the component codes. This is because iterative HDD allows many simplifications and SDD of LDPC codes results in much higher implementation complexity. In this paper, we use analysis and simulation to evaluate tightly-braided block codes with BCH component codes for high-speed optical communication. Simulation of the iterative HDD shows that these codes are competitive with the best schemes based on HDD. Finally, we suggest a specific design that is compatible with the G.709 framing structure and exhibits a coding gain of >9.35 dB at 7% redundancy under iterative HDD with a latency of approximately 1 million bits.

86 citations
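
Iterative HDD itself is easy to demonstrate at toy scale. The sketch below alternates hard-decision component decoding over the rows and columns of a small product code, with Hamming(7,4) standing in for the BCH components; it illustrates iterative HDD in general, not the braided structure itself.

```python
# Iterative hard-decision decoding of a toy product code: every row and
# column of a 7x7 array is a Hamming(7,4) word (stand-in for BCH components).
from functools import reduce
from operator import xor

def hamming_correct(bits):
    """Syndrome-decode a length-7 Hamming word in place (fixes one error)."""
    s = reduce(xor, (i + 1 for i, b in enumerate(bits) if b), 0)
    if s:                                # nonzero syndrome points at the error
        bits[s - 1] ^= 1

code = [[0] * 7 for _ in range(7)]       # all-zero codeword of the product code
for r, c in [(0, 0), (1, 2), (2, 4), (5, 1)]:
    code[r][c] ^= 1                      # inject four scattered bit errors

for _ in range(2):                       # alternate row and column decoding
    for row in code:
        hamming_correct(row)
    for j in range(7):
        col = [code[i][j] for i in range(7)]
        hamming_correct(col)
        for i in range(7):
            code[i][j] = col[i]

print("residual errors:", sum(map(sum, code)))   # -> 0
```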


Proceedings ArticleDOI
23 Dec 2013
TL;DR: A decoding algorithm is proposed which employs estimates of the not-yet-processed bit channel error probabilities to perform a directed search in the code tree, thus reducing the total number of iterations.
Abstract: A novel construction of polar codes with dynamic frozen symbols is proposed. The proposed codes are subcodes of extended BCH codes, which ensure a sufficiently high minimum distance. Furthermore, a decoding algorithm is proposed which employs estimates of the not-yet-processed bit channel error probabilities to perform a directed search in the code tree, thus reducing the total number of iterations.

80 citations


Journal ArticleDOI
TL;DR: This paper presents several simple design techniques that can reduce the latency penalty caused by soft-decision ECCs; the results suggest that the latency can be reduced by up to 85.3%.
Abstract: With aggressive technology scaling and the use of multi-bit per cell storage, NAND flash memory is subject to continuous degradation of raw storage reliability and demands more and more powerful error correction codes (ECCs). This inevitable trend makes the conventional BCH code increasingly inadequate, and iterative coding solutions such as LDPC codes become natural alternatives. However, these powerful coding solutions demand soft-decision memory sensing, which results in longer on-chip memory sensing latency and memory-to-controller data transfer latency. Leveraging well-established lossless data compression theories, this paper presents several simple design techniques that can reduce the latency penalty caused by soft-decision ECCs. Their effectiveness has been well demonstrated through extensive simulations, and the results suggest that the latency can be reduced by up to 85.3%.

78 citations
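
The underlying observation, that soft-read outputs are highly compressible, can be checked with a generic compressor. The Python sketch below is illustrative only; the skewed level distribution and the use of zlib are assumptions for demonstration, not the paper's scheme.

```python
# Illustrative check: soft-read outputs are heavily skewed toward "confident"
# levels, so generic lossless compression already shrinks the
# memory-to-controller transfer substantially.
import random
import zlib

random.seed(0)
# 3-bit sensing levels for 4096 cells; most cells read as a confident 0 or 7.
levels = random.choices(range(8), weights=[45, 2, 1, 1, 1, 1, 2, 47], k=4096)
raw = bytes(levels)                     # 1 byte per cell before compression
packed = zlib.compress(raw, level=9)
print(f"{len(raw)} B -> {len(packed)} B "
      f"({100 * len(packed) / len(raw):.0f}% of raw)")
```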


Posted Content
TL;DR: In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel; for a well-designed LDPC code, the quantization that maximizes the MI also minimizes the frame error rate.
Abstract: Multiple reads of the same Flash memory cell with distinct word-line voltages provide enhanced precision for LDPC decoding. In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel. The enhanced precision from a few additional reads allows FER performance to approach that of full-precision soft information and enables an LDPC code to significantly outperform a BCH code. A constant-ratio constraint provides a significant simplification in the optimization with no noticeable loss in performance. For a well-designed LDPC code, the quantization that maximizes the mutual information also minimizes the frame error rate in our simulations. However, for an example LDPC code with a high error floor caused by small absorbing sets, the MMI quantization does not provide the lowest frame error rate. The best quantization in this case introduces more erasures than would be optimal for the channel MI in order to mitigate the absorbing sets of the poorly designed code. The paper also identifies a trade-off in LDPC code design when decoding is performed with multiple precision levels; the best code at one level of precision will typically not be the best code at a different level of precision.

78 citations
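
A miniature version of the optimization is easy to reproduce. The sketch below models a single-bit cell as ±1 plus Gaussian noise, computes the mutual information of the channel quantized by three reads at -t, 0, +t, and grid-searches t; the channel model and all numbers are illustrative, not taken from the paper.

```python
# Threshold optimization by maximizing mutual information (MI) of the
# quantized channel: three reads give four output bins per cell.
import math

def q(x):                                # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def bin_probs(mean, thresholds, sigma):
    """P(read lands in each bin) for a cell centered at `mean`."""
    edges = [-math.inf] + sorted(thresholds) + [math.inf]
    return [q((lo - mean) / sigma) - q((hi - mean) / sigma)
            for lo, hi in zip(edges, edges[1:])]

def mutual_information(thresholds, sigma):
    p0 = bin_probs(-1.0, thresholds, sigma)   # stored bit 0
    p1 = bin_probs(+1.0, thresholds, sigma)   # stored bit 1
    mi = 0.0
    for a, b in zip(p0, p1):
        py = 0.5 * (a + b)                    # equiprobable inputs
        for px in (a, b):
            if px > 0:
                mi += 0.5 * px * math.log2(px / py)
    return mi

sigma = 0.6
best = max((mutual_information([-t, 0.0, t], sigma), t)
           for t in [i / 100 for i in range(1, 150)])
print(f"best MI {best[0]:.4f} bits at t = {best[1]:.2f}")
```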


Journal ArticleDOI
TL;DR: The nonbinary quantum BCH codes presented here have better parameters than the quantum BCH codes available in the literature.
Abstract: Let q ≥ 3 be a prime power. Maximal designed distances of primitive Hermitian dual-containing q²-ary BCH codes (narrow-sense or non-narrow-sense) are determined by a careful analysis of properties of cyclotomic cosets. Non-narrow-sense BCH codes which achieve these maximal designed distances are presented, and a sequence of nested non-narrow-sense BCH codes that contain these BCH codes with maximal designed distances is constructed and their parameters are computed. Consequently, new nonbinary quantum BCH codes are derived from these non-narrow-sense BCH codes. The nonbinary quantum BCH codes presented here have better parameters than the quantum BCH codes available in the literature.

45 citations
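
The cyclotomic cosets at the heart of the analysis are simple to enumerate. A small Python sketch follows; the parameters q = 3 and n = q⁴ − 1 are an example choice, not values highlighted by the paper.

```python
# q^2-ary cyclotomic cosets modulo n: the combinatorial objects whose
# structure determines the designed distances analyzed in the paper.
def cyclotomic_cosets(base, n):
    """Partition {0, ..., n-1} into cosets {i, i*base, i*base^2, ...} mod n."""
    seen, cosets = set(), []
    for i in range(n):
        if i in seen:
            continue
        coset, j = [], i
        while j not in coset:
            coset.append(j)
            j = j * base % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

q = 3
n = q**4 - 1                     # length of a primitive q^2-ary BCH code
for c in cyclotomic_cosets(q**2, n)[:5]:
    print(c)
```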


Journal ArticleDOI
TL;DR: This paper presents a robust readable data hiding algorithm for H.264/AVC video streams that achieves greater robustness, effectively averts intra-frame distortion drift, and maintains high visual quality.

41 citations


01 Jan 2013
TL;DR: An infinite family of BCH DNA codes is constructed, derived from cyclic reverse-complement codes over the ring F2 + uF2 with u² = 0, for use in DNA computing applications.
Abstract: We construct codes over the ring F2 + uF2 with u² = 0 for use in DNA computing applications. The codes obtained satisfy the reverse-complement constraint and the GC-content constraint, and avoid the secondary structure. They are derived from cyclic reverse-complement codes over the ring F2 + uF2. We also construct an infinite family of BCH DNA codes.

34 citations
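
The two combinatorial constraints are easy to state in code. A toy Python sketch of the checks follows; the four-letter words below are illustrative, not codewords from the paper's construction.

```python
# Toy checks for the two DNA constraints: fixed GC content and closure under
# reverse complement (the ring-theoretic machinery is not reproduced here).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def gc_content_ok(word, weight):
    """GC-content constraint: exactly `weight` G/C symbols per codeword."""
    return sum(word.count(x) for x in "GC") == weight

def reverse_complement(word):
    return "".join(COMPLEMENT[x] for x in reversed(word))

code = {"ACGT", "AACG", "CGTT"}       # toy set closed under reverse complement
print(all(reverse_complement(w) in code for w in code))   # -> True
print(all(gc_content_ok(w, 2) for w in code))             # -> True
```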


Journal ArticleDOI
TL;DR: This brief presents a new area-efficient multimode encoder for long Bose-Chaudhuri-Hocquenghem codes that reduces hardware complexity by up to 97.2% and 49.1% compared with the previous Chinese-remainder-theorem-based and weighted-summation-based multimode architectures, respectively.
Abstract: This brief presents a new area-efficient multimode encoder for long Bose-Chaudhuri-Hocquenghem codes. In the proposed multimode encoding architecture, several short linear-feedback shift registers (LFSRs) are cascaded in series to achieve the same functionality as one long LFSR, and the output of a short LFSR is fed back to the input side to support multimode encoding. Whereas previous multimode architectures necessitate huge overhead due to preprocessing and postprocessing, the proposed architecture completely eliminates this overhead by exploiting an efficient transformation. Without sacrificing latency, the proposed architecture reduces hardware complexity by up to 97.2% and 49.1% compared with the previous Chinese-remainder-theorem-based and weighted-summation-based multimode architectures, respectively.

30 citations
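
For context, the single LFSR that such multimode architectures cascade can be written in a few lines. This is the textbook systematic cyclic encoder, shown on the small (7,4) code with g(x) = x³ + x + 1 rather than the long BCH codes targeted by the brief.

```python
# Textbook LFSR-based systematic cyclic encoding: the register accumulates
# x^(deg g) * m(x) mod g(x), i.e. the parity bits of the codeword.
def lfsr_encode(msg_bits, g):
    """g is the generator polynomial as a coefficient list, g[i] = coeff of x^i."""
    reg = [0] * (len(g) - 1)             # one flip-flop per parity bit
    for bit in msg_bits:                 # message enters high-order first
        feedback = bit ^ reg[-1]
        for i in range(len(reg) - 1, 0, -1):
            reg[i] = reg[i - 1] ^ (g[i] & feedback)
        reg[0] = g[0] & feedback
    return reg[::-1]                     # parity bits, high-order first

g = [1, 1, 0, 1]                         # g(x) = x^3 + x + 1
msg = [1, 0, 0, 0]
print(msg + lfsr_encode(msg, g))         # -> [1, 0, 0, 0, 1, 0, 1]
```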


Journal ArticleDOI
08 May 2013-Entropy
TL;DR: A method for blind recognition of the coding parameters of binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed, including a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels.
Abstract: A method of blind recognition of the coding parameters for binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed in this paper. We consider an intelligent communication receiver which can blindly recognize the coding parameters of the received data stream. The only knowledge is that the stream is encoded using binary BCH codes, while the coding parameters are unknown. The problem can be addressed in the context of non-cooperative communications or adaptive coding and modulation (ACM) for cognitive radio networks. The recognition processing includes two major procedures: code length estimation and generator polynomial reconstruction. A hard-decision method has been proposed in previous literature. In this paper we propose a recognition approach for soft-decision situations with binary phase-shift keying (BPSK) modulation and additive white Gaussian noise (AWGN) channels. The code length is estimated by maximizing the root information dispersion entropy function. We then search for the code roots to reconstruct the primitive and generator polynomials. By utilizing the soft output of the channel, the recognition performance is improved, and simulations show the efficiency of the proposed algorithm.

30 citations
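
The generator-polynomial reconstruction step has a well-known noiseless, hard-decision core: the GF(2) GCD of error-free codeword polynomials converges to g(x). The sketch below shows only that baseline; the paper's soft-decision procedure is considerably more elaborate.

```python
# Noiseless baseline for generator-polynomial reconstruction: the GF(2) GCD
# of error-free codeword polynomials (encoded as integer bitmasks) is g(x).
from functools import reduce

def poly_mul(a, b):                      # carry-less multiply over GF(2)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_mod(a, b):                      # remainder of a(x) / b(x) over GF(2)
    while a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def poly_gcd(a, b):
    while b:
        a, b = b, poly_mod(a, b)
    return a

g = 0b1011                               # g(x) = x^3 + x + 1 of the (7,4) code
words = [poly_mul(m, g) for m in (0b11, 0b101, 0b1101)]  # noiseless codewords
print(bin(reduce(poly_gcd, words)))      # -> 0b1011, i.e. g(x) recovered
```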


Journal ArticleDOI
TL;DR: A study is presented through simulation and experiment on the proposed forward error correction (FEC) codes for data centers using higher order pulse amplitude modulation (PAM) to highlight the tradeoffs in the adopted FEC approach for a fixed transmission link.
Abstract: A study is presented through simulation and experiment on the proposed forward error correction (FEC) codes for data centers using higher order pulse amplitude modulation (PAM). The results highlight the tradeoffs in the adopted FEC approach for a fixed transmission link. Reed-Solomon (RS) and Bose-Chaudhuri-Hocquenghem (BCH) codes are considered in data center applications due to the low latency requirement budgeted for the encoding and decoding processes. Using Monte-Carlo and semi-analytical simulations, the signal-to-noise ratio requirement of PAM-N is obtained for a 500-m fiber transmission link at 100 Gb/s. For a latency requirement under 100 ns, short-block RS codes offer a possibly low-complexity implementation with a pre-FEC bit error rate (BER) threshold at 8.8×10⁻⁵. On the other hand, BCH codes provide higher coding gain, up to 9.3 dB, with a BER threshold at 2.5×10⁻³ at the expense of potentially longer decoding delay and complexity. An experimental investigation at 25 Gb/s for a PAM-4 signal is performed to measure the actual net coding gain of the system. Results show that the performance of the RS(578, 514) code is within 1 dB of both BCH(3456, 3084) and BCH(2464, 2056) with 15% and 23% reductions in complexity, respectively.
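
The net-coding-gain bookkeeping behind such comparisons is standard and easy to reproduce. A hedged Python sketch follows; the output-BER target of 10⁻¹⁵ is an assumption for illustration, so the printed number is not intended to reproduce the paper's 9.3 dB figure.

```python
# Standard hard-decision net coding gain (NCG): gain at the output BER target
# minus the rate penalty, computed from the pre-FEC BER threshold.
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def qinv(p):
    lo, hi = 0.0, 40.0                   # bisection on the monotone Q function
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if qfunc(mid) > p else (lo, mid)
    return (lo + hi) / 2

def ncg_db(pre_fec_ber, out_ber, rate):
    """Net coding gain in dB for a hard-decision FEC threshold."""
    return (20 * math.log10(qinv(out_ber))
            - 20 * math.log10(qinv(pre_fec_ber))
            + 10 * math.log10(rate))

# e.g. a code with pre-FEC BER threshold 2.5e-3 at rate 2056/2464,
# targeting an (assumed) output BER of 1e-15:
print(f"{ncg_db(2.5e-3, 1e-15, 2056 / 2464):.2f} dB")
```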

Journal ArticleDOI
Youngjoo Lee, Hoyoung Yoo, Jaehwan Jung, Jihyuck Jo, In-Cheol Park
TL;DR: To improve the reliability of MLC NAND flash memory, this paper presents an energy-efficient high-throughput architecture for decoding concatenated-BCH (CBCH) codes, being vastly superior to the state-of-the-art architectures.
Abstract: To improve the reliability of MLC NAND flash memory, this paper presents an energy-efficient high-throughput architecture for decoding concatenated-BCH (CBCH) codes. As the data read from the flash memory is hard-decided in practical applications, the proposed CBCH decoding method is a promising solution to achieve both high error-correction capability and energy efficiency. In the proposed CBCH decoding, the number of on-chip memory accesses consuming much energy is minimized by computing and updating syndromes two-dimensionally. To achieve an area-efficient hardware realization, row and column decoders are unified into one decoder and some syndromes are computed when they are needed. In addition, the decoding throughput is enhanced remarkably by skipping redundant decoding processes. Based on the proposed CBCH decoding architecture, a prototype chip is implemented in a 65-nm CMOS process to decode the (70528, 65536) CBCH code. The proposed decoder provides a decoding throughput of 17.7 Gb/s and an energy efficiency of 2.74 pJ/bit, being vastly superior to the state-of-the-art architectures.

Journal ArticleDOI
Yonggang Fu
01 Mar 2013-Optik
TL;DR: A novel DCT based image watermarking scheme is proposed, where the watermark bits are encoded by BCH code, and then embedded into the host by modulating the relationships between the selected DCT coefficients.

Journal ArticleDOI
TL;DR: An iterative decoding algorithm between inner QC-LDPC and outer BCH codes is presented to alleviate the performance degradation in the waterfall region due to the code-concatenation rate loss, together with an outer-code selection strategy that guarantees the concatenated coding system is free from an undesired error floor.
Abstract: In this letter, we consider a concatenated BCH and QC-LDPC coding system for potential use in data protection on flash memory. Two issues are studied, and strategies to resolve them are proposed. First, in order to guarantee that the concatenated coding system is free from an undesired error floor, we propose a strategy to select the outer BCH codes according to the error patterns of the inner QC-LDPC code. We next present an iterative decoding algorithm between the inner QC-LDPC and outer BCH codes to alleviate the performance degradation in the waterfall region due to the code-concatenation rate loss. The two proposals jointly provide a feasible design for the concatenated BCH and QC-LDPC coding system. Simulations to verify the performance of the proposed design are given at the end.

Patent
24 Sep 2013
TL;DR: A method and apparatus for transmitting and receiving a Broadcast Channel (BCH) in a cellular communication system is presented, which includes repeating symbols comprising information about the BCH, code-covering the repeated symbols with codes selected from a previously given code set, subcarrier-mapping the code-covered symbols, and transmitting the subcarrier-mapped symbols in one frame by using different beams corresponding to the selected codes.
Abstract: Provided is a method and apparatus for transmitting and receiving a Broadcast Channel (BCH) in a cellular communication system. The method for transmitting a BCH in a cellular communication system includes repeating symbols comprising information about the BCH, code-covering the repeated symbols with codes selected from a previously given code set, subcarrier-mapping the code-covered symbols, and transmitting the subcarrier-mapped symbols in one frame by using different beams corresponding to the selected codes. The codes are selected based on a number of repetitions, a cell identifier, and a beam index.

Journal Article
TL;DR: The performance of the Reed-Solomon (RS) code and the BCH code is compared over a Rayleigh fading channel, with the aim of receiving data correctly in a minimum number of retransmissions.
Abstract: Data transmission over a communication channel is prone to a number of factors that can render the data unreliable or inconsistent by introducing noise, crosstalk or various other disturbances. A mechanism has to be in place that detects these anomalies in the received data and corrects them to get the data back as it was meant to be sent by the sender. Over the years a number of error detection and correction methodologies have been devised to send and receive data in a consistent and correct form. The best of these methodologies ensure that the data is received correctly by the receiver in a minimum number of retransmissions. In this paper the performance of the Reed-Solomon (RS) code and the BCH code is compared over a Rayleigh fading channel.
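
An uncoded baseline for such comparisons takes only a few lines. The numpy sketch below simulates BPSK over a flat Rayleigh fading channel and checks the empirical bit error rate against the closed form 0.5(1 − sqrt(γ/(1+γ))); it is a channel-model illustration, not the paper's coded simulation.

```python
# Uncoded BPSK over flat Rayleigh fading with coherent detection.
import numpy as np

rng = np.random.default_rng(0)
snr_db, n = 10.0, 1_000_000
g = 10 ** (snr_db / 10)                  # average SNR (linear)

bits = rng.integers(0, 2, n)
x = 1 - 2 * bits                         # BPSK: 0 -> +1, 1 -> -1
h = np.sqrt(rng.exponential(1.0, n))     # Rayleigh amplitude, E[h^2] = 1
noise = rng.normal(0, np.sqrt(1 / (2 * g)), n)
y = h * x + noise
bits_hat = (h * y < 0).astype(int)       # coherent detection
print("simulated BER:", np.mean(bits_hat != bits))
print("theoretical  :", 0.5 * (1 - np.sqrt(g / (1 + g))))
```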

Journal ArticleDOI
TL;DR: In this paper, the authors give the parameters of any evaluation code on a smooth quadric surface; for hyperbolic quadrics the approach uses product codes, while for elliptic quadrics a BCH structure is detected on the codes and the BCH bound is used.
Abstract: We give the parameters of any evaluation code on a smooth quadric surface. For hyperbolic quadrics the approach uses elementary results on product codes, and the parameters of codes on elliptic quadrics are obtained by detecting a BCH structure on these codes and using the BCH bound. The elliptic quadric is a twist of the surface P¹ × P¹, and we detect a similar BCH structure on twists of the Segre embedding of a product of any d copies of the projective line.
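
For reference, the BCH bound invoked in the abstract can be stated compactly: for a cyclic code of length n over GF(q) with gcd(n, q) = 1, if the generator polynomial has among its roots the δ − 1 consecutive powers

\[
\alpha^{b}, \alpha^{b+1}, \ldots, \alpha^{b+\delta-2}
\]

of a primitive n-th root of unity α, then the minimum distance d of the code satisfies d ≥ δ.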

Patent
10 Jun 2013
TL;DR: In this article, a BCH encoder with linear feedback shift registers (LFSRs) was proposed to correct up to a selectable maximum number of errors in the input polynomials.
Abstract: An embodiment of the invention relates to a BCH encoder formed with linear feedback shift registers (LFSRs) to form quotients and products of input polynomials with irreducible polynomials of a generator polynomial g(x) of the BCH encoder, with and without pre-multiplication by a factor x^m. The BCH encoder includes multiplexers that couple LFSR inputs and outputs to other LFSRs depending on a data input or parity generation state. The BCH encoder can correct up to a selectable maximum number of errors in the input polynomials. The BCH encoder further includes LFSR output polynomial exponentiation processes to produce partial syndromes for the input data in a syndrome generation state. In the syndrome generation state the LFSRs perform polynomial division without pre-multiplication by the factor x^m. The exponentiation processes produce partial syndromes from the resulting remainder polynomials of the input data block.

Journal ArticleDOI
TL;DR: An improved soft BCH decoding algorithm is presented to achieve both competitive hardware complexity and better error-correcting performance by dealing with least reliable bits and compensating one extra error outside the least reliable set.
Abstract: Compared with traditional hard Bose-Chaudhuri-Hocquenghem (BCH) decoders, soft BCH decoders provide better error-correcting performance but much higher hardware complexity. In this brief, an improved soft BCH decoding algorithm is presented to achieve both competitive hardware complexity and better error-correcting performance by dealing with the least reliable bits and compensating one extra error outside the least reliable set. For BCH (255, 239; 2) and (255, 231; 3) codes, our proposed soft BCH decoders can achieve up to 0.75-dB coding gain with one extra error compensation and 5% less complexity than the traditional hard BCH decoders.
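
The "least reliable bits" strategy is in the spirit of Chase decoding, which can be sketched compactly. Below, a Hamming(7,4) syndrome decoder stands in for the brief's hard BCH component decoder; the correlation metric and the parameter p = 2 are illustrative assumptions.

```python
# Chase-style soft decoding: flip patterns over the p least reliable
# positions, hard-decode each pattern, keep the most likely codeword.
from functools import reduce
from itertools import combinations
from operator import xor

def hamming_decode(bits):
    word = list(bits)
    s = reduce(xor, (i + 1 for i, b in enumerate(word) if b), 0)
    if s:
        word[s - 1] ^= 1
    return word

def chase_decode(llrs, p=2):
    hard = [1 if l < 0 else 0 for l in llrs]
    weak = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:p]
    best, best_metric = None, float("inf")
    for k in range(p + 1):
        for flips in combinations(weak, k):
            trial = hard[:]
            for i in flips:
                trial[i] ^= 1
            cand = hamming_decode(trial)
            # penalty for disagreeing with the soft input
            metric = sum(abs(l) for l, b in zip(llrs, cand)
                         if (l < 0) != (b == 1))
            if metric < best_metric:
                best, best_metric = cand, metric
    return best

# +LLR means "probably 0"; the low-reliability error at index 1 is recovered.
print(chase_decode([2.1, -0.2, 1.7, 0.3, 2.5, 1.9, 2.2]))
```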

Proceedings ArticleDOI
19 May 2013
TL;DR: Two binary versions of the reformulated inverse-free Berlekamp-Massey (riBM) algorithm are presented: one uses about one-third fewer processing elements and registers, while the other requires only half the gate count.
Abstract: Although various universal algorithms have been proposed for the Key Equation Solver (KES), the most critical part of Reed-Solomon (RS) decoding, little optimization has been done for their binary siblings, binary BCH codes. This paper presents two binary versions of the reformulated inverse-free Berlekamp-Massey (riBM) algorithm. The proposed algorithms halve the iteration cycles and arrange variables more flexibly and effectively. The first simplified algorithm uses about one-third fewer processing elements and registers, while the second requires only half the gate count of the original riBM algorithm. Folded architectures can also be adopted because both optimized algorithms are symmetric and regular.
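
For background, the textbook Berlekamp-Massey LFSR synthesis over GF(2) that the riBM variants reformulate can be written as follows. This is the classical algorithm, not the paper's reformulation; in BCH decoding the input would be the syndrome sequence.

```python
# Classical Berlekamp-Massey over GF(2): returns the shortest LFSR
# (connection polynomial and length L) generating the input bit sequence.
def berlekamp_massey(s):
    c, b = [1], [1]                      # connection polynomial and backup
    L, m = 0, 1
    for n in range(len(s)):
        d = s[n]                         # discrepancy
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d == 0:
            m += 1
        else:
            t = c[:]
            c += [0] * (len(b) + m - len(c))
            for i, bi in enumerate(b):
                c[i + m] ^= bi
            if 2 * L <= n:
                b, L, m = t, n + 1 - L, 1
            else:
                m += 1
    return c[:L + 1], L

# m-sequence from the recurrence s_j = s_{j-1} ^ s_{j-4},
# i.e. connection polynomial 1 + x + x^4:
seq = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(berlekamp_massey(seq))             # -> ([1, 1, 0, 0, 1], 4)
```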

Journal ArticleDOI
01 Nov 2013
TL;DR: A priority based ECC (PB-ECC) approach, where the more important higher order bits are protected with higher priority than the less important lower order bits since the human visual system is less sensitive to LOB errors is presented.
Abstract: With aggressive supply voltage scaling, SRAM bit-cell failures in the embedded memory of the H.264 system result in significant degradation of video quality. Error correction coding (ECC) has been widely used in embedded memories in order to correct these failures; however, the conventional ECC approach does not consider the differences in the importance of the data stored in the memory. This paper presents a priority-based ECC (PB-ECC) approach, where the more important higher order bits (HOBs) are protected with higher priority than the less important lower order bits (LOBs), since the human visual system is less sensitive to LOB errors. A mathematical analysis of the error correction capability of the PB-ECC scheme and its resulting peak signal-to-noise ratio (PSNR) degradation in the H.264 system is also presented to help designers determine the bit allocation of the higher and lower priority segments of the embedded memory. We designed and implemented three PB-ECC cases (Hamming only, BCH only, and hybrid PB-ECC) using 90 nm CMOS technology. With the supply voltage at 900 mV or below, the experimental results deliver up to 6.0 dB PSNR improvement with a smaller circuit area compared to the conventional ECC approach.


Patent
09 May 2013
TL;DR: In this paper, modulation and coding schemes are provided for improved performance of wireless communications systems to support services and applications for terminals with operational requirements at relatively low Es/N0 ratios and terminals at relatively high Es/N0 ratios.
Abstract: Modulation and coding schemes are provided for improved performance of wireless communications systems to support services and applications for terminals with operational requirements at relatively low Es/N0 ratios and terminals at relatively high Es/N0 ratios. The new modulation and coding schemes provide new BCH codes, low density parity check (LDPC) codes and interleaving methods. The modulation and coding schemes also provide new modulation signal constellations.

Proceedings ArticleDOI
Zhen Wang
29 May 2013
TL;DR: The presented error correcting algorithm takes only 1 clock cycle to finish if no error or a single-bit error occurs, and the decoding latency is much smaller than the latency for decoding BCH codes using Berlekamp Massey algorithm and Chien search.
Abstract: As technology moves into the nano-realm, traditional single-error-correcting, double-error-detecting (SEC-DED) codes are no longer sufficient for protecting memories against transient errors due to the increased multi-bit error rate. The well-known double-error-correcting BCH codes and the classical decoding method for BCH codes based on the Berlekamp-Massey algorithm and Chien search cannot be directly adopted to replace SEC-DED codes because of their much larger decoding latency. In this paper, we propose the hierarchical double-error-correcting (HDEC) code. The construction methods and the decoder architecture for the codes are described. The presented error correcting algorithm takes only 1 clock cycle to finish if no error or a single-bit error occurs. When there are multi-bit errors, the decoding latency is O(log₂ m) clock cycles for codes defined over GF(2^m). This is much smaller than the latency for decoding BCH codes using the Berlekamp-Massey algorithm and Chien search, which is O(k) clock cycles, where k is the number of information bits of the code and m ~ O(log₂ k). Synthesis results show that the proposed (79, 64) HDEC code requires only 80% of the area and consumes less than 70% of the power compared to the classical (78, 64) BCH code. For a large bit distortion rate (10⁻³ to 10⁻²), the average decoding latency for the (79, 64) HDEC code is only 36% to 60% of the latency for decoding the (78, 64) BCH code.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: Different options for updating the error correcting code currently used in space mission telecommand links are investigated and the behavior of alternative schemes, based on parallel concatenated turbo codes and soft-decision decoded BCH codes are explored.
Abstract: We investigate and compare different options for updating the error correcting code currently used in space mission telecommand links. Taking as a reference the solutions that have recently emerged as the most promising ones, based on Low-Density Parity-Check codes, we explore the behavior of alternative schemes based on parallel concatenated turbo codes and soft-decision decoded BCH codes. Our analysis shows that these further options can offer similar or even better performance.

Proceedings ArticleDOI
11 Apr 2013
TL;DR: This paper investigates the possibility of using Reed-Muller (RM) codes to address multiple bit errors in high-speed onboard aerospace applications and shows improved speed and power performance.
Abstract: With the continuous decrease in the minimum feature size and increase in chip density due to technology scaling, on-chip memories are becoming increasingly susceptible to multi-bit soft errors due to single or multiple event upsets caused by environmental factors such as cosmic rays and neutron particles. The increase in multi-bit errors could lead to a higher risk of data corruption and even catastrophic disasters in aerospace applications. Traditionally, memories have been protected from soft errors using error detection/correction codes. The traditional Hamming code with SEC-DED capability cannot address these types of errors. It is possible to use powerful non-binary BCH codes such as Reed-Solomon codes to address multiple bit errors; however, such algorithms could take several cycles of latency to complete and run at relatively slow speed. In this paper we investigate the possibility of using Reed-Muller (RM) codes to address multiple bit errors in high-speed onboard aerospace applications. Comparison with traditional techniques shows improved speed and power performance. Specifically, for its importance in applications as a 3-bit-error-correcting, self-dual code, an RM(2, 5) code of dimension 16 and length 32 is implemented in a flash-based FPGA, which is much more resistant to single event upsets (SEUs) than SRAM-based FPGAs for onboard applications.
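
The quoted parameters of RM(2, 5) follow directly from the monomial description of Reed-Muller codes, which a short sketch can verify.

```python
# Generator matrix of RM(r, m): rows are evaluations of the degree-<=r
# monomials in m Boolean variables over all 2^m points of GF(2)^m.
from itertools import combinations
import numpy as np

def reed_muller_generator(r, m):
    points = [[(p >> i) & 1 for i in range(m)] for p in range(2 ** m)]
    rows = []
    for deg in range(r + 1):
        for mono in combinations(range(m), deg):   # monomial x_{i1}...x_{ik}
            rows.append([int(all(pt[i] for i in mono)) for pt in points])
    return np.array(rows, dtype=np.uint8)

G = reed_muller_generator(2, 5)
print(G.shape)                           # -> (16, 32): dimension 16, length 32
```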

Journal ArticleDOI
TL;DR: An elastic error correction code (EECC) technique is proposed, which can progressively enhance the error correction capability of each page when performing program operations, and is able to achieve significant power consumption savings without degrading the error correction capability.
Abstract: Multi-level cell (MLC) NAND flash-based consumer electronic devices suffer from random multiple bit errors that grow exponentially with the increase of program/erase counts. Numerous error correction codes (ECCs) have been developed to detect and correct these multiple erroneous bits within a codeword, such as Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes. However, most of these existing techniques do not take into account the uneven distribution of bit errors over flash pages, and thus they cannot meet the varying correction needs of flash memories during their lifetime. Specifically, weak ECCs eventually become unable to correct some particular pages' bit errors beyond their correction capabilities, while powerful ECCs can protect each page longer yet incur unnecessary computation overhead too early. In this paper, an elastic error correction code (EECC) technique is proposed, which can progressively enhance the error correction capability of each page when performing program operations. In particular, based on a scalable coding mapping model, the EECC technique can enhance the ECC level progressively by allowing each page to employ a changeable ECC parity in its own spare out-of-band area according to its remaining lifetime as well as the hotness of the data in it. In this way, the technique not only meets the changing error correction demands of different pages, but also obtains a good reliability-performance tradeoff. Analytically and experimentally, the results demonstrate that the EECC scheme is efficient in many aspects of performance, and in particular is able to achieve significant power consumption savings without degrading the error correction capability.

Proceedings ArticleDOI
26 May 2013
TL;DR: An algorithm to choose errors among the possible error locations based on the dominant error type is developed and results show that the proposed solutions have the same performance as BCH codes with larger error correction capability but with significantly lower hardware overhead.
Abstract: Errors in MLC NAND Flash can be classified into retention errors and program interference (PI) errors. While retention errors are dominant when the data storage time is greater than 1 day, PI errors are dominant for short data storage times. Furthermore, these two types of errors have different probabilities of 0->1 or 1->0 bit flips. We utilize the characteristics of the two types of errors in the development of ECC schemes for applications that have different storage times. In both cases, we first apply Gray coding and 2-bit interleaving, so that the corresponding most significant bit (MSB) and least significant bit (LSB) sub-pages each contain only one type of dominating error (0->1 or 1->0). Next we form a product code using a linear block code along rows and even parity checks along columns to detect all the possible error locations. We develop an algorithm to choose errors among the possible error locations based on the dominant error type. Performance simulation and hardware implementation results show that the proposed solutions have the same performance as BCH codes with larger error correction capability but with significantly lower hardware overhead. For instance, for a 2KB MLC Flash used in long storage time applications, the proposed ECC scheme has 50% lower energy and 60% lower decoding latency compared to the BCH scheme.
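
The Gray coding step can be made concrete in a few lines. A toy sketch of the level-to-bits map and the MSB/LSB sub-page split follows; the example cell levels are arbitrary.

```python
# Gray mapping for a 2-bit MLC cell: adjacent-level shifts disturb only one
# of the two bits, so each sub-page sees a single dominant error type.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # level -> (MSB, LSB)

levels = [0, 1, 3, 2, 2, 1]              # example cell levels after read
msb_page = [GRAY[v][0] for v in levels]  # MSB sub-page
lsb_page = [GRAY[v][1] for v in levels]  # LSB sub-page
print(msb_page, lsb_page)

# Adjacent levels differ in exactly one bit under the Gray map:
assert all(sum(a != b for a, b in zip(GRAY[v], GRAY[v + 1])) == 1
           for v in range(3))
```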

Proceedings ArticleDOI
14 Nov 2013
TL;DR: To improve the EDAC ability and to decrease the area overhead of storing check bits, an interleaving grouping Hamming code algorithm for 32-bit data is proposed, and a greedy algorithm is developed to minimize the hardware area overhead.
Abstract: For space applications, single event upsets (SEUs) are one of the important causes of faults or even failures in a system on chip (SOC), so error detection and correction (EDAC) techniques are often adopted to protect memory cells in the SOC against SEU errors. To improve the EDAC ability and to decrease the area overhead of storing check bits, an interleaving grouping Hamming code algorithm for 32-bit data is proposed. Each 32-bit word is divided crosswise into two groups, each group adopts a single error correction and double error detection (SEC-DED) (22, 16) Hamming code, and the check bits are stored interleaved. The number of check bits is only two thirds of that of a double error correction, four error detection Bose-Chaudhuri-Hocquenghem (BCH) code. The proposed method can correct all 2-bit burst errors and can detect all burst errors of no more than 5 bits; in addition, some 3-bit to 24-bit faults can also be detected. The encoder and decoder are implemented, and a greedy algorithm is developed to minimize the hardware area overhead. A 136×32-bit register file uses this design to protect against SEU errors, and the order of magnitude of its SEU failure rate is the same as that obtained using the BCH code technique.
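
A sketch of the encoding side of such a scheme follows, under the standard Hamming-code layout (check bits at power-of-two positions plus an overall parity bit); the exact bit placement in the paper may differ.

```python
# (22,16) SEC-DED encoding, i.e. Hamming(21,16) plus an overall parity bit,
# applied to each 16-bit half of a 32-bit word; the two 22-bit groups are
# then interleaved so a 2-bit burst hits each group only once.
def hamming_secded_encode(data16):
    """data16: list of 16 bits -> 22-bit SEC-DED codeword."""
    word = [0] * 22                      # word[1..21]; word[0] = overall parity
    data_pos = [i for i in range(1, 22) if i & (i - 1)]   # non-powers-of-two
    for pos, bit in zip(data_pos, data16):
        word[pos] = bit
    for p in (1, 2, 4, 8, 16):           # check bit p covers positions with bit p set
        word[p] = sum(word[i] for i in range(1, 22) if i & p and i != p) % 2
    word[0] = sum(word[1:]) % 2          # overall parity -> double-error detection
    return word

a = hamming_secded_encode([1] * 16)
b = hamming_secded_encode([0, 1] * 8)
interleaved = [bit for pair in zip(a, b) for bit in pair]   # a0 b0 a1 b1 ...
print(len(interleaved))                  # -> 44 bits for the protected 32-bit word
```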