
Showing papers on "BCH code published in 2014"


Journal ArticleDOI
TL;DR: This work introduces new notions of evaluation of skew polynomials with derivations and the corresponding evaluation codes, and proposes several approaches to generalize Reed-Solomon and BCH codes to module skew codes.
Abstract: In this work the definition of codes as modules over skew polynomial rings of automorphism type is generalized to skew polynomial rings, whose multiplication is defined using an automorphism and a derivation. This produces a more general class of codes which, in some cases, produce better distance bounds than module skew codes constructed only with an automorphism. Extending the approach of Gabidulin codes, we introduce new notions of evaluation of skew polynomials with derivations and the corresponding evaluation codes. We propose several approaches to generalize Reed-Solomon and BCH codes to module skew codes and for two classes we show that the dual of such a Reed-Solomon type skew code is an evaluation skew code. We generalize a decoding algorithm due to Gabidulin for the rank metric and derive families of Maximum Distance Separable and Maximum Rank Distance codes.
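The non-commutative multiplication rule underlying these rings, X·a = σ(a)X + δ(a), can be made concrete with a small sketch. The following Python illustration is not the paper's construction: it works over GF(4) with the Frobenius automorphism σ(a) = a² and an inner σ-derivation δ(a) = β(a + σ(a)), where the choice β = x (encoded as 2) is arbitrary.

```python
# GF(4) = {0, 1, 2, 3}: bit patterns of polynomials over GF(2) mod x^2 + x + 1
def mul4(a, b):
    # carry-less multiply, then reduce the degree-2 term mod x^2 + x + 1
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    if (r >> 2) & 1:
        r ^= 0b111
    return r & 3

def sigma(a):
    return mul4(a, a)          # Frobenius automorphism a -> a^2

BETA = 2                       # arbitrary beta in GF(4) (here: the element x)
def delta(a):
    # inner sigma-derivation: delta(a) = beta * (a + sigma(a))  (char. 2)
    return mul4(BETA, a ^ sigma(a))

def times_X(g):
    # left-multiply a coefficient list by X using X*a = sigma(a)*X + delta(a)
    out = [0] * (len(g) + 1)
    for j, a in enumerate(g):
        out[j + 1] ^= sigma(a)
        out[j] ^= delta(a)
    return out

def skew_mul(f, g):
    # product in GF(4)[X; sigma, delta]; lists of coefficients, index = degree
    res = [0] * (len(f) + len(g) - 1)
    xg = list(g)               # invariant: xg = X^i * g
    for fi in f:
        for j, c in enumerate(xg):
            res[j] ^= mul4(fi, c)
        xg = times_X(xg)
    return res
```

The sketch can be sanity-checked against the defining identities: X·a must equal σ(a)X + δ(a), δ must satisfy the σ-derivation product rule, and the resulting ring must be associative.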

102 citations


Journal ArticleDOI
TL;DR: The word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel, allowing frame error rate (FER) performance to approach that of full-precision soft information and enabling an LDPC code to significantly outperform a BCH code.
Abstract: Multiple reads of the same Flash memory cell with distinct word-line voltages provide enhanced precision for LDPC decoding. In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel. The enhanced precision from a few additional reads allows frame error rate (FER) performance to approach that of full-precision soft information and enables an LDPC code to significantly outperform a BCH code. A constant-ratio constraint provides a significant simplification in the optimization with no noticeable loss in performance. For a well-designed LDPC code, the quantization that maximizes the mutual information also minimizes the FER in our simulations. However, for an example LDPC code with a high error floor caused by small absorbing sets, the MMI quantization does not provide the lowest frame error rate. The best quantization in this case introduces more erasures than would be optimal for the channel MI in order to mitigate the absorbing sets of the poorly designed code. The paper also identifies a trade-off in LDPC code design when decoding is performed with multiple precision levels; the best code at one level of precision will typically not be the best code at a different level of precision.
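The threshold optimization described above can be sketched as a one-dimensional grid search. The snippet below is a simplified model, assuming BPSK over AWGN and three symmetric read thresholds [-t, 0, t] (a four-level quantizer); the actual optimization in the paper covers more general threshold placements.

```python
import math

def phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def region_probs(x, t, sigma):
    # P(quantizer region | channel input x) for thresholds [-t, 0, +t]
    cuts = [-t, 0.0, t]
    cdfs = [phi((c - x) / sigma) for c in cuts]
    return [cdfs[0], cdfs[1] - cdfs[0], cdfs[2] - cdfs[1], 1.0 - cdfs[2]]

def mutual_information(t, sigma):
    # MI of the 2-input / 4-output discrete channel induced by the quantizer
    pys = [0.0] * 4
    cond = {}
    for x in (-1.0, 1.0):
        cond[x] = region_probs(x, t, sigma)
        for j in range(4):
            pys[j] += 0.5 * cond[x][j]
    mi = 0.0
    for x in (-1.0, 1.0):
        for j in range(4):
            p = cond[x][j]
            if p > 0:
                mi += 0.5 * p * math.log2(p / pys[j])
    return mi

def best_threshold(sigma):
    # grid search over threshold offsets t in (0, 3]
    ts = [i * 0.01 for i in range(1, 301)]
    return max(ts, key=lambda t: mutual_information(t, sigma))
```

As the abstract notes, the MI-maximizing thresholds also minimize FER for well-designed codes; this toy model only reproduces the MI-maximization step.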

98 citations


Journal ArticleDOI
TL;DR: A simple proof of threshold saturation that applies to a wide class of coupled scalar recursions based on constructing potential functions for both the coupled and uncoupled recursions is presented.
Abstract: Low-density parity-check (LDPC) convolutional codes (or spatially coupled codes) were recently shown to approach capacity on the binary erasure channel (BEC) and binary-input memoryless symmetric channels. The mechanism behind this spectacular performance is now called threshold saturation via spatial coupling. This new phenomenon is characterized by the belief-propagation threshold of the spatially coupled ensemble increasing to an intrinsic noise threshold defined by the uncoupled system. In this paper, we present a simple proof of threshold saturation that applies to a wide class of coupled scalar recursions. Our approach is based on constructing potential functions for both the coupled and uncoupled recursions. Our results actually show that the fixed point of the coupled recursion is essentially determined by the minimum of the uncoupled potential function and we refer to this phenomenon as Maxwell saturation. A variety of examples are considered including the density-evolution equations for: irregular LDPC codes on the BEC, irregular low-density generator-matrix codes on the BEC, a class of generalized LDPC codes with BCH component codes, the joint iterative decoding of LDPC codes on intersymbol-interference channels with erasure noise, and the compressed sensing of random vectors with independent identically distributed components.
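For regular LDPC codes on the BEC, the density-evolution equations mentioned above reduce to a scalar fixed-point recursion whose belief-propagation threshold can be found by bisection. A minimal sketch of the uncoupled system only (spatial coupling and the potential-function argument are not modeled):

```python
def de_fixed_point(eps, dv=3, dc=6, iters=2000):
    # Erasure probability of a variable-to-check message after many iterations
    # of density evolution: x <- eps * (1 - (1-x)^(dc-1))^(dv-1)
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bp_threshold(dv=3, dc=6, tol=1e-6):
    # Largest channel erasure rate eps for which DE converges to zero
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if de_fixed_point(mid, dv, dc) < 1e-9:
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3,6)-regular ensemble this recursion gives a BP threshold near 0.429; threshold saturation means the spatially coupled version instead approaches the MAP threshold of the uncoupled system.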

79 citations


Journal ArticleDOI
TL;DR: A rate-0.96 (68254, 65536) shortened Euclidean geometry low-density parity-check code and its VLSI implementation for high-throughput NAND Flash memory systems is presented and compared with a BCH (Bose-Chaudhuri-Hocquenghem) decoding circuit showing comparable error-correcting performance and throughput.
Abstract: The reliability of data stored in high-density Flash memory devices tends to decrease rapidly because of the reduced cell size and multilevel cell technology. Soft-decision error correction algorithms that use multiple-precision sensing for reading memory can solve this problem; however, they require very complex hardware for high-throughput decoding. In this paper, we present a rate-0.96 (68254, 65536) shortened Euclidean geometry low-density parity-check code and its VLSI implementation for high-throughput NAND Flash memory systems. The design employs the normalized a posteriori probability (APP)-based algorithm, serial schedule, and conditional update, which lead to simple functional units, halved decoding iterations, and low-power consumption, respectively. A pipelined-parallel architecture is adopted for high-throughput decoding, and memory-reduction techniques are employed to minimize the chip size. The proposed decoder is implemented in 0.13-μm CMOS technology, and the chip size and energy consumption of the decoder are compared with those of a BCH (Bose-Chaudhuri-Hocquenghem) decoding circuit showing comparable error-correcting performance and throughput.

62 citations


Journal ArticleDOI
TL;DR: Four quantum code constructions generating several new families of good nonbinary quantum nonprimitive non-narrow-sense Bose-Chaudhuri-Hocquenghem codes are presented in this paper.
Abstract: Four quantum code constructions generating several new families of good nonbinary quantum nonprimitive non-narrow-sense Bose-Chaudhuri-Hocquenghem codes are presented in this paper. The first two are based on the Calderbank-Shor-Steane (CSS) construction derived from two nonprimitive Bose-Chaudhuri-Hocquenghem codes. The third is based on Steane's enlargement of nonbinary CSS codes applied to suitable subfamilies of nonprimitive non-narrow-sense Bose-Chaudhuri-Hocquenghem codes. The fourth construction is derived from suitable subfamilies of Hermitian dual-containing nonprimitive non-narrow-sense Bose-Chaudhuri-Hocquenghem codes. These constructions generate new families of quantum codes whose parameters are better than the ones available in the literature.

57 citations


Journal ArticleDOI
TL;DR: The proposed error-control systems achieve good tradeoffs between error performance and complexity compared to the traditional schemes and are also very favorable for implementation.
Abstract: In this work, we consider high-rate error-control systems for storage devices using multi-level per cell (MLC) NAND flash memories. Aiming at a strong error-correcting capability, we propose error-control systems using block-wise parallel/serial concatenations of short Bose-Chaudhuri-Hocquenghem (BCH) codes with two iterative decoding strategies, namely, iterative hard-decision decoding (IHDD) and iterative reliability-based decoding (IRBD). It will be shown that a simple but very efficient IRBD is possible by taking advantage of a unique feature of the block-wise concatenation. For tractable performance analysis and design of IHDD and IRBD at very low error rates, we derive semi-analytic approaches. The proposed error-control systems are compared, in terms of page error rates, with systems based on well-known coding schemes such as a product code, multiple BCH codes, a single long BCH code, and low-density parity-check codes. The comparison confirms our claim: the proposed error-control systems achieve good tradeoffs between error performance and complexity compared to the traditional schemes and are also very favorable for implementation.

51 citations


Journal ArticleDOI
TL;DR: This work discusses construction of EAQECCs from Hermitian non-dual containing primitive Bose–Chaudhuri–Hocquenghem (BCH) codes over the Galois field GF(4) with careful analysis of the cyclotomic cosets contained in the defining set of a given BCH code.
Abstract: The entanglement-assisted (EA) formalism generalizes the standard stabilizer formalism. All quaternary linear codes can be transformed into entanglement-assisted quantum error correcting codes (EAQECCs) under this formalism. In this work, we discuss the construction of EAQECCs from Hermitian non-dual-containing primitive Bose–Chaudhuri–Hocquenghem (BCH) codes over the Galois field GF(4). By a careful analysis of the cyclotomic cosets contained in the defining set of a given BCH code, we can determine the optimal number of ebits needed to construct an EAQECC from this BCH code, rather than calculating it from the parity check matrix, and derive a formula for the dimension of this BCH code. These results make it possible to specify the parameters of the obtained EAQECCs in terms of the design parameters of the BCH codes.
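The cyclotomic-coset analysis at the heart of this construction is easy to reproduce. A small illustrative helper (the paper works with q-cyclotomic cosets modulo the code length n, with gcd(q, n) = 1):

```python
def cyclotomic_cosets(q, n):
    # Partition {0, ..., n-1} into q-cyclotomic cosets mod n:
    # the coset of s is {s, s*q, s*q^2, ...} taken mod n.
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets
```

The defining set of a BCH code is a union of such cosets; its size gives the code's redundancy, which is how the dimension formula mentioned in the abstract arises.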

50 citations


Book ChapterDOI
03 Mar 2014
TL;DR: In this article, a direct construction based on shortened BCH codes is proposed, allowing recursive MDS matrices to be constructed efficiently for any given set of parameters; however, not all recursive MDS matrices can be obtained from BCH codes, and the algorithm is not always guaranteed to find the best matrices.
Abstract: MDS matrices allow optimal linear diffusion layers to be built in block ciphers. However, MDS matrices cannot be sparse and usually have a large description, inducing costly software/hardware implementations. Recursive MDS matrices solve this problem by focusing on MDS matrices that can be computed as a power of a simple companion matrix, thus having a compact description suitable even for constrained environments. However, up to now, finding recursive MDS matrices required an exhaustive search over families of companion matrices, thus limiting the size of MDS matrices one could look for. In this article we propose a new direct construction based on shortened BCH codes, which efficiently constructs such matrices for any set of parameters. Unfortunately, not all recursive MDS matrices can be obtained from BCH codes, and our algorithm is not always guaranteed to find the best matrices for a given set of parameters.

48 citations


Journal ArticleDOI
TL;DR: Several new families of multi-memory classical convolutional Bose-Chaudhuri-Hocquenghem codes as well as families of unit-memory quantum convolutional codes are constructed in this paper.
Abstract: Several new families of multi-memory classical convolutional Bose-Chaudhuri-Hocquenghem codes as well as families of unit-memory quantum convolutional codes are constructed in this paper. Our unit-memory classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound. The constructions presented in this paper are performed algebraically and not by computational search.

37 citations


Journal ArticleDOI
TL;DR: This work presents a configurable encoding and decoding architecture for binary Bose–Chaudhuri–Hocquenghem (BCH) codes that supports a wide range of code rates and facilitates a trade-off between throughput and space complexity.
Abstract: Error correction coding (ECC) has become one of the most important tasks of flash memory controllers. The gate count of the ECC unit is taking up a significant share of the overall logic. Scaling the ECC strength to the growing error correction requirements has become increasingly difficult when considering cost and area limitations. This work presents a configurable encoding and decoding architecture for binary Bose–Chaudhuri–Hocquenghem (BCH) codes. The proposed concept supports a wide range of code rates and facilitates a trade-off between throughput and space complexity. Commonly, hardware implementations for BCH decoding perform many Galois field multiplications in parallel. We propose a new decoding technique that uses different parallelization degrees depending on the actual number of errors. This approach significantly reduces the number of required multipliers, where the average number of decoding cycles is even smaller than with a fully parallel implementation.
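The Galois field multiplications that dominate such decoders reduce to a carry-less (XOR) multiply followed by reduction modulo the field's primitive polynomial. A bitwise sketch over GF(16) with primitive polynomial x⁴ + x + 1, chosen only for readability; a flash-oriented BCH decoder typically works in GF(2¹³) or a larger field:

```python
def gf_mul(a, b, m, prim):
    # Multiply two elements of GF(2^m), represented as integers whose bits
    # are polynomial coefficients; 'prim' is the primitive polynomial.
    res = 0
    for i in range(m):          # carry-less (XOR) schoolbook multiply
        if (b >> i) & 1:
            res ^= a << i
    for i in range(2 * m - 2, m - 1, -1):   # reduce degree >= m terms
        if (res >> i) & 1:
            res ^= prim << (i - m)
    return res
```

In GF(16) the element 2 represents the primitive root α, so α·α³ = α⁴ = α + 1, i.e. 3, and α has multiplicative order 15.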

31 citations


Journal ArticleDOI
TL;DR: The weight distributions of C2 and C3 are obtained for the case of m gcd ?

Journal ArticleDOI
TL;DR: This paper presents a high-throughput and low-complexity BCH decoder for NAND flash memory applications, which is developed to achieve a high data rate demanded in the recent serial interface standards.
Abstract: This paper presents a high-throughput and low-complexity BCH decoder for NAND flash memory applications, which is developed to achieve a high data rate demanded in the recent serial interface standards. To reduce the decoding latency, a data sequence read from a flash memory channel is re-encoded by using the encoder that is idle at that time. In addition, several optimizing methods are proposed to relax the hardware complexity of a massive-parallel BCH decoder and increase the operating frequency. In a 130-nm CMOS process, a (8640, 8192, 32) BCH decoder designed as a prototype provides a decoding throughput of 6.4 Gb/s while occupying an area of 0.85 mm2.

Journal ArticleDOI
TL;DR: An improved robust data hiding algorithm based on a BCH syndrome code (BCH code) technique and a distortion-drift-avoidance technique is presented; it achieves greater robustness, effectively averts intra-frame distortion drift, and maintains high visual quality.
Abstract: This paper presents an improved robust data hiding algorithm based on a BCH syndrome code (BCH code) technique and a technique for avoiding distortion drift. The BCH code technique, which can correct the error bits caused by network transmission, packet loss, video-processing operations, various attacks, etc., encodes the embedded data with a BCH code before data hiding. To avoid distortion drift, we exploit several paired coefficients of a 4 × 4 discrete cosine transform (DCT) block to accumulate the embedding-induced distortion, and the directions of intra-frame prediction are utilized to avert the drift. It is proved analytically and shown experimentally that the new data hiding algorithm achieves greater robustness, effectively averts intra-frame distortion drift, and maintains high visual quality.
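Syndrome-based embedding of the kind used here can be illustrated with a Hamming(7,4) code standing in for the paper's BCH code: three message bits are embedded into seven cover bits by changing at most one bit, and extraction is a syndrome computation. This is a generic matrix-embedding sketch, not the paper's exact scheme.

```python
def syndrome(bits):
    # Hamming(7,4) syndrome: column i of the parity-check matrix is the
    # binary expansion of i+1, so the syndrome is the XOR of set positions.
    s = 0
    for i, b in enumerate(bits):
        if b:
            s ^= i + 1
    return s

def embed(cover, msg3):
    # Change at most one of the 7 cover bits so their syndrome equals msg3.
    want = syndrome(cover) ^ msg3
    stego = list(cover)
    if want:
        stego[want - 1] ^= 1
    return stego

def extract(stego):
    return syndrome(stego)
```

This is the classic trade-off matrix embedding offers steganography: 3 bits of payload per 7 cover bits at a cost of at most one modification.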

Journal ArticleDOI
TL;DR: The experimental results demonstrate that this algorithm combines the advantages and removes the disadvantages of these two transform techniques and is robust against a number of signal processing attacks without significant degradation of the image quality.
Abstract: In this paper, the effects of different error correction codes on the robustness and the image quality are investigated. Three error correcting codes, the Hamming, Bose-Chaudhuri-Hocquenghem (BCH), and Reed-Solomon codes, are considered for encoding the watermark. The watermark embedding method is based on two popular transform techniques: the discrete wavelet transform (DWT) and singular value decomposition (SVD). The proposed algorithm is robust against a number of signal processing attacks without significant degradation of the image quality. The experimental results demonstrate that this algorithm combines the advantages and removes the disadvantages of these two transforms. Of the three error correcting codes tested, Reed-Solomon shows the best performance. A detailed analysis of the implementation results is given.

Posted Content
TL;DR: Stabilizer codes obtained via the CSS code construction and Steane's enlargement of subfield-subcodes and matrix-product codes coming from generalized Reed-Muller, hyperbolic and affine variety codes are studied.
Abstract: Stabilizer codes obtained via the CSS code construction and Steane's enlargement of subfield-subcodes and matrix-product codes coming from generalized Reed-Muller, hyperbolic and affine variety codes are studied. Stabilizer codes with good quantum parameters are supplied; in particular, some binary codes of lengths 127 and 128 improve the parameters of the codes in this http URL. Moreover, non-binary codes are presented either with parameters better than or equal to those of the quantum codes obtained from BCH codes by La Guardia, or with lengths that cannot be reached by them.

Proceedings ArticleDOI
27 Mar 2014
TL;DR: This paper proposes a steganography scheme based on LDPC codes, inspired by the adaptive approach to computing the detectability map, and evaluates the method's performance with a steganalysis algorithm.
Abstract: Steganography is the art of secret communication. Since the advent of modern steganography in the 2000s, many approaches based on error correcting codes (Hamming, BCH, RS, STC ...) have been proposed to reduce the number of changes to the cover medium while embedding the maximum number of bits. The works of L. Diop et al. [1], inspired by those of T. Filler [2], have shown that LDPC codes are good candidates for minimizing the impact of insertion. This work continues the use of LDPC codes in steganography. We propose in this paper a steganography scheme based on these codes, inspired by the adaptive approach to computing the detectability map. We evaluated the performance of our method by applying a steganalysis algorithm.

Proceedings ArticleDOI
04 Dec 2014
TL;DR: Numeric results show that the proposed algorithm enables near-ML decoding of polar codes with BCH kernel.
Abstract: The problem of efficient soft-decision decoding of polar codes with any binary kernel is considered. The proposed approach represents a generalization of the sequential decoding algorithm introduced recently for the case of polar codes with the Arikan kernel. Numeric results show that the proposed algorithm enables near-ML decoding of polar codes with a BCH kernel.

Proceedings ArticleDOI
06 Apr 2014
TL;DR: The results show that codes with a low code rate are optimal for long-range communications, while it is better to use less coding redundancy when the transmission distance is short; the transmission range of a low-power communication device can thereby be increased.
Abstract: Error correcting coding is a well-known technique for reducing the signal-to-noise ratio (SNR) required to attain a given bit error rate. Nevertheless, this reduction comes at the cost of extra energy consumption introduced by the baseband processing required for encoding and decoding the data. No complete analysis of the trade-off between coding gain and baseband consumption of communications over wireless channels has been reported so far. In this paper, we study the energy consumption of BCH codes with various code rates over AWGN and Rayleigh fading channels. Our results show that codes with a low code rate are optimal for long-range communications, while it is better to use less coding redundancy when the transmission distance is short. Our results also show that the transmission range of a low-power communication device can be increased by up to 25% for AWGN channels and up to 300% for Rayleigh fading channels by using an optimized BCH code.
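The error-rate side of this trade-off can be approximated with a bounded-distance-decoding model: a t-error-correcting code of length n fails when more than t of the n bits are flipped. The sketch below computes that block failure probability from the raw bit error rate; it deliberately ignores the baseband decoding energy that the paper's full analysis charges against the coding gain.

```python
from math import comb

def block_error_prob(n, t, p):
    # P(more than t bit errors among n bits), each bit flipped i.i.d.
    # with probability p: the failure event of bounded-distance decoding.
    ok = sum(comb(n, e) * p**e * (1 - p)**(n - e) for e in range(t + 1))
    return 1.0 - ok
```

At a fixed raw bit error rate a larger t always helps; the paper's point is that once transmit energy is split between information and parity bits, and decoder energy is included, the optimal code rate depends on the transmission distance.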

Journal ArticleDOI
TL;DR: Several families of good nonbinary asymmetric quantum codes are constructed, derived from the Calderbank-Shor-Steane (CSS) and Hermitian constructions applied to two classical nested Bose-Chaudhuri-Hocquenghem (BCH) codes, one of which is additionally Euclidean (Hermitian) dual-containing.
Abstract: Several families of good nonbinary asymmetric quantum codes are constructed in this paper. These new quantum codes are derived from the Calderbank-Shor-Steane (CSS) construction as well as the Hermitian construction applied respectively to two classical nested Bose-Chaudhuri-Hocquenghem (BCH) codes, one of which is additionally Euclidean (Hermitian) dual-containing. The asymmetric codes constructed here have parameters better than the ones available in the literature.

Patent
09 Jun 2014
TL;DR: In this article, the results of de-mapping QAM constellations recovered from demodulating the COFDM carrier waves are de-interleaved, and the constituent LDPCCC codewords are decoded to recover constituent BCH codewords of the multilevel BCH coding.
Abstract: In transmitter apparatus for a digital television (DTV) broadcasting system, internet-protocol (IP) packets of digital television information are subjected to multilevel concatenated Bose-Chaudhuri-Hocquenghem (BCH) coding and low-density parity-check convolutional coding (LDPCCC) before being bit-interleaved and mapped to quadrature-amplitude-modulation (QAM) constellations. The QAM constellations are used in coded orthogonal frequency-division modulation (COFDM) of plural carrier waves up-converted to a radio-frequency broadcast television channel. In receiver apparatus for the DTV broadcasting system the results of de-mapping QAM constellations recovered from demodulating the COFDM carrier waves are de-interleaved, and the constituent LDPCCC codewords are decoded to recover constituent BCH codewords of the multilevel BCH coding. The constituent BCH codewords are decoded to correct remnant bit errors in them. Then, IP packets of digital television information are reconstituted from the systematic data bits in those BCH codewords.

Proceedings ArticleDOI
20 Feb 2014
TL;DR: A universal BCH encoder and decoder that can support multiple error-correction capabilities is presented, and a novel encoding architecture and on-demand syndrome calculation technique are proposed to reduce both hardware complexity and power consumption.
Abstract: This paper presents a universal BCH encoder and decoder that can support multiple error-correction capabilities. A novel encoding architecture and an on-demand syndrome calculation technique are proposed to reduce both hardware complexity and power consumption. Based on the proposed methods, a 32-parallel universal encoder and decoder are designed for BCH (8192+14t, 8192, t) codes, where the error-correction capability t is configurable to 8, 11, 16, 24, 32, and 64. The prototype chip achieves a throughput of 7.3 Gb/s and occupies 2.24 mm2 in 0.13μm CMOS technology.
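The parameter family is internally consistent: each unit of correction capability t costs m = 14 parity bits over GF(2^14), and even the longest configuration must fit inside the mother BCH code of length 2^14 - 1. A quick check of the (8192+14t, 8192, t) family:

```python
def universal_bch_lengths(k=8192, m=14, ts=(8, 11, 16, 24, 32, 64)):
    # n = k + m*t: k information bits plus m parity bits per correctable
    # error; every n must fit in the mother code of length 2^m - 1.
    lengths = {t: k + m * t for t in ts}
    assert all(n <= 2**m - 1 for n in lengths.values())
    return lengths
```

The longest configuration, t = 64, gives n = 9088, comfortably below 16383, which is why a single GF(2^14) datapath can serve all six settings.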

Book ChapterDOI
01 Jan 2014
TL;DR: A method for preparing a PUF key during fuzzy extractor implementation is described; the experiments tested all possible combinations of input message length and number of correctable errors for a BCH code with codeword length N, the length of the PUF responses.
Abstract: The extraction of a stable signal from noisy data is very useful in applications that aim to combine it with a cryptographic key. An approach based on an error correcting code was proposed by Dodis et al., which is known as a fuzzy extractor. Physical unclonable functions (PUFs) generate device-specific data streams, although PUFs are noisy functions. In this paper, we describe a method for preparing a PUF key during fuzzy extractor implementation. In our experiments, all possible combinations of input message length and number of correctable errors were tested for a BCH code with codeword length N, which was the length of the PUF responses.
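The code-offset fuzzy extractor this work builds on can be sketched with a repetition code standing in for the BCH code: the helper data is the XOR of the PUF response with a random codeword, and a noisy re-reading is corrected by decoding. All names and parameters below are illustrative, not the paper's.

```python
import hashlib
import secrets

def rep_encode(bits, r=5):
    # r-fold repetition code: corrects up to floor(r/2) errors per symbol
    return [b for b in bits for _ in range(r)]

def rep_decode(bits, r=5):
    # majority vote per block of r bits
    return [1 if sum(bits[i*r:(i+1)*r]) > r // 2 else 0
            for i in range(len(bits) // r)]

def gen(puf_response, r=5):
    # Enrollment: helper = response XOR random codeword (code-offset)
    k = len(puf_response) // r
    key_bits = [secrets.randbits(1) for _ in range(k)]
    helper = [p ^ c for p, c in zip(puf_response, rep_encode(key_bits, r))]
    key = hashlib.sha256(bytes(key_bits)).hexdigest()
    return key, helper

def rep(noisy_response, helper, r=5):
    # Reproduction: decode (noisy response XOR helper) to recover the key
    word = [p ^ h for p, h in zip(noisy_response, helper)]
    return hashlib.sha256(bytes(rep_decode(word, r))).hexdigest()
```

A real implementation would use the BCH code discussed in the chapter, whose length N matches the PUF response length; the repetition code here only keeps the sketch short.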

Journal ArticleDOI
TL;DR: Simple block error correction codes such as cyclic and Bose-Chaudhuri-Hocquenghem (BCH) codes are considered for use in IEEE 802.15.4 RF transceiver based sensor nodes; the BCH code proves to be the most energy-efficient of the codes considered.
Abstract: The multipath nature of the wireless environment does not provide reliable links for robust communication in wireless sensor networks (WSNs). These unreliable links increase the error level considerably and therefore reduce battery life. Hence, there is a need for an energy-efficient forward error correction code that avoids the more energy-consuming Automatic Repeat reQuest (ARQ) scheme used in WSNs to improve link reliability. In this paper, we consider simple block error correction codes, such as cyclic and Bose-Chaudhuri-Hocquenghem (BCH) codes, for use in IEEE 802.15.4 RF transceiver based sensor nodes. Simulations are performed to measure network parameters such as bit error rate and energy spent per bit under a Rayleigh fading channel. The BCH code with a code rate of 0.8 provides a coding gain of 1.6 dB compared with the cyclic and ARQ schemes and proves to be the most energy-efficient of the codes considered.

Journal ArticleDOI
TL;DR: In this paper, a new bound on the minimum distance of q-ary cyclic codes is proposed based on the description by another cyclic code with small minimum distance, and the connection to the BCH bound and the Hartmann-Tzeng (HT) bound is formulated explicitly.
Abstract: A new bound on the minimum distance of q-ary cyclic codes is proposed. It is based on the description by another cyclic code with small minimum distance. The connection to the BCH bound and the Hartmann–Tzeng (HT) bound is formulated explicitly. We show that for many cases our approach improves the HT bound. Furthermore, we refine our bound for several families of cyclic codes. We define syndromes and formulate a Key Equation that allows an efficient decoding up to our bound with the Extended Euclidean Algorithm. It turns out that lowest-code-rate cyclic codes with small minimum distances are useful for our approach. Therefore, we give a sufficient condition for binary cyclic codes of arbitrary length to have minimum distance two or three and lowest code-rate.
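As a baseline, the classical BCH bound referenced above reads off the longest run of consecutive exponents (cyclically) in the code's defining set; the paper's contribution is a bound that often improves on this and on Hartmann-Tzeng, which this sketch does not implement.

```python
def bch_bound(defining_set, n):
    # BCH bound: if the defining set of a length-n cyclic code contains
    # d-1 consecutive exponents (mod n), the minimum distance is >= d.
    ds = set(defining_set)
    best = 0
    for start in range(n):
        run = 0
        while (start + run) % n in ds and run < n:
            run += 1
        best = max(best, run)
    return best + 1
```

For the binary BCH(15,7) code, whose defining set is the union of the 2-cyclotomic cosets of 1 and 3, the run 1,2,3,4 certifies minimum distance at least 5, which here is tight.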

Patent
30 Sep 2014
TL;DR: A processor determines whether a codeword is independently decodable using a BCH decoder and, if so, decodes it with the BCH decoder; otherwise the codeword is decoded using a second decoder together with the BCH decoder.
Abstract: A memory device may include memory components to store data. The memory device may also include a processor that may decode a codeword associated with the data. The processor may receive the codeword and determine whether the codeword is independently decodable using a BCH decoder. The processor may then decode the codeword using the BCH decoder when the codeword is determined to be independently decodable using the BCH decoder. Otherwise, the processor may decode the codeword using a second decoder and the BCH decoder when the codeword is not determined to be independently decodable using the BCH decoder.

Journal ArticleDOI
TL;DR: It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance.
Abstract: A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

Journal ArticleDOI
TL;DR: A class of Generalized Low-Density Parity-Check (GLDPC) codes is designed for data transmission over a Partial-Band Jamming (PBJ) environment and employs A Posteriori Probability (APP) fast BCH transform for decoding the BCH check nodes at each decoding iteration.

Journal ArticleDOI
TL;DR: This paper presents an IEEE 802.15.6 compliant soft-decision BCH decoder for energy-constrained wireless body area networks that provides a 1 dB coding gain compared to the hard-dec decision decoder (HDD), and an early termination strategy is proposed to reduce the number of redundant test patterns.
Abstract: This paper presents an IEEE 802.15.6 compliant soft-decision BCH decoder for energy-constrained wireless body area networks. The proposed soft-decision decoder (SDD) provides a 1 dB coding gain compared to the hard-decision decoder (HDD). The improvement in BER performance can translate into power savings at the transmitter. The energy dissipation and area of the soft-decision BCH decoder is minimized by jointly considering the algorithm, architecture, and circuit parameters. An early termination strategy is proposed to reduce the number of redundant test patterns. Probabilistic sorting is proposed to determine the test patterns, and its hardware complexity is only 54.7% of the conventional sorting method. The HDD kernel is implemented by adopting the Peterson rule, reducing the area by 44.2%. A pass-transistor logic based Chien search circuit consumes 33.3% less energy compared to the standard-cell based implementation. The chip is designed to operate at the minimum energy point of 0.29 V, yielding an energy reduction of 94% compared to a direct-mapped SDD at SNR=5 dB. Fabricated in 90 nm CMOS, the chip dissipates 5.4 μW at 500 kHz, achieving a throughput of 6.38 Mbps.
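The test-pattern decoding that the early-termination and sorting optimizations target is Chase-style: flip subsets of the least-reliable bits, hard-decode each pattern, and keep the candidate with the best correlation metric. A compact sketch using a Hamming(7,4) kernel as a stand-in for the standard's BCH code:

```python
from itertools import product

def hamming_decode(bits):
    # Hard-decision Hamming(7,4): the syndrome indexes the bit to flip
    s = 0
    for i, b in enumerate(bits):
        if b:
            s ^= i + 1
    out = list(bits)
    if s:
        out[s - 1] ^= 1
    return out

def chase_decode(llrs, p=2):
    # llrs: 7 channel LLRs; sign gives the hard bit, magnitude the reliability
    hard = [1 if l < 0 else 0 for l in llrs]
    weak = sorted(range(7), key=lambda i: abs(llrs[i]))[:p]  # least reliable
    best, best_metric = None, None
    for flips in product((0, 1), repeat=p):       # 2^p test patterns
        trial = list(hard)
        for idx, f in zip(weak, flips):
            trial[idx] ^= f
        cand = hamming_decode(trial)
        # correlation metric: how well the candidate agrees with the LLRs
        metric = sum((1 - 2 * b) * l for b, l in zip(cand, llrs))
        if best_metric is None or metric > best_metric:
            best, best_metric = cand, metric
    return best
```

The paper's optimizations map directly onto this loop: probabilistic sorting cheapens the `weak` selection, and early termination stops the pattern loop once a sufficiently good candidate is found.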

Proceedings ArticleDOI
18 May 2014
TL;DR: The key elements to implement a BCH code able to correct 2 errors in a page of 256 data bits in no more than 10ns with 180nm-CMOS logic, and with low energy consumption are shown.
Abstract: Emerging Memories (EMs) could benefit from Error Correcting Codes (ECCs) able to correct a few errors in just a few nanoseconds; for example to cope with failure mechanisms that could arise in new storage physics. Fast ECCs are also desired for eXecuted-in-Place (XiP) and DRAM applications. This paper shows the key elements to implement a BCH code able to correct 2 errors in a page of 256 data bits in no more than 10ns with 180nm-CMOS logic, and with low energy consumption. The decoding time can be further reduced to a few ns using smaller gate-length logic. Moreover, the proposed solution is soundly rooted in BCH theory, and can be applied to any user data size. Basically the ideas are to avoid the division in the computation of the coefficients of the Error Locator Polynomial (ELP) of the BCH code, to optimize the implementation of the multiplication in the Galois Fields (GF) and to fully implement the decoder in a parallel combinatorial architecture. Such a BCH code has been embedded in a 45nm 1Gbit Phase Change Memory (PCM) device.

Proceedings ArticleDOI
07 Jul 2014
TL;DR: A modification of double error correcting (DEC) BCH codes that allows for fast correction of arbitrary 1-bit and 2-bit errors, as well as 3-bit errors comprising adjacent 2-bit errors in certain bit positions, is proposed.
Abstract: In this paper we propose a modification of double error correcting (DEC) BCH codes that allows for fast correction of arbitrary 1-bit and 2-bit errors, as well as 3-bit errors comprising adjacent 2-bit errors in certain bit positions. The proposed code has the same number of check bits as a double error correcting, triple error detecting (DEC-TED) BCH code with code distance 6. The proposed code is particularly useful for multi-level memories capable of storing more than one bit of data per memory cell. A method for decoding and a parallel implementation of the codes are described. Experimentally, the decoding latency and area consumption are compared with parallel implementations of Hsiao SEC-DED codes, DEC BCH codes, and TEC BCH codes for data bit sizes ranging from 8 to 1024 bits commonly used in memory applications.