
Showing papers on "BCH code published in 2012"


Journal ArticleDOI
TL;DR: This paper proposes product codes that use Reed-Solomon codes along rows and Hamming codes along columns with reduced hardware overhead, providing an easy mechanism to increase the lifetime of Flash memory devices.
Abstract: Error control coding (ECC) is essential for correcting soft errors in Flash memories. In this paper we propose the use of product-code-based schemes to support higher error correction capability. Specifically, we propose product codes which use Reed-Solomon (RS) codes along rows and Hamming codes along columns and have reduced hardware overhead. Simulation results show that product codes can achieve better performance compared to both Bose-Chaudhuri-Hocquenghem codes and plain RS codes, with less area and lower latency. We also propose a flexible product-code-based ECC scheme that migrates to a stronger ECC scheme when the number of errors due to increased program/erase cycles grows. While these schemes have slightly larger latency and require additional parity bit storage, they provide an easy mechanism to increase the lifetime of Flash memory devices.

89 citations
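
The product-code layout proposed above is easy to picture in code. The sketch below is a rough illustration rather than the paper's construction: each column is encoded with a systematic Hamming(7,4) code as in the paper, while an even-parity column stands in for the Reed-Solomon row code so the example stays self-contained.

```python
# Product-code sketch: columns get Hamming(7,4), rows get a (placeholder)
# parity code. In the paper the row code is Reed-Solomon.
import numpy as np

# Systematic Hamming(7,4) generator matrix (one common choice).
G_HAMMING = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def hamming_encode(nibble):
    """Encode 4 bits into a 7-bit Hamming codeword (mod-2 arithmetic)."""
    return (np.asarray(nibble) @ G_HAMMING) % 2

def product_encode(data):
    """data: 4xK bit array -> 7x(K+1) array; every row and column is a codeword."""
    data = np.asarray(data)
    # Column code: Hamming(7,4) turns the 4xK data block into a 7xK block.
    cols = np.column_stack([hamming_encode(data[:, j]) for j in range(data.shape[1])])
    # Row code: even parity per row (stand-in for the RS row code).
    parity = cols.sum(axis=1) % 2
    return np.column_stack([cols, parity])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coded = product_encode(rng.integers(0, 2, size=(4, 8)))
    print(coded.shape)  # (7, 9)
```

Decoding would alternate between row and column decoders, which is what gives product codes their error-correction strength at modest hardware cost.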


Proceedings ArticleDOI
12 Mar 2012
TL;DR: To address the observed error behavior, a new error-correcting scheme for TLC flash is given and is compared with BCH and LDPC codes.
Abstract: Flash memory has become the storage medium of choice in portable consumer electronic applications, and high-performance solid state drives (SSDs) are also being introduced into mobile computing, enterprise storage, data warehousing, and data-intensive computing systems. On the other hand, flash memory technologies present major challenges in the areas of device reliability, endurance, and energy efficiency. In this work, the error behavior of TLC flash is studied through an empirical database of errors which were induced by write, read, and erase operations. Based on this database, error characterization at the block and page level is given. To address the observed error behavior, a new error-correcting scheme for TLC flash is given and is compared with BCH and LDPC codes.

78 citations


Journal ArticleDOI
TL;DR: Experiments conducted on three fingerprint databases demonstrate that the proposed binary feature generation method is effective and promising, and that the biometric cryptosystem constructed from the feature outperforms most existing biometric cryptosystems in terms of ZeroFAR and security strength.
Abstract: With the emergence and popularity of identity verification by biometrics, biometric systems that can assure security and privacy have received more and more attention from both the research and industry communities. In the field of secure biometric authentication, one branch combines biometrics and cryptography. Among the solutions in this branch, the fuzzy commitment scheme is a pioneering and effective security primitive. In this paper, we propose a novel binary length-fixed feature generation method for fingerprints. The alignment procedure, which is considered a difficult task in the encrypted domain, is avoided in the proposed method due to the employment of minutiae triplets. Using the generated binary feature as input and building on the fuzzy commitment scheme, we construct biometric cryptosystems by combining various error correction codes, including a BCH code, a concatenated code of BCH and Reed-Solomon codes, and an LDPC code. Experiments conducted on three fingerprint databases, one in-house and two public-domain, demonstrate that the proposed binary feature generation method is effective and promising, and that the biometric cryptosystem constructed from the feature outperforms most existing biometric cryptosystems in terms of ZeroFAR and security strength. For instance, on the whole FVC2002 DB2, a 4.58% ZeroFAR is achieved by the proposed biometric cryptosystem with a security strength of 48 bits.

74 citations
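
The fuzzy commitment primitive the abstract builds on can be illustrated in a few lines. In the sketch below a 5x repetition code stands in for the BCH/RS/LDPC codes evaluated in the paper, and SHA-256 serves as the binding hash; both are illustrative choices, not the paper's.

```python
# Fuzzy commitment sketch (Juels-Wattenberg): the feature w hides a random
# codeword c as (w XOR c, hash(msg)); a second reading w' close to w
# recovers msg through the code's error correction.
import hashlib, secrets

REP = 5          # repetition factor; corrects up to 2 bit errors per symbol
K = 8            # committed message bits

def encode(msg_bits):
    return [b for b in msg_bits for _ in range(REP)]

def decode(code_bits):
    return [int(sum(code_bits[i*REP:(i+1)*REP]) > REP // 2) for i in range(K)]

def commit(w):
    msg = [secrets.randbelow(2) for _ in range(K)]
    c = encode(msg)
    delta = [wi ^ ci for wi, ci in zip(w, c)]          # w XOR c, reveals neither
    tag = hashlib.sha256(bytes(msg)).hexdigest()       # binds the committed value
    return delta, tag

def verify(w_prime, delta, tag):
    c_noisy = [wi ^ di for wi, di in zip(w_prime, delta)]  # approximately c
    msg = decode(c_noisy)                                  # error correction
    return hashlib.sha256(bytes(msg)).hexdigest() == tag

if __name__ == "__main__":
    w = [secrets.randbelow(2) for _ in range(K * REP)]     # enrollment feature
    delta, tag = commit(w)
    w2 = list(w); w2[3] ^= 1; w2[17] ^= 1                  # noisy re-reading
    print(verify(w2, delta, tag))                          # True: 2 flips tolerated
```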


Proceedings ArticleDOI
03 Apr 2012
TL;DR: This paper presents an efficient BCH encoder/decoder architecture achieving a decoding throughput of 6Gb/s; the overall architecture includes a single BCH encoder and a multi-threaded BCH decoder.
Abstract: Solid-state drives (SSDs), built with many flash memory channels, are usually connected to the host through an advanced high-speed serial interface such as SATA III with a transfer rate of 6Gb/s [1-2]. However, the performance of an SSD is in general determined by the throughput of the ECC blocks necessary to overcome the high error rate [3]. The binary BCH code is widely used for SSDs due to its powerful error-correction capability. As it is hard to achieve high-throughput strong BCH decoders [4-5], multiple BCH decoders are typically employed on a high-performance SSD controller, leading to a significant increase in hardware complexity. This paper presents an efficient BCH encoder/decoder architecture achieving a decoding throughput of 6Gb/s. The overall architecture shown in Fig. 25.3.1 includes a single BCH encoder and a multi-threaded BCH decoder. The single BCH encoder is responsible for all the channels and services one channel at a time in a round-robin manner.

68 citations


Journal ArticleDOI
TL;DR: Besides extending a previous deterministic matrix design based on binary BCH codes to p-ary codes, this paper introduces matrices with a wide variety of options for the number of rows, and simulation results demonstrate that these matrices perform almost as well as random matrices.
Abstract: In contrast to the vast literature on random matrices in the field of compressed sensing, the subject of deterministic matrix design is at its early stages. Since these deterministic matrices are usually constructed using polynomials over finite Galois fields, the number of rows (number of samples) is restricted to specific integers such as prime powers. In this paper, besides extending a previous matrix design based on binary BCH codes to p-ary codes, we introduce matrices with a wide variety of options for the number of rows. Simulation results demonstrate that these matrices perform almost as well as random matrices.

65 citations


Patent
05 Sep 2012
TL;DR: A base station and terminal use methods of obtaining synchronization and system information in a wireless communication system, including generating a synchronization signal to be transmitted through a Synchronization Channel (SCH), generating a broadcast signal to be transmitted through a Broadcast Channel (BCH), and repetitively transmitting the SCH and the BCH by performing beamforming on the channels with different transmission beams.
Abstract: A base station and terminal use methods of obtaining synchronization and system information in a wireless communication system. An operation of the base station includes generating a synchronization signal to be transmitted through a Synchronization Channel (SCH), generating a broadcast signal to be transmitted through a Broadcast Channel (BCH), and repetitively transmitting the SCH and the BCH by performing beamforming on the channels with different transmission beams.

62 citations


Journal ArticleDOI
TL;DR: BCH syndrome coding for steganography is now viable owing to the reduced complexity and the simplicity of the proposed embedder.
Abstract: This paper presents an improved data hiding technique based on BCH (n,k,t) coding. The proposed embedder hides data in a block of input data by modifying some coefficients of the block in order to null the syndrome. The proposed embedder can hide data with less computational time and less storage capacity compared to existing methods. The complexity of the proposed method is linear, while that of other methods is exponential, in the block size n. Thus, it is easy to extend this method to a large n. BCH syndrome coding for steganography is now viable owing to the reduced complexity and the simplicity of the proposed embedder.

45 citations
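
The syndrome-coding idea behind the embedder can be shown concretely with the smallest interesting example. The sketch below uses the common matrix-embedding variant with a Hamming(7,4) parity-check matrix, where the block's syndrome is driven to the message rather than to zero; the paper's embedder applies the same principle to BCH(n,k,t) blocks.

```python
# Matrix embedding: flip at most 1 bit of a 7-bit cover block so that its
# syndrome (w.r.t. the Hamming(7,4) check matrix H) equals the 3 message bits.
import numpy as np

# Parity-check matrix of Hamming(7,4): column i is the binary expansion of i+1.
H = np.array([[(i + 1) >> b & 1 for i in range(7)] for b in range(3)], dtype=int)

def embed(cover, msg):
    """Return a stego block whose syndrome equals the 3-bit msg."""
    s = (H @ cover) % 2
    diff = (s + msg) % 2                     # syndrome change we must produce
    stego = cover.copy()
    if diff.any():
        # diff equals some column of H; flipping that position fixes the syndrome.
        pos = int(diff[0] * 1 + diff[1] * 2 + diff[2] * 4) - 1
        stego[pos] ^= 1
    return stego

def extract(stego):
    return (H @ stego) % 2                   # receiver just computes the syndrome

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cover = rng.integers(0, 2, size=7)
    msg = np.array([1, 0, 1])
    stego = embed(cover, msg)
    assert (extract(stego) == msg).all()
    print("changed bits:", int((stego != cover).sum()))  # at most 1
```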


Patent
Hong-Sil Jeong, Sung Ryul Yun, Hyun-Koo Yang, Alain Mourad, Ismael Gutierrez
20 Sep 2012
TL;DR: A parity check matrix is used to perform shortening or puncturing when encoding or decoding in a communication/broadcasting system.
Abstract: The present invention relates to performing shortening or puncturing when performing encoding or decoding through the use of a parity check matrix in a communication/broadcasting system. The operation method of a transmission terminal includes the steps of: determining the number of bits to be 0-padded; determining the number Npad of bit groups in which all bits are to be 0-padded; padding all bits within the 0th through (Npad-1)th bit groups indicated by a shortening pattern with 0; mapping information bits to the locations of bits which are not padded in the Bose-Chaudhuri-Hocquenghem (BCH) information bits; BCH-encoding the BCH information bits in order to generate Low Density Parity Check (LDPC) information bits; and LDPC-encoding the LDPC information bits in order to generate a 0-padded codeword. Here, the shortening pattern is defined in the order of bit groups 9, 8, 15, 10, 0, 12, 5, 27, 6, 7, 19, 22, 1, 16, 26, 20, 21, 18, 11, 3, 17, 24, 2, 23, 25, 14, 28, 4, 13, and 29.

42 citations
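
The padding step in the claim can be illustrated as follows. Only the 30-entry shortening pattern is taken from the text; the group size, the BCH information length, and the handling of a partially padded group are illustrative assumptions of the sketch.

```python
# Zero-pad whole bit groups in shortening-pattern order, then map the
# information bits onto the remaining positions of the BCH information word.
SHORTENING_PATTERN = [9, 8, 15, 10, 0, 12, 5, 27, 6, 7, 19, 22, 1, 16, 26,
                      20, 21, 18, 11, 3, 17, 24, 2, 23, 25, 14, 28, 4, 13, 29]
GROUP_SIZE = 12                        # illustrative bits per group
K_BCH = GROUP_SIZE * 30                # illustrative BCH information length

def pad_and_map(info_bits):
    n_pad = K_BCH - len(info_bits)                 # bits to be 0-padded
    n_full = n_pad // GROUP_SIZE                   # Npad: fully padded groups
    padded_groups = set(SHORTENING_PATTERN[:n_full])
    leftover = n_pad - n_full * GROUP_SIZE         # residual bits to pad
    partial = SHORTENING_PATTERN[n_full] if leftover else None

    word = [0] * K_BCH
    it = iter(info_bits)
    for g in range(30):                            # groups in natural order
        if g in padded_groups:
            continue                               # fully padded group stays 0
        start = GROUP_SIZE * g
        skip = leftover if g == partial else 0     # pad head of partial group
        for i in range(skip, GROUP_SIZE):
            word[start + i] = next(it)
    return word

if __name__ == "__main__":
    info = [1] * 100                               # 100 info bits, 260 padded
    word = pad_and_map(info)
    print(len(word), sum(word))                    # 360 100
```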


Journal ArticleDOI
TL;DR: The code $\mathcal{C}_{1,3,13}$ is proven to have the same weight distribution as the binary triple-error-correcting primitive BCH code over $\mathbb{F}_{2^m}$ of the same length.

42 citations


Proceedings ArticleDOI
10 Jun 2012
TL;DR: A new construction of punctured-node protograph-based Raptor-like (PN-PBRL) codes suitable for long-blocklength applications is presented; as with Raptor codes, additional parity bits can be easily produced by exclusive-OR operations on the precoded bits, providing extensive rate compatibility.
Abstract: This paper presents a new construction of punctured-node protograph-based Raptor-like (PN-PBRL) codes that is suitable for long-blocklength applications. As with the Raptor codes, additional parity bits can be easily produced by exclusive-OR operations on the precoded bits, providing extensive rate compatibility. The new construction provides low iterative decoding thresholds that are within 0.45 dB of the capacity for all code rates studied, and the construction is suitable for long blocklengths. Comparing at the same information block size of k = 16368 bits, the PN-PBRL codes are as good as the best known AR4JA codes in the waterfall region. The PN-PBRL codes also perform comparably to DVB-S2 LDPC codes even though the DVB-S2 codes have longer blocklength and outer BCH codes.

33 citations


Journal ArticleDOI
TL;DR: Several new families of multi-memory classical convolutional Bose-Chaudhuri-Hocquenghem codes as well as families of unit-memory quantum convolutional codes are constructed in this paper.
Abstract: Several new families of multi-memory classical convolutional Bose-Chaudhuri-Hocquenghem (BCH) codes as well as families of unit-memory quantum convolutional codes are constructed in this paper. Our unit-memory classical and quantum convolutional codes are optimal in the sense that they attain the classical (quantum) generalized Singleton bound. The constructions presented in this paper are performed algebraically and not by computational search.

Journal ArticleDOI
Zhongfeng Wang
TL;DR: A super-product BCH code together with a novel decoding technique is presented, and it is shown that the proposed code can achieve near-Shannon-limit performance with low decoding complexity.
Abstract: This letter first introduces a simple approach to evaluate performance bounds in the very low bit-error-rate range for binary pseudo-product codes and true product codes. Then it presents a super-product BCH code together with a novel decoding technique. It is shown that the proposed code can achieve near-Shannon-limit performance with low decoding complexity.

Journal ArticleDOI
TL;DR: A framework for multi-label classification extended by Error Correcting Output Codes (ECOCs) is introduced and empirically examined; the experiments revealed that the Bose-Chaudhuri-Hocquenghem (BCH) code matched with any multi-label classifier results in better classification quality.
Abstract: A framework for multi-label classification extended by Error Correcting Output Codes (ECOCs) is introduced and empirically examined in the article. The solution assumes the base multi-label classifiers to be a noisy channel and applies ECOCs in order to recover the classification errors made by individual classifiers. The framework was examined through exhaustive studies over combinations of three distinct classification algorithms and four ECOC methods employed in the multi-label classification problem. The experimental results revealed that (i) the Bose-Chaudhuri-Hocquenghem (BCH) code matched with any multi-label classifier results in better classification quality; (ii) the accuracy of the binary relevance classification method strongly depends on the coding scheme; (iii) the label power-set and the RAkEL classifier consume the same time for computation irrespective of the coding utilized; (iv) in general, they are not suitable for ECOCs because they are not capable of benefiting from ECOC correcting abilities; (v) the all-pairs code combined with binary relevance is not suitable for datasets with larger label sets.

Journal ArticleDOI
TL;DR: This article studies in detail the causes of errors for PRAM and STT-RAM and proposes error control coding (ECC) techniques that can be used on top of circuit-level techniques to mitigate some of these errors.
Abstract: Non-volatile resistive memories, such as phase-change RAM (PRAM) and spin transfer torque RAM (STT-RAM), have emerged as promising candidates because of their fast read access, high storage density, and very low standby power. Unfortunately, in scaled technologies, high storage density comes at a price of lower reliability. In this article, we first study in detail the causes of errors for PRAM and STT-RAM. We see that while for multi-level cell (MLC) PRAM the errors are due to resistance drift, in STT-RAM they are due to process variations and variations in the device geometry. We develop error models to capture these effects and propose techniques based on tuning of circuit-level parameters to mitigate some of these errors. Unfortunately, for reliable memory operation, circuit-level techniques alone are not sufficient, and so we propose error control coding (ECC) techniques that can be used on top of circuit-level techniques. We show that for STT-RAM, a combination of voltage boosting and write pulse width adjustment at the circuit level followed by a BCH-based ECC scheme can reduce the block failure rate (BFR) to 10^-8. For MLC-PRAM, a combination of threshold resistance tuning and a BCH-based product code ECC scheme can achieve the same target BFR of 10^-8. The product code scheme is flexible; it allows migration to a stronger code to guarantee the same target BFR when the raw bit error rate increases with the number of programming cycles.

Journal ArticleDOI
01 Jan 2012
TL;DR: The proposed two-iteration concatenated Bose-Chaudhuri-Hocquenghem (BCH) code and its high-speed low-complexity two-parallel decoder architecture for 100 Gb/s optical communications feature a very high data processing rate as well as excellent error correction capability.
Abstract: This paper presents a two-iteration concatenated Bose-Chaudhuri-Hocquenghem (BCH) code and its high-speed low-complexity two-parallel decoder architecture for 100 Gb/s optical communications. The proposed architecture features a very high data processing rate as well as excellent error correction capability. A low-complexity syndrome computation architecture and a high-speed dual-processing pipelined simplified inversionless Berlekamp-Massey (Dual-pSiBM) key equation solver architecture were applied to the proposed concatenated BCH decoder with the aim of implementing a high-speed low-complexity decoder architecture. Two-parallel processing allows the decoder to achieve the high data processing rate required for 100 Gb/s optical communication systems. Also, the proposed two-iteration concatenated BCH code structure with block interleaving allows the decoder to achieve 8.91 dB of net coding gain at a 10^-15 decoder output bit error rate, compensating for serious transmission quality degradation. Thus, it has potential applications in next-generation forward error correction schemes for 100 Gb/s optical communications.

Journal ArticleDOI
TL;DR: A novel approach is proposed to reduce the LUT size by at least four times through making use of the properties of the error locator polynomial and normal basis representation of finite field elements.
Abstract: The forward error correction for 100-Gbit/s optical transport networks has received much attention recently. Studies showed that product codes that employ three-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) codes can achieve better performance than other BCH or Reed-Solomon codes. For such codes, the Peterson algorithm can be used to compute the error locator polynomial, and its roots can be found directly using a lookup table (LUT). However, the size of the LUT is quite large for finite fields of high order. In this brief, a novel approach is proposed to reduce the LUT size by at least four times by making use of the properties of the error locator polynomial and the normal basis representation of finite field elements. Moreover, a hybrid representation of finite field elements is adopted to minimize the complexity of the involved computations. For a (1023, 993) BCH decoder over GF(2^10), the proposed design can lead to at least 28% complexity reduction.

Journal ArticleDOI
TL;DR: A one-pass Chase decoding algorithm for Reed-Solomon codes is devised such that the desired error locator polynomial for each flipped error pattern is obtained in one pass (when operated in parallel) by utilizing preceding results.
Abstract: Chase decoding is a prevalent soft-decision decoding method for algebraic codes where an efficient bounded-distance decoder is available. Essentially, it repeatedly applies bounded-distance decoding upon combinatorially flipping certain least reliable bits (or patterns for the nonbinary case). In this paper, we devise a one-pass Chase decoding algorithm for Reed-Solomon codes such that the desired error locator polynomial for each flipped error pattern is obtained in one pass (when operated in parallel) by utilizing the preceding results. This is achieved through cumulative interpolation and linear-feedback-shift-register (LFSR) synthesis techniques. Furthermore, by converting the algorithm into the transform domain, exhaustive root search for each error locator polynomial is circumvented. Computationally, the new algorithm exhibits complexity linear in the code length n when determining a candidate codeword associated with each additionally flipped symbol/pattern; it compares favorably to the quadratic complexity of straightforwardly utilizing hard-decision decoding (here n and d denote the code length and minimum distance, respectively). We also present a corrected algorithm for one-pass generalized minimum distance (GMD) decoding of Reed-Solomon codes from Koetter's original work. In addition, we devise a highly efficient one-pass Chase decoding algorithm for binary Bose-Chaudhuri-Hocquenghem (BCH) codes by taking advantage of a key characteristic of the Berlekamp algorithm. Finally, we present a systolic very large-scale integration (VLSI) decoder architecture by slightly compromising the proposed one-pass Chase decoding algorithm; it completes one iteration, flipping one error pattern, in a fixed number of clock cycles. Both circuit and memory complexities are dictated by the dimension of the flipping patterns; in particular, they are independent of the code length n and the minimum distance d, rendering the architecture highly attractive for various applications.

Proceedings ArticleDOI
11 May 2012
TL;DR: In this paper a (15, k) BCH encoder is designed and implemented on FPGA using VHDL for reliable data transfer over an AWGN channel with multiple-error-correction control, and a comparative performance evaluation based on synthesis and simulation on FPGA is presented.
Abstract: In this paper we design and implement a (15, k) BCH encoder on FPGA using VHDL for reliable data transfer over an AWGN channel with multiple-error-correction control. The digital logic implementation of binary encoding of the multiple-error-correcting BCH code (15, k) of length n=15 over GF(2^4) with irreducible primitive polynomial x^4+x+1 is organised into shift register circuits. Using the cyclic code structure, the remainder b(x) can be obtained in a linear (15-k)-stage shift register with feedback connections corresponding to the coefficients of the generator polynomial. Three encoders are designed using VHDL to encode the single-, double-, and triple-error-correcting BCH codes (15, k) corresponding to the coefficients of the generator polynomial. Information bits are transmitted unchanged for the first k clock cycles, during which the parity bits are calculated in the LFSR; the parity bits are then transmitted from clock cycle k+1 to 15. In total, 15-k parity bits and k information bits are transmitted in the 15-bit codeword. We implement the (15, 5, 3), (15, 7, 2) and (15, 11, 1) BCH code encoders on a Xilinx Spartan 3 FPGA using VHDL, with simulation and synthesis done using Xilinx ISE 10.1. A comparative performance evaluation based on synthesis and simulation on the FPGA is also presented.
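
The shift-register encoding the abstract describes can be sketched in software. The generator polynomial below, g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1, is the product of the minimal polynomials of alpha, alpha^3, and alpha^5 for p(x) = x^4 + x + 1, i.e., the usual generator of the (15, 5, 3) code; the MSB-first bit ordering is an assumption of the sketch.

```python
# Systematic cyclic encoding: b(x) = x^(n-k) * m(x) mod g(x), computed in an
# (n-k)-stage LFSR whose feedback taps are the coefficients of g(x).

# g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1, coefficients g_0 .. g_9
# (the x^10 coefficient is implicit in the feedback wire).
G_TAPS = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0]
N, K = 15, 5

def bch_encode(msg_bits):
    """msg_bits: k bits, MSB first -> n-bit systematic codeword."""
    reg = [0] * (N - K)                     # the (n-k)-stage shift register
    for m in msg_bits:                      # clock in one message bit per cycle
        feedback = m ^ reg[-1]
        # Shift up; each stage with a tap also XORs in the feedback bit.
        for i in range(N - K - 1, 0, -1):
            reg[i] = reg[i - 1] ^ (feedback & G_TAPS[i])
        reg[0] = feedback & G_TAPS[0]
    parity = reg[::-1]                      # high-order stage first
    return list(msg_bits) + parity          # k info bits, then n-k parity bits

if __name__ == "__main__":
    cw = bch_encode([1, 0, 1, 1, 0])
    print(cw)   # [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0]
```

After k clock cycles the register holds the remainder b(x), exactly the behavior the abstract attributes to the hardware LFSR.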

Proceedings ArticleDOI
05 Apr 2012
TL;DR: A new formulation is proposed that modifies the parallel LFSR by pipelining and retiming to reduce the critical path, and combined parallel and pipelining techniques are applied to eliminate the fan-out effect for long generator polynomials.
Abstract: The linear feedback shift register (LFSR) is an important component of cyclic redundancy check (CRC) operations and BCH encoders. The contribution of this paper is twofold. First, this paper presents a linear transformation of the serial LFSR architecture into a parallel LFSR architecture. This transformation achieves a full speed-up compared to the serial architecture at the cost of an increase in hardware overhead, and it applies to all generator polynomials used in CRC operations and BCH encoders. Second, a new formulation is proposed that modifies the parallel LFSR by pipelining and retiming; we propose a high-speed parallel LFSR architecture based on pipelining and retiming algorithms to reduce the critical path. Finally, a bit error rate (BER) tester is implemented for the proposed LFSR design. The advantage of this approach over previous architectures is that it has both feed-forward and feedback paths, and it further increases the speed-up and the throughput rate. We also apply combined parallel and pipelining techniques to eliminate the fan-out effect for long generator polynomials. The proposed scheme can be applied to any generator polynomial, i.e., any LFSR in general. The proposed parallel architecture achieves a better area-time product compared to previous designs.
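
The linear transformation behind the parallel architecture can be checked in software: one LFSR clock is a linear map over GF(2), state' = A*state + B*input, so processing p bits at once just composes that map p times. The example polynomial below is an arbitrary degree-8 choice, not one from the paper.

```python
# Verify that one "parallel" step over p bits equals p serial LFSR steps.
import numpy as np

G_TAPS = [1, 1, 0, 1, 0, 0, 0, 1]   # example taps g_0..g_7 of g(x)=x^8+x^7+x^3+x+1
R = len(G_TAPS)

def serial_step(state, bit):
    """One clock of the standard LFSR division circuit (lists of 0/1)."""
    fb = bit ^ state[-1]
    return [fb & G_TAPS[0]] + [state[i - 1] ^ (fb & G_TAPS[i]) for i in range(1, R)]

def parallel_matrices(p):
    """Return (Ap, Bp) with next_state = (Ap @ state + Bp @ bits) mod 2."""
    A = np.zeros((R, R), dtype=int)
    for j in range(R):                   # build the one-step map column by column
        e = [0] * R; e[j] = 1
        A[:, j] = serial_step(e, 0)
    B1 = np.array(serial_step([0] * R, 1), dtype=int).reshape(R, 1)
    Ap, cols = np.eye(R, dtype=int), []
    for _ in range(p):                   # compose p serial steps
        cols.append(Ap @ B1 % 2)
        Ap = Ap @ A % 2
    # bits[0] is clocked in first, so it passes through the most steps.
    return Ap, np.concatenate(cols[::-1], axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    p = 8
    Ap, Bp = parallel_matrices(p)
    state = rng.integers(0, 2, size=R).tolist()
    bits = rng.integers(0, 2, size=p).tolist()
    ser = state
    for b in bits:
        ser = serial_step(ser, b)
    par = (Ap @ np.array(state) + Bp @ np.array(bits)) % 2
    print(np.array_equal(np.array(ser), par))   # True: p bits per "clock"
```

In hardware, Ap and Bp become XOR networks; the pipelining and retiming the paper proposes then shorten the critical path through those networks.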

Patent
02 Mar 2012
TL;DR: An efficient coding and modulation system for transmission of digital data over plastic optical fibres is disclosed in this paper. The digital signal is coded by a three-level coset coding, which is configurable by selecting the number of bits to be processed in each of the levels.
Abstract: An efficient coding and modulation system for transmission of digital data over plastic optical fibres is disclosed. The digital signal is coded by a three-level coset coding. The spectral efficiency of the system is configurable by selecting the number of bits to be processed in each of the levels. The first level applies a binary BCH coding to the digital data and performs coset partitioning by constellation mapping and lattice transformations. Similarly, the second level applies another binary BCH coding, which may be performed selectably, in accordance with the desired configuration, by two BCH codes with substantially the same coding rate operating on codewords of different sizes. The third level is uncoded. The second and third levels undergo mapping and lattice transformation. After an addition of the levels, a second-stage lattice transformation is performed to obtain a zero-mean constellation. The symbols output from such a three-level coset coder are then further modulated.

Proceedings ArticleDOI
11 Dec 2012
TL;DR: This paper presents constructions of optimum and nearly optimum robust codes based on modifications of BCH-based check matrix codes: puncturing, expurgating, and expanding the codes while preserving their robustness.
Abstract: A code that can detect any non-zero error with probability greater than zero is called robust. The set of codewords that mask an error e determines its undetected error probability Q(e). The maximal error masking probability is denoted by Q_mc. A robust code is called optimum if there is no other code with a larger number of codewords having the same length and the same Q_mc. In this paper we present constructions of optimum and nearly optimum robust codes. The constructions are based on modifications of BCH-based check matrix codes; in particular, puncturing, expurgating, and expanding the codes while preserving their robustness.
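
The definitions above translate directly into a brute-force check. The toy code below is not the paper's BCH-based construction; it is a small systematic nonlinear code with the quadratic check bit w = x1*x2 + x3*x4, a classic robust-code example, for which the scan confirms Q_mc = 1/2.

```python
# For a code C and error e, Q(e) is the fraction of codewords c such that
# c + e lands back in C (the error is "masked"); Q_mc = max over nonzero e.
from itertools import product

def check(x):                       # quadratic (nonlinear) check symbol
    return (x[0] & x[1]) ^ (x[2] & x[3])

CODE = {tuple(x) + (check(x),) for x in product((0, 1), repeat=4)}

def Q(e):
    masked = sum(1 for c in CODE
                 if tuple(ci ^ ei for ci, ei in zip(c, e)) in CODE)
    return masked / len(CODE)

if __name__ == "__main__":
    errors = [e for e in product((0, 1), repeat=5) if any(e)]
    q_mc = max(Q(e) for e in errors)
    print("Q_mc =", q_mc)           # 0.5: every nonzero error is detected
                                    # by at least half the codewords
```

A linear code would have Q(e) = 1 for every codeword-valued error, which is exactly why robust constructions are nonlinear.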

Journal ArticleDOI
TL;DR: A generalization of the BCH bound to the case of quasi-cyclic codes is proposed, based on eigenvalues of matrix polynomials, which results in improved minimum distance estimates compared to the existing bounds.
Abstract: A generalization of the BCH bound to the case of quasi-cyclic codes is proposed. The new approach is based on eigenvalues of matrix polynomials. This results in improved minimum distance estimates compared to the existing bounds.

Journal ArticleDOI
TL;DR: Two code constructions generating new families of good nonbinary asymmetric quantum BCH codes and good nonbinary subsystem BCH codes are presented in this paper.
Abstract: Two code constructions generating new families of good nonbinary asymmetric quantum BCH codes and good nonbinary subsystem BCH codes are presented in this paper. The first one is derived from q-ary Steane's enlargement of CSS codes applied to non-narrow-sense BCH codes. The second one is derived from the method of defining sets of classical cyclic codes. The asymmetric quantum BCH codes and subsystem BCH codes obtained here have better parameters than the ones available in the literature.

Journal ArticleDOI
TL;DR: This article presents a simple design technique that reduces data access latency by cohesively exploiting the NAND flash memory wear-out dynamics and the impact of LDPC code structure on decoding performance; simulations demonstrate the potential effectiveness of the method.
Abstract: Semiconductor technology scaling makes NAND flash memory subject to continuous raw storage reliability degradation, leading to the demand for more and more powerful error correction codes. This inevitable trend makes conventional BCH code increasingly inadequate, and iterative coding solutions such as low-density parity-check (LDPC) codes become very natural alternative options. However, fine-grained soft-decision memory sensing must be used in order to fully leverage the strong error correction capability of LDPC codes, which results in significant data access latency overhead. This article presents a simple design technique that can reduce such latency overhead. The key is to cohesively exploit the NAND flash memory wear-out dynamics and impact of LDPC code structure on decoding performance. Based upon detailed memory device modeling and ASIC design, we carried out simulations to demonstrate the potential effectiveness of this design method and evaluate the involved trade-offs.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: This study proposes an efficient algorithm for the reconstruction of BCH (Bose-Chaudhuri-Hocquenghem) codes based on the properties of cyclic codes, together with a probability compensation method that improves recognition performance; the improvement is confirmed through intensive computer simulation.
Abstract: If a receiver does not have accurate channel coding parameters, it becomes difficult to decode the digitized encoded bits correctly from a noisy intercepted bit stream. To perform decoding without the channel coding information, we must estimate the channel coding parameters and identify the channel coding type. In this study, we propose an efficient algorithm for the reconstruction of the BCH (Bose-Chaudhuri-Hocquenghem) code, using the properties of cyclic codes. In addition, we present a probability compensation method to improve the reconstruction performance. This is based on the observation that a random data pattern can also be divisible by a minimal polynomial of the generator polynomial of the BCH code. We use this property to improve the recognition performance, and we confirm the performance improvement through intensive computer simulation.
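
The cyclic-code property the algorithm exploits, namely that every codeword is divisible by the generator polynomial, is easy to demonstrate. In the sketch below the (15,5) BCH generator and the 1% bit-error rate are illustrative; the paper tests divisibility by candidate minimal polynomials rather than a full generator, but the separation effect is the same.

```python
# Divide intercepted blocks by a candidate polynomial over GF(2) and count
# zero remainders: the true generator scores high, random candidates low.
import random

def poly_mod2_divisible(block_bits, divisor_bits):
    """GF(2) polynomial remainder test; both args are bit lists, MSB first."""
    rem = list(block_bits)
    d = len(divisor_bits)
    for i in range(len(rem) - d + 1):
        if rem[i]:
            for j in range(d):
                rem[i + j] ^= divisor_bits[j]
    return not any(rem)

# (15,5) BCH generator g(x) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1, MSB first.
G = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1]

def encode(msg):
    """Multiply m(x) by g(x) over GF(2) (non-systematic encoding)."""
    out = [0] * (len(msg) + len(G) - 1)
    for i, m in enumerate(msg):
        if m:
            for j, g in enumerate(G):
                out[i + j] ^= g
    return out

if __name__ == "__main__":
    random.seed(3)
    blocks = []
    for _ in range(500):
        cw = encode([random.randint(0, 1) for _ in range(5)])
        blocks.append([b ^ (random.random() < 0.01) for b in cw])  # BSC noise
    hit = sum(poly_mod2_divisible(b, G) for b in blocks) / len(blocks)
    wrong = [1, 1, 0, 0, 1]            # x^4+x^3+1, not a factor of g(x)
    miss = sum(poly_mod2_divisible(b, wrong) for b in blocks) / len(blocks)
    print(f"true generator: {hit:.2f}, random candidate: {miss:.2f}")
```

The paper's probability compensation corrects for exactly the nonzero score a wrong candidate gets by chance.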

Proceedings ArticleDOI
Kijun Lee, Sejin Lim, Kim Jae-Hong
20 May 2012
TL;DR: This paper configures a 3-stage pipelined BCH decoder (syndrome computation, Berlekamp-Massey algorithm, and Chien search) and implements a BCH decoder supporting 800MB/s using the pipelined structure and an early termination method.
Abstract: BCH codes are widely used as the Error Correcting Code (ECC) scheme for NAND flash memories. There have been strong demands to implement NAND flash controllers having low cost, low power, and high throughput. We focus on the BCH implementation since it occupies the largest portion of the controller. In this paper, we configure a 3-stage pipelined BCH decoder: syndrome computation, the Berlekamp-Massey algorithm, and the Chien search. We implement a BCH decoder supporting 800MB/s using the pipelined structure and an early termination method.
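
Two of the three pipeline stages named above can be sketched directly; the middle Berlekamp-Massey stage is replaced here by a hand-built single-error locator so the example stays short. The field GF(2^4) with p(x) = x^4 + x + 1 and the single-error test case are illustrative choices, not the implementation's parameters.

```python
# Stage 1: syndromes S_i = r(alpha^i). Stage 3: Chien search evaluates the
# error-locator polynomial at every field element to find error positions.

# GF(2^4) log/antilog tables from the primitive polynomial x^4 + x + 1.
ALOG = [1]
for _ in range(14):
    v = ALOG[-1] << 1
    ALOG.append((v ^ 0b10011) if v & 0x10 else v)   # reduce mod p(x)
LOG = {v: i for i, v in enumerate(ALOG)}

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else ALOG[(LOG[a] + LOG[b]) % 15]

def syndromes(r, count=4):
    """S_i = r(alpha^i), i = 1..count, via Horner evaluation (stage 1)."""
    out = []
    for i in range(1, count + 1):
        x, s = ALOG[i % 15], 0
        for bit in r:                       # r given MSB (x^14) first
            s = gf_mul(s, x) ^ bit
        out.append(s)
    return out

def chien_search(locator):
    """Return positions j where locator(alpha^-j) = 0 (stage 3)."""
    hits = []
    for j in range(15):
        x = ALOG[(-j) % 15]
        val, xp = 0, 1
        for coeff in locator:               # locator listed from x^0 upward
            val ^= gf_mul(coeff, xp)
            xp = gf_mul(xp, x)
        if val == 0:
            hits.append(j)
    return hits

if __name__ == "__main__":
    r = [0] * 15
    r[14 - 6] = 1                           # single error at position 6
    S = syndromes(r)
    # For one error at position j, S1 = alpha^j and locator = 1 + S1*x.
    locator = [1, S[0]]
    print(S[0] == ALOG[6], chien_search(locator))   # True [6]
```

In the 800MB/s design these three stages run concurrently on consecutive codewords, which is what the pipelining buys.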

Proceedings ArticleDOI
10 Jun 2012
TL;DR: Two simple design techniques are presented that appropriately apply entropy coding to compress memory sensing results; simulation results show that the proposed design solutions can reduce the data transfer latency by up to 64% for soft-decision memory sensing.
Abstract: With the aggressive technology scaling and use of multi-bit per cell storage, NAND flash memory is subject to continuous degradation of raw storage reliability and demands more and more powerful error correction codes (ECC). This inevitable trend makes conventional BCH code increasingly inadequate, and iterative coding solutions such as LDPC codes become very natural alternative options. However, these powerful coding solutions demand soft-decision memory sensing, which results in longer on-chip memory sensing latency and memory-to-controller data transfer latency. This paper presents two simple design techniques that can reduce the memory-to-controller data transfer latency. The key is to appropriately apply entropy coding to compress the memory sensing results. Simulation results show that the proposed design solutions can reduce the data transfer latency by up to 64% for soft-decision memory sensing.

Posted Content
TL;DR: In this paper, iterative hard-decision decoding (HDD) of generalized product codes with BCH component codes is proposed and shown to approach capacity at high rates.
Abstract: A variety of low-density parity-check (LDPC) ensembles have now been observed to approach capacity with message-passing decoding. However, all of them use soft (i.e., non-binary) messages and a posteriori probability (APP) decoding of their component codes. In this paper, we show that one can approach capacity at high rates using iterative hard-decision decoding (HDD) of generalized product codes. Specifically, a class of spatially-coupled GLDPC codes with BCH component codes is considered, and it is observed that, in the high-rate regime, they can approach capacity under the proposed iterative HDD. These codes can be seen as generalized product codes and are closely related to braided block codes. An iterative HDD algorithm is proposed that enables one to analyze the performance of these codes via density evolution (DE).

Journal ArticleDOI
TL;DR: Two approaches are proposed to attack the hardness of evaluating the minimum distance of linear block codes: one based on genetic algorithms and one based on a new randomized algorithm called the "Multiple Impulse Method (MIM)".
Abstract: The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and it is not easy to determine its true value by classical methods; for this reason the problem has been addressed in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to attack the hardness of this problem. The first approach is based on genetic algorithms and yields good results compared to other work also based on genetic algorithms. The second approach is based on a new randomized algorithm which we call the "Multiple Impulse Method (MIM)", where the principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resulting nearest nonzero codewords will most likely contain a minimum-Hamming-weight codeword, whose Hamming weight is equal to the minimum distance of the linear code.
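
Both approaches in the abstract are heuristics for the same search problem: the minimum distance of a linear code is the smallest Hamming weight among its nonzero codewords. The sketch below shows the skeleton of such a search with random restarts and greedy bit flips; it stands in for the paper's genetic-algorithm and impulse-noise machinery and only yields an upper bound on the true distance.

```python
# Randomized upper bound on d_min: encode random information words, then
# greedily flip information bits while the codeword weight decreases.
import random

def encode(msg, G):
    """Multiply the information word by the generator matrix over GF(2)."""
    cw = [0] * len(G[0])
    for bit, row in zip(msg, G):
        if bit:
            cw = [c ^ r for c, r in zip(cw, row)]
    return cw

def min_distance_estimate(G, tries=2000, seed=4):
    rng = random.Random(seed)
    k, n = len(G), len(G[0])
    best = n
    for _ in range(tries):
        msg = [rng.randint(0, 1) for _ in range(k)]
        if not any(msg):
            continue
        w = sum(encode(msg, G))
        improved = True
        while improved:                          # greedy single-bit refinement
            improved = False
            for i in range(k):
                msg[i] ^= 1
                w2 = sum(encode(msg, G)) if any(msg) else n + 1
                if w2 < w:
                    w, improved = w2, True
                else:
                    msg[i] ^= 1                  # revert a non-improving flip
        best = min(best, w)
    return best

if __name__ == "__main__":
    # Generator matrix of the (7,4) Hamming code, whose true d_min is 3.
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]
    print(min_distance_estimate(G))              # upper bound; prints 3 here
```

The MIM replaces the random restarts with decodings of the noise-perturbed all-zero codeword, which concentrates the search near low-weight codewords.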

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A double error correcting (DEC) BCH codec is designed for NOR flash memory systems to improve reliability, and a new error location polynomial is developed to reduce the number of constant finite field multipliers (CFFMs) in the Chien search.
Abstract: A double error correcting (DEC) BCH codec is designed for NOR flash memory systems to improve reliability. Due to the latency constraint less than 10 ns, the fully parallel architecture with huge hardware cost is utilized to process both the encoding and decoding scheme within one clock cycle. Notice that encoder and decoder will not be activated simultaneously in NOR flash applications, so we combine the encoder and syndrome calculator based on the property of minimal polynomials in order to efficiently arrange silicon area. Furthermore, a new error location polynomial is developed to reduce the number of constant finite filed multipliers (CFFMs) in Chien search. According to 90 nm CMOS technology, our propose DEC BCH codec can achieve 2.5 ns latency with 41,705 µm2 area.