
Showing papers on "BCH code published in 2009"


Posted Content
TL;DR: This paper establishes the connection between Orthogonal Optical Codes (OOC) and binary compressed sensing matrices; since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit can recover the sparse solution from noiseless samples.
Abstract: In this paper we establish the connection between the Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k (\log_2 n)^{\frac{\log_2 k}{\ln \log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors where the zeros are replaced by -1. Since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from noiseless samples. Due to the cyclic property of BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices ($\{0,1,-1\}$ elements) that satisfy the RIP condition.
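As a toy instance of the column construction described above (binary codewords with zeros replaced by -1), the sketch below, ours rather than the authors' code, uses the (7,4) Hamming code, the simplest binary BCH code; restricting to codewords whose first bit is 0 (to avoid antipodal column pairs) is our own simplification, not part of the paper's construction.

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code, the simplest binary BCH code.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 1],
]

def codeword(msg):
    """Encode a 4-bit message into a 7-bit codeword over GF(2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

# Sensing-matrix columns: nonzero codewords (first bit fixed to 0 so that
# no column is the negation of another), with each 0 mapped to -1.
columns = [[2 * b - 1 for b in codeword((0,) + msg)]
           for msg in product([0, 1], repeat=3) if any(msg)]

def coherence(cols):
    """Largest |inner product| / n over distinct column pairs."""
    n = len(cols[0])
    return max(abs(sum(a * b for a, b in zip(cols[i], cols[j]))) / n
               for i in range(len(cols)) for j in range(i + 1, len(cols)))

mu = coherence(columns)  # low coherence yields RIP of small order
```

Here every pairwise difference of selected codewords has weight 3 or 4, so all normalized inner products have magnitude 1/7.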

156 citations


Proceedings ArticleDOI
28 Jun 2009
TL;DR: It is first shown that any ℓ × ℓ matrix none of whose column permutations is upper triangular polarizes binary-input memoryless channels, and a general construction based on BCH codes is given which for large ℓ achieves exponents arbitrarily close to 1.
Abstract: Polar codes were recently introduced by Arikan. They achieve the symmetric capacity of arbitrary binary-input discrete memoryless channels under a low complexity successive cancellation decoding strategy. The original polar code construction is closely related to the recursive construction of Reed-Muller codes and is based on the 2 × 2 matrix [1 0; 1 1]. It was shown by Arikan and Telatar that this construction achieves an error exponent of 1/2, i.e., that for sufficiently large blocklengths the error probability decays exponentially in the square root of the length. It was already mentioned by Arikan that in principle larger matrices can be used to construct polar codes. A fundamental question then is to see whether there exist matrices with exponent exceeding 1/2. We characterize the exponent of a given square matrix and derive upper and lower bounds on achievable exponents. Using these bounds we show that there are no matrices of size less than 15 with exponents exceeding 1/2. Further, we give a general construction based on BCH codes which for large matrix sizes achieves exponents arbitrarily close to 1 and which exceeds 1/2 for size 16.
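The recursive construction mentioned above, Kronecker powers of the 2 × 2 kernel, can be sketched in a few lines of generic illustration code (not the authors' implementation):

```python
# The 2x2 polarization kernel of Arikan's original construction.
F = [[1, 0],
     [1, 1]]

def kron_gf2(A, B):
    """Kronecker product of two 0/1 matrices (entries stay in {0, 1})."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def polar_transform(n):
    """n-th Kronecker power of F, the generator of a length-2^n polar code."""
    G = [[1]]
    for _ in range(n):
        G = kron_gf2(G, F)
    return G

G8 = polar_transform(3)  # 8 x 8 lower-triangular 0/1 matrix
```

Because the kernel is lower triangular, so is every Kronecker power; larger kernels, such as the 16 × 16 BCH-based one the paper constructs, would replace `F` here.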

134 citations


Journal ArticleDOI
TL;DR: Three code constructions generating new families of good nonbinary quantum codes are presented in this paper, with parameters better than those available in the literature.
Abstract: Three code constructions generating new families of good nonbinary quantum codes are presented in this paper. The first two are derived from Hermitian self-orthogonal non-narrow-sense Bose-Chaudhuri-Hocquenghem (BCH) codes. The third is derived from q-ary (q ≠ 2 a prime power) Steane's enlargement of Calderbank-Shor-Steane codes applied to Euclidean self-orthogonal non-narrow-sense BCH codes. The quantum nonbinary BCH codes presented here have parameters better than those available in the literature.

113 citations


Proceedings ArticleDOI
07 Sep 2009
TL;DR: This paper presents an efficient JPEG steganography method based on heuristic optimization and BCH syndrome coding that yields multiple solutions for hiding data in a block of coefficients and significantly decreases detectability by steganalysis.
Abstract: This paper presents an efficient JPEG steganography method based on heuristic optimization and BCH syndrome coding. The proposed heuristic optimization technique significantly decreases total distortion by inserting and removing AC coefficients 1 or -1 in the most appropriate positions. The implemented data hiding technique is based on structured BCH syndrome coding. This method yields multiple solutions for hiding data in a block of coefficients, and the proposed data hiding method chooses the solution with the minimum distortion effect. As a result, the total distortion can be significantly reduced, which makes the method less detectable by steganalysis. The experiments include steganalysis of the proposed data hiding methods. The experimental results show that the proposed heuristic optimization significantly decreases detectability by steganalysis. The proposed methods also outperform existing steganography methods.

98 citations


Journal ArticleDOI
TL;DR: In this article, a new algorithm for generating the Baker-Campbell-Hausdorff (BCH) series Z = log(eXeY) in an arbitrary generalized Hall basis of the free Lie algebra generated by X and Y is presented.
Abstract: We provide a new algorithm for generating the Baker–Campbell–Hausdorff (BCH) series Z = log(e^X e^Y) in an arbitrary generalized Hall basis of the free Lie algebra L(X,Y) generated by X and Y. It is based on the close relationship of L(X,Y) with a Lie algebraic structure of labeled rooted trees. With this algorithm, the computation of the BCH series up to degree 20 [111 013 independent elements in L(X,Y)] takes less than 15 min on a personal computer and requires 1.5 Gbytes of memory. We also address the issue of the convergence of the series, providing an optimal convergence domain when X and Y are real or complex matrices.
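For reference, these are the standard lowest-degree terms of the series the algorithm generates (degree up to 3):

```latex
Z = \log\bigl(e^{X} e^{Y}\bigr)
  = X + Y + \tfrac{1}{2}[X,Y]
    + \tfrac{1}{12}\bigl[X,[X,Y]\bigr]
    - \tfrac{1}{12}\bigl[Y,[X,Y]\bigr]
    + \cdots
```

All higher-degree terms are likewise iterated commutators of X and Y, which is why a Hall basis of the free Lie algebra is the natural data structure for the computation.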

94 citations


Book ChapterDOI
03 Sep 2009
TL;DR: The proposed scheme shows that BCH syndrome coding for data hiding is now practical owing to the reduced complexity, and the method is easy to extend to large n, which allows data to be hidden with small embedding capacity.
Abstract: This paper presents an improved data hiding technique based on BCH (n,k,t) syndrome coding. The proposed data hiding method embeds data in a block of input data (for example, image pixels, wavelet or DCT coefficients, etc.) by modifying some coefficients in the block in order to null the syndrome of the BCH coding. The proposed data hiding method can hide the same amount of data with less computational time compared to existing methods. Contributions of this paper include reductions in both time complexity and storage complexity. Storage complexity is linear, while that of other methods is exponential. Time complexity of our method is almost negligible and constant for any n, whereas the time complexity of existing methods is exponential. Since the time complexity is constant and the storage complexity is linear, it is easy to extend this method to a large n, which allows us to hide data with small embedding capacity. Note that small capacities are highly recommended for steganography to survive steganalysis. The proposed scheme shows that BCH syndrome coding for data hiding is now practical owing to the reduced complexity.
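The embedding idea (modify a few coefficients so the block's syndrome equals the hidden message) can be sketched with the (7,4) Hamming code, i.e., BCH with t = 1; a real implementation along the paper's lines would use larger BCH (n,k,t) blocks. The cover block and message below are arbitrary examples of ours.

```python
# Parity-check matrix H of the (7,4) Hamming code: column j (1-based) is the
# binary representation of j, so a syndrome directly names a flip position.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(2, -1, -1)]

def syndrome(block):
    return [sum(h * b for h, b in zip(row, block)) % 2 for row in H]

def embed(block, msg):
    """Flip at most one bit of a 7-bit cover block so its syndrome equals msg."""
    s = [a ^ b for a, b in zip(syndrome(block), msg)]
    pos = (s[0] << 2) | (s[1] << 1) | s[2]  # 0 means no change needed
    out = list(block)
    if pos:
        out[pos - 1] ^= 1
    return out

def extract(block):
    """The receiver recovers the message as the block's syndrome."""
    return syndrome(block)

cover = [1, 0, 1, 1, 0, 0, 1]   # e.g. LSBs of seven cover coefficients
msg = [1, 0, 1]
stego = embed(cover, msg)
```

Three message bits ride on seven cover bits at the cost of at most one modification, which is the distortion-capacity trade-off syndrome coding exploits.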

83 citations


Journal ArticleDOI
TL;DR: The present paper makes strides on the low-floor problem by identifying the weaknesses of the code under study and applying compensatory counter-measures; the results demonstrate that each of these techniques can successfully lower an LDPC code's floor.
Abstract: While LDPC codes have been widely acclaimed in recent years for their near-capacity performance, they have not found their way into many important applications. For some cases, this is due to their increased decoding complexity relative to the classical coding techniques. For other cases, this is due to their inability to reach very low bit error rates (e.g., 10-12) at low signal-to-noise ratios (SNRs), a consequence of the error rate floor phenomenon associated with iterative LDPC decoders. In the present paper, we make strides in the low-floor problem by identifying the weaknesses of the code under study and applying compensatory counter-measures. These counter-measures include: modifying the code itself, modifying the decoder, or adding a properly designed outer algebraic code. Our results demonstrate that each of these techniques can successfully lower an LDPC code's floor, and that, for the code under study, an outer BCH code appears to be particularly effective. All of our results are based on FPGA decoder simulations and so they are reliable and repeatable.

77 citations


Journal ArticleDOI
TL;DR: The proposed concatenated structure offers very low error floors and good performance in the waterfall region for all considered spectral efficiencies, showing a significant improvement over previous concatenated CPM schemes.
Abstract: In this paper, serially concatenated continuous phase modulation (SCCPM) is considered for the uplink of satellite communications. A three-step design procedure is proposed to optimize the association of the outer code and the CPM for a wide range of spectral efficiencies, ranging from 0.75 to 2.25 bit/s/Hz. Firstly, EXIT chart analysis is applied to derive general guidelines for choosing SCCPM parameters. A significant result is that a high-rate outer code is required to achieve a good convergence threshold, given a spectral efficiency. At a second stage, union bounds on the error probability are considered to choose the outer code under the constraints arising from the EXIT chart analysis. From this analysis, extended BCH codes and extended Hamming codes are proposed as outer codes for broadband and narrowband transmission, respectively. For the latter, double-binary convolutional codes with symbol interleaving are also proposed as a valid alternative. Finally, combining both EXIT charts and union bounds we optimize the association of code and CPM to achieve both low error rates and good convergence. The proposed concatenated structure offers very low error floors and good performance in the waterfall region for all considered spectral efficiencies. A significant improvement with respect to previous concatenated CPM schemes is shown.

66 citations


Book ChapterDOI
03 Dec 2009
TL;DR: It is conjectured that self-dual skew codes defined as modules must be constacyclic, and this conjecture is proved for the Hermitian scalar product and, under some assumptions, for the Euclidean scalar product.
Abstract: In previous works we considered codes defined as ideals of quotients of skew polynomial rings, so called Ore rings of automorphism type. In this paper we consider codes defined as modules over skew polynomial rings, removing therefore some of the constraints on the length of the skew codes defined as ideals. The notion of BCH codes can be extended to this new approach, and the skew codes whose duals are also defined as modules can be characterized. We conjecture that self-dual skew codes defined as modules must be constacyclic and prove this conjecture for the Hermitian scalar product and, under some assumptions, for the Euclidean scalar product. We found new [56,28,15], [60,30,16], [62,31,17], [66,33,17] Euclidean self-dual skew codes and new [50,25,14], [58,29,16] Hermitian self-dual skew codes over F_4, improving the best known distances for self-dual codes of these lengths over F_4.

59 citations


Proceedings ArticleDOI
20 Apr 2009
TL;DR: A programmable Forward Error Correction (FEC) IP for a DVB-S2 receiver is presented, composed of a Low-Density Parity-Check (LDPC) decoder, a Bose-Chaudhuri-Hocquenghem (BCH) decoder, and pre- and postprocessing units.
Abstract: In this paper a programmable Forward Error Correction (FEC) IP for a DVB-S2 receiver is presented. It is composed of a Low-Density Parity-Check (LDPC) decoder, a Bose-Chaudhuri-Hocquenghem (BCH) decoder, and pre- and postprocessing units. Special emphasis is put on LDPC decoding, since it accounts for by far the most complexity of the IP core. We propose a highly efficient LDPC decoder which applies Gauss-Seidel decoding. In contrast to previous publications, we show in detail how to solve the well-known problem of superpositions of permutation matrices. The enhanced convergence speed of Gauss-Seidel decoding is used to reduce area and power consumption. Furthermore, we propose a modified version of the λ-Min algorithm which further decreases the memory requirements of the decoder by compressing the extrinsic information. Compared to the latest published DVB-S2 LDPC decoders, we could reduce the clock frequency by 40% and the memory consumption by 16%, yielding large energy and area savings while offering the same throughput.

56 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the developed software can not only provide sufficient throughput for real-time error correction of NAND flash memory in embedded systems but also enhance the reliability of file systems in general purpose computers.
Abstract: Error correction software for Bose-Chaudhuri-Hocquenghem (BCH) codes is optimized for general purpose processors that lack hardware for Galois field arithmetic. The developed software applies parallelization with a table lookup method to reduce the number of iterations, and maximum parallelization under a cache size limitation is sought for a high throughput implementation. Since this method minimizes the number of lookup tables for the encoding and decoding processes, a large parallel factor can be chosen for a given cache size. The full native word length of a general purpose CPU is exploited by employing the developed mask elimination method. The tradeoff between algorithm complexity and regularity is examined for several syndrome generation methods, which leads to a simple error detection scheme that reuses the encoder and a simplified syndrome generation method requiring only a small number of Galois field multiplications. The parallel factor for the Chien search is increased considerably by transforming the error locator polynomial so that it contains symmetric exponents of positive and negative signs. The experimental results demonstrate that the developed software can not only provide sufficient throughput for real-time error correction of NAND flash memory in embedded systems but also enhance the reliability of file systems in general purpose computers.
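The table-lookup idea can be illustrated for GF(2^4) with primitive polynomial x^4 + x + 1; the paper's software parallelizes and scales this far beyond the toy sketch below, which only shows the basic log/antilog multiplication that such software builds on.

```python
# Build log/antilog tables for GF(2^4) with primitive polynomial
# x^4 + x + 1 (0b10011); alpha = 0b0010 is a primitive element.
M, POLY = 4, 0b10011
exp_table, log_table = [0] * 15, [0] * 16
x = 1
for i in range(15):
    exp_table[i] = x        # exp_table[i] = alpha^i
    log_table[x] = i        # log_table[alpha^i] = i
    x <<= 1                 # multiply by alpha ...
    if x & 0x10:
        x ^= POLY           # ... and reduce modulo the primitive polynomial

def gf_mul(a, b):
    """Multiply two GF(16) elements with one addition and three table lookups."""
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % 15]
```

With hardware Galois field support absent, these tables turn each field multiplication into cheap integer operations; for the GF(2^m) fields used by practical BCH codes the tables have 2^m - 1 entries.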

Journal ArticleDOI
TL;DR: It is shown that Reed-Solomon codes are twice as good as binary BCH codes with respect to the number of locked positions; in fact, they are optimal.
Abstract: The use of syndrome coding in steganographic schemes tends to reduce distortion during embedding. A more complete model comes from wet paper codes (J. Fridrich et al., 2005), which allow locking positions that cannot be modified. Recently, binary BCH codes have been investigated and seem to be good candidates in this context (D. Schonfeld and A. Winkler, 2006). Here, we show that Reed-Solomon codes are twice as good as binary BCH codes with respect to the number of locked positions; in fact, they are optimal. First, a simple and efficient scheme based on Lagrange interpolation is provided to achieve the optimal number of locked positions. We also consider a new and more general problem, mixing wet papers (locked positions) and simple syndrome coding (low number of changes) in order to face not only passive but also active wardens. Using list decoding techniques, we propose an efficient algorithm that enables an adaptive tradeoff between the number of locked positions and the number of changes.
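The scheme rests on the basic algebraic fact that k point-value pairs determine a unique polynomial of degree less than k, which Lagrange interpolation makes constructive. A minimal sketch over a prime field (the paper works over a Galois field GF(2^m); a prime modulus keeps the code short), with made-up locked positions:

```python
P = 257  # a prime standing in for the Galois field of a real RS code

def lagrange_eval(points, x):
    """Evaluate at x the unique degree < len(points) polynomial through points (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Locked ("wet") positions force these values; the interpolated polynomial
# then determines what the free positions must carry.
locked = [(1, 13), (2, 200), (5, 77)]
free_vals = [lagrange_eval(locked, x) for x in (3, 4, 6)]
```

The locked pairs pin the polynomial down exactly, which is why interpolation-based schemes can tolerate as many locked positions as the code has degrees of freedom.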

Proceedings ArticleDOI
31 May 2009
TL;DR: This work gives explicit constructions of epsilon nets for linear threshold functions on the binary cube and on the unit sphere, and improves upon the well-known construction of dual BCH codes, which only guarantees a covering radius of n/2 - c√n.
Abstract: We give explicit constructions of epsilon nets for linear threshold functions on the binary cube and on the unit sphere. The size of the constructed nets is polynomial in the dimension n and in 1/ε. To the best of our knowledge no such constructions were previously known. Our results match, up to the exponent of the polynomial, the bounds that are achieved by probabilistic arguments. As a corollary we also construct subsets of the binary cube that have size polynomial in n and covering radius of n/2 - c√(n log n), for any constant c. This improves upon the well-known construction of dual BCH codes, which only guarantees a covering radius of n/2 - c√n.

Posted Content
TL;DR: In this paper, it was shown that there are no matrices of size less than 15 with exponents exceeding 1/2, the error exponent of the original construction related to Reed-Muller codes, for which the error probability decays exponentially in the square root of the block length.
Abstract: Polar codes were recently introduced by Arıkan. They achieve the capacity of arbitrary symmetric binary-input discrete memoryless channels under a low complexity successive cancellation decoding strategy. The original polar code construction is closely related to the recursive construction of Reed-Muller codes and is based on the $2 \times 2$ matrix $\bigl[\begin{smallmatrix}1 & 0\\ 1 & 1\end{smallmatrix}\bigr]$. It was shown by Arıkan and Telatar that this construction achieves an error exponent of $\frac12$, i.e., that for sufficiently large blocklengths the error probability decays exponentially in the square root of the length. It was already mentioned by Arıkan that in principle larger matrices can be used to construct polar codes. A fundamental question then is to see whether there exist matrices with exponent exceeding $\frac12$. We first show that any $\ell \times \ell$ matrix none of whose column permutations is upper triangular polarizes symmetric channels. We then characterize the exponent of a given square matrix and derive upper and lower bounds on achievable exponents. Using these bounds we show that there are no matrices of size less than 15 with exponents exceeding $\frac12$. Further, we give a general construction based on BCH codes which for large matrix sizes achieves exponents arbitrarily close to 1 and which exceeds $\frac12$ for size 16.

Journal ArticleDOI
TL;DR: A method to reduce the number of test patterns (TPs) decoded in the Chase-II algorithm for turbo product codes (TPCs) constructed with multi-error-correcting extended Bose-Chaudhuri-Hocquenghem (eBCH) codes is presented.
Abstract: We present a method to reduce the number of test patterns (TPs) decoded in the Chase-II algorithm for turbo product codes (TPCs) constructed with multi-error-correcting extended Bose-Chaudhuri-Hocquenghem (eBCH) codes. We classify TPs into different conditions based on the relationship between syndromes and the number of errors, so that TPs sharing the same codeword are not decoded except for the one with the fewest errors. For eBCH codes with code length 64, simulation results show that over 50% of TPs need not be decoded, without any performance degradation.
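Chase-II itself forms test patterns by flipping every subset of the p least reliable bits of the hard decision; the paper's contribution is to skip decoding most of them. A generic sketch of the pattern generation (the LLR values below are made up, and no eBCH decoder is involved):

```python
from itertools import combinations

def chase2_patterns(llr, p=4):
    """Return the 2^p Chase-II test patterns: the hard decision with every
    subset of the p least-reliable positions flipped."""
    hard = [1 if v < 0 else 0 for v in llr]          # hard decision per bit
    lr = sorted(range(len(llr)), key=lambda i: abs(llr[i]))[:p]
    patterns = []
    for r in range(p + 1):
        for subset in combinations(lr, r):
            tp = list(hard)
            for i in subset:
                tp[i] ^= 1                            # flip unreliable bits
            patterns.append(tp)
    return patterns

llr = [1.8, -0.2, 0.9, -2.5, 0.1, 1.1, -0.4, 2.0]    # toy soft inputs
tps = chase2_patterns(llr, p=3)
```

Each of these 2^p patterns is normally fed to an algebraic decoder; since several patterns often decode to the same codeword, screening them by syndrome before decoding, as the paper does, removes redundant work.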

Journal ArticleDOI
TL;DR: In this article, a Chinese remainder theorem (CRT)-based parallel architecture for long BCH encoding is presented, which can be used to eliminate the fan-out effect of some XOR gates.
Abstract: Bose-Chaudhuri-Hocquenghem (BCH) error-correcting codes are now widely used in communication systems and digital technology. The direct linear feedback shift register (LFSR)-based encoding of a long BCH code suffers from the large fan-out effect of some XOR gates, which prevents LFSR-based encoders of long BCH codes from keeping up with the data transmission speed in some applications. A technique for eliminating the large fan-out effect by the J-unfolding method and some algebraic manipulation has been proposed. In this brief, we present a Chinese remainder theorem (CRT)-based parallel architecture for long BCH encoding. Our novel technique can be used to eliminate the fan-out bottleneck. The only restriction on the speed of long BCH encoding in our CRT-based architecture is log2 N, where N is the length of the BCH code.
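The polynomial division that the LFSR encoder implements in hardware can be sketched in software; shown here for a toy (7,4) cyclic code with generator g(x) = x^3 + x + 1, not for the long codes the paper targets.

```python
def cyclic_encode(msg_bits, g=(1, 0, 1, 1), n=7):
    """Systematic cyclic encoding: append the remainder of m(x)*x^(n-k) / g(x).
    g is the generator polynomial, highest degree first (x^3 + x + 1 here)."""
    k = n - (len(g) - 1)
    assert len(msg_bits) == k
    reg = list(msg_bits) + [0] * (len(g) - 1)   # m(x) * x^(n-k)
    for i in range(k):                          # bitwise long division mod 2,
        if reg[i]:                              # the step an LFSR does per clock
            for j, gj in enumerate(g):
                reg[i + j] ^= gj
    return list(msg_bits) + reg[k:]             # message bits + parity bits

cw = cyclic_encode([1, 0, 1, 1])
```

In a serial LFSR, each iteration of the outer loop is one clock cycle and the XOR fan-out of the feedback taps is the bottleneck the paper's CRT decomposition addresses.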

Patent
Hyun-Jeong Kang1, Yeong-Moon Son1, Jung-Je Son1, Jung-Hoon Cheon1, Chan-Ho Min1, Won-Il Roh1 
04 Jun 2009
TL;DR: In this paper, a method and apparatus for supporting an idle mode of a Mobile Station (MS) in a superframe-based wireless communication system is provided, where a paging listening interval is determined based on a Broadcast CHannel (BCH) information Transmit (TX) interval.
Abstract: A method and apparatus for supporting an idle mode of a Mobile Station (MS) in a superframe-based wireless communication system are provided. In a method for operating an MS to support an idle mode in a superframe-based wireless communication system, a paging listening interval is determined based on a Broadcast CHannel (BCH) information Transmit (TX) interval. BCH information including paging information is received during the paging listening interval. The inclusion/non-inclusion of a paging advertisement (MOB_PAG-ADV) message is detected based on the BCH information.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A new triple leading to triple-error-correcting codes is presented, together with a new method that may be of interest in finding further such triples.
Abstract: The binary primitive triple-error-correcting BCH code is a cyclic code of minimum distance d = 7 with generator polynomial having zeros α, α^3 and α^5, where α is a primitive (2^n − 1)-th root of unity. The zero set of the code is said to be {1, 3, 5}. In the 1970s Kasami showed that one can construct similar triple-error-correcting codes using zero sets consisting of different triples than the BCH codes. Furthermore, in 2000 Chang et al. found new triples leading to triple-error-correcting codes. In this paper a new such triple is presented. In addition, a new method is presented that may be of interest in finding further such triples. The method is illustrated by giving a new and simpler proof of one of the known Kasami triples {1, 2^k + 1, 2^{3k} + 1} where n is odd and gcd(k, n) = 1, as well as by finding the new triple {1, 2^k + 1, 2^{2k} + 1} for any n with gcd(k, n) = 1.

Posted Content
TL;DR: A Chinese remainder theorem (CRT)-based parallel architecture for long BCH encoding is presented and a novel technique can be used to eliminate the fan-out bottleneck.
Abstract: BCH (Bose-Chaudhuri-Hocquenghem) error correcting codes ([1]-[2]) are now widely used in communication systems and digital technology. Direct LFSR (linear feedback shift register)-based encoding of a long BCH code suffers from serial-in, serial-out limitations and the large fanout effect of some XOR gates. This means that LFSR-based encoders of long BCH codes cannot keep up with the data transmission speed in some applications. Several parallel encoders for long cyclic codes have been proposed in [3]-[8]. The technique for eliminating the large fanout effect by the J-unfolding method and some algebraic manipulation was presented in [7] and [8]. In this paper we propose a CRT (Chinese Remainder Theorem)-based parallel architecture for long BCH encoding. Our novel technique can be used to eliminate the fanout bottleneck. The only restriction on the speed of long BCH encoding in our CRT-based architecture is $\log_2 N$, where $N$ is the length of the BCH code.

Journal ArticleDOI
22 Dec 2009
TL;DR: The error locator evaluator is proposed to evaluate error locations without the Chien search for higher throughput, and the Björck-Pereyra error magnitude solver is presented to improve decoding efficiency and hardware complexity.
Abstract: This paper presents a soft Bose-Chaudhuri-Hocquenghem (BCH) decoder chip that uses soft information from the LDPC decoder in the DVB-S2 system. In contrast with a hard BCH decoder, the proposed soft BCH decoder, which deals with the least reliable bits, provides much lower complexity with similar error-correcting performance. Moreover, an error locator evaluator is proposed to evaluate error locations without the Chien search for higher throughput, and the Björck-Pereyra error magnitude solver (BP-EMS) is presented to improve decoding efficiency and hardware complexity. Chip measurement results reveal that the proposed soft (32400, 32208) BCH decoder for the DVB-S2 system can achieve 314.5 Mb/s with a gate count of 26.9 K in standard 90 nm 1P9M CMOS technology. Extended to fully support all 21 modes in the DVB-S2 system, our approach can achieve a 300 MHz operating frequency with a gate count of 32.4 K.

Journal ArticleDOI
TL;DR: It is proved that a recently proposed translation yields a correct representation of the BCH and Zassenhaus terms; this representation entails fewer terms than the well-known Dynkin–Specht–Wever representation, which is of relevance for practical applications.

Book ChapterDOI
07 Jun 2009
TL;DR: This paper briefly surveys recent progress on list decoding algorithms for binary codes, including algorithms to list decode binary Reed-Muller codes of any order up to the minimum distance, generalizing the classical Goldreich-Levin algorithm for RM codes of order 1.
Abstract: We briefly survey some recent progress on list decoding algorithms for binary codes. The results discussed include: Algorithms to list decode binary Reed-Muller codes of any order up to the minimum distance, generalizing the classical Goldreich-Levin algorithm for RM codes of order 1 (Hadamard codes); these algorithms are "local" and run in time polynomial in the message length. Construction of binary codes efficiently list-decodable up to the Zyablov (and Blokh-Zyablov) radius; this gives a factor two improvement over the error-correction radius of traditional "unique decoding" algorithms. The existence of binary linear concatenated codes that achieve list decoding capacity, i.e., the optimal trade-off between rate and the fraction of worst-case errors one can hope to correct. Explicit binary codes mapping k bits to n ≤ poly(k/ε) bits that can be list decoded from a fraction (1/2 − ε) of errors (even for ε = o(1)) in poly(k/ε) time; a construction based on concatenating a variant of the Reed-Solomon code with dual BCH codes achieves the best known (cubic) dependence on 1/ε, whereas the existential bound is n = O(k/ε^2). (The above-mentioned result decoding up to the Zyablov radius achieves a rate of Ω(ε^3) for the case of constant ε.) We only sketch the high-level ideas behind these developments, pointing to the original papers for technical details and precise theorem statements.

01 Nov 2009
TL;DR: This paper uses error correcting codes for multilabel classification, combining the Bose, Ray-Chaudhuri, Hocquenghem (BCH) code with the Random Forests learner to form a method that can deal with multilabel classification problems, improving on the performance of several popular existing methods.
Abstract: Multilabel classification deals with problems in which an instance can belong to multiple classes. This paper uses error correcting codes for multilabel classification. The BCH code and the Random Forests learner are used to form the proposed method. Thus, the advantage of the error-correcting properties of BCH is merged with the good performance of the Random Forests learner to enhance the multilabel classification results. Three experiments are conducted on three common benchmark datasets. The results are compared against those of several existing approaches. The proposed method does well against its counterparts for the three datasets of varying characteristics.

positive relevance classification is associated with the instance. Zhang and Zhou (3) report a multi-label lazy learning approach derived from the traditional k-Nearest Neighbour (kNN) algorithm and named ML-kNN. For each unseen instance, its k nearest neighbors in the training set are identified. Then, based on statistical information gained from the label sets of these neighboring instances, the maximum a posteriori principle is used to determine the label set for the unseen instance. Zhang and Zhou (4) present a neural-network-based algorithm, Backpropagation for Multi-Label Learning (BP-MLL). It is based on the backpropagation algorithm but uses a specific error function that captures the characteristics of multi-label learning: the labels belonging to an instance are ranked higher than those not belonging to that instance. Tsoumakas and Vlahavas (5) propose RAndom K-labELsets (RAKEL), an ensemble method for multilabel classification based on random projections of the label space. An ensemble of Label Powerset (LP) classifiers is trained on smaller label subsets randomly selected from the training data. The RAKEL method takes label correlations into account by using single-label classifiers applied to subtasks with a manageable number of labels and an adequate number of examples per label. It therefore tackles the difficulty of learning due to a large number of classes associated with only a few examples. Several other important works can also be found in (6-11). The main motivation behind the work reported in this paper is our desire to improve the performance of multilabel classification methods. This paper explores the use of error correcting codes for multilabel classification. It uses the Bose, Ray-Chaudhuri, Hocquenghem (BCH) code and the Random Forests learner to form a method that can deal with multilabel classification problems, improving on the performance of several popular existing methods. The description of the theoretical framework as well as the proposed method is given in the following sections.
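The error-correcting idea can be sketched with a repetition code standing in for BCH: redundant code bits let majority voting undo occasional per-bit classifier mistakes. The labels and flipped positions below are arbitrary, and no actual learner is involved:

```python
def repeat_encode(labels, r=3):
    """Encode a multilabel bit vector with an r-fold repetition code
    (a stand-in here for the BCH code used in the paper)."""
    return [b for b in labels for _ in range(r)]

def repeat_decode(bits, r=3):
    """Majority-vote decoding; corrects up to (r - 1) // 2 flips per label."""
    return [1 if sum(bits[i * r:(i + 1) * r]) * 2 > r else 0
            for i in range(len(bits) // r)]

labels = [1, 0, 1, 1]              # ground-truth label set of one instance
coded = repeat_encode(labels)      # bits the per-bit classifiers must predict
noisy = list(coded)
noisy[0] ^= 1                      # one classifier errs on one code bit
noisy[7] ^= 1                      # another errs on a bit of a different label
recovered = repeat_decode(noisy)
```

A BCH code plays the same role with far better rate: one binary classifier is trained per code bit, and algebraic decoding of the predicted bit vector corrects a bounded number of classifier errors before the label set is read off.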

Journal ArticleDOI
Wang Xueqiang1, Pan Liyang1, Wu Dong1, Hu Chaohong1, Zhou Runde2 
TL;DR: A novel fast-decoding algorithm is developed to speed up the BCH decoding process using iteration-free solutions and division-free transformations in finite fields; the decoding latency is significantly reduced, satisfying the fast-access-time requirements of NOR flash memories.
Abstract: An on-chip high-speed two-cell Bose-Chaudhuri-Hocquenghem (BCH) decoder for error correction in a multilevel-cell (MLC) NOR flash memory is presented. To satisfy the reliability requirements, a double-error-correcting (DEC) BCH code is required in NOR flash memories as the process shrinks beyond 45 nm. A novel fast-decoding algorithm is developed to speed up the BCH decoding process using iteration-free solutions and division-free transformations in finite fields. As a result, the decoding latency is reduced by 80%. Furthermore, a novel two-cell decoder architecture suitable for an MLC flash memory is proposed to obtain a good time-area tradeoff. Experimental results show that the latency of the proposed two-cell BCH decoder is only 7.5 ns, which satisfies the fast-access-time requirements of NOR flash memories.

Book ChapterDOI
TL;DR: In this article, it was shown that every sparse affine-invariant code over the coordinates F_{2^n} for prime n is locally testable, and that BCH codes of Mersenne prime length are generated by a single low-weight codeword and its cyclic shifts.
Abstract: Motivated by questions in property testing, we search for linear error-correcting codes that have the "single local orbit" property: i.e., they are specified by a single local constraint and its translations under the symmetry group of the code. We show that the dual of every "sparse" binary code whose coordinates are indexed by elements of F_{2^n} for prime n, and whose symmetry group includes the group of non-singular affine transformations of F_{2^n}, has the single local orbit property. (A code is said to be "sparse" if it contains polynomially many codewords in its block length.) In particular this class includes the dual-BCH codes, for whose duals (i.e., for BCH codes) simple bases were not known. Our result gives the first short (O(n)-bit, as opposed to the natural exp(n)-bit) description of a low-weight basis for BCH codes. The interest in the "single local orbit" property comes from the recent result of Kaufman and Sudan (STOC 2008) that shows that the duals of codes that have the single local orbit property under the affine symmetry group are locally testable. When combined with our main result, this shows that all sparse affine-invariant codes over the coordinates F_{2^n} for prime n are locally testable. If, in addition to n being prime, 2^n-1 is also prime (i.e., 2^n-1 is a Mersenne prime), then we get that every sparse cyclic code also has the single local orbit property. In particular this implies that BCH codes of Mersenne prime length are generated by a single low-weight codeword and its cyclic shifts.

Journal ArticleDOI
TL;DR: The Fourier spectra of some recently discovered binomial almost perfect nonlinear (APN) functions are computed to provide an alternative proof of the APN property of the functions.
Abstract: In this paper we compute the Fourier spectra of some recently discovered binomial almost perfect nonlinear (APN) functions. One consequence of this is the determination of the nonlinearity of the functions, which measures their resistance to linear cryptanalysis. Another consequence is that certain error-correcting codes related to these functions have the same weight distribution as the 2-error-correcting Bose-Chaudhuri-Hocquenghem (BCH) code. Furthermore, for field extensions of $\mathbb{F}_2$ of odd degree, our results provide an alternative proof of the APN property of the functions.
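Both quantities in the abstract are finite computations on small fields. The sketch below computes the Walsh (Fourier) spectrum and checks the APN property for the classical Gold function x -> x^3 over GF(2^5); this is an illustrative stand-in chosen for its small size, not one of the paper's binomial functions.

```python
# Walsh spectrum and APN check for x -> x^3 over GF(2^5),
# built with the primitive polynomial x^5 + x^2 + 1.
N = 5
SIZE = 1 << N
POLY = 0b100101

def gmul(a, b):                         # carry-less multiply mod POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & SIZE:
            a ^= POLY
        b >>= 1
    return r

def cube(x):
    return gmul(x, gmul(x, x))

def tr(x):
    """Absolute trace to GF(2): x + x^2 + x^4 + x^8 + x^16."""
    t, y = 0, x
    for _ in range(N):
        t ^= y
        y = gmul(y, y)
    return t                            # always 0 or 1

def is_apn(f):
    """APN: f(x+a) + f(x) = b has at most 2 solutions for every a != 0."""
    for a in range(1, SIZE):
        counts = [0] * SIZE
        for x in range(SIZE):
            counts[f(x ^ a) ^ f(x)] += 1
        if max(counts) > 2:
            return False
    return True

def nonlinearity(f):
    """2^(n-1) - max|W_f|/2 over all component functions b != 0."""
    w_max = 0
    for b in range(1, SIZE):
        for a in range(SIZE):
            w = sum(1 - 2 * tr(gmul(b, f(x)) ^ gmul(a, x))
                    for x in range(SIZE))
            w_max = max(w_max, abs(w))
    return SIZE // 2 - w_max // 2

print(is_apn(cube), nonlinearity(cube))  # -> True 12
```

For odd-degree extensions the Gold functions are almost bent, so the spectrum takes values in {0, ±8} and the nonlinearity is 2^4 - 4 = 12, matching the output.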

Journal ArticleDOI
01 Oct 2009-Optik
TL;DR: Performance analyses of the novel SFEC code show that it offers clear advantages, such as shorter component codes and rapid encoding/decoding; thus both the complexity of its software/hardware implementation and its encoding/decoding delay can be greatly reduced.

Proceedings ArticleDOI
28 Jun 2009
TL;DR: A modified version of the extended Euclidean algorithm for polynomial greatest common divisors that is specifically adapted to the RS decoding problem and requires no degree computation or comparison to a threshold.
Abstract: The extended Euclidean algorithm (EEA) for polynomial greatest common divisors is commonly used in solving the key equation in the decoding of Reed-Solomon (RS) codes, and more generally in BCH decoding. For this particular application, the iterations in the EEA are stopped when the degree of the remainder polynomial falls below a threshold. While determining the degree of a polynomial is a simple task for human beings, hardware implementation of this stopping rule is more complicated. This paper describes a modified version of the EEA that is specifically adapted to the RS decoding problem. This modified algorithm requires no degree computation or comparison to a threshold, and it uses a fixed number of iterations. Another advantage of this modified version is in its application to the errors-and-erasures decoding problem for RS codes where significant hardware savings can be achieved via seamless computation.
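The degree-threshold stopping rule that the paper's modified EEA eliminates is visible in a plain key-equation solver: iterate the Euclidean algorithm on (x^{2t}, S(x)) until the remainder's degree drops below t. The sketch below shows that rule for a toy RS code over the prime field GF(7) with t = 2; the field and code are chosen only for readability, and this is the classical algorithm, not the paper's modification.

```python
# Classical EEA key-equation solver with the degree-based stopping rule.
P, ALPHA, N, T = 7, 3, 6, 2            # GF(7); 3 is primitive, order 6

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, u in enumerate(a):
        for j, v in enumerate(b):
            r[i + j] = (r[i + j] + u * v) % P
    return trim(r)

def psub(a, b):
    r = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        r[i] = c
    for i, c in enumerate(b):
        r[i] = (r[i] - c) % P
    return trim(r)

def pdivmod(a, b):
    a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[-1], P - 2, P)         # inverse of leading coefficient
    while len(a) >= len(b):
        d, s = len(a) - len(b), a[-1] * inv % P
        q[d] = s
        for i, c in enumerate(b):
            a[i + d] = (a[i + d] - s * c) % P
        trim(a)
    return trim(q), a

def key_equation(S):
    """EEA on (x^(2t), S(x)); sigma from the Bezout coefficient of S."""
    r0, r1 = [0] * (2 * T) + [1], S[:]
    u0, u1 = [], [1]
    while len(r1) - 1 >= T:            # <-- the degree-based stopping rule
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, psub(u0, pmul(q, u1))
    return u1, r1                      # sigma(x), omega(x)

# Two errors in the all-zero codeword: value 2 at position 1, 5 at 4.
errors = {1: 2, 4: 5}
S = trim([sum(v * pow(ALPHA, pos * j, P) for pos, v in errors.items()) % P
          for j in range(1, 2 * T + 1)])
sigma, _ = key_equation(S)
found = [i for i in range(N)
         if sum(c * pow(ALPHA, (N - i) % N * k, P)
                for k, c in enumerate(sigma)) % P == 0]
print(found)                           # -> [1, 4]
```

The hardware pain point the paper addresses is exactly the `len(r1) - 1 >= T` test: it requires tracking polynomial degrees, which the modified algorithm replaces with a fixed iteration count.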

Patent
17 Jun 2009
TL;DR: In this article, a decoding method for channel error-correcting BCH and RS codes, belonging to the digital communication field, is presented; it detects the case where the number of errors in the received word exceeds the maximum error-correcting capability and outputs the received word unchanged instead of miscorrecting it.
Abstract: The invention discloses a decoding method for the channel error-correcting BCH and RS codes, belonging to the digital communication field. The method comprises calculating the syndrome polynomial S(x) from the received word R(x); solving the Berlekamp key equation through the Euclidean algorithm to obtain the error-location polynomial sigma(x); decoding directly if the constant term sigma_0 of sigma(x) is zero, or otherwise calculating the error locations and corresponding error values through the Chien search; decoding correctly if the number of error locations found equals the code's maximum error-correcting capability, or otherwise sending a warning signal indicating that the number of errors in the received word exceeds the maximum error-correcting capability and outputting the received word unchanged. The method avoids the situation in which attempted correction introduces additional errors when the number of received errors exceeds the maximum error-correcting capability of the code, thereby reducing the error rate of the whole communication system.
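The guard the patent describes, accepting a correction only when the Chien search finds as many error locations as the locator polynomial's degree, can be sketched for a binary BCH code over GF(2^4). This is an illustrative reading of the mechanism, not the patent's circuit.

```python
# Chien search with a miscorrection guard over GF(2^4), x^4 + x + 1.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13
        b >>= 1
    return r

ALPHA_POW = [0] * 15                   # alpha^i for alpha = x = 2
a = 1
for i in range(15):
    ALPHA_POW[i] = a
    a = gmul(a, 2)

def chien(sigma):
    """sigma given low-to-high; return (error positions, accepted?)."""
    roots = []
    for i in range(15):
        v = 0
        for c in reversed(sigma):      # Horner evaluation at alpha^i
            v = gmul(v, ALPHA_POW[i]) ^ c
        if v == 0:
            roots.append((15 - i) % 15)
    # Accept only if the root count matches the locator degree.
    return sorted(roots), len(roots) == len(sigma) - 1

good = [1, 6, 9]   # (1 + a^3 x)(1 + a^11 x): degree 2, two roots
bad = [1, 0, 1]    # (1 + x)^2: degree 2, only one distinct root
print(chien(good))                     # -> ([3, 11], True)
print(chien(bad))                      # -> ([0], False)
```

When the root count falls short of the degree, the error pattern is uncorrectable and the decoder should flag it rather than "correct" and make things worse, which is the failure mode the abstract warns about.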

Patent
24 Nov 2009
TL;DR: In this paper, an apparatus having a first circuit, a second circuit and a third circuit is presented, in which a plurality of normal syndromes are calculated by modifying the preliminary syndromes using at most two Galois-field multiplications.
Abstract: An apparatus generally having a first circuit, a second circuit and a third circuit is disclosed. The first circuit may be configured to calculate a plurality of preliminary syndromes from a plurality of received symbols. The second circuit may be configured to calculate a plurality of normal syndromes by modifying the preliminary syndromes using at most two Galois Field multiplications. The third circuit is generally configured to calculate an errata polynomial based on the normal syndromes.
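The abstract does not spell out how the preliminary and normal syndromes are related. One standard pattern it evokes is computing syndromes of a shortened received word by Horner's rule (the preliminary values) and then normalizing each with a single Galois-field multiplication by a power of alpha. The field, offset, and word below are illustrative assumptions, not the patent's parameters.

```python
# Preliminary syndromes of a shortened word, normalized with one
# multiply each, over GF(2^4) with x^4 + x + 1.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0x13
        b >>= 1
    return r

def gpow(a, e):
    r = 1
    for _ in range(e % 15):
        r = gmul(r, a)
    return r

OFFSET = 5                             # shortened word occupies slots 5..14
word = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]  # 10 received bits (hypothetical)

def prelim(j):
    """S~_j: Horner evaluation of the word at alpha^j, local indices."""
    s = 0
    for bit in reversed(word):
        s = gmul(s, gpow(2, j)) ^ bit
    return s

def normal(j):
    """True syndrome: one extra multiply by alpha^(j*OFFSET)."""
    return gmul(gpow(2, j * OFFSET), prelim(j))

# Sanity check against direct evaluation over the full-length word.
full = [0] * OFFSET + word
for j in range(1, 5):
    direct = 0
    for k, bit in enumerate(full):
        if bit:
            direct ^= gpow(2, j * k)
    assert normal(j) == direct
print("normal syndromes:", [normal(j) for j in range(1, 5)])
```

The point of the split is that the cheap Horner pass runs as symbols arrive, and only a constant number of corrective multiplications is needed afterwards.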