
Showing papers on "BCH code published in 2008"


Journal ArticleDOI
20 Jan 2008
TL;DR: The Fast Johnson-Lindenstrauss Transform (FJLT) was recently discovered by Ailon and Chazelle as a novel technique for performing fast dimension reduction with small distortion from ℓ2^d to ℓ2^k in time O(max{d log d, k^3}) as mentioned in this paper.
Abstract: The Fast Johnson-Lindenstrauss Transform (FJLT) was recently discovered by Ailon and Chazelle as a novel technique for performing fast dimension reduction with small distortion from ℓ2^d to ℓ2^k in time O(max{d log d, k^3}). For k in [Ω(log d), O(d^{1/2})] this beats the time O(dk) achieved by naive multiplication by random dense matrices, an approach followed by several authors as a variant of the seminal result by Johnson and Lindenstrauss (JL) from the mid 80's. In this work we show how to significantly improve the running time to O(d log k) for k = O(d^{1/2−δ}), for any arbitrarily small fixed δ. This beats the better of FJLT and JL. Our analysis uses a powerful measure concentration bound due to Talagrand applied to Rademacher series in Banach spaces (sums of vectors in Banach spaces with random signs). The set of vectors used is a real embedding of dual BCH code vectors over GF(2). We also discuss the number of random bits used and the reduction to ℓ1 space. The connection between geometry and discrete coding theory discussed here is interesting in its own right and may be useful in other algorithmic applications as well.
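To give a concrete feel for this family of transforms, here is a minimal Python sketch of the generic fast JL template (random sign flips, a fast Walsh-Hadamard mixing step, coordinate subsampling). It is only the common recipe under illustrative assumptions; the paper's construction instead uses a real embedding of dual BCH codewords and a Rademacher-series analysis, which this sketch does not reproduce.

```python
# Generic fast-JL-style sketch (not the paper's dual-BCH construction):
# random +/-1 signs, fast Walsh-Hadamard mixing, then subsample k coordinates.
import numpy as np

def fwht(a):
    """Iterative fast Walsh-Hadamard transform, O(d log d); len(a) must be a power of two."""
    a = np.array(a, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def fast_jl(x, k, rng):
    d = len(x)
    signs = rng.choice([-1.0, 1.0], size=d)        # random diagonal of +/-1
    y = fwht(signs * x) / np.sqrt(d)               # orthogonal mixing, preserves the norm
    rows = rng.choice(d, size=k, replace=False)    # keep k coordinates
    return y[rows] * np.sqrt(d / k)                # squared norm preserved in expectation

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
print(np.linalg.norm(x), np.linalg.norm(fast_jl(x, 64, rng)))   # comparable values
```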

253 citations


Patent
17 Sep 2008
TL;DR: In this article, a low-power Chien searching method is presented, employing Chien search circuitry comprising at least two hardware components that compute at least two corresponding bits of a Chien search output; only a subset of the hardware components is activated initially, and the remaining components are activated only if a criterion on the resulting subset of output bits is satisfied.
Abstract: A low power Chien searching method employing Chien search circuitry comprising at least two hardware components that compute at least two corresponding bits comprising a Chien search output, the method comprising activating only a subset of the hardware components thereby to compute only a subset of the bits of the Chien search output; and activating hardware components other than those in the subset of hardware components, to compute additional bits of the Chien search output other than the bits in the subset of bits, only if a criterion on the subset of the bits of the Chien search output is satisfied.
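For readers unfamiliar with the underlying operation, the sketch below shows a plain, fully evaluated Chien search over a small field; the patent's low-power idea of activating only some hardware components first, and the rest only when a criterion on the partial output is met, is not modeled here. The field GF(2^4), its primitive polynomial and the example locator polynomial are illustrative assumptions.

```python
# Plain Chien search over GF(2^m): evaluate the error-locator polynomial Lambda at
# alpha^i for i = 0..n-1 and report the exponents where it vanishes.
# GF(2^4) with p(x) = x^4 + x + 1 is an illustrative choice, not the patent's code.
M, PRIM_POLY, N = 4, 0b10011, 15

def gf_mul(a, b):
    """Carry-less multiplication modulo the primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def chien_search(locator):
    """locator[j] = Lambda_j (lowest degree first); returns i with Lambda(alpha^i) == 0, alpha = 2."""
    roots = []
    terms = list(locator)                                  # Lambda_j * alpha^(i*j), starting at i = 0
    steps = [gf_pow(2, j) for j in range(len(locator))]    # per-term update factors alpha^j
    for i in range(N):
        acc = 0
        for t in terms:
            acc ^= t                                       # addition in GF(2^m) is XOR
        if acc == 0:
            roots.append(i)
        terms = [gf_mul(t, s) for t, s in zip(terms, steps)]
    return roots

# Lambda(x) = (x + alpha^3)(x + alpha^9) = x^2 + alpha*x + alpha^12 -> roots at i = 3 and 9
print(chien_search([15, 2, 1]))    # [3, 9]
```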

156 citations


Proceedings ArticleDOI
24 Oct 2008
TL;DR: Two techniques that lower the error-rate floors for LDPC-coded partial response (PR) channels, which are applicable to magnetic and optical storage are introduced.
Abstract: There is a well-known error-floor phenomenon associated with iterative LDPC decoders which has delayed the use of LDPC codes in certain communication and storage systems. Error floors are known to generally be caused by so-called trapping sets, subsets of code bits which induce a subgraph in a code's Tanner graph that have the effect of locking up the decoder. In earlier work, the authors proposed three decoder-based techniques that lower the LDPC error floors on binary-input AWGN channels. In this paper, we introduce two techniques that lower the error-rate floors for LDPC-coded partial response (PR) channels, which are applicable to magnetic and optical storage. The techniques involve, via external measures, "pinning" one of the bits in each problematic trapping set and then letting the iterative decoder proceed to correct the rest of the bits. We present two classes of pinning solutions: (1) a pre-pinning technique which fixes (pins) selected trapping set bits prior to transmission and (2) a post-pinning approach which utilizes information from outer BCH decoders to pin bits in trapping sets. Our simulations on PR1 and EPR4 channels demonstrate that the floor for the code chosen for this study, a rate-0.78 (2048,1600) quasi-cyclic LDPC code, is lowered by orders of magnitude, beyond the reach of simulations.

136 citations


Proceedings ArticleDOI
17 Nov 2008
TL;DR: This paper proposes to use Reed-Solomon (RS) codes for error correction in MLC flash memory and shows that a proposed Gray-code bit mapping achieves 0.02 dB and 0.2 dB additional gains with RS and BCH codes, respectively, without any overhead.
Abstract: Prior research efforts have been focusing on using BCH codes for error correction in multi-level cell (MLC) NAND flash memory. However, BCH codes often require highly parallel implementations to meet the throughput requirement. As a result, a large area is needed. In this paper, we propose to use Reed-Solomon (RS) codes for error correction in MLC flash memory. A (828, 820) RS code has almost the same rate and length in terms of bits as a (8248, 8192) BCH code. Moreover, it has at least the same error-correcting performance in flash memory applications. Nevertheless, with 70% of the area, the RS decoder can achieve a throughput that is 121% higher than that of the BCH decoder. A novel bit mapping scheme using Gray code is also proposed in this paper. Compared to direct bit mapping, our proposed scheme can achieve 0.02 dB and 0.2 dB additional gains by using RS and BCH codes, respectively, without any overhead.
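As a small illustration of why a Gray-code bit mapping helps (the exact mapping proposed in the paper may differ; this sketch uses the standard binary-reflected Gray code for a 2-bit-per-cell MLC):

```python
# Binary-reflected Gray coding of 2-bit values onto four MLC threshold levels.
# Adjacent programmed levels differ in exactly one bit, so the dominant
# "read one level off" errors become single-bit errors for the ECC.
def gray_encode(v):
    return v ^ (v >> 1)

def gray_decode(g):
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v

levels = [gray_encode(v) for v in range(4)]
print([format(x, "02b") for x in levels])                                     # ['00', '01', '11', '10']
assert all(bin(levels[i] ^ levels[i + 1]).count("1") == 1 for i in range(3))  # neighbors differ in one bit
assert all(gray_decode(gray_encode(v)) == v for v in range(4))
```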

103 citations


Proceedings ArticleDOI
17 Nov 2008
TL;DR: This work presents a DEC code design that is aligned to typical memory word widths and a parallel decoding implementation approach that operates on complete memory words in a single cycle.
Abstract: Exacerbated SRAM reliability issues, due to soft errors and increased process variations in sub-100 nm technologies, limit the efficacy of conventionally used error correcting codes (ECC). Double error correcting (DEC) BCH codes have not found favorable application in SRAMs due to the non-alignment of their block sizes to typical memory word widths and particularly due to the large multi-cycle latency of traditional iterative decoding algorithms. This work presents a DEC code design that is aligned to typical memory word widths and a parallel decoding implementation approach that operates on complete memory words in a single cycle. The practicality of this approach is demonstrated through ASIC implementations, in which it incurs only 1.4 ns and 2.2 ns decoding latencies for 16- and 64-bit words, respectively, using 90 nm ASIC technology. A comparative analysis between conventionally used ECC and DEC ECC for reliability gains and costs incurred has also been performed.
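A rough back-of-the-envelope calculation makes the word-width alignment issue concrete: single-error-correcting (SEC) Hamming and SEC-DED codes need m and m+1 check bits respectively, whereas a double-error-correcting BCH code needs roughly 2m check bits for a field degree m large enough that the (shortened) code covers the data word. The counts below follow these standard bounds and are only illustrative; the codes actually designed in the paper may differ slightly.

```python
# Approximate check-bit counts for protecting a k-bit memory word (standard bounds,
# not the exact codes constructed in the paper).
def sec_bits(k):
    m = 1
    while 2 ** m < k + m + 1:        # Hamming condition for single-error correction
        m += 1
    return m

def dec_bch_bits(k):
    m = 1
    while 2 ** m - 1 < k + 2 * m:    # need a (shortened) t=2 BCH code of length >= k + 2m
        m += 1
    return 2 * m

for k in (16, 32, 64):
    print(f"{k}-bit word: SEC {sec_bits(k)}, SEC-DED {sec_bits(k) + 1}, DEC BCH {dec_bch_bits(k)} check bits")
# 16-bit word: SEC 5, SEC-DED 6, DEC BCH 10 check bits
# 32-bit word: SEC 6, SEC-DED 7, DEC BCH 12 check bits
# 64-bit word: SEC 7, SEC-DED 8, DEC BCH 14 check bits
```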

99 citations


Journal ArticleDOI
TL;DR: In this paper, a rational curve fitting algorithm was proposed for list decoding of Reed-Solomon and Bose-Chaudhuri-Hocquenghem (BCH) codes.
Abstract: In this paper, we devise a rational curve fitting algorithm and apply it to the list decoding of Reed-Solomon and Bose-Chaudhuri-Hocquenghem (BCH) codes. The resulting list decoding algorithms exhibit the following significant properties.

86 citations


Proceedings ArticleDOI
06 Jul 2008
TL;DR: This work gives systematic constructions of asymmetric quantum stabilizer codes that exploit a significant asymmetry between the probabilities for bit flip and phase flip errors.
Abstract: Recently, quantum error-correcting codes were proposed that capitalize on the fact that many physical error models lead to a significant asymmetry between the probabilities for bit flip and phase flip errors. An example for a channel which exhibits such asymmetry is the combined amplitude damping and dephasing channel, where the probabilities of bit flips and phase flips can be related to relaxation and dephasing time, respectively. We give systematic constructions of asymmetric quantum stabilizer codes that exploit this asymmetry. Our approach is based on a CSS construction that combines BCH and finite geometry LDPC codes.

59 citations


Journal ArticleDOI
TL;DR: Simulation results show that the proposed method yields competitive performance with a good decoding complexity trade-off for the BSC and theoretical results remain useful to derive tight lower bounds on the performance of GLDPC codes over a binary symmetric channel (BSC).
Abstract: A generalized low-density parity check code (GLDPC) is a low-density parity check code in which the constraint nodes of the code graph are block codes, rather than single parity checks. In this paper, we study GLDPC codes which have BCH or Reed-Solomon codes as subcodes under bounded distance decoding (BDD). The performance of the proposed scheme is investigated in the limit case of an infinite length (cycle free) code used over a binary erasure channel (BEC), and the corresponding thresholds for iterative decoding are derived. The performance of the proposed scheme for finite code lengths over a BEC is investigated as well. Structures responsible for decoding failures are defined, and a theoretical analysis over the ensemble of GLDPC codes which yields exact bit and block error rates of the ensemble average is derived. Unfortunately, this study shows that GLDPC codes do not compare favorably with their LDPC counterpart over the BEC. Fortunately, it is also shown that under certain conditions, objects identified in the analysis of GLDPC codes over a BEC and the corresponding theoretical results remain useful to derive tight lower bounds on the performance of GLDPC codes over a binary symmetric channel (BSC). Simulation results show that the proposed method yields competitive performance with a good decoding complexity trade-off for the BSC.

55 citations


Patent
15 Feb 2008
TL;DR: In this article, the relationship between information transmitted by a primary BCH and non-primary BCH information is considered, where a transmitting device and a receiving device are disclosed for making it possible to improve an error rate of the second information sequence.
Abstract: A transmitting device and a receiving device are disclosed that improve the error rate of a second information sequence whose receiving quality is difficult to maintain, such as information transmitted by a non-primary BCH, by exploiting its relationship with a first information sequence whose receiving quality is easy to maintain, such as information transmitted by a primary BCH. In the devices, an encoder (102) encodes a non-primary BCH information sequence (Sn) with a long code length that includes a primary BCH information sequence (Sp). On the receiving side, the non-primary BCH information sequence is decoded at the long code length by using the received primary BCH value. With this, a higher coding gain than encoding only the non-primary BCH information can be obtained, so that the receiving characteristic of the non-primary BCH can be improved.

52 citations


Proceedings ArticleDOI
TL;DR: In this article, an asymmetric quantum stabilizer code for the combined amplitude damping and dephasing channel is proposed, based on a CSS construction that combines BCH and finite geometry LDPC codes.
Abstract: Recently, quantum error-correcting codes were proposed that capitalize on the fact that many physical error models lead to a significant asymmetry between the probabilities for bit flip and phase flip errors. An example for a channel which exhibits such asymmetry is the combined amplitude damping and dephasing channel, where the probabilities of bit flips and phase flips can be related to relaxation and dephasing time, respectively. We give systematic constructions of asymmetric quantum stabilizer codes that exploit this asymmetry. Our approach is based on a CSS construction that combines BCH and finite geometry LDPC codes.

47 citations


Posted Content
TL;DR: In this article, a new integer programming (IP) formulation of the maximum likelihood decoding problem was proposed and a separation algorithm was proposed for finding cuts induced by redundant parity checks (RPC) under certain circumstances.
Abstract: Maximum Likelihood (ML) decoding is the optimal decoding algorithm for arbitrary linear block codes and can be written as an Integer Programming (IP) problem. Feldman et al. relaxed this IP problem and presented a Linear Programming (LP) based decoding algorithm for linear block codes. In this paper, we propose a new IP formulation of the ML decoding problem and solve the IP with generic methods. The formulation uses indicator variables to detect violated parity checks. We derive Gomory cuts from our formulation and use them in a separation algorithm to find ML codewords. We further propose an efficient method of finding cuts induced by redundant parity checks (RPCs). Under certain circumstances we can guarantee that these RPC cuts are valid and cut off the fractional optimal solutions of LP decoding. We demonstrate on two LDPC codes and one BCH code that our separation algorithm performs significantly better than LP decoding.
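For context, the classical integer program that LP decoding relaxes can be written as below (this is the well-known background formulation, not the paper's new formulation with indicator variables for violated parity checks). Here γ_i are the channel log-likelihood ratios and N(j) is the support of parity check j.

```latex
% Background: ML decoding of a binary linear code as an integer program.
\min_{x,\,z}\ \sum_{i=1}^{n} \gamma_i x_i
\quad \text{subject to} \quad
\sum_{i \in N(j)} x_i = 2 z_j \ \ \forall j,
\qquad x_i \in \{0,1\},\quad z_j \in \mathbb{Z}_{\ge 0}.
```

Dropping the integrality constraints (with each parity check described by a local relaxation, as in LP decoding) yields the linear program whose fractional optima the Gomory and RPC cuts above are designed to cut off.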

Journal ArticleDOI
TL;DR: A novel decoding technique is developed, termed automorphism group decoding, that combines iterative message passing and permutation decoding and demonstrates that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum-likelihood (ML) decoding.
Abstract: We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the tradeoff between performance and complexity of iterative decoding for the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovasz's local lemma (LLL) and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations of an automorphism group decoder needed to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum-likelihood (ML) decoding.

Proceedings ArticleDOI
Pascal Urard1, Laurent Paumier1, Vincent Heinrich1, N. Raina1, Nitin Chawla1 
01 Feb 2008
TL;DR: The design of a full broadcast + interactive services compliant 2nd generation satellite digital video broadcast (DVB-S2) codec is presented.
Abstract: The design of a full broadcast + interactive services compliant 2nd generation satellite digital video broadcast (DVB-S2) codec is presented.

Patent
01 Jul 2008
TL;DR: In this paper, a BCH (Bose, Ray-Chaudhuri, Hocquenghem) code is used for data error detection and correction in non-volatile memory devices.
Abstract: Data error detection and correction in non-volatile memory devices are disclosed. Data error detection and correction can be performed with software, hardware or a combination of both. Such an error-correction scheme is generally referred to as an ECC (error correction code). One of the most relevant codes used in non-volatile memory devices is the BCH (Bose, Ray-Chaudhuri, Hocquenghem) code. In order to correct a reasonable number of random errors (e.g., up to eight) in a chunk of data (e.g., a 4200-bit codeword with 4096 bits of information data), a BCH(4200,4096,8) code over GF(2^13) is used. The ECC comprises an encoder and a decoder. The decoder further comprises a plurality of error detectors and one error corrector. The plurality of error detectors is configured for calculating the odd terms of the syndrome polynomial for multiple channels in parallel, while the error corrector is configured for sequentially calculating the even terms of the syndrome polynomial, the key-equation solution, and the error locations.
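The odd/even split mentioned above rests on a standard identity for binary BCH codes: since the received word has 0/1 coefficients and squaring is the Frobenius map in GF(2^m), S_{2j} = S_j^2, so only the odd-indexed syndromes need a full computation. A small self-contained check follows; GF(2^4), code length 15 and the error pattern are illustrative stand-ins for the patent's BCH(4200,4096,8) code over GF(2^13).

```python
# Check that S_{2j} == S_j^2 for a binary received word over GF(2^4) (illustrative field).
M, PRIM_POLY, N = 4, 0b10011, 15

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= PRIM_POLY
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def syndrome(received_bits, j):
    """S_j = r(alpha^j) for a binary word r(x), with alpha = 2 and alpha^N = 1."""
    s = 0
    for i, bit in enumerate(received_bits):
        if bit:
            s ^= gf_pow(2, (i * j) % N)
    return s

r = [0] * N
r[3] = r[9] = 1                                    # an assumed two-bit error pattern
for j in (1, 3):
    assert syndrome(r, 2 * j) == gf_mul(syndrome(r, j), syndrome(r, j))   # S_2j == S_j^2
print("even-indexed syndromes are squares of earlier ones")
```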

Journal ArticleDOI
TL;DR: The proposed method transforms the expensive modulo-f(x) multiplications into shift operations, by which not only the hardware for multiplications but also that for additions is much reduced.
Abstract: The Chien search process is the most complex block in the decoding of Bose-Chaudhuri-Hocquenghem (BCH) codes. Since BCH codes conduct bit-by-bit error correction, they often need a parallel implementation for high-throughput applications. The parallel implementation, however, greatly increases the hardware. In this paper, we propose a strength-reduced architecture for the parallel Chien search process. The proposed method transforms the expensive modulo-f(x) multiplications into shift operations, by which not only the hardware for multiplications but also that for additions is much reduced. One example shows that the hardware complexity is reduced by 90% in an implementation of the binary BCH(8191, 7684, 39) code with a parallel factor of 64. Consequently, it is possible to achieve a speedup of 64 with only 13 times the hardware complexity when compared with serial processing.

Proceedings ArticleDOI
Salah A. Aly1
01 Nov 2008
TL;DR: This paper establishes a method to construct asymmetric quantum codes based on classical codes, deriving families from classical BCH and RS codes over finite fields with parameters [[n, k, dz/dx]]q for certain code lengths, dimensions, and minimum distances.
Abstract: Recently, the theory of quantum error control codes has been extended to include quantum codes over asymmetric quantum channels, where qubit-flip and phase-shift errors may have equal or different probabilities. Previous work in constructing quantum error control codes has focused on code constructions for symmetric quantum channels. In this paper we establish a method to construct asymmetric quantum codes based on classical codes. We derive, once again, families of asymmetric quantum codes from classical BCH and RS codes over finite fields. In particular, we present interesting asymmetric quantum codes based on nonprimitive narrow-sense BCH codes with parameters [[n, k, dz/dx]]q for certain values of code lengths, dimensions, and minimum distances. Finally, our constructions are explained by an illustrative example.

Journal ArticleDOI
TL;DR: The potential of applying concatenated low-density parity-check (LDPC) and Bose-Chaudhuri-Hocquenghem (BCH) coding for magnetic recording read channel with a 4 kB sector format is examined and the silicon cost is estimated.
Abstract: In this paper, we examine the potential of applying concatenated low-density parity-check (LDPC) and Bose-Chaudhuri-Hocquenghem (BCH) coding for a magnetic recording read channel with a 4 kB sector format. One key observation for such concatenated coding systems is that the overall error correction capability can be improved by exploiting the iteration-by-iteration bit error number oscillation behavior in case of inner LDPC code decoding failures. Moreover, assisted by field programmable gate array (FPGA)-based simulation platforms, empirical error-correcting performance analysis can reach a very low sector error rate (e.g., 10^-10 and below), which is almost infeasible for LDPC-only coding systems. Furthermore, concatenated coding can further reduce the silicon cost. By implementing a high-speed FPGA-based perpendicular recording read channel simulator, we investigate a 4 kB rate-15/16 concatenated coding system with a 512-byte rate-19/20 inner LDPC code and an outer 4 kB BCH code. We apply a decoding strategy that can fully utilize the bit error number oscillation behavior of inner LDPC code decoding, and show that its sector error rate drops down to 10^-11. For the purpose of comparison, we use the FPGA-based simulator to empirically observe the performance of 4 kB rate-15/16 LDPC and Reed-Solomon (RS) codes down to 10^-7 to 10^-8. Finally, we estimate the silicon cost of this concatenated coding system at the 65 nm node, and compare it with that of the RS-only and LDPC-only coding systems.

Journal ArticleDOI
TL;DR: This paper analyzes the effects of quantization or other low-level noise on the error correcting capability of a popular class of real-number Bose-Chaudhuri-Hocquenghem (BCH) codes known as discrete Fourier transform (DFT) codes and proves that the optimal bit allocation for DFT codes (in terms of correctly determining the number of errors) is the uniform one.
Abstract: This paper analyzes the effects of quantization or other low-level noise on the error correcting capability of a popular class of real-number Bose-Chaudhuri-Hocquenghem (BCH) codes known as discrete Fourier transform (DFT) codes. In the absence of low-level noise, a modified version of the Peterson-Gorenstein-Zierler (PGZ) algorithm allows the correction of up to corrupted entries in the real-valued code vector of an DFT code. In this paper, we analyze the performance of this modified PGZ algorithm in the presence of low-level (quantization or other) noise that might affect each entry of the code vector (and not simply of them). We focus on the part of the algorithm that determines the number of errors that have corrupted the real-number codeword. Our approach for determining the number of errors is more effective than existing systematic approaches in the literature and results in an explicit lower bound on the precision needed to guarantee the correct determination of the number of errors; our simulations suggest that this bound can be tight. Finally, we prove that the optimal bit allocation for DFT codes (in terms of correctly determining the number of errors) is the uniform one.
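As background for the analysis above (the paper studies how low-level noise perturbs this test and bounds the precision needed for it to remain reliable), the classical Peterson-Gorenstein-Zierler rule estimates the number of errors as the largest order at which the Hankel matrix of syndromes is nonsingular:

```latex
% Classical PGZ error-count test (background; the paper refines it under low-level noise).
\nu \;=\; \max\{\, v : \det S_v \neq 0 \,\},
\qquad
S_v \;=\;
\begin{pmatrix}
s_1 & s_2 & \cdots & s_v \\
s_2 & s_3 & \cdots & s_{v+1} \\
\vdots & \vdots &  & \vdots \\
s_v & s_{v+1} & \cdots & s_{2v-1}
\end{pmatrix}.
```

With quantization or other low-level noise every such determinant is generically nonzero, so the test must be thresholded; the explicit precision bound derived in the paper quantifies when the error count is still determined correctly.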

Proceedings ArticleDOI
01 Dec 2008
TL;DR: The simulation results of estimated bit error rate (BER) show that the implementation of a concatenated RS(255,239,8) code with a 3/4-rated convolutional code under QPSK modulation is highly effective to combat inherent interference in the communication system.
Abstract: In this paper, we study the effect of various concatenated forward error correction (FEC) codes on the performance of a wireless orthogonal frequency division multiplexing (OFDM) system. In the FEC concatenated channel code, the OFDM system incorporates a Reed-Solomon (RS) encoder of (255,239,8), a cyclic encoder of (15,11), and a Bose-Chaudhuri-Hocquenghem (BCH) encoder of (127,64) with convolutional encoders of 2/3- and 3/4-rated codes under different combinations of digital modulation (QPSK, 8PSK, 32-QAM and 64-QAM). The simulation study is made with a computer program written in Matlab that processes a recorded audio signal under an additive white Gaussian noise (AWGN) channel. The simulation results of estimated bit error rate (BER) show that the implementation of the concatenated RS(255,239,8) code with the 3/4-rated convolutional code under QPSK modulation is highly effective to combat inherent interference in the communication system. Due to constraints in the data handling capability of the Matlab editor, a segment of the recorded audio signal is used for analysis. The transmitted audio message is found to be retrieved effectively under noisy conditions.

Journal ArticleDOI
TL;DR: The results show that high-rate Reed-Solomon turbo product codes offer a better complexity/performance tradeoff than BCH TPCs for low-cost Gbps fiber optic communications.
Abstract: Turbo product codes (TPCs) are an attractive solution to improve link budgets and reduce systems costs by relaxing the requirements on expensive optical devices in high capacity optical transport systems. In this paper, we investigate the use of Reed-Solomon (RS) turbo product codes for 40Gbps transmission over optical transport networks and 10Gbps transmission over passive optical networks. An algorithmic study is first performed in order to design RS TPCs that are compatible with the performance requirements imposed by the two applications. Then, a novel ultrahigh-speed parallel architecture for turbo decoding of product codes is described. A comparison with binary Bose-Chaudhuri-Hocquenghem (BCH) TPCs is performed. The results show that high-rate RS TPCs offer a better complexity/performance tradeoff than BCH TPCs for low-cost Gbps fiber optic communications.

Journal ArticleDOI
TL;DR: A modification of the Bluetooth frame structure is presented to improve its performance over both additive white Gaussian noise (AWGN) and fading channels, and the effect of using different block codes on the performance of the Bluetooth system is investigated.
Abstract: This paper presents a modification of the Bluetooth frame structure to improve its performance over both additive white Gaussian noise (AWGN) and fading channels. The paper investigates the effect of using different block codes on the performance of the Bluetooth system. Both Hamming and BCH codes with different lengths are studied as error control codes for the Bluetooth frame. Experimental results reveal that shorter Hamming codes have a better performance in AWGN channels. Also, the BCH (15, 7) code has a better performance for interleaved channels. All this work is devoted to the Bluetooth 1.1 version.

Proceedings ArticleDOI
22 Jun 2008
TL;DR: The construction with a cubic dependence on ε is obtained by concatenating the recent Parvaresh-Vardy codes with dual BCH codes, and crucially exploits the soft decoding algorithm for PV codes, which yields better hardness results for the problem of approximating NP witnesses in the model of Kumar and Sivakumar.
Abstract: We construct binary linear codes that are efficiently list-decodable up to a fraction (1/2 − ε) of errors. The codes encode k bits into n = poly(k/ε) bits and are constructible and list-decodable in time polynomial in k and 1/ε (in particular, in our results ε need not be constant and can even be polynomially small in n). Our results give the best known polynomial dependence of n on k and 1/ε for such codes. Specifically, we are able to achieve n ≤ O(k^3/ε^{3+γ}) or, if a linear dependence on k is required, n ≤ O(k/ε^{5+γ}), where γ > 0 is an arbitrary constant. The best previously known constructive bounds in this setting were n ≤ O(k^2/ε^4) and n ≤ O(k/ε^6). Non-constructively, a random linear encoding of length n = O(k/ε^2) suffices, but no sub-exponential algorithm is known for list decoding random codes. Our construction with a cubic dependence on ε is obtained by concatenating the recent Parvaresh-Vardy (PV) codes with dual BCH codes, and crucially exploits the soft decoding algorithm for PV codes. This result yields better hardness results for the problem of approximating NP witnesses in the model of Kumar and Sivakumar. Our result with the linear dependence on k is based on concatenation of the PV code with an arbitrary inner code of good minimum distance. In addition to being a basic question in coding theory, codes that are list-decodable from a fraction (1/2 − ε) of errors for ε → 0 have found many uses in complexity theory. In addition, our codes have the property that all nonzero codewords have relative Hamming weights in the range (1/2 − ε, 1/2 + ε); this ε-biased property is a fundamental notion in pseudorandomness.

Patent
Da Wang, Wu Guan, Mingke Dong, Ye Jin, Haige Xiang 
09 Jul 2008
TL;DR: In this article, a design scheme of LDPC cascaded code, which is an LDPC-SPC product code, is presented, where each bit of an SPC code word is acquired by the even parity check of the bits in corresponding positions of n LDPC code words.
Abstract: The invention discloses a design scheme for an LDPC cascaded code, an LDPC-SPC product code which takes an LDPC code as the horizontal code and an SPC code as the vertical code; each bit of the SPC codeword is obtained by the even parity check of the bits in corresponding positions of n LDPC codewords. The scheme can address the error floor of the LDPC code and has higher flexibility and greater coding gain than cascading methods based on BCH codes. The invention also provides an encoding method for the LDPC-SPC product code and two decoding methods (a hard-decision method and a soft-decision iterative method), along with the corresponding decoders. The LDPC-SPC product code provided by the invention can obtain a large coding gain at a very small redundancy cost and is a channel coding scheme suitable for delay-insensitive services.
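A minimal sketch of the vertical SPC check described above: the SPC row is the bitwise even-parity (XOR) of the n LDPC codewords stacked as rows. The codewords here are stand-in bit lists, not the output of a real LDPC encoder.

```python
# Each SPC bit is the even-parity check of the bits in the same column of n LDPC codewords.
def spc_row(ldpc_codewords):
    row = [0] * len(ldpc_codewords[0])
    for cw in ldpc_codewords:
        row = [p ^ b for p, b in zip(row, cw)]
    return row

codewords = [[1, 0, 1, 1],
             [0, 1, 1, 0],
             [1, 1, 0, 0]]            # toy stand-ins for n = 3 LDPC codewords
print(spc_row(codewords))             # [0, 0, 0, 1], appended as the vertical SPC codeword
```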

Proceedings ArticleDOI
05 Nov 2008
TL;DR: The orthogonal matching pursuit (OMP) and basis pursuit (BP) algorithms are compared with the syndrome decoding algorithm in terms of mean square reconstruction error and it is seen that, with a Gauss-Markov source and Bernoulli-Gaussian channel noise, the BP outperforms the Syndrome decoding and the OMP at higher noise levels.
Abstract: This paper considers the application of sparse approximations in a joint source-channel (JSC) coding framework. The considered JSC coded system employs a real-number BCH code on the input signal before the signal is quantized and further processed. Under an impulse channel noise model, the decoding of errors is posed as a sparse approximation problem. The orthogonal matching pursuit (OMP) and basis pursuit (BP) algorithms are compared with the syndrome decoding algorithm in terms of mean square reconstruction error. It is seen that, with a Gauss-Markov source and Bernoulli-Gaussian channel noise, BP outperforms syndrome decoding and OMP at higher noise levels. In the case of image transmission with channel bit errors, BP outperforms the other two decoding algorithms consistently.
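A minimal sketch of OMP recovering an impulse error from a real-valued syndrome, under stand-in assumptions: a random Gaussian matrix H plays the role of the real-number BCH syndrome former and the sparsity is assumed known.

```python
# Orthogonal matching pursuit on a toy sparse-recovery instance s = H e with a
# 2-sparse impulse error e. H is a random stand-in, not a real-number BCH matrix.
import numpy as np

def omp(H, s, sparsity):
    residual, support = s.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(H.T @ residual)))                 # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(H[:, support], s, rcond=None)   # least-squares refit on the chosen support
        residual = s - H[:, support] @ coef
    e_hat = np.zeros(H.shape[1])
    e_hat[support] = coef
    return e_hat

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 60))
e = np.zeros(60)
e[7], e[33] = 1.5, -2.0                                            # two impulse errors
e_hat = omp(H, H @ e, 2)
print(np.round(e_hat[[7, 33]], 3))                                 # close to [1.5, -2.0] when the support is found
```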

Posted Content
TL;DR: A theory of, and connection between, asymmetric quantum codes and subsystem codes is developed and established, and interesting asymmetric and symmetric subsystem codes based on classical BCH codes are derived.
Abstract: Recently, the theory of quantum error control codes has been extended to subsystem codes over symmetric and asymmetric quantum channels, where qubit-flip and phase-shift errors may have equal or different probabilities. Previous work in constructing quantum error control codes has focused on code constructions for symmetric quantum channels. In this paper, we develop a theory and establish the connection between asymmetric quantum codes and subsystem codes. We present families of subsystem and asymmetric quantum codes derived, once again, from classical BCH and RS codes over finite fields. In particular, we derive interesting asymmetric and symmetric subsystem codes based on classical BCH codes with parameters ((n, k, r, d))q, ((n, k, r, dz/dx))q and ((n, k', 0, dz/dx))q for arbitrary values of code lengths and dimensions. We establish asymmetric Singleton and Hamming bounds on asymmetric quantum and subsystem code parameters, and derive optimal asymmetric MDS subsystem codes. Finally, our constructions are explained by an illustrative example. This paper is written on the occasion of the 50th anniversary of the discovery of classical BCH codes; their quantum counterparts were derived nearly 10 years ago.

Patent
Haruki Toda1
12 Aug 2008
TL;DR: In this paper, the error detection and correction system was configured to detect and correct errors in read out data by use of a BCH code, and searches error locations in such a way as to divide an error location searching biquadratic equation into two or more factor equations; convert the factor equations to have unknown parts and syndrome parts separated from each other for solving them; and compare indexes of the solution candidates with those of the syndromes, the corresponding relationships being previously obtained as a table.
Abstract: A memory device is disclosed with an error detection and correction system formed therein, the system being configured to detect and correct errors in read-out data by use of a BCH code. The error detection and correction system is 4-bit error correctable and searches error locations in such a way as to: divide an error-location-searching biquadratic equation into two or more factor equations; convert the factor equations to have their unknown parts and syndrome parts separated from each other for solving them; and compare the indexes of the solution candidates with those of the syndromes, the corresponding relationships having been previously obtained as a table, thereby obtaining the error locations.

Journal ArticleDOI
TL;DR: A family of ternary quasi-perfect BCH codes of minimum distance 5 and covering radius 3 is presented; the first member of this family is the ternary quadratic-residue code of length 13.
Abstract: In this paper we present a family of ternary quasi-perfect BCH codes. These codes are of minimum distance 5 and covering radius 3. The first member of this family is the ternary quadratic-residue code of length 13.

Proceedings ArticleDOI
01 Nov 2008
TL;DR: In this article, an efficient Chien search circuit for shortened BCH codes is proposed for DVB-S2 decoding, and the LDPC codec architecture explores the periodicity M = 360 of the special LDPC-IRA codes adopted by the standard.
Abstract: The recent Digital Video Satellite Broadcast Standard (DVB-S2) has adopted a powerful FEC scheme based on the serial concatenation of Bose-Chaudhuri-Hocquenghem (BCH) and low-density parity-check (LDPC) codes. The high-speed requirements, long block lengths and adaptive encoding defined in the DVB-S2 standard present complex challenges in the design of an efficient codec hardware architecture. In this paper, synthesizable, high-throughput, scalable and parallel HDL models supporting the 21 different BCH+LDPC DVB-S2 code configurations are presented. For BCH decoding, an efficient Chien search circuit for shortened BCH codes is proposed. The LDPC codec architecture explores the periodicity M = 360 of the special LDPC-IRA codes adopted by the standard. Synthesis results for an FPGA device from Xilinx show a throughput above the minimal 90 Mbps.

Proceedings ArticleDOI
17 Nov 2008
TL;DR: This article presents an innovative turbo product code (TPC) decoder architecture without any interleaving resource that includes a full-parallel SISO decoder able to process n symbols in one clock period.
Abstract: This article presents an innovative turbo product code (TPC) decoder architecture without any interleaving resource. This architecture includes a full-parallel SISO decoder able to process n symbols in one clock period. Syntheses show the better efficiency of such an architecture compared with existing solutions. Considering a 6-iteration turbo decoder of a (32,26)^2 BCH product code, synthesized in a 90 nm CMOS technology, the resulting information throughput is 2.5 Gb/s with an area of 233 Kgates. Finally, a second architecture enhancing the parallelism rate is described; its information throughput is 33.7 Gb/s, while an area estimation gives A = 10 µm².

Patent
22 Aug 2008
TL;DR: In this paper, a modified Berlekamp-Massey algorithm is used to perform the decoding process and the efficiency of the decoder can be improved by re-defining the error locating polynomial as a reverse error locating polynomial, while the operation of decoding process can be further realized by a common reconfigurable module.
Abstract: The present invention proposes a method and apparatus for decoding BCH codes and Reed-Solomon codes, in which a modified Berlekamp-Massey algorithm is used to perform the decoding process and the efficiency of the decoder is improved by re-defining the error locating polynomial as a reverse error locating polynomial, while the operations of the decoding process can be realized by a common re-configurable module. Furthermore, the architecture of the decoder consists of a plurality of sets of re-configurable modules that provide parallel operation with different degrees of parallelism, so that the decoding speed requirements of the decoder in different applications can be satisfied.