
Showing papers on "BCH code published in 2005"


Posted Content
TL;DR: In this paper, the theory of stabilizer codes over finite fields is described, bounds on the maximal length of maximum distance separable stabilizer codes are given, and open problems are discussed.
Abstract: One formidable difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. In past years, many good quantum error-correcting codes have been derived as binary stabilizer codes. Fault-tolerant quantum computation prompted the study of nonbinary quantum codes, but the theory of such codes is not as advanced as that of binary quantum codes. This paper describes the basic theory of stabilizer codes over finite fields. The relation between stabilizer codes and general quantum codes is clarified by introducing a Galois theory for these objects. A characterization of nonbinary stabilizer codes over GF(q) in terms of classical codes over GF(q^2) is provided that generalizes the well-known notion of additive codes over GF(4) of the binary case. This paper derives lower and upper bounds on the minimum distance of stabilizer codes, gives several code constructions, and derives numerous families of stabilizer codes, including quantum Hamming codes, quadratic residue codes, quantum Melas codes, quantum BCH codes, and quantum character codes. The puncturing theory by Rains is generalized to additive codes that are not necessarily pure. Bounds on the maximal length of maximum distance separable stabilizer codes are given. A discussion of open problems concludes this paper.

324 citations


Journal ArticleDOI
TL;DR: A new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves constant factor hardness is given, based on BCH codes; it enables boosting the hardness factor to 2^((log n)^(1/2−ε)).
Abstract: Let p > 1 be any fixed real. We show that assuming NP ⊄ RP, there is no polynomial-time algorithm that approximates the Shortest Vector Problem (SVP) in the ℓp norm within a constant factor. Under the stronger assumption NP ⊄ RTIME(2^poly(log n)), we show that there is no polynomial-time algorithm with approximation ratio 2^((log n)^(1/2−ε)), where n is the dimension of the lattice and ε > 0 is an arbitrarily small constant. We first give a new (randomized) reduction from the Closest Vector Problem (CVP) to SVP that achieves some constant factor hardness. The reduction is based on BCH codes. Its advantage is that the SVP instances produced by the reduction behave well under the augmented tensor product, a new variant of the tensor product that we introduce. This enables us to boost the hardness factor to 2^((log n)^(1/2−ε)).

243 citations


Proceedings ArticleDOI
29 Aug 2005
TL;DR: A CODEC fully compliant to DVB-S2 broadcast standards is implemented in both 0.13 µm 8M and 90nm 7M low-leakage CMOS technologies.
Abstract: A CODEC fully compliant to DVB-S2 broadcast standards is implemented in both 0.13 µm 8M and 90nm 7M low-leakage CMOS technologies. The system includes encoders and decoders for both LDPC codes and serially concatenated BCH codes. This CODEC outperforms the DVB-S2 error performance requirements by up to 0.1 dB. The 0.13 µm design occupies 49.6 mm² and operates at 200 MHz, while the 90nm design occupies 15.8 mm² and operates at 300 MHz.

78 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: The analysis shows that the performance of the proposed GLDPC scheme on a BEC is poor compared to that of LDPC codes; however, the performance on a BSC is competitive with that of LDPC codes.
Abstract: A generalized low-density parity check code (GLDPC) is a low-density parity check code in which the constraint nodes of the code graph are block codes, rather than single parity checks. In this paper, we study GLDPC codes which have BCH or Reed-Solomon codes as subcodes. The performance of the proposed scheme is investigated on a BEC and BSC for infinite and finite code lengths. The analysis shows that the performance of the scheme on a BEC is poor compared to that of LDPC codes. However, the performance on a BSC is competitive to that of LDPC codes. Furthermore, results of the finite length analysis on a BEC can be used under certain conditions as a tight lower bound on the performance of the scheme on a BSC.

77 citations


01 Jan 2005
TL;DR: The generator of 1-generator GQC codes is determined and a BCH-type bound for this family of codes is proved.
Abstract: We investigate the structure of generalized quasi-cyclic (GQC) codes. We determine the generator of 1-generator GQC codes and prove a BCH-type bound for this family of codes.

61 citations


Journal ArticleDOI
TL;DR: Three novel architectures are proposed to reduce the achievable minimum clock period for long BCH encoders after the fanout bottleneck has been eliminated; they can achieve a speedup of over 100%.
Abstract: Long Bose-Chaudhuri-Hocquenghem (BCH) codes are used as the outer error correcting codes in the second-generation Digital Video Broadcasting Standard from the European Telecommunications Standards Institute. These codes can achieve around 0.6-dB additional coding gain over Reed-Solomon codes with similar code rate and codeword length in long-haul optical communication systems. BCH encoders are conventionally implemented by a linear feedback shift register architecture. High-speed applications of BCH codes require parallel implementation of the encoders. In addition, long BCH encoders suffer from the effect of large fanout. In this paper, three novel architectures are proposed to reduce the achievable minimum clock period for long BCH encoders after the fanout bottleneck has been eliminated. For an (8191, 7684) BCH code, compared to the original 32-parallel BCH encoder architecture without fanout bottleneck, the proposed architectures can achieve a speedup of over 100%.

60 citations
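The conventional linear feedback shift register encoder that the abstract above takes as its starting point can be sketched on a toy scale. The following is an illustrative sketch, not the paper's parallel architecture: it implements the bit-serial systematic encoder for the small (15, 7) binary BCH code, whose generator polynomial is g(x) = x^8 + x^7 + x^6 + x^4 + 1.

```python
# Conventional bit-serial LFSR encoder, sketched for the small (15, 7)
# binary BCH code with generator g(x) = x^8 + x^7 + x^6 + x^4 + 1.
# (Illustrative only -- the paper targets long codes such as (8191, 7684).)

G_POLY = 0b111010001                             # g(x) as a bitmask, degree 8
G_TAPS = [(G_POLY >> i) & 1 for i in range(8)]   # taps g0..g7 (g8 is implicit)

def bch_encode_lfsr(msg_bits):
    """Systematic encoding: append the 8 parity bits r(x) = x^8 * m(x) mod g(x),
    computed one message bit per clock, exactly as a hardware LFSR would."""
    reg = [0] * 8                      # reg[i] holds the coefficient of x^i
    for b in msg_bits:                 # message fed highest degree first
        fb = b ^ reg[7]                # feedback = input XOR shifted-out bit
        for i in range(7, 0, -1):
            reg[i] = reg[i - 1] ^ (fb & G_TAPS[i])
        reg[0] = fb & G_TAPS[0]
    return list(msg_bits) + reg[::-1]  # parity emitted highest degree first

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (polynomials as int bitmasks)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend
```

A quick sanity check on the construction: any valid codeword, read as a polynomial, must leave remainder zero when divided by g(x).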


Posted Content
TL;DR: In this article, it was shown that one can also deduce from the design parameters whether or not a primitive, narrow-sense BCH code contains its Euclidean or Hermitian dual code.
Abstract: An attractive feature of BCH codes is that one can infer valuable information from their design parameters (length, size of the finite field, and designed distance), such as bounds on the minimum distance and dimension of the code. In this paper, it is shown that one can also deduce from the design parameters whether or not a primitive, narrow-sense BCH code contains its Euclidean or Hermitian dual code. This information is invaluable in the construction of quantum BCH codes. A new proof is provided for the dimension of BCH codes with small designed distance, and simple bounds on the minimum distance of such codes and their duals are derived as a consequence. These results allow us to derive the parameters of two families of primitive quantum BCH codes as a function of their design parameters.

54 citations


Journal ArticleDOI
TL;DR: This paper characterizes a class of frames for error correction in addition to erasure recovery in wired and wireless channels, and compares the frames associated with lowpass DFT, DCT, and DST codes, which belong to the defined class, in terms of their error correction efficiency.
Abstract: Joint source-channel coding has been introduced recently as an element of QoS support for IP-based wired and wireless multimedia. Indeed, QoS provisioning in a global mobility context with highly varying channel characteristics is all the more challenging and requires a loosening of the layer and source-channel separation principle. Overcomplete frame expansions have been introduced as joint source-channel codes for erasure channels, that is, to allow for a signal representation that would be resilient to erasures in wired and wireless channels. In this paper, we characterize a class of frames for error correction besides erasure recovery in such channels. We associate the frames with complex number codes and characterize them based on the BCH-like property of the parity check matrices of the associated codes. We show that, in addition to the BCH-type decoding, subspace-based algorithms can also be used to localize errors over such frame expansion coefficients. When the frame expansion coefficients are quantized, we modify these algorithms suitably and compare their performances in terms of the accuracy of error localization and the signal-to-noise ratio of the reconstructed signal. In particular, we compare the frames associated with lowpass DFT, DCT, and DST codes, which belong to the defined class, in terms of their error correction efficiency.

27 citations


Proceedings ArticleDOI
01 Jan 2005
TL;DR: A simplified decoding algorithm, called the reliability-search algorithm, is proposed that uses bit-error probability estimates to cancel the third error and then uses the BCH decoding algorithm to correct the remaining two errors.
Abstract: The (23,12,7) Golay code is a perfect linear error-correcting code that can correct all patterns of three or fewer errors in 23 bit positions. A simple BCH decoding algorithm, given in E. Berlekamp (1968), can decode the (23,12,7) Golay code provided there are no more than two errors. The shift-search algorithm, developed by Reed et al. (1990), sequentially inverts the information bits until the third error is canceled. It then utilizes the BCH decoding algorithm to correct the remaining two errors. In this paper a simplified decoding algorithm, called the reliability-search algorithm, is proposed. This algorithm uses bit-error probability estimates to cancel the third error and then uses the BCH decoding algorithm to correct the remaining two errors. Simulation results show that this new algorithm significantly reduces the decoding complexity for correcting the third error while maintaining the same BER performance.

21 citations
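The three-error-correcting capability that the decoders above exploit can be verified by brute force on the code itself. The sketch below is a reference nearest-codeword decoder, not the reliability-search algorithm of the paper; it is only feasible because the (23, 12) Golay code has just 2^12 = 4096 codewords, generated as the multiples of one of its standard generator polynomials.

```python
# Brute-force check that the (23, 12, 7) Golay code corrects any 3 errors.
# Reference nearest-codeword decoding, NOT the paper's algorithm; feasible
# only because the codebook has a mere 4096 entries.

GOLAY_G = 0b101011100011   # g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1

def gf2_mul(a, b):
    """Carry-less (GF(2)) product of two polynomials given as int bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# All 4096 codewords: multiples m(x) * g(x) with deg m < 12.
CODEBOOK = [gf2_mul(m, GOLAY_G) for m in range(1 << 12)]

def decode_nearest(word):
    """Return the codeword at minimum Hamming distance from `word`."""
    return min(CODEBOOK, key=lambda c: bin(c ^ word).count("1"))
```

Since the minimum distance is 7, any pattern of three or fewer bit errors leaves the received word strictly closer to the transmitted codeword than to any other, so nearest-codeword decoding always recovers it.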


Journal ArticleDOI
TL;DR: In this paper, a construction technique of cyclic, BCH, alternant, Goppa and Srivastava codes over local finite commutative rings with identity is presented.
Abstract: In this paper we present a construction technique of cyclic, BCH, alternant, Goppa and Srivastava codes over local finite commutative rings with identity.

20 citations


Proceedings ArticleDOI
28 May 2005
TL;DR: A parallel BCH (2184, 2040) encoder with 8-bit parallelism is realized in TSMC's 0.18 µm CMOS technology for high-speed optical communication that can operate at 400 MHz and process data at the rate of 2.5 Gb/s.
Abstract: A new design method for a parallel BCH encoder is presented, which can eliminate the bottleneck in long BCH encoders. Based on the serial LFSR architecture, a recursive formula from which the parallel BCH encoder can be deduced is first derived. The complexity and the delay of the critical paths of the circuit are effectively decreased by using a tree-type structure, sharing sub-expressions and limiting their maximum number, and a load-balancing technique. Finally, a parallel BCH (2184, 2040) encoder with 8-bit parallelism is realized in TSMC's 0.18 µm CMOS technology for high-speed optical communication; it can operate at 400 MHz and process data at the rate of 2.5 Gb/s.
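The recursion behind such parallel encoders can be illustrated on a toy scale. Because one LFSR tick is linear over GF(2) in the pair (state, input bit), eight ticks compose into next_state = A(state) XOR B(byte); tabulating the two maps gives an 8-bit-parallel step. The degree-8 generator below is that of the small (15, 7) BCH code, chosen purely for illustration (the paper's (2184, 2040) code has 144 parity bits).

```python
# 8-bit-parallel LFSR step derived from the serial one, illustrated with the
# degree-8 generator g(x) = x^8 + x^7 + x^6 + x^4 + 1 of the small (15, 7)
# BCH code. (Illustration only; the paper's code has 144 parity bits.)

G_LOW = 0b111010001 & 0xFF   # taps g0..g7 packed into a byte (g8 = shift-out)

def serial_step(state, bit):
    """One clock tick of the degree-8 division LFSR (state = 8-bit remainder)."""
    fb = bit ^ ((state >> 7) & 1)
    state = (state << 1) & 0xFF
    return state ^ (G_LOW if fb else 0)

# One tick is linear over GF(2) in (state, bit), so 8 ticks split into
# independent state and input contributions: next = A[state] ^ B[byte].
A = [0] * 256
B = [0] * 256
for v in range(256):
    s = v
    for _ in range(8):
        s = serial_step(s, 0)                    # 8 ticks, zero input
    A[v] = s
    s = 0
    for i in range(8):
        s = serial_step(s, (v >> (7 - i)) & 1)   # byte fed MSB-first, zero state
    B[v] = s

def parallel_step(state, byte):
    """Consume 8 message bits in a single step."""
    return A[state] ^ B[byte]
```

The superposition is valid because serial_step has no constant term; the same table-building idea scales to any generator degree and any parallelism.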

Journal ArticleDOI
TL;DR: This correspondence obtains a transform domain characterization of FqLC codes, using the Discrete Fourier Transform (DFT) over an extension field of Fqm. The characterization is in terms of any decomposition of the code into certain subcodes and linearized polynomials over this field.
Abstract: Codes over Fqm that are closed under addition, and multiplication with elements from Fq are called Fq-linear codes over Fqm. For m ≠ 1, this class of codes is a subclass of nonlinear codes. Among Fq-linear codes, we consider only cyclic codes and call them Fq-linear cyclic codes (FqLC codes) over Fqm. The class of FqLC codes includes as special cases (i) group cyclic codes over elementary abelian groups (q = p, a prime), (ii) subspace subcodes of Reed-Solomon codes (n = qm - 1) studied by Hattori, McEliece and Solomon, (iii) linear cyclic codes over Fq (m = 1) and (iv) twisted BCH codes. Moreover, with respect to any particular Fq-basis of Fqm, any FqLC code over Fqm can be viewed as an m-quasi-cyclic code of length mn over Fq. In this correspondence, we obtain transform domain characterization of FqLC codes, using Discrete Fourier Transform (DFT) over an extension field of Fqm. The characterization is in terms of any decomposition of the code into certain subcodes and linearized polynomials over Fqm. We show how one can use this transform domain characterization to obtain a minimum distance bound for the corresponding quasi-cyclic code. We also prove nonexistence of self dual FqLC codes and self dual quasi-cyclic codes of certain parameters using the transform domain characterization.

Proceedings ArticleDOI
31 Oct 2005
TL;DR: Sub-optimal, soft decoders for extended binary BCH codes and certain subcodes of extended RS codes are presented, and the coding gains obtained are comparable with other soft decoders proposed in the literature.
Abstract: Soft-decision decoding of algebraic codes has been an area of active research interest for a long time. In this paper, we present sub-optimal, soft decoders for extended binary BCH codes and certain subcodes of extended RS codes. Our proposed decoders consist of a soft-information processing block followed by a traditional, hard-decision, bounded-distance decoder for the underlying BCH or RS codes. The soft-processor in both cases consists of SISO decoders for extended Hamming codes, which can be implemented with low complexity. The coding gains obtained are comparable with other soft decoders proposed in the literature.

Proceedings ArticleDOI
05 Dec 2005
TL;DR: A novel blind logo-watermarking algorithm for copyright protection of JPEG-compressed images is proposed that contains much more convincing information than that of a randomly-generated numerical sequence that is widely used in traditional watermarking strategies.
Abstract: A novel blind logo-watermarking algorithm for copyright protection of JPEG-compressed images is proposed. A visually meaningful grayscale logo is encoded by the (255, 9) BCH code into a codeword that is embedded as the watermark into the wavelet domain of the host image using a pixel-wise masking model. A trellis is generated with a specific path corresponding to that codeword. At the receiving end where the watermarked image is stored in a JPEG file, the codeword is recovered approximately by running the Viterbi decoder on the trellis, even without reference to the original host data, and then decoded back to the embedded logo by the (255, 9) BCH codec. Thanks to the BCH code's powerful error correction capability and the human vision system's great tolerance to image noise, the extracted logo, possibly degraded, contains much more convincing information than a randomly-generated numerical sequence of the kind widely used in traditional watermarking strategies. Experimental results of successfully hiding a 32 × 32 grayscale logo into 512 × 512 JPEG-compressed images verify our algorithm's practical performance.

Journal ArticleDOI
TL;DR: Here exact expressions for the number of codewords of weight 4 are obtained in terms of exponential sums of three types, in particular, cubic sums and Kloosterman sums.
Abstract: We study coset weight distributions of binary primitive (narrow-sense) BCH codes of length n = 2^m (m odd) with minimum distance 8. In the previous paper [1], we described coset weight distributions of such codes for cosets of weight j = 1, 2, 3, 5, 6. Here we obtain exact expressions for the number of codewords of weight 4 in terms of exponential sums of three types, in particular, cubic sums and Kloosterman sums. This allows us to find the coset distribution of binary primitive (narrow-sense) BCH codes with minimum distance 8 and also to obtain some new results on Kloosterman sums over finite fields of characteristic 2.

Proceedings ArticleDOI
22 May 2005
TL;DR: The performance of the list decoding algorithm of Guruswami and Sudan is shown to be the best possible in a strong sense; specifically, when l = ⌈n/k⌉, the list of output polynomials can be superpolynomially large in n.
Abstract: In this paper, we prove the following two results that expose some combinatorial limitations to list decoding Reed-Solomon codes. Given n distinct elements α1,...,αn from a field F, and n subsets S1,...,Sn of F each of size at most l, the list decoding algorithm of Guruswami and Sudan [7] can in polynomial time output all polynomials p of degree at most k which satisfy p(αi) ∈ Si for every i, as long as l < n/k. By our result, an improvement to the Reed-Solomon list decoder of [7] that works with slightly smaller agreement, say t > √(kn) − k/2, can only be obtained by exploiting some property of the βi's (for example, their (near) distinctness). For Reed-Solomon codes of block length n and dimension k where k = n^δ for small enough δ, we exhibit an explicit received word r with a super-polynomial number of Reed-Solomon codewords that agree with it on (2 − ε)k locations, for any desired ε > 0 (we note agreement of k is trivial to achieve). Such a bound was known earlier only for a non-explicit center. We remark that finding explicit bad list decoding configurations is of significant interest --- for example, the best known rate vs. distance trade-off is based on a bad list decoding configuration for algebraic-geometric codes [14] which is unfortunately not explicitly known.

Proceedings ArticleDOI
13 Jun 2005
TL;DR: A DVB-S2 compliant codec is implemented in both 130nm-8M and 90nm-7M low-leakage CMOS technologies, which includes encoders and decoders for both low-density parity check (LDPC) codes and serially concatenated BCH codes.
Abstract: A DVB-S2 compliant codec is implemented in both 130nm-8M and 90nm-7M low-leakage CMOS technologies. The system includes encoders and decoders for both low-density parity check (LDPC) codes and serially concatenated BCH codes. All requirements of the DVB-S2 standard are supported including code rates between 1/4 and 9/10, block sizes of either 16,200 bits or 64,800 bits, and four digital modulation options. The 130nm core design occupies 49.6 mm² and operates at 200MHz, while the 90nm core design occupies 15.8 mm² and operates at 300MHz.

Proceedings ArticleDOI
Inwhee Joe1
23 May 2005
TL;DR: It is shown that a BCH code for channel coding can improve energy efficiency significantly compared to a convolutional code, and the optimal packet length is evaluated such that energy efficiency is maximized.
Abstract: In this paper, we propose to improve energy efficiency in wireless sensor networks through optimal packet length selection, in terms of both power management and channel coding. Power management does not by itself improve energy efficiency, but it saves a great deal of energy because the transceiver is turned off while it is not in use. We also evaluate the optimal packet length without power management, such that energy efficiency is maximized. Finally, we show that a BCH code for channel coding can improve energy efficiency significantly compared to a convolutional code.
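The packet-length trade-off the abstract describes can be made concrete with a deliberately simple model (our assumption, not the paper's): with h header bits, L − h payload bits, independent bit-error rate p and no coding, energy efficiency is the payload fraction times the probability the packet arrives error-free. Longer packets amortise the header but are more likely to be corrupted, which is exactly the tension that channel coding relaxes.

```python
import math

# Toy packet-length optimisation (illustrative model, NOT the paper's):
# h header bits, bit-error rate p, no FEC, no retransmission accounting.
# Efficiency = (payload fraction) * (probability the packet is error-free).

H, P = 16, 1e-3      # assumed header size (bits) and channel BER

def efficiency(L, h=H, p=P):
    if L <= h:
        return 0.0
    return (L - h) / L * (1.0 - p) ** L

# Numerical optimum over integer packet lengths.
L_opt = max(range(H + 1, 2001), key=efficiency)

# Setting d(efficiency)/dL = 0 with a = -ln(1 - p) yields a closed form.
a = -math.log(1.0 - P)
L_star = H / 2 + math.sqrt(H * H / 4 + H / a)
```

For these assumed numbers the optimum sits near 135 bits; raising the BER pushes the optimal packet length down, which is why stronger codes (such as the BCH code of the paper) permit longer, more header-efficient packets.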

01 Jan 2005
TL;DR: Combinatorial algorithms are developed for computing parameters of extensions of BCH codes based on directed graphs; one of the algorithms generalizes and strengthens a previous result from the literature.
Abstract: This paper develops combinatorial algorithms for computing parameters of extensions of BCH codes based on directed graphs. One of our algorithms generalizes and strengthens a previous result obtained in the literature before.

Book ChapterDOI
28 Jan 2005


Journal ArticleDOI
TL;DR: An algorithm is described that improves on the standard algorithm for computing the minimal distance of cyclic codes.
Abstract: We describe an algorithm that improves on the standard algorithm for computing the minimal distance of cyclic codes.
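For context, the baseline such algorithms improve on is exhaustive for small codes: a binary cyclic code of dimension k is exactly the set of multiples of its generator polynomial, so the minimum distance is the minimum weight over all 2^k − 1 nonzero codewords. A sketch, using the (15, 7) BCH code and the cyclic (7, 4) Hamming code as assumed illustrative examples:

```python
def gf2_mul(a, b):
    """Carry-less (GF(2)) product of polynomials given as int bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def min_distance(gen_poly, k):
    """Exact minimum distance of a binary cyclic code by enumerating all
    2^k - 1 nonzero codewords m(x) * g(x) -- the brute-force baseline that
    improved algorithms (such as the paper's) aim to beat."""
    return min(bin(gf2_mul(m, gen_poly)).count("1") for m in range(1, 1 << k))
```

For the (15, 7) BCH code with g(x) = x^8 + x^7 + x^6 + x^4 + 1 this returns 5 (the BCH bound gives d ≥ 5 and g itself has weight 5); for the cyclic (7, 4) Hamming code with g(x) = x^3 + x + 1 it returns 3. The cost is exponential in k, which is what makes improved algorithms worthwhile.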

Patent
30 Jun 2005
TL;DR: In this paper, a method for correcting errors in multilevel memories, of both the NAND and NOR types, uses a BCH correction code made parallel by means of a coding and decoding architecture, allowing the latency limits of prior-art sequential solutions to be overcome.
Abstract: A method for correcting errors in multilevel memories, of both the NAND and NOR types, provides the use of a BCH correction code made parallel by means of a coding and decoding architecture allowing the latency limits of prior-art sequential solutions to be overcome. The method provides a processing with a first predetermined parallelism for the coding step, a processing with a second predetermined parallelism for the syndrome calculation, and a processing with a third predetermined parallelism for calculating the error position, each parallelism being defined by a respective integer that is independent of the others.

Proceedings ArticleDOI
25 Mar 2005
TL;DR: A fast calculation algorithm of weight distribution of the dual code which outperforms those of previous studies in time complexity is proposed, and the probability of undetected error of different CRC codes standards under various codeword lengths are also simulated efficiently.
Abstract: The error-detecting functions of linear block codes can be realized via simple software or hardware. Error detection has been the subject of long-standing theoretical research, has many good properties, and is widely applied in digital communication and data storage. The weight distributions of a linear block code and of its dual code are important parameters for calculating the probability P_ud of undetected errors. Further, cyclic redundancy check (CRC) codes and Bose-Chaudhuri-Hocquenghem (BCH) cyclic codes are subclasses of linear block codes. This paper proposes a fast algorithm for calculating the weight distribution of the dual code which outperforms those of previous studies in time complexity; the probability of undetected error of different CRC code standards under various codeword lengths is also simulated efficiently.
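The link between weight distribution and undetected-error probability can be sketched on a small code (the cyclic (7, 4) Hamming code, an illustrative stand-in for the CRC and BCH codes of the paper): on a binary symmetric channel an error pattern goes undetected exactly when it equals a nonzero codeword, giving the standard formula P_ud(p) = Σ A_i p^i (1 − p)^(n−i) over i ≥ 1.

```python
# Undetected-error probability from the weight distribution, sketched on the
# cyclic (7, 4) Hamming code (g(x) = x^3 + x + 1) as a small stand-in for the
# CRC/BCH codes discussed in the paper.

def gf2_mul(a, b):
    """Carry-less (GF(2)) product of polynomials given as int bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

N, K, G = 7, 4, 0b1011

# Weight distribution: A[i] = number of codewords of Hamming weight i.
A = [0] * (N + 1)
for m in range(1 << K):
    A[bin(gf2_mul(m, G)).count("1")] += 1

def p_undetected(p):
    """P_ud on a BSC: an error pattern is undetected iff it is a nonzero codeword."""
    return sum(A[i] * p**i * (1 - p) ** (N - i) for i in range(1, N + 1))
```

The enumeration gives A = [1, 0, 0, 7, 7, 0, 0, 1], and P_ud(0.01) is dominated by the seven weight-3 codewords (about 6.8 × 10⁻⁶). The paper's contribution is computing such distributions fast for long codes, where enumeration is hopeless.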

Journal ArticleDOI
TL;DR: A simple rule is obtained which can directly determine whether a bit in the received word is correct, avoiding much of the "matrix-computing" that is the most complex element of the conventional step-by-step decoder.
Abstract: A low-complexity step-by-step decoding algorithm for t-error-correcting binary Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed. Using logical analysis, we obtained a simple rule which can directly determine whether a bit in the received word is correct. The computational complexity of this decoder is less than that of the conventional step-by-step decoding algorithm, since it avoids at least half of the matrix computations, and the "matrix-computing" element is the most complex element in the conventional step-by-step decoder.

01 Jan 2005
TL;DR: In this paper, the authors analyse the effect of H-Mod on the performance of DVB-S2 and show that using H-Mod can provide a means for significant enhancement of the existing DVB systems.
Abstract: DVB-S2 is the recently defined successor to the current DVB-S standard and DVB-DSNG standard (EN 301210); enabling new services via significantly higher data rates through the application of BCH concatenated with LDPC capacity-approaching codes. Owing to the vast number of DVB-S receivers already deployed, one of the key issues with standardising DVB-S2 has been the case for backwards compatibility and enabling long-term migration. This in turn has led to the introduction of a non-uniform, asymmetric 8-PSK hierarchical modulation (H-Mod) scheme; whereby the high performance BCH-LDPC codes are challenged to enable quasi-error-free performance on the H-Mod low priority data stream. This paper aims to analyse this approach and identify the extent to which H-Mod will have an effect on both DVB-S (RS-Viterbi) and DVB-S2 (BCH-LDPC) performance. The paper provides analysis of a future commercial application of capacity-approaching codes within DVB-S2 and most importantly the methodology for DVB-S2 migration; which currently is a key issue for DVB-S2 service providers in order to maintain signal quality and ensure data throughput over various time-varying channels. Simulation results have been obtained using VHDL FPGA implementations and C++ simulations; these results are individually presented and discussed. From these results it has been concluded that introduction of additional DVB-S2 services using H-Mod can provide a means for significant enhancement of the existing DVB systems, thus proving the viability of H-Mod for long term migration to DVB-S2.

Journal ArticleDOI
27 Dec 2005
TL;DR: From these results it has been concluded that the introduction of additional DVB-S2 services using H-Mod can provide a means for significant enhancement of the existing DVB systems, thus proving the viability of H-Mod for long-term migration to DVB-S2.
Abstract: DVB-S2 is the recently defined successor to the current DVB-S standard and DVB-DSNG standard (EN 301210); enabling new services via significantly higher data rates through the application of BCH concatenated with LDPC capacity-approaching codes. Owing to the vast number of DVB-S receivers already deployed, one of the key issues with standardising DVB-S2 has been the case for backwards compatibility and enabling long-term migration. This in turn has led to the introduction of a non-uniform, asymmetric 8-PSK hierarchical modulation (H-Mod) scheme; whereby the high performance BCH-LDPC codes are challenged to enable quasi-error-free performance on the H-Mod low priority data stream. This paper aims to analyse this approach and identify the extent to which H-Mod will have an effect on both DVB-S (RS-Viterbi) and DVB-S2 (BCH-LDPC) performance. The paper provides analysis of a future commercial application of capacity-approaching codes within DVB-S2 and most importantly the methodology for DVB-S2 migration; which currently is a key issue for DVB-S2 service providers in order to maintain signal quality and ensure data throughput over various time-varying channels. Simulation results have been obtained using VHDL FPGA implementations and C++ simulations; these results are individually presented and discussed. From these results it has been concluded that introduction of additional DVB-S2 services using H-Mod can provide a means for significant enhancement of the existing DVB systems, thus proving the viability of H-Mod for long term migration to DVB-S2.

Proceedings ArticleDOI
30 May 2005
TL;DR: It is shown that, with the proposed construction algorithms, fast construction time and reduced memory can be achieved without performance degradation.
Abstract: In this paper, we present an algebraic method for constructing regular low-density parity-check (LDPC) codes based on narrow-sense primitive BCH codes. The construction method results in a class of high-rate LDPC codes in Gallager's original form. Codes in this class are free of cycles of length 4 in their Tanner graph and have good minimum distances. They can perform well with iterative decoding. Also, the proposed algebraic LDPC construction can be extended to a new class of irregular codes based on a semi-algebraic structure for various code rates. It is shown that, with the proposed construction algorithms, fast construction time and reduced memory can be achieved without performance degradation.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: A joint source-channel coding method is presented which employs quantized overcomplete frame expansions, transmitted in binary form over noisy channels; the frame expansions can be interpreted as real-valued block codes that are directly applied to waveform signals prior to quantization.
Abstract: In this paper, we present a joint source-channel coding method which employs quantized overcomplete frame expansions that are transmitted in binary form over noisy channels. The frame expansions can be interpreted as real-valued block codes that are directly applied to waveform signals prior to quantization. At the decoder, first the index-based redundancy is used by a soft-input soft-output source decoder to determine the a posteriori probabilities of all possible symbols. Given these symbol probabilities, we then determine least-squares estimates for the reconstructed symbols. The performance of the proposed approach is evaluated for code constructions based on the DFT and is compared to other decoding approaches as well as to classical BCH block codes. The results show that the new technique is superior for a wide range of channel conditions, especially when strict delay constraints for the transmission system are given.

Journal ArticleDOI
TL;DR: In this paper, the joint optimization of the spreading gain and coding gain of nonbinary BCH-coded CDMA communication systems is considered in both single-cell and multi-cell scenarios, and two types of detectors are employed, namely the minimum mean square error multiuser detector and the classic single-user matched filter detector.
Abstract: The joint analytical optimisation of the spreading gain and coding gain of nonbinary BCH-coded CDMA communication systems is considered in both single-cell and multi-cell scenarios. Furthermore, two types of detectors are employed, namely the minimum mean square error multiuser detector and the classic single-user matched filter detector. It is shown that the optimum coding rate varies over a wide range.