
Showing papers on "BCH code published in 2007"


Journal ArticleDOI
TL;DR: In this article, it was shown that a BCH code of length n can contain its dual code only if its designed distance $\delta = O(\sqrt{n})$, and the converse was proved in the case of narrow-sense codes.
Abstract: Classical Bose-Chaudhuri-Hocquenghem (BCH) codes that contain their (Euclidean or Hermitian) dual codes can be used to construct quantum stabilizer codes; this correspondence studies the properties of such codes. It is shown that a BCH code of length n can contain its dual code only if its designed distance $\delta = O(\sqrt{n})$, and the converse is proved in the case of narrow-sense codes. Furthermore, the dimension of narrow-sense BCH codes with small designed distance is completely determined and, consequently, the bounds on their minimum distance are improved. These results make it possible to determine the parameters of quantum BCH codes in terms of their design parameters.
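
As a rough sanity check on the $\delta = O(\sqrt{n})$ threshold (an illustrative calculation, not a statement of the paper's precise result): for a primitive narrow-sense BCH code of length $n = q^m - 1$, the largest designed distance compatible with dual containment scales like $q^{\lceil m/2 \rceil}$, so

$$\delta_{\max} = O\!\left(q^{m/2}\right) = O\!\left(\sqrt{q^m}\right) = O\!\left(\sqrt{n+1}\right) = O(\sqrt{n}).$$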

336 citations


Patent
03 Oct 2007
TL;DR: A Bose-Chaudhuri-Hocquenghem (BCH) error correction circuit and method including storing normal data and first parity data in a memory cell array is described in this article.
Abstract: A Bose-Chaudhuri-Hocquenghem (BCH) error correction circuit and method including storing normal data and first parity data in a memory cell array, the normal data and first parity data forming BCH encoded data; generating second parity data from the stored normal data; comparing the first parity data with the second parity data; and checking for an error in the normal data in response to the comparing.
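
A minimal sketch of the regenerate-and-compare flow the claims describe, assuming a systematic binary code whose parity is the remainder of a polynomial division (the generator polynomial and bit widths below are illustrative placeholders, not taken from the patent):

```python
def poly_remainder(data_bits, gen, deg):
    """Remainder of data(x) * x^deg divided by gen(x) over GF(2)
    (bitwise long division, as a BCH/CRC-style parity generator)."""
    rem = 0
    for bit in list(data_bits) + [0] * deg:     # appending deg zeros multiplies by x^deg
        rem = (rem << 1) | bit
        if rem >> deg:                          # leading term set: subtract (XOR) gen(x)
            rem ^= gen
    return rem

GEN, DEG = 0b1011011, 6                         # illustrative degree-6 generator polynomial

def program(data_bits):
    """Write path: store the normal data together with its first parity data."""
    return data_bits, poly_remainder(data_bits, GEN, DEG)

def check(stored_data, first_parity):
    """Read path: regenerate second parity from the stored data and compare."""
    second_parity = poly_remainder(stored_data, GEN, DEG)
    return first_parity ^ second_parity         # nonzero flags an error in the data
```

A nonzero comparison result triggers the error check; locating and correcting the flagged error would then proceed with ordinary BCH syndrome decoding.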

85 citations


Posted Content
TL;DR: A rational curve fitting algorithm is devised and applied to the list decoding of Reed-Solomon and Bose-Chaudhuri-Hocquenghem codes, and the resulting list decoding algorithms exhibit the following significant properties.
Abstract: In this paper we devise a rational curve fitting algorithm and apply it to the list decoding of Reed-Solomon and BCH codes. The proposed list decoding algorithms exhibit the following significant properties. 1. The algorithm corrects up to $n(1-\sqrt{1-D})$ errors for a (generalized) $(n, k, d=n-k+1)$ Reed-Solomon code, which matches the Johnson bound, where $D = \frac{d}{n}$ denotes the normalized minimum distance. In comparison with the Guruswami-Sudan algorithm, which exhibits the same list correction capability, the former requires multiplicity, which dictates the algorithmic complexity, $O(n(1-\sqrt{1-D}))$, whereas the latter requires multiplicity $O(n^2(1-D))$. With the most efficient implementation known to date, the former has complexity $O(n^{6}(1-\sqrt{1-D})^{7/2})$, whereas the latter has complexity $O(n^{10}(1-D)^4)$. 2. With the multiplicity set to one, the resulting list correction capability sits precisely between conventional hard-decision decoding and optimal list decoding. Moreover, the number of candidate codewords is upper bounded by a constant for a fixed code rate; thus, the resulting algorithm exhibits quadratic complexity $O(n^2)$. 3. By utilizing the unique properties of the Berlekamp algorithm, the algorithm corrects up to $\frac{n}{2}(1-\sqrt{1-2D})$ errors for a narrow-sense $(n, k, d)$ binary BCH code, which matches the Johnson bound for binary codes. The algorithmic complexity is $O(n^{6}(1-\sqrt{1-2D})^7)$.
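
To see what the Johnson-bound radius buys over classical unique decoding (a standard comparison, with one illustrative number): unique decoding corrects up to $\frac{d-1}{2} \approx \frac{nD}{2}$ errors, and

$$n\left(1-\sqrt{1-D}\right) \;\ge\; \frac{nD}{2} \quad \text{for all } 0 \le D \le 1, \qquad \text{e.g. } D = 0.2:\; 1-\sqrt{0.8} \approx 0.106 > 0.100.$$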

74 citations


Book ChapterDOI
29 Oct 2007
TL;DR: This work proposes to apply the Baker-Campbell-Hausdorff (BCH) formula, which allowed us to estimate a 3D statistical brain atlas in a reasonable time, including the average and the modes of variation in the Sobolev tangent space of diffeomorphisms.
Abstract: This paper focuses on the estimation of statistical atlases of 3D images by means of diffeomorphic transformations. Within a Log-Euclidean framework, the exponential and logarithm maps of diffeomorphisms need to be computed. In this framework, the Inverse Scaling and Squaring (ISS) method has recently been extended for the computation of the logarithm map, which is one of the most time-demanding stages. In this work we propose to apply the Baker-Campbell-Hausdorff (BCH) formula instead. In a 3D simulation study, the BCH formula and the ISS method achieved similar accuracy, but the BCH formula was more than 100 times faster. This approach allowed us to estimate a 3D statistical brain atlas in a reasonable time, including the average and the modes of variation. Details for the computation of the modes of variation in the Sobolev tangent space of diffeomorphisms are also provided.
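
For reference, the Baker-Campbell-Hausdorff formula expresses the logarithm of a composition of exponentials directly in terms of Lie brackets; truncating after the low-order terms gives the kind of fast approximation the paper exploits:

$$\log(e^{X} e^{Y}) = X + Y + \tfrac{1}{2}[X,Y] + \tfrac{1}{12}\big([X,[X,Y]] + [Y,[Y,X]]\big) + \cdots$$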

73 citations


Patent
29 Oct 2007
TL;DR: A coding system that pre-multiplies the message $u(x)$ by $x^{n-k}$ and obtains the remainder $b(x)$, i.e., the parity-check digits, is discussed in this article.
Abstract: A coding system comprises: pre-multiplying the message $u(x)$ by $x^{n-k}$; obtaining the remainder $b(x)$, i.e., the parity-check digits; and combining $b(x)$ and $x^{n-k}u(x)$ to obtain the code polynomial. A decoding method comprises calculating a syndrome; finding an error-location polynomial; and computing a set of error-location numbers.
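
These are the textbook steps for systematic encoding of a cyclic code. A minimal sketch over GF(2), with polynomials represented as integer bit masks (the (7,4) Hamming generator below is an illustrative choice, not taken from the patent):

```python
def gf2_mod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2)."""
    dg = g.bit_length() - 1
    while a and a.bit_length() - 1 >= dg:
        a ^= g << (a.bit_length() - 1 - dg)     # cancel the current leading term
    return a

def encode_systematic(u, g, n, k):
    """The abstract's three steps: shift u(x), take the remainder, combine."""
    shifted = u << (n - k)                      # step 1: x^(n-k) * u(x)
    b = gf2_mod(shifted, g)                     # step 2: parity-check digits b(x)
    return shifted | b                          # step 3: code polynomial x^(n-k)u(x) + b(x)

# Example with the (7,4) cyclic Hamming code, g(x) = x^3 + x + 1:
codeword = encode_systematic(0b1101, 0b1011, n=7, k=4)   # -> 0b1101001
```

Decoding would evaluate the received polynomial at the code's roots to form the syndrome, then find the error-location polynomial and its roots, exactly as the claims list.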

72 citations


Journal ArticleDOI
TL;DR: The soft performance degradation observed when the channel worsens gives an additional advantage to the joint source-channel coding scheme for fading channels, since a reconstruction with moderate quality may still be possible even if the channel is in a deep fade.
Abstract: In this paper, a new still image coding scheme is presented. In contrast with standard tandem coding schemes, where the redundancy is introduced after source coding, it is introduced before source coding using real BCH codes. A joint channel model is first presented. The model corresponds to a memoryless mixture of Gaussian and Bernoulli-Gaussian noise. It may represent the source coder, the channel coder, the physical channel, and their corresponding decoders. Decoding algorithms are derived from this channel model and compared to a state-of-the-art real BCH decoding scheme. The proposed joint coding scheme is then compared with two reference tandem coding schemes for the robust transmission of still images. When the tandem scheme is not accurately tuned, the joint coding scheme outperforms it in all situations. Compared to a tandem scheme well tuned for a given channel situation, the joint coding scheme shows increased robustness as the channel conditions worsen. The soft performance degradation observed when the channel worsens gives an additional advantage to the joint source-channel coding scheme for fading channels, since a reconstruction with moderate quality may still be possible even if the channel is in a deep fade.
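
The memoryless Gaussian/Bernoulli-Gaussian mixture described above is commonly written as follows (a generic formulation; the paper's exact parameterization may differ):

$$r_k = s_k + n_k + b_k\, i_k, \qquad n_k \sim \mathcal{N}(0,\sigma_n^2), \quad b_k \sim \mathrm{Bernoulli}(p), \quad i_k \sim \mathcal{N}(0,\sigma_i^2),$$

so every sample sees background Gaussian noise and, with probability $p$, an additional high-variance impulse.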

66 citations


Journal ArticleDOI
TL;DR: It is demonstrated that strong BCH codes can effectively enable the use of a larger number of storage levels per cell and hence improve the effective NAND flash memory storage capacity by up to 59.1% without degradation of cell programming time.
Abstract: This paper concerns the design of on-chip error correction systems for multilevel code-storage NOR flash and data-storage NAND flash memories. The concept of trellis coded modulation (TCM) has been used to design an on-chip error correction system for NOR flash. This is motivated by the non-trivial modulation process in multilevel memory storage and the effectiveness of TCM in integrating coding with modulation to provide better performance at relatively short block length. The effectiveness of TCM-based systems, in terms of error-correcting performance, coding redundancy, silicon cost, and operational latency, has been successfully demonstrated. Meanwhile, the potential of using strong Bose-Chaudhuri-Hocquenghem (BCH) codes to improve multilevel data-storage NAND flash memory capacity is investigated. Current multilevel flash memories store 2 bits in each cell. Further storage capacity may be achieved by increasing the number of storage levels per cell, which nevertheless will correspondingly degrade the raw storage reliability. It is demonstrated that strong BCH codes can effectively enable the use of a larger number of storage levels per cell and hence improve the effective NAND flash memory storage capacity by up to 59.1% without degradation of cell programming time. Furthermore, a scheme to leverage strong BCH codes to improve memory defect tolerance at the cost of increased NAND flash cell programming time is proposed.
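
The capacity argument can be made concrete with purely illustrative numbers (not the paper's exact configuration): with $L$ storage levels per cell and a rate-$R$ BCH code, the effective capacity is

$$C_{\mathrm{eff}} = R \cdot \log_2 L \ \text{bits/cell},$$

so moving from 4 levels (2 raw bits) to 16 levels (4 raw bits) while spending rate $R \approx 0.8$ on a stronger BCH code still yields about $3.2$ bits/cell, a gain of roughly 60% over the 2-bit baseline.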

66 citations


Journal ArticleDOI
TL;DR: Simulations show that the proposed scheme with a simplex code as the scrambling subcode achieves good PAPR reduction in OFDM systems with BPSK subcarriers.
Abstract: In this paper, we consider reduction of PAPR in OFDM systems with BPSK subcarriers by combining SLM and binary cyclic codes. This combining strategy can be used for both error correction and PAPR reduction. We decompose a binary cyclic code into the direct sum of two cyclic subcodes: the correction subcode used for error correction and the scrambling subcode for PAPR reduction. The transmitted OFDM signal is selected, from the set of binary cyclic codewords, as the one achieving minimum PAPR. The received signal can be easily decoded without the need for any side information. Simulations show that the proposed scheme with a simplex code as the scrambling subcode achieves good PAPR reduction.
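
A minimal numpy sketch of the selection step, assuming the direct-sum decomposition is already available (the dimensions and the set of scrambling codewords are placeholders, not the paper's construction):

```python
import numpy as np

def papr(bits):
    """PAPR of the OFDM time-domain signal for BPSK bits on the subcarriers."""
    x = np.fft.ifft(1.0 - 2.0 * bits)           # BPSK mapping: 0 -> +1, 1 -> -1
    power = np.abs(x) ** 2
    return power.max() / power.mean()

def slm_select(correction_word, scrambling_subcode):
    """Add every scrambling codeword to the data-bearing correction codeword
    and transmit the sum with the smallest PAPR."""
    candidates = (correction_word + scrambling_subcode) % 2   # all direct-sum words
    paprs = [papr(c) for c in candidates]
    return candidates[int(np.argmin(paprs))]
```

Every candidate is itself a codeword of the original cyclic code, so the receiver decodes it normally and, as the abstract notes, needs no side information.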

58 citations


Journal Article
TL;DR: In this paper, it was shown that random sparse binary linear codes are locally testable and locally decodable (under any linear encoding) with a constant number of queries (with probability tending to one), under the assumption that the code has only polynomially many codewords.
Abstract: We show that random sparse binary linear codes are locally testable and locally decodable (under any linear encoding) with constant queries (with probability tending to one). By sparse, we mean that the code should have only polynomially many codewords. Our results are the first to show that local decodability and testability can be found in random, unstructured codes. Previously known locally decodable or testable codes were either classical algebraic codes or new ones constructed very carefully. We obtain our results by extending the techniques of Kaufman and Litsyn [11], who used the MacWilliams identities to show that "almost-orthogonal" binary codes are locally testable. Their definition of almost orthogonality expected codewords to disagree in $n/2 \pm O(\sqrt{n})$ coordinates in codes of block length n. The only families of codes known to have this property were the dual-BCH codes. We extend their techniques, and simplify them in the process, to include codes of distance at least $n/2 - O(n^{1-\gamma})$ for any $\gamma > 0$, provided the number of codewords is $O(n^t)$ for some constant t. Thus our results derive the local testability of linear codes from classical coding-theory parameters, namely the rate and the distance of the codes. More significantly, we show that this technique can also be used to prove the "self-correctability" of sparse codes of sufficiently large distance. This allows us to show that random linear codes under linear encoding functions are locally decodable. This ought to be surprising in that the definition of a code doesn't specify the encoding function used! Our results effectively say that any linear function of the bits of the codeword can be locally decoded in this case.

54 citations


Patent
12 Feb 2007
Abstract: Provided is a method and an apparatus for transmitting/receiving a Broadcast Channel (BCH), by which a User Equipment (UE) can successfully receive system information of neighboring cells in a system supporting scalability of the UE reception bandwidth and the system bandwidth. The method includes identifying the system bandwidth of a cell by comparing it with the reception bandwidths of UEs within the cell, mapping two BCH information blocks including system information to a central band whose bandwidth equals the transmission bandwidth of the BCH, additionally mapping at least one of the information blocks into each of the two half-bands of the system band when the system bandwidth is twice the reception bandwidth, and transmitting a frequency-domain signal, to which the information blocks are mapped, to the UEs located within the cell.

39 citations


Journal ArticleDOI
TL;DR: The proposed algorithm successfully decodes the (192,96) Reed-Solomon concatenated code and the (256,147) extended BCH code in a near-optimal manner (within 0.01 dB at a block-error rate of $10^{-5}$) with affordable computational cost.
Abstract: Order-w reprocessing is a suboptimal soft-decision decoding approach for binary linear block codes in which up to w bits are systematically flipped on the so-called most reliable (information) basis (MRB). This correspondence first incorporates two preprocessing rules into order-w reprocessing and shows that, with an appropriate choice of parameters, the proposed order-w reprocessing with preprocessing requires complexity comparable to order-w reprocessing but asymptotically achieves the performance of order-(w+2) reprocessing. To complement the MRB, a second basis is employed for practical SNRs, and this approach is systematically extended to a multibasis order-w reprocessing scheme for high signal-to-noise ratios (SNRs). It is shown that the proposed multibasis scheme significantly enlarges the error-correction radius, a commonly used measure of performance at high SNRs, over the original (single-basis) order-w reprocessing. As a by-product, this approach also precisely characterizes the asymptotic performance of the well-known Chase and generalized minimum distance (GMD) decoding algorithms. The proposed algorithm successfully decodes the (192,96) Reed-Solomon concatenated code and the (256,147) extended BCH code in a near-optimal manner (within 0.01 dB at a block-error rate of $10^{-5}$) with affordable computational cost.
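
A compact sketch of baseline single-basis order-w reprocessing (the scheme the paper builds on), assuming BPSK transmission over an AWGN channel and a binary generator matrix; this is a generic ordered-statistics decoder, not the paper's multibasis algorithm:

```python
import itertools
import numpy as np

def osd_decode(G, y, w):
    """Order-w reprocessing over a k x n binary generator matrix G,
    given real received values y (BPSK: bit 0 -> +1, bit 1 -> -1)."""
    k, n = G.shape
    order = np.argsort(-np.abs(y))              # positions, most reliable first
    Gp = G[:, order] % 2
    # Gaussian elimination over GF(2): put an identity on the first k
    # linearly independent (most reliable) columns -- the MRB.
    mrb_cols, row = [], 0
    for col in range(n):
        if row == k:
            break
        pivots = np.nonzero(Gp[row:, col])[0]
        if pivots.size == 0:
            continue                            # dependent column, skip it
        piv = pivots[0] + row
        Gp[[row, piv]] = Gp[[piv, row]]         # swap pivot row into place
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]                # clear the rest of the column
        mrb_cols.append(col)
        row += 1
    hard = (y[order] < 0).astype(int)           # hard decisions, sorted order
    mrb = hard[mrb_cols]
    best, best_metric = None, np.inf
    for wt in range(w + 1):                     # flip 0, 1, ..., w MRB bits
        for flips in itertools.combinations(range(k), wt):
            u = mrb.copy()
            u[list(flips)] ^= 1
            c = (u @ Gp) % 2                    # re-encoded candidate codeword
            metric = np.sum(np.abs(y[order])[c != hard])  # correlation discrepancy
            if metric < best_metric:
                best_metric, best = metric, c
    decoded = np.empty(n, dtype=int)
    decoded[order] = best                       # undo the reliability sort
    return decoded
```

The paper's preprocessing rules and second basis would wrap around this candidate-search loop; the reprocessing step itself is unchanged.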

Journal ArticleDOI
TL;DR: Two fault-tolerance design approaches that integrally address the tolerance for defects and transient faults are presented that can achieve much higher storage capacity under high defect densities and/or transient fault rates at the cost of higher implementation complexity and longer memory access latency.
Abstract: Targeting future fault-prone hybrid CMOS/nanodevice digital memories, this paper presents two fault-tolerance design approaches that integrally address the tolerance of defects and transient faults. These two approaches share several key features, including the use of a group of Bose-Chaudhuri-Hocquenghem (BCH) codes for both defect tolerance and transient fault tolerance, and the integration of BCH code selection and dynamic logical-to-physical address mapping. The first approach is straightforward and easy to implement but suffers from a rapid drop in achievable storage capacity as defect densities and/or transient fault rates increase, while the second approach can achieve much higher storage capacity under high defect densities and/or transient fault rates at the cost of higher implementation complexity and longer memory access latency. Based on extensive computer simulations and BCH decoder circuit design, we have demonstrated the effectiveness of the presented approaches under a wide range of defect densities and transient fault rates, while taking into account the fault-tolerance storage overhead and the BCH decoder implementation cost in the CMOS domain.

Book ChapterDOI
11 Jun 2007
TL;DR: This paper deals with strategies to dramatically reduce the complexity of embedding based on syndrome coding, concentrating on both syndrome coding based on a parity-check matrix and syndrome coding based on the generator polynomial.
Abstract: This paper deals with strategies to dramatically reduce the complexity of embedding based on syndrome coding. In contrast to existing approaches, our goal is to keep the embedding efficiency constant, i.e., to reduce embedding complexity without increasing the average number of embedding changes, compared to the classic matrix embedding scenario. Generally, our considerations are based on structured codes, especially on BCH codes; however, they are not limited to this class of codes. We propose different approaches to reduce embedding complexity, concentrating on both syndrome coding based on a parity-check matrix and syndrome coding based on the generator polynomial.
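
For context, a minimal example of the classic matrix embedding scenario via syndrome coding, using the [7,4] Hamming code's parity-check matrix (the simplest instance of what the paper scales up to BCH codes):

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column i is the binary
# expansion of i+1, so every nonzero syndrome names a unique position.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in range(3)])

def embed(cover, msg):
    """Hide 3 message bits in 7 cover bits, changing at most one bit."""
    s = (H @ cover + msg) % 2            # gap between current and desired syndrome
    stego = cover.copy()
    if s.any():
        pos = int(s[0] + 2 * s[1] + 4 * s[2]) - 1   # column of H equal to s
        stego[pos] ^= 1
    return stego

def extract(stego):
    """The recipient recovers the message as the syndrome of the stego vector."""
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([0, 1, 1])
assert np.array_equal(extract(embed(cover, msg)), msg)
```

Here locating the single change is a table lookup; the paper's concern is performing the analogous syndrome-coset search efficiently for much larger structured codes.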

Patent
10 Jan 2007
TL;DR: In this article, a base station consisting of an encoding unit, a modulation unit, and an IFFT unit is used to generate OFDM symbols for BCH data transmission.
Abstract: Provided is a base station capable of effectively transmitting BCH data. The base station (100) includes: an encoding unit (101) for encoding the BCH data; a modulation unit (102) for modulating the encoded BCH data; a transmission band setting unit (103) for setting a BCH data transmission band in one of the subcarriers constituting an OFDM symbol; encoding units (104-1 to 104-N) for encoding user data (#1 to #N); modulation units (105-1 to 105-N) for modulating the encoded user data (#1 to #N); and an IFFT unit (106) for mapping the BCH data and the user data (#1 to #N) to each of the subcarriers (#1 to #K) and performing IFFT to generate an OFDM symbol. Here, the IFFT unit (106) maps the BCH data to the subcarrier lying in the transmission band set by the transmission band setting unit (103) among the plurality of subcarriers (#1 to #K).

Proceedings ArticleDOI
TL;DR: In this article, the authors introduce two new families of quantum convolutional codes, based on an algebraic method which allows the construction of classical convolutional codes from block codes, in particular BCH codes.
Abstract: Quantum convolutional codes can be used to protect a sequence of qubits of arbitrary length against decoherence. We introduce two new families of quantum convolutional codes. Our construction is based on an algebraic method which allows us to construct classical convolutional codes from block codes, in particular BCH codes. These codes have the property that they contain their Euclidean, respectively Hermitian, dual codes. Hence, they can be used to define quantum convolutional codes by the stabilizer code construction. We compute BCH-like bounds on the free distances, which can be controlled as in the case of block codes, and establish that the codes have non-catastrophic encoders.

Journal ArticleDOI
TL;DR: The simulation results show that the novel super forward error correction code type, compared with the RS(255, 239) + convolutional self-orthogonal code (CSOC) ($k_0/n_0 = 6/7$, $J = 8$) code in ITU-T G.975.1, has lower redundancy and better error-correction capabilities.

Journal Article
TL;DR: This work shows how to significantly improve the running time to $O(d \log k)$ for $k = O(d^{1/2-\delta})$, for any arbitrarily small fixed $\delta$, which beats the better of FJLT and JL.
Abstract: The Fast Johnson-Lindenstrauss Transform (FJLT) was recently discovered by Ailon and Chazelle as a novel technique for performing fast dimension reduction with small distortion from $\ell_2^d$ to $\ell_2^k$ in time $O(\max\{d \log d, k^3\})$. For $k$ in $[\Omega(\log d), O(d^{1/2})]$ this beats the time $O(dk)$ achieved by naive multiplication by random dense matrices, an approach followed by several authors as a variant of the seminal result by Johnson and Lindenstrauss (JL) from the mid 80's. In this work we show how to significantly improve the running time to $O(d \log k)$ for $k = O(d^{1/2-\delta})$, for any arbitrarily small fixed $\delta$. This beats the better of FJLT and JL. Our analysis uses a powerful measure concentration bound due to Talagrand applied to Rademacher series in Banach spaces (sums of vectors in Banach spaces with random signs). The set of vectors used is a real embedding of dual BCH code vectors over GF(2). We also discuss the number of random bits used and the reduction to $\ell_1$ space. The connection between geometry and discrete coding theory discussed here is interesting in its own right and may be useful in other algorithmic applications as well.
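
A generic fast JL-style transform illustrating the ingredients (random sign flips, a fast Walsh-Hadamard transform, and row subsampling); this is a simplified sketch, not the paper's dual-BCH-based construction:

```python
import numpy as np

def fwht(x):
    """Iterative fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = x.astype(float)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b                 # butterfly: sums
            x[i + h:i + 2 * h] = a - b         # butterfly: differences
        h *= 2
    return x

rng = np.random.default_rng(0)
d, k = 1024, 32                                # d must be a power of two here
signs = rng.choice([-1.0, 1.0], size=d)        # random diagonal of signs
rows = rng.choice(d, size=k, replace=False)    # random coordinate subsample

def fast_jl(x):
    """Project x from R^d to R^k in O(d log d) time; squared norms are
    preserved in expectation over the random signs and row choice."""
    return fwht(signs * x)[rows] / np.sqrt(k)
```

In the paper, the subsampling step is instead built from a real embedding of dual BCH code vectors, which is what enables the improved $O(d \log k)$ bound.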

Book ChapterDOI
11 Jun 2007
TL;DR: In this article, the authors show that Reed-Solomon codes are twice better with respect to the number of locked positions and that in fact, they are optimal for steganographic embedding.
Abstract: The use of syndrome coding in steganographic schemes tends to reduce distortion during embedding. A more complete model comes from wet paper codes [FGLS05], which allow locking positions that cannot be modified. Recently, BCH codes have been investigated and seem to be good candidates in this context [SW06]. Here, we show that Reed-Solomon codes are twice as good with respect to the number of locked positions and that, in fact, they are optimal. We propose two methods for managing these codes in this context: the first one is based on a naive decoding process through Lagrange interpolation; the second one, more efficient, is based on list decoding techniques and provides an adaptive trade-off between the number of locked positions and the embedding efficiency.

Proceedings ArticleDOI
04 Dec 2007
TL;DR: The application of BCH codes to source coding with side information is described for quantum key reconciliation; the proposed scheme aims to reduce public communication while keeping the quantum bit error rate and the number of disclosed bits acceptable, and is therefore suitable for high-speed QKD applications.
Abstract: Due to imperfections in quantum key distribution (QKD), the secret key transmitted via the quantum channel arrives at the receiver side with errors. This paper presents an alternative method to reconcile the quantum key by using BCH codes, a class of error control coding (ECC). The application of BCH codes to source coding with side information is described for quantum key reconciliation. Unlike many previous protocols, the proposed scheme aims to reduce public communication while keeping the quantum bit error rate and the number of disclosed bits acceptable; it is therefore suitable for high-speed QKD applications.
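
The syndrome-based (Slepian-Wolf style) step that such reconciliation schemes build on can be stated generically (a standard construction, not necessarily the paper's exact protocol): Alice holds $x_A$, Bob holds $x_B = x_A \oplus e$ for a sparse error pattern $e$, and $H$ is the parity-check matrix of an $(n,k)$ BCH code. Alice publishes

$$s = H x_A, \qquad \text{so that} \qquad s \oplus H x_B = H(x_A \oplus x_B) = H e,$$

and Bob decodes the syndrome $He$ to recover $e$ and sets $x_A = x_B \oplus e$, disclosing only $n-k$ bits over the public channel.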

Patent
07 Mar 2007
TL;DR: In this article, the authors proposed a broadcast coding method in digital information transmission technique field, which comprises the following steps: diving transmission flow into 752 bit set and adding 261 zeros to get 1013 bit for BCH coding and removing front 261 bit to get BCH(762,752); forming one set of BCH with certain number to generate matrix of LDPC(7493,3048),LDPC( 7493,4572) or LDPC (7493-6096); deleting output 7493 bit front five correction bit to getting final 7488
Abstract: This invention relates to earth digital television broadcast coding method in digital information transmission technique field, which comprises the following steps: diving transmission flow into 752 bit set and adding 261 zeros to get 1013 bit for BCH coding and removing front 261 bit to get BCH(762,752); forming one set of BCH(762,752) with certain number to generate matrix of LDPC(7493,3048),LDPC(7493,4572) or LDPC(7493,6096); deleting output 7493 bit front five correction bit to get final 7488 output bit.
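
The stated parameters are mutually consistent and point to shortening of the primitive BCH(1023,1013) code (an inference from the stated numbers, not spelled out in the patent text):

$$752 + 261 = 1013, \qquad (n,k) = (1023 - 261,\; 1013 - 261) = (762,\,752), \qquad 7493 - 5 = 7488.$$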

Proceedings ArticleDOI
01 Oct 2007
TL;DR: A decoding method for cyclic codes up to the Hartmann-Tzeng bound using the discrete Fourier transform is considered, including how to construct a submatrix of the circulant matrix of an error vector.
Abstract: We consider a decoding method for cyclic codes up to the Hartmann-Tzeng bound using the discrete Fourier transform. In particular, we propose how to construct a submatrix of the circulant matrix of an error vector for this decoding method. Moreover, an example of a binary cyclic code that cannot be corrected by BCH decoding but is correctable by the proposed decoding is given. It is expected that this decoding method leads to a universal understanding of decoding methods up to the Roos bound and the shift bound.

Journal ArticleDOI
TL;DR: A necessary condition (NC) on the syndrome distribution for decoding BCH codes is derived, which includes the already known Hartmann-Tzeng proposition, and it is proved that when the correction capacity is equal to 2 or 3, the obtained NC is also sufficient.
Abstract: This paper presents a new view of the Bose-Chaudhuri-Hocquenghem (BCH) code through the addition of some flexibility to the syndrome distribution in the transmitted sequence. In order to obtain this flexibility, we derive a necessary condition (NC) on the syndrome distribution for decoding BCH codes, which includes the already known Hartmann-Tzeng proposition. This NC is essentially deduced from the decoding process of BCH codes and is related to the locator polynomial and the constraints required to guarantee a maximal error-correction capacity. The obtained results have the advantage of being applicable for any considered field (finite or not). Furthermore, we prove that when the correction capacity is equal to 2 or 3, the obtained NC is also sufficient. This result is very useful in some practical transmission systems such as orthogonal frequency-division multiplexing systems. Once the pilot tones considered in such systems verify the necessary and sufficient condition, it becomes possible both to reduce the peak-to-average power ratio and to correct the impulse noise present in such multicarrier systems. The usefulness of the presented analysis and the exploitation of the derived condition on the pilot tone distribution is illustrated by simulation results in the case of the Hiperlan2 system.

Patent
11 Oct 2007
TL;DR: A decoder that decodes Bose, Ray-Chaudhuri, Hocquenghem (BCH) codewords includes an error decoding module that computes error values and an error correction module that employs the decoding module to iteratively correct errors.
Abstract: A decoder that decodes Bose, Ray-Chaudhuri, Hocquenghem (BCH) codewords includes an inner decoding module that decodes inner codes of two dimensional BCH product codewords and that includes an error decoding module that computes error values, an outer decoding module that decodes outer codes of the two dimensional BCH product codewords, and an error correction module that employs the error decoding module to iteratively correct errors in the two-dimensional BCH product codewords.

Journal ArticleDOI
TL;DR: It is shown that maximum-likelihood decoding is realised for a high percentage of decoded codewords and that performance close to the sphere packing bound is attainable for codeword lengths up to 1000 bits.
Abstract: It is shown that the relatively unknown Dorsch decoder may be extended to produce a decoder that is capable of maximum-likelihood decoding. The extension involves a technique for any linear (n, k) code that ensures that the n−k least reliable soft decisions of each received vector may be treated as erasures in determining candidate codewords. These codewords are derived from low information weight codewords, and it is shown that an upper bound on this information weight may be calculated from each received vector in order to guarantee that the decoder will achieve maximum-likelihood decoding. Using the cross-correlation function, it is shown that the most likely codeword may be derived from a partial correlation function of these low information weight codewords, which leads to an efficient fast decoder. For a practical implementation, this decoder may be further simplified into a concatenation of a hard-decision decoder and a partial correlation decoder with insignificant performance degradation. Results are presented for some powerful, known codes, including a GF(4) non-binary BCH code. It is shown that maximum-likelihood decoding is realised for a high percentage of decoded codewords and that performance close to the sphere packing bound is attainable for codeword lengths up to 1000 bits.

Journal ArticleDOI
TL;DR: In this paper, the authors developed combinatorial algorithms, based on directed graphs, for computing parameters of extensions of BCH codes, which generalizes and strengthens a previous result from the literature.

01 Jan 2007
TL;DR: Decoders based on genetic algorithms (GA) applied to BCH codes perform well compared to Chase-2 and order-1 OSD, and reach the performance of OSD-3 for some quadratic residue codes.
Abstract: Decoders based on genetic algorithms (GA) applied to BCH codes give good performance compared to Chase-2 and order-1 OSD, and reach the performance of OSD-3 for some quadratic residue (QR) codes. These algorithms are less complex for linear block codes of large block length; furthermore, their performance can be improved by tuning the parameters, in particular the number of individuals per population and the number of generations, which makes them more attractive.

Book
01 Jan 2007
TL;DR: A textbook covering block designs, error-correcting codes (including BCH and Reed-Solomon codes), and algebraic cryptography (RSA, elliptic curves, AES), with worked implementations in Maple and MATLAB.
Abstract: PRELIMINARY MATHEMATICS: Permutation Groups. Cosets and Quotient Groups. Rings and Euclidean Domains. Finite Fields. Finite Fields with Maple. Finite Fields with MATLAB. The Euclidean Algorithm.
BLOCK DESIGNS: General Properties. Hadamard Matrices. Hadamard Matrices with Maple. Hadamard Matrices with MATLAB. Difference Sets. Difference Sets with Maple. Difference Sets with MATLAB.
ERROR CORRECTING CODES: General Properties. Hadamard Codes. Reed-Muller Codes. Reed-Muller Codes with Maple. Reed-Muller Codes with MATLAB. Linear Codes. Hamming Codes with Maple. Hamming Codes with MATLAB.
BCH CODES: Construction. Error Correction. BCH Codes with Maple. BCH Codes with MATLAB.
REED-SOLOMON CODES: Construction. Error Correction. Error Correction Method Proof. Reed-Solomon Codes with Maple. Reed-Solomon Codes with MATLAB. Reed-Solomon Codes in Voyager 2.
ALGEBRAIC CRYPTOGRAPHY: Two Elementary Cryptosystems. Shift and Affine Ciphers with Maple. Shift and Affine Ciphers with MATLAB. Hill Ciphers. Hill Ciphers with Maple. Hill Ciphers with MATLAB.
VIGENERE CIPHERS: Encryption and Decryption. Cryptanalysis. Vigenere Ciphers with Maple. Vigenere Ciphers with MATLAB.
THE RSA CRYPTOSYSTEM: Preliminary Mathematics. Encryption and Decryption. The RSA Cryptosystem with Maple. The RSA Cryptosystem with MATLAB. A Note on Modular Exponentiation. A Note on Primality Testing. A Note on Integer Factorization. A Note on Digital Signatures. The Diffie-Hellman Key Exchange. Discrete Logarithms with Maple. Discrete Logarithms with MATLAB.
ELLIPTIC CURVE CRYPTOGRAPHY: The ElGamal Cryptosystem. The ElGamal Cryptosystem with Maple. The ElGamal Cryptosystem with MATLAB. Elliptic Curves. Elliptic Curves with Maple. Elliptic Curves with MATLAB. Elliptic Curve Cryptography. Elliptic Curve Cryptography with Maple. Elliptic Curve Cryptography with MATLAB.
THE ADVANCED ENCRYPTION STANDARD: Alphabet Assignment and Text Setup. The S-Box. Key Generation. Encryption. The AES Layers. Decryption. A Note on Security. AES with Maple. AES with MATLAB.
POLYA THEORY: Group Actions. Burnside's Theorem. The Cycle Index. The Pattern Inventory. The Pattern Inventory with Maple. The Pattern Inventory with MATLAB. Switching Functions.
GRAPH THEORY: The Cycle Index of Sn. The Cycle Index of Sn with Maple. The Cycle Index of Sn with MATLAB. Counting Undirected Graphs. Counting Undirected Graphs with Maple. Counting Undirected Graphs with MATLAB.
Each chapter contains Computer and Research Exercises.
APPENDIX A: USER-WRITTEN MAPLE FUNCTIONS. APPENDIX B: USER-WRITTEN MATLAB FUNCTIONS. BIBLIOGRAPHY. HINTS OR ANSWERS FOR SELECTED EXERCISES. INDEX.

Journal ArticleDOI
TL;DR: The main result is a short and elementary proof of the author's exact asymptotic results on distance chromatic parameters (both number and index) in hypercubes, and a lower bound in terms of A(n,d) is obtained for B(n,d), the largest size among linear binary codes of length n and minimum distance d.

Proceedings ArticleDOI
24 Jun 2007
TL;DR: It is proved that, by computing appropriate Gröbner bases, one automatically recovers formulas for the coefficients of the locator polynomial in terms of the syndromes, thereby addressing the problem of the algebraic decoding of any cyclic code up to the true minimum distance.
Abstract: We address the problem of the algebraic decoding of any cyclic code up to the true minimum distance. For this, we use the classical formulation of the problem, which is to find the error-locator polynomial in terms of the syndromes of the received word. This is usually done with the Berlekamp-Massey algorithm in the case of BCH codes and related codes, but for the general case there is no generic algorithm to decode cyclic codes. Even in the case of the quadratic residue codes, which are good codes with a very strong algebraic structure, there is no available general decoding algorithm. For this particular case of quadratic residue codes, several authors have worked out, by hand, formulas for the coefficients of the locator polynomial in terms of the syndromes, using the Newton identities. This work has to be done for each particular quadratic residue code, and it becomes more and more difficult as the length grows. Furthermore, it is error-prone. We propose to automate these computations, using elimination theory and Gröbner bases. We prove that, by computing appropriate Gröbner bases, one automatically recovers formulas for the coefficients of the locator polynomial, in terms of the syndromes.
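
The Newton identities in question relate the syndromes $S_i = \sum_j X_j^{\,i}$ (power sums of the error locations $X_j$) to the coefficients of the error-locator polynomial $\sigma(x) = \prod_{j=1}^{t}(1 - X_j x) = 1 + \sigma_1 x + \cdots + \sigma_t x^t$; in one standard formulation (sign conventions vary between authors):

$$S_i + \sigma_1 S_{i-1} + \cdots + \sigma_{i-1} S_1 + i\,\sigma_i = 0 \quad (1 \le i \le t), \qquad S_i + \sigma_1 S_{i-1} + \cdots + \sigma_t S_{i-t} = 0 \quad (i > t).$$

Solving this system for the $\sigma_i$ in terms of the known syndromes is exactly the elimination problem the authors hand off to Gröbner basis computations.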

Proceedings ArticleDOI
24 Jun 2007
TL;DR: In this article, the authors formulated the list decoding of generalized Reed-Solomon codes as a rational curve-fitting problem, utilizing the polynomials constructed by the Berlekamp-Massey algorithm.
Abstract: In this paper we formulate the list decoding of (generalized) Reed-Solomon codes as a rational curve-fitting problem, utilizing the polynomials constructed by the Berlekamp-Massey algorithm. We present a novel list decoding algorithm that:
- corrects up to a fraction $1-\sqrt{1-D}$ of errors for (generalized) Reed-Solomon codes, identical to that of the Guruswami-Sudan algorithm, which is built upon the Berlekamp-Welch algorithm, where $D$ denotes the normalized minimum distance;
- with appropriate modifications, corrects up to a fraction $\frac{1}{2}(1-\sqrt{1-2D})$ of errors for binary BCH codes, which is the best known bound under polynomial complexity;
- exhibits polynomial complexity; in particular, it requires $O(n^{6}(1-\sqrt{1-D})^{7})$ field operations for Reed-Solomon codes to achieve its maximum list error-correction capability ($n$ denotes the code length), whereas the Guruswami-Sudan algorithm has complexity $O(n^{10}(1-D)^4)$.