
Showing papers on "BCH code published in 2017"


Journal ArticleDOI
TL;DR: The dimensions and the minimum distances of two special families of LCD cyclic codes, which are both BCH codes, are investigated.
Abstract: Historically, LCD cyclic codes were referred to as reversible cyclic codes, which had applications in data storage. Due to a newly discovered application in cryptography, there has been renewed interest in LCD codes. In this paper, we explore two special families of LCD cyclic codes, which are both BCH codes. The dimensions and the minimum distances of these LCD BCH codes are investigated.
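The link between "reversible" and LCD rests on a classical fact: a cyclic code is reversible exactly when its generator polynomial is self-reciprocal, i.e. equal to its own coefficient reversal. A minimal sketch of that check (standard theory, not code from the paper):

```python
def reciprocal(g):
    """Reciprocal polynomial of g: coefficients reversed.
    Coefficients are listed lowest degree first."""
    return list(reversed(g))

def is_self_reciprocal(g):
    """A cyclic code is reversible iff its generator polynomial g
    satisfies g = reciprocal(g) (up to a scalar; trivial over GF(2))."""
    return g == reciprocal(g)

# g(x) = 1 + x + x^2 + x^3 + x^4 divides x^5 - 1 over GF(2) and is
# palindromic, so the cyclic code it generates is reversible.
assert is_self_reciprocal([1, 1, 1, 1, 1])
# g(x) = 1 + x + x^3 (a Hamming(7,4) generator) is not self-reciprocal.
assert not is_self_reciprocal([1, 1, 0, 1])
```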

80 citations


Posted Content
TL;DR: A recurrent neural network architecture for decoding linear block codes that achieves bit error rates comparable to those of the feed-forward neural network with significantly fewer parameters, and demonstrates improved performance over belief propagation on sparser Tanner graph representations of the codes.
Abstract: Designing a practical, low-complexity, close-to-optimal channel decoder for powerful algebraic codes with short to moderate block length is an open research problem. Recently it has been shown that a feed-forward neural network architecture can improve on standard belief propagation decoding, despite the large example space. In this paper we introduce a recurrent neural network architecture for decoding linear block codes. Our method achieves bit error rate results comparable to those of the feed-forward neural network with significantly fewer parameters. We also demonstrate improved performance over belief propagation on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the RNN decoder can be used to improve the performance, or alternatively reduce the computational complexity, of the mRRD algorithm for low-complexity, close-to-optimal decoding of short BCH codes.

78 citations


Journal ArticleDOI
TL;DR: In this paper, the dimension and minimum distance of two classes of narrow-sense primitive BCH codes with given designed distances were determined, and the weight distributions of some of these BCH codes were also reported.

58 citations


Journal ArticleDOI
TL;DR: The dimensions of BCH codes over finite fields with three types of lengths n are studied, two general formulas for their dimensions are provided, and the minimum distances of some primitive BCH codes are settled.

43 citations


Journal ArticleDOI
TL;DR: A sequence of Hermitian dual-containing constacyclic BCH codes of designed distances $2q^2 < \delta \le \delta_{\max}$ is obtained, from which quantum codes of distance $d > 2q^2$ can be constructed via the Hermitian construction.
Abstract: Constacyclic BCH codes have been widely studied in the literature and have been used to construct quantum codes in recent years. However, for the class of quantum codes of length $n=q^{2m}+1$ over $F_{q^2}$ with $q$ an odd prime power, only those of distance $\delta \le 2q^2$ have been obtained in the literature. In this paper, by a detailed analysis of the properties of $q^{2}$-ary cyclotomic cosets, the maximum designed distance $\delta_{\max}$ of a class of Hermitian dual-containing constacyclic BCH codes of length $n=q^{2m}+1$ is determined; this class of constacyclic codes has characteristics analogous to those of primitive BCH codes over $F_{q^2}$. We then obtain a sequence of dual-containing constacyclic codes of designed distances $2q^2 < \delta \le \delta_{\max}$, from which quantum codes of distance $d > 2q^2$ can be constructed via the Hermitian construction. These newly obtained quantum codes have better code rates than those constructed from primitive BCH codes.

40 citations


Journal ArticleDOI
TL;DR: The dimension, the minimum distance, and the weight distribution of some ternary BCH codes with length $n=(3^{m}-1)/2$ are determined in this paper.
Abstract: Cyclic codes are widely employed in communication systems, storage devices, and consumer electronics, as they have efficient encoding and decoding algorithms. BCH codes, as a special subclass of cyclic codes, are in most cases among the best cyclic codes. A subclass of good BCH codes are the narrow-sense BCH codes over ${\mathrm{GF}}(q)$ with length $n=(q^{m}-1)/(q-1)$. Little is known about this class of BCH codes when $q>2$. The objective of this paper is to study some of the codes within this class. In particular, the dimension, the minimum distance, and the weight distribution of some ternary BCH codes with length $n=(3^{m}-1)/2$ are determined in this paper. A class of ternary BCH codes meeting the Griesmer bound is identified. An application of some of the BCH codes in secret sharing is also investigated.
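The Griesmer bound mentioned above lower-bounds the length of any $[n,k,d]$ code over ${\mathrm{GF}}(q)$ by $\sum_{i=0}^{k-1} \lceil d/q^i \rceil$; a code "meets" the bound when equality holds. A small sketch (the example codes are classical ones, not the paper's ternary BCH codes):

```python
from math import ceil

def griesmer_bound(k, d, q):
    """Smallest length n permitted by the Griesmer bound:
    n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum(ceil(d / q**i) for i in range(k))

# A code meets the Griesmer bound when its length equals the bound.
assert griesmer_bound(4, 3, 2) == 7    # binary Hamming [7, 4, 3]
assert griesmer_bound(3, 9, 3) == 13   # ternary simplex [13, 3, 9]
```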

37 citations


Journal ArticleDOI
TL;DR: In this article, the minimum distance of narrow-sense primitive BCH codes with special Bose distance was determined: for $\frac{m-2}{2} \le i \le m-\lfloor \frac{m}{3} \rfloor-1$, the $q$-ary narrow-sense primitive BCH code of length $q^m-1$ and Bose distance $q^m-q^{m-1}-q^i-1$ has minimum distance equal to its Bose distance.
Abstract: Due to wide applications of BCH codes, the determination of their minimum distance is of great interest. However, this is a very challenging problem for which few theoretical results have been reported in the last four decades. Even for the narrow-sense primitive BCH codes, which form the most well studied subclass of BCH codes, there are very few theoretical results on the minimum distance. In this paper, we present new results on the minimum distance of narrow-sense primitive BCH codes with special Bose distance. We prove that for a prime power $q$, the $q$-ary narrow-sense primitive BCH code with length $q^m-1$ and Bose distance $q^m-q^{m-1}-q^i-1$, where $\frac{m-2}{2} \le i \le m-\lfloor \frac{m}{3} \rfloor-1$, has minimum distance $q^m-q^{m-1}-q^i-1$. This is achieved by employing the beautiful theory of sets of quadratic forms, symmetric bilinear forms, and alternating bilinear forms over finite fields, which can be best described using the framework of association schemes.
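The theorem can be tabulated directly; a sketch enumerating the proved minimum distances for a given q and m (a direct transcription of the formula above):

```python
import math

def proved_min_distances(q, m):
    """Minimum (= Bose) distances q^m - q^(m-1) - q^i - 1 for integer i
    with (m-2)/2 <= i <= m - floor(m/3) - 1, per the theorem above."""
    lo = math.ceil((m - 2) / 2)
    hi = m - math.floor(m / 3) - 1
    return {i: q**m - q**(m - 1) - q**i - 1 for i in range(lo, hi + 1)}

# q = 2, m = 6: i ranges over {2, 3}, giving distances 27 and 23.
assert proved_min_distances(2, 6) == {2: 27, 3: 23}
```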

36 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide optimized offset and scaling values for those LDPC decoders based on Advanced Television Systems Committee (ATSC) 3.0 LDPC codes and show that the performance of OMSA and NMSA with the obtained values is close to that of sum-product algorithm, even though the NMSA-based decoder may show a serious error floor due to a channel mismatch effect.
Abstract: The offset min-sum algorithm (OMSA) and normalized min-sum algorithm (NMSA) are widely used in commercial LDPC decoders due to their low complexity and reasonable performance. In this paper, we provide optimized offset and scaling values for LDPC decoders based on the Advanced Television Systems Committee (ATSC) 3.0 LDPC codes. Furthermore, extensive computer simulations show that the performance of OMSA and NMSA with the obtained values is close to that of the sum-product algorithm, although the NMSA-based decoder may show a serious error floor due to a channel mismatch effect. Consequently, we recommend that ATSC 3.0 transmitters use a concatenation of a BCH outer code and an LDPC inner code for channel coding, considering the use of NMSA in the LDPC decoder of ATSC 3.0 receivers.
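The two min-sum variants differ only in how the check-node magnitude is corrected; a minimal Python sketch (the alpha and beta values below are illustrative placeholders, not the optimized ATSC 3.0 values from the paper):

```python
def min_sum_check_update(llrs, mode="offset", alpha=0.8, beta=0.3):
    """Check-node update for one parity check: for each edge j, combine
    all other incoming LLRs. Plain min-sum overestimates magnitudes;
    NMSA scales by alpha, OMSA subtracts beta (floored at 0)."""
    out = []
    for j in range(len(llrs)):
        others = [llrs[i] for i in range(len(llrs)) if i != j]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        if mode == "offset":
            mag = max(mag - beta, 0.0)      # OMSA correction
        elif mode == "normalized":
            mag = alpha * mag               # NMSA correction
        out.append(sign * mag)
    return out

msgs = min_sum_check_update([2.0, -1.0, 0.5], mode="normalized", alpha=0.8)
```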

33 citations


Proceedings ArticleDOI
25 Jun 2017
TL;DR: BCH codes and their analogues in the Lee metric are utilized to construct explicit and systematic codes that are immune to up to r sticky insertions; the ratio of the number of redundancy bits in the construction to a certain upper bound approaches one as the block length grows large, implying asymptotic optimality of the construction.
Abstract: The problem of constructing sticky-insertion-correcting codes with efficient encoding and decoding is considered. An (n, M, r) sticky-insertion-correcting code consists of M codewords of length n such that any pattern of up to r sticky insertions can be corrected. We utilize BCH codes and their analogues in the Lee metric to construct explicit and systematic codes that are immune to up to r sticky insertions. It is shown that the ratio of the number of redundancy bits in the construction to a certain upper bound approaches one as the block length grows large, which implies asymptotic optimality of the construction.
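The connection to the Lee metric can be seen on the run-length vector of a word: a sticky insertion repeats an existing symbol, so it adds exactly 1 to one run length. A small sketch (a standard observation, not code from the paper):

```python
from itertools import groupby

def run_lengths(s):
    """Run-length vector of a string; sticky insertions never create new
    runs, they only grow an existing one."""
    return [len(list(g)) for _, g in groupby(s)]

def sticky_insert(s, pos):
    """Repeat the symbol at index pos (a single sticky insertion)."""
    return s[:pos + 1] + s[pos] + s[pos + 1:]

original = "0011101"
corrupted = sticky_insert(sticky_insert(original, 0), 4)
# Each sticky insertion adds exactly 1 to one coordinate of the
# run-length vector: a Lee-weight-1 error, which is why Lee-metric
# BCH-like codes fit this channel.
assert run_lengths(original) == [2, 3, 1, 1]
assert run_lengths(corrupted) == [3, 4, 1, 1]
```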

31 citations


Proceedings ArticleDOI
03 Nov 2017
TL;DR: Non-invasive measurement of electromagnetic radiation together with a differential power analysis is shown to be sufficient to extract not only single bits but even the complete key from an error correction used for PUF-based key generation.
Abstract: Physical Unclonable Functions (PUFs) provide a cost-efficient way to store a secure key on a device, but the noisy secret from a PUF must be corrected to generate a stable key. Since the error correction processes secret material, it is a target of attacks. Previous work has shown that single bits of a key can be extracted using a power side-channel attack. This work enhances that attack idea: non-invasive measurement of electromagnetic radiation together with differential power analysis is shown to be sufficient to extract not only single bits but even the complete key from an error correction used for PUF-based key generation. The efficiency of the basic attack is significantly improved over the state of the art using publicly available pre-knowledge of the PUF, an advanced correlation method, and parallel manipulation of helper data. The attack is practically demonstrated on an FPGA implementation with concatenated BCH and repetition codes. The results show that, compared to the state of the art, a significant improvement by a factor of more than 100 in terms of trace reduction can be achieved.

26 citations


Journal ArticleDOI
TL;DR: Based on classical $q^{2}$-ary constacyclic codes, the Hermitian construction is applied to obtain several classes of $q$-ary quantum stabilizer codes of length $(q^{2m}-1)/(q+1)$; these quantum codes have parameters better than those obtained from classical BCH codes.
Abstract: Let q be a prime power and \(m\ge 2\) an integer. In this paper, based on classical \(q^{2}\)-ary constacyclic codes, we apply the Hermitian construction to obtain several classes of q-ary quantum stabilizer codes of length \((q^{2m}-1)/(q+1)\). These quantum codes have parameters better than those obtained from classical BCH codes.

Journal ArticleDOI
TL;DR: Two improved constructions for nonbinary quantum BCH codes of lengths $n=(q^{4}-1)/2$ and $n=(q^{4}-1)/(q-1)$ are presented, both of which have parameters better than those obtained from other known constructions.
Abstract: In this work, we present two improved constructions for nonbinary quantum BCH codes of lengths $n=\frac {q^{4}-1}{2}$ and $n=\frac {q^{4}-1}{q-1}$ , where q is an odd prime power. Moreover, the constructed quantum BCH codes have parameters better than those obtained from other known constructions.

Book ChapterDOI
TL;DR: In this paper, it was shown that the automorphism groups of the extended codes of the narrow-sense primitive BCH codes over finite fields are doubly transitive and these extended codes hold 2-designs.

Journal ArticleDOI
TL;DR: This paper characterizes the polynomials that yield recursive MDS matrices in a more general setting and proposes three methods for obtaining them, paving the way for new direct constructions.
Abstract: MDS matrices are of great importance in the design of block ciphers and hash functions. MDS matrices are not sparse and have a large description, and thus induce costly implementations in software/hardware. To overcome this problem, in particular for applications in lightweight cryptography, it was proposed by Guo et al. to use recursive MDS matrices. A recursive MDS matrix is an MDS matrix which can be expressed as a power of some companion matrix. Following the work of Guo et al., some ad hoc search techniques were proposed to find recursive MDS matrices suitable for hardware/software implementation. In another direction, coding-theoretic techniques have been used to directly construct recursive MDS matrices: Berger's technique uses Gabidulin codes and the technique of Augot et al. uses shortened BCH codes. In this paper, we first characterize the polynomials that yield recursive MDS matrices in a more general setting. Based on this we provide three methods for obtaining such polynomials. Moreover, the recursive MDS matrices obtained using shortened BCH codes can also be obtained with our first method; in fact we get a larger set of polynomials than the method which uses shortened BCH codes. Our other methods appear similar to the method which uses Gabidulin codes. We get a new infinite class of recursive MDS matrices from one of the proposed methods. Although we propose three methods for the direct construction of recursive MDS matrices, our characterization results pave the way for new direct constructions.

Journal ArticleDOI
TL;DR: It is shown that it is possible to construct QTPCs with parameters better than those of other classes of quantum error-correction codes (QECC), e.g., CQCs and quantum BCH codes, and many QTPCs are obtained with parameters better than those of previously known quantum codes available in the literature.
Abstract: We present a general framework for the construction of quantum tensor product codes (QTPC). In a classical tensor product code (TPC), the parity check matrix is constructed via the tensor product of the parity check matrices of the two component codes. We show that by adding some constraints on the component codes, several classes of dual-containing TPCs can be obtained. By selecting different types of component codes, the proposed method enables the construction of a large family of QTPCs, and they can provide a wide variety of quantum error control abilities. In particular, if one of the component codes is selected as a burst-error-correcting code, then QTPCs have quantum multiple-burst-error-correcting abilities, provided these bursts fall in distinct subblocks. Compared with concatenated quantum codes (CQC), the component code selections of QTPCs are much more flexible than those of CQCs, since only one of the component codes of QTPCs needs to satisfy the dual-containing restriction. We show that it is possible to construct QTPCs with parameters better than those of other classes of quantum error-correction codes (QECC), e.g., CQCs and quantum BCH codes. Many QTPCs are obtained with parameters better than those of previously known quantum codes available in the literature. Several classes of QTPCs that can correct multiple quantum bursts of errors are constructed based on reversible cyclic codes and maximum-distance-separable (MDS) codes.
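The classical tensor product construction referred to above can be sketched in a few lines; the component matrices here are toy examples, not codes from the paper:

```python
import numpy as np

# The TPC parity check matrix is the Kronecker (tensor) product of the
# two component parity check matrices, with arithmetic over GF(2).
H1 = np.array([[1, 0, 1, 1],
               [0, 1, 1, 0]])      # toy component code 1: 2 checks, length 4
H2 = np.array([[1, 1, 1]])         # [3, 2] single parity check
H = np.kron(H1, H2) % 2
# Redundancies multiply: 2 checks x 1 check = 2 checks on length 4 * 3 = 12.
assert H.shape == (2, 12)
```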

Journal ArticleDOI
12 Jun 2017
TL;DR: The class of codes with a t-ECP is proposed for the McEliece cryptosystem, and the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair is studied.
Abstract: Code-based Cryptography (CBC) is a powerful and promising alternative for quantum-resistant cryptography. Indeed, together with lattice-based cryptography, multivariate cryptography and hash-based cryptography, it is one of the principal available techniques for post-quantum cryptography. CBC was first introduced by McEliece, who designed one of the most efficient public-key encryption schemes, with exceptionally strong security guarantees and other desirable properties that still resist attacks based on the Quantum Fourier Transform and Amplitude Amplification. The original proposal, which remains unbroken, was based on binary Goppa codes. Later, several families of codes were proposed in order to reduce the key size; some of these alternatives have already been broken. One of the main requirements of a code-based cryptosystem is having high-performance t-bounded decoding algorithms, which is achieved when the code has a t-error-correcting pair (ECP). Indeed, those McEliece schemes that use GRS, BCH, Goppa and algebraic geometry codes are in fact using an error-correcting pair as a secret key. That is, the security of these public-key cryptosystems is based not only on the inherent intractability of bounded distance decoding but also on the assumption that it is difficult to retrieve an error-correcting pair efficiently. In this paper, the class of codes with a t-ECP is proposed for the McEliece cryptosystem. Moreover, we study the hardness of distinguishing arbitrary codes from those having a t-error-correcting pair.

Journal ArticleDOI
TL;DR: In this article, it was shown that for a monic polynomial $g(X)$ of degree $k \ge 2$, the matrix $M = C_g^k$ is MDS if and only if $g(X)$ has no nonzero multiple of degree $\le 2k-1$ and weight $\le k$.
Abstract: MDS matrices allow one to build optimal linear diffusion layers in the design of block ciphers and hash functions. There has been a lot of study in designing efficient MDS matrices suitable for software and/or hardware implementations. In particular, recursive MDS matrices are considered for resource-constrained environments. Such matrices can be expressed as a power of a simple companion matrix, i.e., an MDS matrix $M = C_g^k$ for the companion matrix corresponding to a monic polynomial $g(X) \in \mathbb{F}_q[X]$ of degree $k$. In this paper, we first show that for a monic polynomial $g(X)$ of degree $k \ge 2$, the matrix $M = C_g^k$ is MDS if and only if $g(X)$ has no nonzero multiple of degree $\le 2k-1$ and weight $\le k$. This characterization answers the issues raised in the FSE 2014 paper of Augot et al. to some extent. We then revisit the algorithm given by Augot et al. to find all recursive MDS matrices that can be obtained from a class of BCH codes (which are also MDS) and propose an improved algorithm. We identify exactly which candidates in this class of BCH codes yield recursive MDS matrices, so the computation can be confined to only those potential candidate polynomials, greatly reducing the complexity. As a consequence we are able to provide formulae for the number of such recursive MDS matrices, whereas in the FSE 2014 paper the same numbers were provided by exhaustively searching some small parameter choices. We also present a few ideas that make the search for efficient recursive MDS matrices in this class faster. Using our approach, it is possible to exhaustively search this class for larger parameter choices, which was not possible earlier. We also present our search results for the case $k=8$ and $q=2^{16}$.
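For the smallest case $k = 2$ the recursive MDS condition is easy to check by hand, since a 2x2 matrix is MDS exactly when every entry and the determinant are nonzero. A sketch over the prime field GF(7) (the polynomial is an illustrative choice, not one from the paper):

```python
P = 7  # prime field GF(7)

def companion(g):
    """Companion matrix of monic g(X) = X^2 + g1*X + g0 (k = 2)."""
    g0, g1 = g
    return [[0, 1], [(-g0) % P, (-g1) % P]]

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) % P
             for j in range(2)] for i in range(2)]

def is_mds_2x2(M):
    """A 2x2 matrix is MDS iff all entries and the determinant are nonzero."""
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % P
    return det != 0 and all(x != 0 for row in M for x in row)

C = companion((6, 6))   # g(X) = X^2 + 6X + 6, i.e. X^2 - X - 1 mod 7
M = matmul(C, C)        # M = C_g^2, the recursive MDS candidate
assert is_mds_2x2(M)
```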

Proceedings ArticleDOI
01 Sep 2017
TL;DR: The results showed that the perceptual quality of audio, in terms of the ODG and SNR parameters, was higher for DWT (discrete wavelet transform) level 1 than for level 3; a higher wavelet decomposition level makes the host audio file less imperceptible under attack but more robust.
Abstract: In this research, we propose an audio watermarking system with adaptive multilevel DWT (discrete wavelet transform) and SVD (singular value decomposition) based on the BCH (Bose-Chaudhuri-Hocquenghem) code. The host audio signal is embedded with a watermark signal in the form of an image, and several attacks are applied to the watermarking system. The results showed that the perceptual quality of the audio, in terms of the ODG (Objective Difference Grade) and SNR (Signal to Noise Ratio) parameters, was higher for DWT level 1 than for level 3: ODG reached -0.68 at level 1 versus -1.54 at level 3 under the LSC attack, and SNR reached 31 dB at level 1 versus 27 dB at level 3 under the LPF attack. For the watermark image BER parameter, the results show that level 1 is worse than level 3, with BER = 0.35 at level 1 versus 0 at level 3 under the LPF attack. A higher wavelet decomposition level makes the host audio file less imperceptible under attack but more robust.
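One level of the Haar DWT, the simplest member of the wavelet family used above, can be sketched as pairwise averaging and differencing (illustrative; the paper does not specify the Haar wavelet):

```python
import math

def haar_dwt_1d(x):
    """One level of the Haar DWT: pairwise sums (approximation band) and
    differences (detail band), scaled by 1/sqrt(2). A multilevel DWT,
    as in the paper, recurses on the approximation band."""
    s = math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

a1, d1 = haar_dwt_1d([4.0, 2.0, 6.0, 6.0])
# Level 2 would transform a1 again; in DWT-SVD watermarking the bits are
# typically embedded in singular values of a chosen band.
```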

Journal ArticleDOI
09 May 2017
TL;DR: This paper proposes a novel video steganography method in the discrete cosine transform (DCT) domain based on error-correcting codes (ECC); a secret message is encrypted and encoded with Hamming and BCH codes to improve the security and robustness of the proposed algorithm.
Abstract: Nowadays, the science of information hiding has gained tremendous significance due to advances in information and communication technology. The performance of any steganographic algorithm relies on the embedding efficiency, embedding payload, and robustness against attackers. Low hidden ratio, less security, and low quality of stego videos are the major issues of many existing steganographic methods. In this paper, we propose a novel video steganography method in the discrete cosine transform (DCT) domain based on error-correcting codes (ECC). To improve the security of the proposed algorithm, a secret message is first encrypted and encoded by using Hamming and BCH codes. Then, it is embedded into the DCT coefficients of video frames. The hidden message is embedded into the DCT coefficients of each Y, U, and V plane, excluding DC coefficients. The proposed algorithm is tested on two types of videos containing slow- and fast-moving objects. The experimental results of the proposed algorithm are compared with three existing methods, and the comparison shows that our proposed algorithm outperforms them. The hidden ratio of the proposed algorithm is approximately 27.53%, which is considered a high hiding capacity with a minimal tradeoff in visual quality. The robustness of the proposed algorithm was tested under different attacks.
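To give a sense of the ECC step, here is a minimal Hamming(7,4) single-error-correcting encoder/decoder of the kind used to protect message bits before embedding (illustrative; the paper's exact Hamming/BCH parameters are not reproduced here):

```python
# Systematic Hamming(7,4): codeword = 4 data bits + 3 parity bits.
# The rows of P_mat give the parity equations; the columns of the check
# matrix H = [P^T | I_3] are all 7 nonzero vectors of GF(2)^3, so a
# nonzero syndrome uniquely identifies the flipped position.
P_mat = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]

def encode(d):
    parity = [sum(d[i] * P_mat[i][j] for i in range(4)) % 2 for j in range(3)]
    return d + parity

def decode(r):
    cols = [P_mat[i] for i in range(4)] + [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    syn = [sum(r[k] * cols[k][j] for k in range(7)) % 2 for j in range(3)]
    if syn != [0, 0, 0]:
        r = r[:]
        r[cols.index(syn)] ^= 1      # flip the located error bit
    return r[:4]

c = encode([1, 0, 1, 1])
c_err = c[:]
c_err[2] ^= 1                        # inject a single bit error
assert decode(c_err) == [1, 0, 1, 1]
```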

Proceedings ArticleDOI
01 Dec 2017
TL;DR: This work uses parallel and serially concatenated convolutional codes known as turbo codes as building blocks for developing randomized turbo codes and demonstrates via several examples that the newly designed schemes can outperform other existing coding methods in the literature in terms of the resulting security gap.
Abstract: We study application of parallel and serially concatenated convolutional codes known as turbo codes to the randomized encoding scheme introduced by Wyner for physical layer security. For this purpose, we first study how randomized convolutional codes can be constructed. Then, we use them as building blocks for developing randomized turbo codes. We also develop iterative low-complexity decoders corresponding to the randomized schemes introduced and evaluate the code performance. We demonstrate via several examples that the newly designed schemes can outperform other existing coding methods in the literature (e.g., punctured low density parity check (LDPC) and scrambled BCH codes) in terms of the resulting security gap.

Journal IssueDOI
TL;DR: In this article, the authors proposed a family of magic state distillation protocols that obtains asymptotic performance that is conjectured to be optimal, and analyzed these protocols in an intermediate size regime, using hundreds to thousands of qubits.
Abstract: Recently we proposed a family of magic state distillation protocols that obtains asymptotic performance that is conjectured to be optimal. This family depends upon several codes, called "inner codes" and "outer codes." We presented some small examples of these codes as well as an analysis of codes in the asymptotic limit. Here, we analyze such protocols in an intermediate size regime, using hundreds to thousands of qubits. We use BCH inner codes, combined with various outer codes. We extend our protocols by adding error correction in some cases. We present a variety of protocols in various input error regimes; in many cases these protocols require significantly fewer input magic states to obtain a given output error than previous protocols.

Proceedings ArticleDOI
01 Nov 2017
TL;DR: A cooperative error correction scheme, called CooECC, is proposed to reduce LDPC decoding latency of the MSB page in NAND Flash by exploiting data error characteristics introduced by retention errors, and enables decoding to converge at a higher rate.
Abstract: The storage capacity of NAND Flash has increased by scaling down to smaller cell sizes and using multi-level storage technology, but data reliability is degraded by more severe retention errors. To ensure data reliability, error correction codes (ECC) such as BCH and low-density parity check (LDPC) codes are adopted. However, BCH codes are insufficient when the raw bit error rates (RBER) caused by retention errors are high. As a result, BCH codes are inevitably replaced with LDPC codes, which have stronger error correction capability. Traditional LDPC codes are used to independently correct bit errors in the LSB and MSB pages. Unfortunately, decoding latency in these two pages is significantly unbalanced: MSB pages incur much higher latency due to higher RBER, leading to suboptimal flash read performance. This paper proposes a cooperative error correction scheme, called CooECC, to reduce the LDPC decoding latency of the MSB page in NAND Flash. By exploiting the data error characteristics introduced by retention errors, CooECC integrates the decoding result of the LSB page into the initial information of LDPC decoding for the MSB page, making it more accurate. This in turn enables decoding to converge at a higher rate. Simulation results show that for LDPC schemes with information lengths of 2KB and 4KB, the decoding latency can be reduced by up to 87% and 84%, respectively, when the RBER is as high as 8.0 × 10^-3.
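The intuition behind CooECC can be sketched with the usual 2-bit-per-cell Gray mapping: once the LSB is decoded, only half of the voltage levels remain possible, which sharpens the prior information for MSB decoding. The mapping below is a typical choice, not necessarily the paper's:

```python
# MLC voltage levels 0..3 (low to high) -> (LSB, MSB) under a common
# Gray mapping; adjacent levels differ in exactly one bit.
LEVELS = {0: (1, 1), 1: (1, 0), 2: (0, 0), 3: (0, 1)}

def candidate_levels(lsb):
    """Levels still possible for a cell once its LSB is known; the MSB
    LLR can then be initialized from this reduced set."""
    return [lvl for lvl, (l, _) in LEVELS.items() if l == lsb]

assert candidate_levels(1) == [0, 1]   # LSB = 1: the two low levels
assert candidate_levels(0) == [2, 3]   # LSB = 0: the two high levels
```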

Journal ArticleDOI
TL;DR: As a proof of concept, Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and Reed-Solomon (RS) codes were used for FEC, and the results showed that wireless multipath components significantly improve the performance of FEC.
Abstract: Modern wireless communication systems suffer from phase shifting and, more importantly, from interference caused by multipath propagation. Multipath propagation results in an antenna receiving two or more copies of the signal sequence sent from the same source but delivered via different paths. Multipath components are treated as redundant copies of the original data sequence and are used to improve the performance of forward error correction (FEC) codes without extra redundancy, in order to improve data transmission reliability and increase the bit rate over the wireless communication channel. As a proof of concept, Bose, Ray-Chaudhuri, and Hocquenghem (BCH) and Reed-Solomon (RS) codes were used for FEC and their bit error rate (BER) performances compared. The results showed that the wireless multipath components significantly improve the performance of FEC. Furthermore, FEC codes with low error correction capability that exploit the multipath phenomenon are enhanced to perform better than FEC codes with somewhat higher error correction capability that do not utilise multipath. Consequently, the bit rate is increased and communication reliability is improved without extra redundancy.

Journal ArticleDOI
TL;DR: A novel folding technique for high-speed and low-cost Bose–Chaudhuri–Hocquenghem decoders is presented, introducing a regularly structured GF multiplier that can be efficiently folded to reduce the complexity and the critical delay.
Abstract: In this brief, we present a novel folding technique for high-speed and low-cost Bose–Chaudhuri–Hocquenghem (BCH) decoders. In the conventional BCH decoder, the critical path lies in the Galois-field (GF) multiplier of the key equation solver, where speeding up the critical path is very difficult without a significant area increase. In the proposed work, a regularly structured GF multiplier that can be efficiently folded is introduced to reduce the complexity and the critical delay. Moreover, the conventional global folding scheme can be applied to further reduce the hardware costs. The implementation results show that the proposed folding scheme enhances the area efficiency by 1.73 and 1.9 times in the Digital Video Broadcasting–Satellite–Second Generation system and the storage controller, respectively.
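A bitwise software model of the GF multiplication that dominates this critical path, here for GF(2^4) with primitive polynomial x^4 + x + 1 (illustrative; the paper's field size and multiplier structure differ):

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiply a and b in GF(2^4) defined by x^4 + x + 1 (0b10011):
    shift-and-add (XOR) with modular reduction whenever the running
    product reaches degree 4."""
    r = 0
    while b:
        if b & 1:
            r ^= a           # add current shift of a (carry-less)
        b >>= 1
        a <<= 1
        if a & 0b10000:      # degree reached 4: reduce by the polynomial
            a ^= poly
    return r

# alpha = x = 0b0010; the defining relation gives alpha^4 = alpha + 1.
assert gf16_mul(0b0010, 0b1000) == 0b0011   # x * x^3 = x^4 -> x + 1
```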

Journal ArticleDOI
TL;DR: In this article, the main algebraic and geometric properties of codes defined as modules over skew polynomial rings are studied, and, for module skew codes constructed only with an automorphism, some BCH-type lower bounds on their minimum distance are given.
Abstract: After recalling the definition of some codes as modules over skew polynomial rings, whose multiplication is defined by using an endomorphism and a derivation, and some basic facts about them, in the first part of this paper we study some of their main algebraic and geometric properties. Finally, for module skew codes constructed only with an automorphism, we give some BCH type lower bounds for their minimum distance.

Journal ArticleDOI
TL;DR: This paper discusses the construction of quaternary zero radical (ZR) codes of dimension five with length n≥5 and constructs many maximal-entanglement EAQECCs with very good parameters, which are better than those obtained in the literature.
Abstract: Maximal-entanglement entanglement-assisted quantum error-correcting codes (EAQECCs) can achieve the EA-hashing bound asymptotically, and a higher rate and/or better noise suppression capability may be achieved by exploiting maximal entanglement. In this paper, we discuss the construction of quaternary zero radical (ZR) codes of dimension five with length n≥5. Using the obtained quaternary ZR codes, we construct many maximal-entanglement EAQECCs with very good parameters. Almost all of these EAQECCs are better than those obtained in the literature, and some of them are optimal codes.

Proceedings ArticleDOI
01 Sep 2017
TL;DR: PENCA (programmable encoding architecture) is a new approach to multiple error detection and correction that allows for adaptive error correction based on the quality of the channel, thereby providing a better overhead/performance ratio than methods based on a fixed number of allowed error bits in a symbol tailored to handle worst-case conditions.
Abstract: Industrial manufacturing is increasingly based on groups of robots in production cells. The robots consist of moving, bending and rotating arms with multiple joints. Cables that connect sections of robots undergo heavy stress from stretching and twisting, resulting in wear-out and failure. Replacing cables on robots with wireless communication is therefore an alternative that has been investigated for some time. Unfortunately, communication channels in industrial environments suffer from several adverse effects. First, standard industrial communication networks work on rigid time frames, which limit the allowed latencies in communication systems considerably. Second, multipath propagation and destructive interference make such communication channels sensitive to fading problems. Therefore forward error correction (FEC) that can compensate for massive variations of signal strength becomes a must. On the other hand, forward error correction using known methods such as BCH codes, Reed-Solomon codes, turbo codes and low-density parity checks (LDPC) is not very fast by nature. Codes for single error correction and double error detection (SEC-DED codes) such as the Hamming code and the Hsiao code are fast, but they are not powerful enough to correct multiple bit errors or restore missing symbols, unless they are applied in a step-wise approximation. PENCA (programmable encoding architecture) is a new approach to multiple error detection and correction which is at present based on BCH codes and is reasonably fast thanks to parallel hardware. Furthermore, it allows for adaptive error correction based on the quality of the channel, thereby providing a better overhead/performance ratio than methods that are based on a fixed number of allowed error bits in a symbol, tailored to handle worst-case conditions.
PENCA is currently becoming part of an industrial communication systems developed in the ParSeC project, which is a cooperative effort of industries, universities and research institutes, funded by the German Ministry of Research and Education (BMBF).

Proceedings ArticleDOI
01 Nov 2017
TL;DR: Processing-time measurements showed that Reed-Solomon without scrambling is a better option for security than other error-correcting codes with scrambling, such as BCH, due to the short time it takes for processing.
Abstract: Due to the challenges facing network security, interest in physical layer security has been rising over the last decade. Physical layer security plays a very important role in increasing the security of our networks. Most security algorithms are implemented in the upper layers of the network; physical layer security adds much security to that already present in the upper layers. There has been a lack of implementations of physical layer security algorithms. This paper evaluates error-correcting codes in terms of physical layer security, examining their security in terms of bit error rate and complexity (processing time). It compares the non-systematic scrambling t-error method with t-error-correcting codes without scrambling and analyzes the results. The measured processing times show that Reed-Solomon without scrambling is a better option for security than other error-correcting codes with scrambling, such as BCH, due to the shorter time it takes for processing. This result indicates that Reed-Solomon has the least complexity among the compared error-correcting codes in terms of physical layer security.

Journal ArticleDOI
TL;DR: It is demonstrated, in a 28-nm CMOS process technology, that the introduction of this BCH circuit in an uncoded link leads to energy-per-bit reductions of 25%, even when the impact of code rate on receiver energy dissipation is considered.
Abstract: Since energy dissipation and latency in optical interconnects are of utmost concern, such links are often operated without forward error correction. We propose a low-complexity noniterative two-error correcting Bose–Chaudhuri–Hocquenghem (BCH) decoder circuit that significantly relaxes the stringent optical modulation amplitude requirements on the transmitter, which help to reduce laser and laser driver energy dissipation. We demonstrate, in a 28-nm CMOS process technology, that the introduction of this BCH circuit in an uncoded link leads to energy-per-bit reductions of 25%, even when the impact of code rate on receiver energy dissipation is considered.
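The noniterative two-error correction such a decoder performs can be sketched with Peterson's direct method over a small field: from syndromes S1 and S3 the error locator sigma(x) = x^2 + S1*x + (S3/S1 + S1^2) follows in closed form, and a Chien search finds its roots. A toy GF(2^4) model for a BCH(15, 7) word (all-zero codeword plus up to two bit errors; illustrative, not the paper's 28-nm circuit):

```python
# GF(2^4) log/antilog tables for the primitive polynomial x^4 + x + 1.
EXP, LOG = [0] * 30, [0] * 16
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    LOG[v] = i
    v <<= 1
    if v & 16:
        v ^= 0b10011

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def decode_two_errors(received):
    """received: list of 15 bits (all-zero codeword plus up to 2 errors).
    Returns the set of error positions via Peterson + Chien search."""
    S1 = S3 = 0
    for i, bit in enumerate(received):
        if bit:
            S1 ^= EXP[i]               # r(alpha)
            S3 ^= EXP[(3 * i) % 15]    # r(alpha^3)
    if S1 == 0 and S3 == 0:
        return set()                   # no errors detected
    sigma2 = div(S3, S1) ^ mul(S1, S1)
    # Chien search: test every field element alpha^i against
    # sigma(x) = x^2 + S1*x + sigma2; the roots are the error locators.
    return {i for i in range(15)
            if mul(EXP[i], EXP[i]) ^ mul(S1, EXP[i]) ^ sigma2 == 0}

errors = decode_two_errors([1 if i in (3, 9) else 0 for i in range(15)])
assert errors == {3, 9}
```

The closed-form locator is what makes the hardware noniterative: no Berlekamp-Massey iterations are needed for t = 2.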