
Showing papers on "BCH code published in 2021"


Proceedings ArticleDOI
Ken R. Duffy
06 Jun 2021
TL;DR: In this paper, a soft-detection variant of Guessing Random Additive Noise Decoding (GRAND) called Ordered Reliability Bits GRAND (ORBGRAND) that can decode any moderate-redundancy block code was proposed.
Abstract: Modern applications are driving demand for ultra-reliable low-latency communications, rekindling interest in the performance of short, high-rate error correcting codes. To that end, here we introduce a soft-detection variant of Guessing Random Additive Noise Decoding (GRAND) called Ordered Reliability Bits GRAND that can decode any moderate redundancy block code. For a code of n bits, it avails of no more than $\lceil \log_2(n) \rceil$ bits of code-book-independent quantized soft detection information per received bit to determine an accurate decoding while retaining the original algorithm's suitability for a highly parallelized implementation in hardware. ORBGRAND is shown to provide similar block error performance for codes of distinct classes (BCH, CA-Polar and RLC) with low complexity, while providing better block error rate performance than CA-SCL, a state-of-the-art soft detection CA-Polar decoder.
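
For readers unfamiliar with the guessing paradigm, the toy Python sketch below illustrates the ORBGRAND ordering idea: bit positions are ranked by reliability, candidate error patterns are tested in increasing logistic weight (the sum of the reliability ranks of the flipped positions), and the first pattern whose removal yields a valid codeword is accepted. The (7,4) Hamming parity-check matrix, the example LLRs, and the brute-force pattern enumeration are illustrative assumptions, not the paper's hardware-oriented pattern generator.

import itertools
import numpy as np

def orbgrand_decode(llr, H, max_score=None):
    """llr: per-bit LLRs; H: binary parity-check matrix (numpy array)."""
    n = len(llr)
    hard = (np.asarray(llr) < 0).astype(int)                 # hard decisions
    rank_of = np.empty(n, dtype=int)
    rank_of[np.argsort(np.abs(llr))] = np.arange(1, n + 1)   # rank 1 = least reliable
    # Enumerate candidate flip sets, scored by logistic weight (sum of ranks).
    patterns = sorted(
        ((sum(rank_of[i] for i in subset), subset)
         for r in range(n + 1)
         for subset in itertools.combinations(range(n), r)),
        key=lambda t: t[0])
    for score, subset in patterns:
        if max_score is not None and score > max_score:
            break
        cand = hard.copy()
        for i in subset:
            cand[i] ^= 1
        if not (H @ cand % 2).any():                         # syndrome check passes
            return cand, score
    return hard, None                                        # no codeword found

# Toy usage with a (7,4) Hamming parity-check matrix (an illustrative choice).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.1, -1.8, 0.3, 1.5, -0.2, 2.7, 1.1])
print(orbgrand_decode(llr, H))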

49 citations


Journal ArticleDOI
TL;DR: This paper settles the long-standing open problem of whether there is an infinite family of linear codes holding an infinite family of 4-designs by presenting an infinite family of BCH codes of length $2^{2m+1}+1$ .
Abstract: The question as to whether there exists an infinite family of near MDS codes holding an infinite family of $t$ -designs for $t\geq 2$ was answered in the recent paper [Infinite families of near MDS codes holding $t$ -designs, IEEE Trans. Inf. Theory 66(9) (2020)], where an infinite family of near MDS codes holding an infinite family of 3-designs and an infinite family of near MDS codes holding an infinite family of 2-designs were presented, but no infinite family of linear codes holding an infinite family of 4-designs was presented. Hence, the question as to whether there is an infinite family of linear codes holding an infinite family of 4-designs had remained open for 71 years. This paper settles this long-standing problem by presenting an infinite family of BCH codes of length $2^{2m+1}+1$ over ${\mathrm {GF}}(2^{2m+1})$ holding an infinite family of 4- $(2^{2m+1}+1, 6, 2^{2m}-4)$ designs. This paper also provides another solution to the first question, as some of the BCH codes presented in this paper are also near MDS. Moreover, an infinite family of linear codes holding the spherical geometry design $S(3, 5, 4^{m}+1)$ is presented. The paper also advances the new direction of searching for $t$ -designs with elementary symmetric polynomials.

46 citations


Journal ArticleDOI
TL;DR: Comparisons with prior state-of-the-art schemes demonstrate that the proposed robust JPEG steganographic algorithm can provide a more robust performance and statistical security.
Abstract: Social networks are everywhere and currently transmit very large volumes of messages. As a result, transmitting secret messages in such an environment is worth researching. However, the images used in transmitting messages are usually compressed with a JPEG compression channel, which is lossy and damages the transmitted data. Therefore, to prevent secret messages from being damaged, a robust JPEG steganographic scheme is urgently needed. In this paper, a secure robust JPEG steganographic scheme based on an autoencoder with adaptive BCH encoding (Bose-Chaudhuri-Hocquenghem encoding) is proposed. In particular, the autoencoder is first pretrained to fit the transformation relationship between the JPEG image before and after compression by the compression channel. In addition, the BCH encoding is adaptively utilized according to the content of the cover image to decrease the error rate of secret message extraction. The DCT (Discrete Cosine Transformation) coefficient adjustment based on practical JPEG channel characteristics further improves the robustness and statistical security. Comparisons with prior state-of-the-art schemes demonstrate that the proposed robust JPEG steganographic algorithm can provide a more robust performance and statistical security.
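
As background for the BCH protection step (a generic sketch only, not the paper's adaptive parameter selection or embedding pipeline), systematic BCH encoding appends parity bits computed as the remainder of a polynomial division over GF(2). The choice of the (15, 7) double-error-correcting code and its generator polynomial below are assumptions made for illustration.

# Minimal systematic encoder for the (15, 7) binary BCH code (t = 2),
# illustrating how redundancy is added to a secret payload before embedding.
# The generator polynomial is the standard one for this code, used here as an
# assumed example rather than the paper's adaptively chosen parameters.

G_POLY = 0b111010001        # g(x) = x^8 + x^7 + x^6 + x^4 + 1, degree 8
N, K = 15, 7
N_PARITY = N - K            # 8 parity bits

def bch_encode_15_7(msg_bits):
    """Systematic encoding: codeword = message || remainder of x^8*m(x) / g(x)."""
    assert len(msg_bits) == K
    # Represent m(x) * x^(n-k) as an integer, MSB = highest-degree coefficient.
    reg = 0
    for b in msg_bits:
        reg = (reg << 1) | b
    reg <<= N_PARITY
    # Polynomial long division over GF(2) (repeated XOR of the generator).
    for shift in range(N - 1, N_PARITY - 1, -1):
        if reg & (1 << shift):
            reg ^= G_POLY << (shift - N_PARITY)
    parity = [(reg >> i) & 1 for i in range(N_PARITY - 1, -1, -1)]
    return list(msg_bits) + parity

print(bch_encode_15_7([1, 0, 1, 1, 0, 0, 1]))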

44 citations


Proceedings ArticleDOI
14 Jun 2021
TL;DR: In this article, the authors show that the CRC is a better short code than either Polar or CA-Polar codes, and that the standard CA-SCL decoder only uses the CRC for error detection and therefore suffers severe performance degradation in short, high rate settings when compared with the performance of the GRAND decoder, which uses all of the CRC bits for error correction.
Abstract: CRC codes have long been adopted in a vast range of applications. The established notion that they are suitable primarily for error detection can be set aside through use of the recently proposed Guessing Random Additive Noise Decoding (GRAND). Hard-detection (GRAND-SOS) and soft-detection (ORBGRAND) variants can decode any short, high-rate block code, making them suitable for error correction of CRC-coded data. When decoded with GRAND, short CRC codes have error correction capability that is at least as good as popular codes such as BCH codes, but with no restriction on either code length or rate. The state-of-the-art CA-Polar codes are concatenated CRC and Polar codes. For error correction, we find that the CRC is a better short code than either Polar or CA-Polar codes. Moreover, the standard CA-SCL decoder only uses the CRC for error detection and therefore suffers severe performance degradation in short, high-rate settings when compared with GRAND, which uses all of the CA-Polar bits for error correction. Using GRAND, existing systems can be upgraded from error detection to low-latency error correction without re-engineering the encoder, and additional applications of CRCs can be found in IoT, Ultra-Reliable Low Latency Communication (URLLC), and beyond. The universality of GRAND, its readily parallelized implementation in hardware, and the good performance of CRCs as codes make their combination a viable solution for low-latency applications.
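
A minimal sketch of the idea of correcting CRC-coded data by noise guessing (hard-input GRAND): error patterns are tested in order of increasing Hamming weight and the first candidate that passes the CRC check is accepted. The CRC-8 polynomial 0x07, the message length, and the flip-count abandonment limit are assumptions; the decoders in the paper (GRAND-SOS, ORBGRAND) use more refined query orders.

import itertools

CRC_POLY = 0x07     # CRC-8, x^8 + x^2 + x + 1 (an assumed example polynomial)
CRC_BITS = 8

def crc_remainder(bits):
    """Bitwise CRC division over a list of 0/1 values (MSB first)."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg & (1 << CRC_BITS):
            reg ^= (1 << CRC_BITS) | CRC_POLY
    return reg

def crc_encode(msg):
    rem = crc_remainder(list(msg) + [0] * CRC_BITS)
    return list(msg) + [(rem >> i) & 1 for i in range(CRC_BITS - 1, -1, -1)]

def grand_crc_decode(received, max_flips=3):
    """Guess noise patterns of increasing Hamming weight; accept on CRC pass."""
    n = len(received)
    for w in range(max_flips + 1):
        for flips in itertools.combinations(range(n), w):
            cand = list(received)
            for i in flips:
                cand[i] ^= 1
            if crc_remainder(cand) == 0:   # valid CRC codeword found
                return cand
    return None                            # abandon: too much noise

msg = [1, 0, 1, 1, 0, 1, 0, 0]
cw = crc_encode(msg)
cw[3] ^= 1                                 # inject a single bit error
print(grand_crc_decode(cw) == crc_encode(msg))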

22 citations


DOI
22 Nov 2021
TL;DR: In this article, the performance of small and medium length quantum LDPC (QLDPC) codes in the depolarizing channel is studied. Only degenerate codes whose maximal stabilizer weight is much smaller than their minimum distance are considered, and it is shown that, with the help of an OSD-like post-processing, the standard belief propagation (BP) decoder can be improved by several orders of magnitude.
Abstract: We study the performance of small and medium length quantum LDPC (QLDPC) codes in the depolarizing channel. Only degenerate codes with the maximal stabilizer weight much smaller than their minimum distance are considered. It is shown that with the help of an OSD-like post-processing the performance of the standard belief propagation (BP) decoder on many QLDPC codes can be improved by several orders of magnitude. Using this new BP-OSD decoder we study the performance of several known classes of degenerate QLDPC codes including the hypergraph product codes, the hyperbicycle codes, the homological product codes, and Haah's cubic codes. We also construct several interesting examples of short generalized bicycle codes. Some of them have an additional property that their syndromes are protected by small BCH codes, which may be useful for fault-tolerant syndrome measurement. We also propose a new large family of QLDPC codes that contains the class of hypergraph product codes, where one of the used parity-check matrices is square. It is shown that in some cases such codes have better performance than the hypergraph product codes. Finally, we demonstrate that the performance of the proposed BP-OSD decoder for some of the constructed codes is better than for a relatively large surface code decoded by a near-optimal decoder.

19 citations


Journal ArticleDOI
TL;DR: BCH codes are among the best practical cyclic codes, widely used in consumer electronics, communication systems, and storage devices; however, not much is known about BCH codes with large minimum distance.
Abstract: BCH codes are among the best practical cyclic codes widely used in consumer electronics, communication systems, and storage devices. However, not much is known about BCH codes with large minimum distance.

11 citations


Journal ArticleDOI
TL;DR: Three low-complexity methods are developed to identify and mitigate the miscorrections in GII decoding by utilizing the nested syndromes, adding one single parity to each sub-codeword, and keeping track of the error locator polynomial degree.
Abstract: The generalized integrated interleaved (GII) codes nest BCH sub-codewords to form codewords of more powerful BCH codes. They can achieve hyper-throughput decoding with excellent error-correcting performance and are among the most suitable codes for next-generation memories. However, the new storage class memories require high code rate and short codeword length. In this case, the sub-codewords have small correction capability. The miscorrections of the sub-codewords lead to much more severe degradation on the GII error-correcting performance compared to the miscorrections of classic BCH codes. This letter investigates the miscorrections in GII decoding. Three low-complexity methods are developed to identify and mitigate the miscorrections by utilizing the nested syndromes, adding one single parity to each sub-codeword, and keeping track of the error locator polynomial degree. Besides, formulas for estimating the miscorrection rates are given. Using the proposed mitigation methods, the actual GII decoding performance becomes very close to that of the case without any miscorrections.
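
Two of the three mitigation ideas can be sketched in isolation (the BCH sub-decoder itself is abstracted away; the interfaces below are assumptions, the nested-syndrome check is omitted, and the degree-tracking test is simplified relative to the letter's method):

def parity_flags_miscorrection(decoded_bits):
    """Single-parity mitigation: each sub-codeword carries one overall parity
    bit, so a decoded word with odd overall parity is flagged as a likely
    miscorrection."""
    return sum(decoded_bits) % 2 != 0

def locator_flags_miscorrection(locator_degree, roots_found, t):
    """Degree-tracking mitigation (simplified): reject a correction whose
    error-locator degree exceeds the current correction capability t, or whose
    Chien search finds fewer roots than the degree (a standard BCH
    decoding-failure check)."""
    return locator_degree > t or roots_found != locator_degree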

9 citations


Journal ArticleDOI
TL;DR: Tables are provided showing that sum-rank BCH codes beat previously known codes for the sum-rank metric for binary $2 \times 2 $ matrices (i.e., codes whose codewords are lists of $2 \times 2 $ binary matrices), for a wide range of list lengths that correspond to the code length.
Abstract: In this work, cyclic-skew-cyclic codes and sum-rank BCH codes are introduced. Cyclic-skew-cyclic codes are characterized as left ideals of a suitable non-commutative finite ring, constructed using skew polynomials on top of polynomials (or vice versa). Single generators of such left ideals are found, and they are used to construct generator matrices of the corresponding codes. The notion of defining set is introduced, using pairs of roots of skew polynomials on top of polynomials. A lower bound (called sum-rank BCH bound) on the minimum sum-rank distance is given for cyclic-skew-cyclic codes whose defining set contains certain consecutive pairs. Sum-rank BCH codes, with prescribed minimum sum-rank distance, are then defined as the largest cyclic-skew-cyclic codes whose defining set contains such consecutive pairs. The defining set of a sum-rank BCH code is described, and a lower bound on its dimension is obtained. Thanks to it, tables are provided showing that sum-rank BCH codes beat previously known codes for the sum-rank metric for binary $2 \times 2 $ matrices (i.e., codes whose codewords are lists of $2 \times 2 $ binary matrices, for a wide range of list lengths that correspond to the code length). Finally, a decoder for sum-rank BCH codes up to half their prescribed distance is obtained.
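
For reference, the sum-rank weight underlying the metric (standard definition, not specific to this paper) is $\mathrm{wt}_{\Sigma R}(x) = \sum_{i=1}^{\ell} \mathrm{rank}\,(M(x_i))$ for a codeword $x = (x_1, \ldots, x_\ell)$ whose blocks $x_i \in \mathbb{F}_{q^m}^{n_i}$ are expanded into $m \times n_i$ matrices $M(x_i)$ over $\mathbb{F}_q$; the sum-rank distance is $d_{\Sigma R}(x, y) = \mathrm{wt}_{\Sigma R}(x - y)$, and the sum-rank BCH bound of the paper lower-bounds the minimum of this distance over the code.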

8 citations


Journal ArticleDOI
TL;DR: This paper presents a tractable channel model for VANET in which system performance degradation due to error is addressed by concatenated Alamouti space-time block coding (ASTBC) and Bose–Chaudhuri–Hocquenghem (BCH) coding.
Abstract: The increase in capacity demand for vehicular communication is generating interest among researchers. The standard spectra allocated to VANET tend to be saturated and are no longer enough for real-time applications. Millimeter-wave is a potential candidate for VANET applications. However, millimeter-wave is susceptible to pathloss and fading, which degrade system performance. Beamforming, multi-input multi-output (MIMO) and diversity techniques are being employed to address throughput, reliability and data rate issues. This paper presents a tractable channel model for VANET in which system performance degradation due to error is addressed by concatenated Alamouti space-time block coding (ASTBC) and Bose–Chaudhuri–Hocquenghem (BCH) coding. Two closed-form approximations of bit error rate (BER), one for BCH in Rayleigh fading and the second for BCH with ASTBC, are derived. These expressions comprise SNR and code rate and can be utilized in designing VANET architectures. The results show that concatenated ASTBC and BCH outperforms the conventional ASTBC in terms of BER. The analytical results are compared with numerical results, thereby showing the accuracy of our closed-form expressions. The performance of the proposed expressions is evaluated using different code rates.

7 citations


Journal ArticleDOI
TL;DR: Euclidean (or Hermitian) self-orthogonal codes are presented and their parameters are determined by investigating the Euclidean hulls of some primitive BCH codes.
Abstract: Self-orthogonal codes are an important type of linear codes due to their wide applications in communication and cryptography. The Euclidean (or Hermitian) hull of a linear code is defined to be the intersection of the code and its Euclidean (or Hermitian) dual. It is clear that the hull is self-orthogonal. The main goal of this paper is to obtain self-orthogonal codes by investigating the hulls. Let $\mathcal {C}_{(r,r^{m}-1,\delta,b)}$ be the primitive BCH code over $\mathbb {F}_{r}$ of length $r^{m}-1$ with designed distance $\delta $ , where $\mathbb {F}_{r}$ is the finite field of order $r$ . In this paper, we will present Euclidean (or Hermitian) self-orthogonal codes and determine their parameters by investigating the Euclidean (or Hermitian) hulls of some primitive BCH codes. Several sufficient and necessary conditions for primitive BCH codes with large Hermitian hulls are developed by presenting lower and upper bounds on their designed distances. Furthermore, some Hermitian self-orthogonal codes are proposed via the hulls of BCH codes and their parameters are also investigated. In addition, we determine the dimensions of the code $\mathcal {C}_{(r,r^{2}-1,\delta,1)}$ and its hull in both Hermitian and Euclidean cases for $2 \le \delta \le r^{2}-1$ . We also present two sufficient and necessary conditions on designed distances such that the hull has the largest dimension.
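
For reference, the hulls referred to above are (standard definitions) $\mathrm{Hull}_E(\mathcal{C}) = \mathcal{C} \cap \mathcal{C}^{\perp}$ for the Euclidean inner product and $\mathrm{Hull}_H(\mathcal{C}) = \mathcal{C} \cap \mathcal{C}^{\perp_H}$ for the Hermitian one; a code is self-orthogonal precisely when $\mathcal{C} \subseteq \mathcal{C}^{\perp}$, i.e., when its hull is the code itself, which is why studying hulls of BCH codes yields the self-orthogonal codes constructed in the paper.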

7 citations


Journal ArticleDOI
TL;DR: This work is the first to leverage the benefits of neural Transformer networks in physical layer communication systems, and presents a data-driven framework for permutation selection, combining domain knowledge with machine learning concepts such as node embedding and self-attention.
Abstract: Error correction codes are an integral part of communication applications and boost the reliability of transmission. The optimal decoding of transmitted codewords is the maximum likelihood rule, which is NP-hard. For practical realizations, suboptimal decoding algorithms are employed; however, the lack of theoretical insights currently impedes the exploitation of the full potential of these algorithms. One key insight is the choice of permutation in permutation decoding. We present a data-driven framework for permutation selection combining domain knowledge with machine learning concepts such as node embedding and self-attention. Significant and consistent improvements in the bit error rate are shown for the simulated Bose Chaudhuri Hocquenghem (BCH) code as compared to the baseline decoders. To the best of our knowledge, this work is the first to leverage the benefits of self-attention networks in physical layer communication systems.

Journal ArticleDOI
TL;DR: This work proposes a soft-input hard-output low-complexity decoding algorithm for the inner Hamming code and demonstrates that the algorithm leads to an efficient hardware design with low silicon area and power dissipation.
Abstract: We focus on a hardware implementation of the concatenated forward error-correction (FEC) decoder defined in 400ZR implementation agreement to provide a throughput of 400 Gbps over fiber-optical communication links. We propose a soft-input hard-output low-complexity decoding algorithm for the inner Hamming code. We demonstrate that the algorithm leads to an efficient hardware design with low silicon area and power dissipation. We then propose a hardware implementation architecture of the outer staircase decoder. It features a highly optimized low-power implementation of Bose-Chaudhuri-Hocquenghem (BCH) component decoders and the staircase decoder memory that can be efficiently accessed by either vertical or horizontal component decoders. Finally, we analyze the hardware implementation of the entire 400ZR decoder and investigate the trade-off in terms of power, area, and speed, that results from the inner/outer decoder concatenation and is dictated by the bit-error rate at the output of the inner decoder.

Journal ArticleDOI
TL;DR: In this paper, a blind reconstruction method of Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes is proposed, which aims to reconstruct the generator polynomials of BCH and RS codes from the intercepted noisy bitstream in non-cooperative environments.
Abstract: In this paper, a blind reconstruction method of Bose-Chaudhuri-Hocquenghem (BCH) and Reed-Solomon (RS) codes is proposed, which aims to reconstruct the generator polynomials of BCH and RS codes from the intercepted noisy bitstream in non-cooperative environments. The previous blind reconstruction methods try to select correct (or noiseless) codewords or extract code information from the received bitstream. However, they need to have sufficiently many correct codewords for successful reconstruction. Therefore, the existing methods are highly dependent on the channel conditions because a sufficient number of correct codewords should be available for achieving good reconstruction performance. To overcome such high dependency on channel conditions, the proposed blind reconstruction method performs error correction of single-error codewords through the proposed single-error correction method (SCM). Then, both the correct codewords and the corrected single-error codewords are utilized for blind reconstruction. Specifically, the correct codewords are selected based on the algebraic property of BCH and RS codes that codeword polynomials have consecutive roots. At the same time, the codewords presumably with a single error are selected and corrected based on the fact that most BCH and RS codeword polynomials with either a single error or double errors do not have consecutive roots. Based on this fact, SCM is devised, which selects and corrects the codewords with a single error with high probability. Finally, it is verified that the proposed blind reconstruction method achieves much better reconstruction performance than the existing best methods while using only one-tenth of the received codewords, even under harsh channel conditions.
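
The consecutive-roots test that drives the codeword selection can be sketched directly: a received word, viewed as a polynomial over a finite field, is accepted as a clean narrow-sense BCH codeword if it vanishes at the 2t consecutive powers of a primitive element. The field GF(2^4) with primitive polynomial $x^4 + x + 1$ and $t = 2$ (i.e., the (15, 7) BCH code) are assumptions; the paper's SCM additionally selects and corrects single-error codewords, which is not shown here.

PRIM_POLY = 0b10011          # x^4 + x + 1
M, T = 4, 2
FIELD_SIZE = 1 << M

# Build exp/log tables for GF(2^4).
EXP = [0] * (2 * FIELD_SIZE)
LOG = [0] * FIELD_SIZE
x = 1
for i in range(FIELD_SIZE - 1):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & FIELD_SIZE:
        x ^= PRIM_POLY
for i in range(FIELD_SIZE - 1, 2 * FIELD_SIZE):
    EXP[i] = EXP[i - (FIELD_SIZE - 1)]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def poly_eval(bits, point):
    """Evaluate r(x) = sum bits[j] * x^j at a GF(2^4) element (Horner's rule)."""
    acc = 0
    for b in reversed(bits):
        acc = gf_mul(acc, point) ^ b
    return acc

def looks_like_clean_codeword(bits):
    """True if r(alpha^i) = 0 for the 2t consecutive powers i = 1..2t."""
    return all(poly_eval(bits, EXP[i]) == 0 for i in range(1, 2 * T + 1))

# Example: the all-zero word trivially passes; a random word almost never does.
print(looks_like_clean_codeword([0] * 15))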


Journal ArticleDOI
TL;DR: In this article, the authors proposed a Perturbed ABP (P-ABP) to incorporate a small number of unstable bits with large log-likelihood-ratio (LLR) values into the sparsification operation of the parity-check matrix.
Abstract: Algebraic codes such as BCH code are receiving renewed interest as their short block lengths and low/no error floors make them attractive for ultra-reliable low-latency communications (URLLC) in 5G wireless networks. This article aims at enhancing the traditional adaptive belief propagation (ABP) decoding, which is a soft-in-soft-out (SISO) decoding for high-density parity-check (HDPC) algebraic codes, such as Reed-Solomon (RS) codes, Bose-Chaudhuri-Hocquenghem (BCH) codes, and product codes. The key idea of traditional ABP is to sparsify certain columns of the parity-check matrix corresponding to the least reliable bits with small log-likelihood-ratio (LLR) values. This sparsification strategy may not be optimal when some bits have large LLR magnitudes but wrong signs. Motivated by this observation, we propose a Perturbed ABP (P-ABP) to incorporate a small number of unstable bits with large LLRs into the sparsification operation of the parity-check matrix. In addition, we propose to apply partial layered scheduling or hybrid dynamic scheduling to further enhance the performance of P-ABP. Simulation results show that our proposed decoding algorithms lead to improved error correction performances and faster convergence rates than the prior-art ABP variants.
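
The sparsification step that both ABP and the proposed P-ABP build on can be sketched as Gaussian elimination over GF(2) that places unit-weight columns at selected bit positions. The sketch below uses only the plain least-|LLR| ordering; P-ABP would additionally include a few high-|LLR| "unstable" bits in the target set. The (7,4) Hamming matrix and LLR values are illustrative assumptions.

import numpy as np

def abp_sparsify(H, llr):
    """Diagonalize H (over GF(2)) against the columns of the least reliable bits."""
    H = H.copy() % 2
    m, n = H.shape
    order = np.argsort(np.abs(llr))     # least reliable positions first
    row = 0
    for col in order:
        if row == m:
            break
        # Find a pivot at or below `row` in this column.
        pivots = np.nonzero(H[row:, col])[0]
        if len(pivots) == 0:
            continue                    # column dependent on previous pivots
        pivot = row + pivots[0]
        H[[row, pivot]] = H[[pivot, row]]
        # Clear the column everywhere else, leaving a single 1 at `row`.
        for r in range(m):
            if r != row and H[r, col]:
                H[r] ^= H[row]
        row += 1
    return H

# Toy usage with the (7,4) Hamming H and arbitrary LLRs (assumptions).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([0.2, -3.0, 0.4, 2.5, -0.1, 1.9, 0.8])
print(abp_sparsify(H, llr))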

Journal ArticleDOI
TL;DR: In this article, a low-density BCH-constraint (LDBCH) code is introduced, which generalizes low density parity-check codes by replacing the single parity check code constraints with the BCH code constraints.
Abstract: In ultra reliable low latency communications, the demanding requirements on high reliability and low latency make it a challenge to design capable channel codes. In this work, a candidate named low-density BCH-constraint (LDBCH) code is introduced, which generalizes low-density parity-check codes by replacing the single parity check code constraints with the BCH code constraints. Specifically, a design methodology is introduced to construct efficient LDBCH codes of short information lengths and low coding rates. Moreover, the message passing decoding of LDBCH is enhanced by max-log approximation and weighted scaling when updating the BCH check nodes. Numerical results show that, with a tolerable increase of decoding complexity, LDBCH codes with the proposed decoding enhancement outperform CA-Polar and 5G LDPC by about 0.3 dB and 0.5 dB in SNR, respectively, for the information length of 104 and the block error rate of $10^{-6}$ over an AWGN channel with QPSK modulation.

Journal ArticleDOI
TL;DR: In this article, a complete and explicit formula is provided for the parameters of EAQECCs coming from any Reed-Solomon code, for the Hermitian metric, and from any BCH code with extension degree 2 and consecutive cyclotomic cosets, for both the Euclidean and the Hermitian metric.
Abstract: Entanglement-assisted quantum error-correcting codes (EAQECCs) constructed from Reed–Solomon codes and BCH codes are considered in this work. A complete and explicit formula is provided for the parameters of EAQECCs coming from any Reed–Solomon code, for the Hermitian metric, and from any BCH code with extension degree 2 and consecutive cyclotomic cosets, for both the Euclidean and the Hermitian metric. The main task in this work is the computation of a completely general formula for c, the minimum number of required maximally entangled quantum states.
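
For context, a commonly used construction (stated here as background, not as the paper's exact derivation) turns a classical $[n, k, d]_{q^{2}}$ code with parity-check matrix $H$ into an EAQECC with parameters $[[n, 2k - n + c, d; c]]_{q}$, where $c = \mathrm{rank}(H H^{\dagger})$ and $H^{\dagger}$ denotes the conjugate transpose of $H$; the paper's contribution is an explicit closed form for this $c$ and the remaining parameters for Reed–Solomon and BCH codes.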

Journal ArticleDOI
TL;DR: In this article, a primitive rateless (PR) code was proposed, which is characterized by the message length and a primitive polynomial over GF(2) and can generate a potentially limitless number of coded symbols.
Abstract: In this paper, we propose primitive rateless (PR) codes. A PR code is characterized by the message length and a primitive polynomial over $\mathbf {GF}(2)$ , which can generate a potentially limitless number of coded symbols. We show that codewords of a PR code truncated at any arbitrary length can be represented as subsequences of a maximum-length sequence ( $m$ -sequence). We characterize the Hamming weight distribution of PR codes and their duals and show that for a properly chosen primitive polynomial, the Hamming weight distribution of the PR code can be well approximated by the truncated binomial distribution. We further find a lower bound on the minimum Hamming weight of PR codes and show that there always exists a PR code that can meet this bound for any desired codeword length. We provide a list of primitive polynomials for message lengths up to 40 and show that the respective PR codes closely meet the Gilbert-Varshamov bound at various rates. Simulation results show that PR codes can achieve similar block error rates as their BCH counterparts at various signal-to-noise ratios (SNRs) and code rates. PR codes are rate-compatible and can generate as many coded symbols as required; thus, demonstrating a truly rateless performance.
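
The link between a PR codeword and an $m$-sequence can be made concrete with a few lines of Python: a Fibonacci LFSR driven by the primitive polynomial generates the maximum-length sequence, and truncations of it supply as many coded symbols as needed. The degree-5 primitive polynomial $x^5 + x^2 + 1$, the seed, and the truncation length are assumptions for illustration.

def m_sequence(taps, degree, length, seed=1):
    """Fibonacci LFSR: `taps` lists the exponents of the feedback polynomial
    (excluding the leading term). Returns `length` output bits."""
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (degree - 1))
    return out

# x^5 + x^2 + 1  ->  feedback taps at exponents 0 and 2 (an assumed example).
print(m_sequence(taps=[0, 2], degree=5, length=31))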

Posted Content
TL;DR: In this paper, narrow-sense antiprimitive BCH codes of length $q+1$ over the Galois field $\mathrm{GF}(q)$ were studied and the duals of these codes were derived, including almost MDS codes.
Abstract: The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-studied subclass of cyclic codes that have found numerous applications in error correction and notably in quantum information processing. A subclass of attractive BCH codes is the narrow-sense BCH codes over the Galois field $\mathrm{GF}(q)$ with length $q+1$, which are closely related to the action of the projective general linear group of degree two on the projective line. This paper aims to study some of the codes within this class and specifically narrow-sense antiprimitive BCH codes (these codes are also linear complementary duals (LCD) codes that have interesting practical recent applications in cryptography, among other benefits). We shall use tools and combine arguments from algebraic coding theory, combinatorial designs, and group theory (group actions, representation theory of finite groups, etc.) to investigate narrow-sense antiprimitive BCH Codes and extend results from the recent literature. Notably, the dimension, the minimum distance of some $q$-ary BCH codes with length $q+1$, and their duals are determined in this paper. The dual codes of the narrow-sense antiprimitive BCH codes derived in this paper include almost MDS codes. Furthermore, the classification of $\mathrm{PGL} (2, p^m)$-invariant codes over $\mathrm{GF} (p^h)$ is completed. As an application of this result, the $p$-ranks of all incidence structures invariant under the projective general linear group $\mathrm{ PGL }(2, p^m)$ are determined. Furthermore, infinite families of narrow-sense BCH codes admitting a $3$-transitive automorphism group are obtained. Via these BCH codes, a coding-theory approach to constructing the Witt spherical geometry designs is presented. The BCH codes proposed in this paper are good candidates for permutation decoding, as they have a relatively large group of automorphisms.

Journal ArticleDOI
TL;DR: This work investigates the performance of a coded FSO communication system under various channel conditions, viz., slow fading and fast fading gamma–gamma channels for different turbulence regimes, obtains the coding gain, and concludes that ECCs alone deteriorate the bit error rate performance in a slow fading channel.
Abstract: Free space optical (FSO) communication is of paramount importance for the design of next-generation 5G+ wireless networks. These are, however, susceptible to degrading effects of atmospheric turbulence. The use of error correcting codes (ECCs) can mitigate the effects of fading caused by atmospheric turbulence. We evaluate the performance of uncoded and coded FSO communication systems, where we use Bose Chaudhuri Hocquenghem (BCH) and low-density parity-check codes for the coded system. Both the systems under consideration are based on intensity modulation and direct detection. We investigate the performance of a coded communication system under various channel conditions, viz., slow fading and fast fading gamma–gamma channels for different turbulence regimes and, subsequently, obtain the coding gain. We derive the expressions of channel capacity for a fast fading channel and outage capacity for a slow fading channel. We conclude that ECCs deteriorate the bit error rate performance in a slow fading channel. However, for this kind of channel, the combination of ECCs with an interleaver reduces the burst errors and improves the performance, which leads to positive coding gain. In a fast fading channel, the coding gain is not large and remains almost the same in different turbulence conditions. The analytical results are supported by a simulation study.

Journal ArticleDOI
TL;DR: This paper proposes to use polar codes with flexible fast-adaptive SCL decoders in Digital Video Broadcasting (DVB) systems to meet the growing demand for higher bitrates and presents the possible performance enhancement of DVB systems in terms of decoding latency and complexity when using optimized polar codes as a Forward Error Correction (FEC) technique.
Abstract: Polar codes are featured by their low encoding/decoding complexity for symmetric binary-input discrete memoryless channels. Recently, flexible generic Successive Cancellation List (SCL) decoders for polar codes were proposed to provide different throughput, latency, and decoding performances. In this paper, we propose to use polar codes with flexible fast-adaptive SCL decoders in Digital Video Broadcasting (DVB) systems to meet the growing demand for higher bitrates. In addition, they can provide more interactive services with less latency and more throughput. First, we start with the construction of polar codes and propose a new mathematical relation to obtain the optimized design point for the polar code. We show that our optimized design point is very close to the one that achieves the minimum Bit Error Rate (BER). Then, we compare the performance of polar and Low-Density Parity Check (LDPC) codes in terms of BER, encoder/decoder latencies, and throughput. The results show that both channel coding techniques have comparable BER. However, polar codes are superior to LDPC in terms of decoding latency and system throughput. Finally, we present the possible performance enhancement of DVB systems in terms of decoding latency and complexity when using optimized polar codes as a Forward Error Correction (FEC) technique instead of the Bose Chaudhuri Hocquenghem (BCH) and LDPC codes that are currently adopted in DVB standards.

Journal ArticleDOI
TL;DR: This letter presents a fast blind recognition method for Bose, Chaudhuri, and Hocquenghem (BCH) code from noisy intercepted bit-streams by using the Galois field Fourier transform to recognize the code parameters without traversing the whole candidate datasets.
Abstract: This letter presents a fast blind recognition method for Bose, Chaudhuri, and Hocquenghem (BCH) codes from noisy intercepted bit-streams. This approach employs the fact that a t-error-correcting BCH code has 2t consecutive roots, and these roots correspond to zero spectral components. Specifically, the Galois field Fourier transform is simplified (S-GFFT) by using the property of conjugate roots. Then, the distribution characteristics of consecutive zero spectra are analyzed. If the consecutive zero spectra satisfy the threshold condition, which is derived from the probability of a zero spectral component, the corresponding estimated length is the code length, and the relevant positions correspond to the consecutive roots of the generator polynomial. Finally, the generator polynomial can be reconstructed based on algebraic theory. Compared to the existing solutions, the computational cost of S-GFFT accounts for at most 60% of that of the GFFT, and the proposed scheme manages to recognize the code parameters without traversing the whole candidate datasets.

Proceedings ArticleDOI
03 Apr 2021
TL;DR: In this article, a multi-bit error detection and correction (MEDC) system based on BCH codes was proposed to improve performance and increase the rate of error detection and correction in the transmitted data.
Abstract: Generally, in every communication medium, error correction codes are employed to correct errors introduced in the data during transmission. The essence of these codes is to add redundant bits using mathematical algorithms at the transmission end and decode the received data to retrieve the original data. This paper presents the design of a Multi-bit Error Detection and Correction (MEDC) system, used in microprocessors along with memory, that performs error detection as well as error correction of data written to and read from memory using Bose, Ray-Chaudhuri, Hocquenghem (BCH) codes. The proposed MEDC system is synthesized and simulated with the Vivado Design Suite 2018.3 and was implemented using a Kintex-7 FPGA board. Compared with well-known error detection codes, the proposed method improves performance and increases the rate of error detection and correction in the transmitted data.

Journal ArticleDOI
TL;DR: In this paper, the authors used the CSS and Steane's constructions to establish quantum error-correcting codes (briefly, QEC codes) from cyclic codes of length $6p^{s}$ over $\mathbb F_{p^{m}}$.
Abstract: In this paper, we use the CSS and Steane’s constructions to establish quantum error-correcting codes (briefly, QEC codes) from cyclic codes of length $6p^{s}$ over $\mathbb F_{p^{m}}$ . We obtain several new classes of QEC codes in the sense that their parameters are different from all the previous constructions. Among them, we identify all quantum MDS (briefly, qMDS) codes, i.e., optimal quantum codes with respect to the quantum Singleton bound. In addition, we construct quantum synchronizable codes (briefly, QSCs) from cyclic codes of length $6p^{s}$ over $\mathbb F_{p^{m}}$ . Furthermore, we give many new QSCs to enrich the variety of available QSCs. Many of them are QSCs with shorter lengths and much larger minimum distances than known non-primitive narrow-sense BCH codes.
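
As background, the CSS construction used here is standard: given classical codes $C_2 \subseteq C_1 \subseteq \mathbb{F}_q^n$ of dimensions $k_1$ and $k_2$, one obtains a quantum code with parameters $[[n, k_1 - k_2, d]]_q$, where $d = \min\{\mathrm{wt}(c) : c \in (C_1 \setminus C_2) \cup (C_2^{\perp} \setminus C_1^{\perp})\}$; in this paper the cyclic codes of length $6p^{s}$ supply the nested pairs $C_2 \subseteq C_1$.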

Proceedings ArticleDOI
01 Oct 2021
TL;DR: In this article, the authors proposed the first hardware architecture for GRAND Markov Order (GRAND-MO), a variant of Guessing Random Additive Noise Decoding (GRAND), which achieves an average throughput of up to 52 Gbps and 64 Gbps for code lengths of 128 and 79, respectively.
Abstract: Guessing Random Additive Noise Decoding (GRAND) is a recently proposed Maximum Likelihood (ML) decoding technique. Irrespective of the structure of the error correcting code, GRAND tries to guess the noise that corrupted the codeword in order to decode any linear error-correcting block code. GRAND Markov Order (GRAND-MO) is a variant of GRAND that is useful for decoding error correcting codes transmitted over communication channels with memory, which are vulnerable to burst noise. Usually, interleavers and de-interleavers are used in communication systems to mitigate the effects of channel memory. Interleaving and de-interleaving introduce undesirable latency, which increases with channel memory. To prevent this added latency penalty, GRAND-MO can be directly used on the hard-demodulated channel signals. This work reports the first GRAND-MO hardware architecture, which achieves an average throughput of up to 52 Gbps and 64 Gbps for code lengths of 128 and 79, respectively. Compared to GRANDAB, the hard-input variant of GRAND, the proposed architecture achieves a 3 dB gain in decoding performance for a target FER of $10^{-5}$. Similarly, comparing the GRAND-MO decoder with a decoder tailored for a (79,64) BCH code showed that the proposed architecture achieves 33% higher worst case throughput and a 2 dB gain in decoding performance.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed fast and efficient hardware architectures for a 2D Bose-Chaudhuri-Hocquenghem (BCH) code with a quasi-cyclic burst error correction capability of $t\times t$, in the frequency domain for data storage applications.
Abstract: We propose fast and efficient hardware architectures for a 2-D Bose–Chaudhuri–Hocquenghem (BCH) code of size $n\times n$ , with a quasi-cyclic burst error correction capability of $t\times t$ , in the frequency domain for data storage applications. A fully parallel encoder with the ability to produce an output every clock cycle was designed. Using conjugate class properties of finite fields, the algorithmic complexity of the encoder was significantly reduced, leading to a reduction in the number of gates by about 94% of the brute-force implementation per 2-D inverse discrete finite field Fourier transform (IDFFFT) point for a $15\times 15$ , $t=2$ 2-D BCH code. We also designed a pipelined, low-latency decoder for the above encoder. The algorithmic complexity of various pipeline stages of the decoder was reduced significantly using finite field properties, reducing the space complexity of the entire decoder. For a particular case of $n=15$ and $t=2$ , the architectures were implemented on a Kintex 7 KC-705 field-programmable gate array (FPGA) kit, giving high throughputs of 22.5 and 5.6 Gb/s at 100 MHz for the encoder and decoder, respectively.

Journal ArticleDOI
TL;DR: The results showed that using BCH codes in UFMC enhances BER performance while decreasing both PAPR and OOBE values compared with the OFDM system.
Abstract: Nowadays, the fifth generation (5G) wireless network is considered one of the most important research topics in the wireless industry, and it will substitute the fourth generation (4G) in several aspects. Although the robustness of the orthogonal frequency division multiplexing (OFDM) system against channel delays is the reason behind using it in LTE/LTE-Advanced, it suffers from a high peak-to-average power ratio (PAPR) and out-of-band side lobes. Therefore, the universal filtered multi-carrier (UFMC) technique is considered a new modulation scheme for 5G wireless communication systems to overcome the common OFDM demerits. In contrast, to achieve reliable data transmission in digital communication systems, error correcting codes are considered essential over noisy channels. In this paper, a BCH code has been used for the UFMC system over AWGN. The results showed that using BCH codes in UFMC contributes to enhancing BER performance while decreasing both PAPR and OOBE values compared with the conventional OFDM system.

Journal ArticleDOI
TL;DR: This brief places non-binary LDPC codes as excellent candidates for future versions of the telecommand uplink standard by proposing a highly efficient decoding architecture and proving the feasibility of NB-LDPC coding for space TC applications.
Abstract: In the framework of error correction in space telecommand (TC) links, the Consultative Committee for Space Data Systems (CCSDS) currently recommends short block-length BCH and binary low-density parity-check (LDPC) codes. Other alternatives have been discarded due to their high decoding complexity, such as non-binary LDPC (NB-LDPC) codes. NB-LDPC codes perform better than their binary counterparts over AWGN and jamming channels, making them great candidates for space communications. We show the feasibility of NB-LDPC coding for space TC applications by proposing a highly efficient decoding architecture. The proposed decoder is implemented for a (128,64) NB-LDPC code over GF(16) and the design is particularized for a space-certified Virtex-5QV FPGA. The results prove that NB-LDPC coding is an alternative that outperforms the standardized binary LDPC, with a coding gain of 0.7 dB at a reasonable implementation cost. Given that the maximum rate for TC recommended by the CCSDS is 2 Mbps, the proposed architecture achieves a throughput of 2.03 Mbps using only 9615 LUTs and 5637 FFs (no dedicated memories are used). In addition, this architecture is suitable for any regular (2,4) NB-LDPC (128,64) code over GF(16) independently of the H matrix, allowing flexibility in the choice of the code. This brief places NB-LDPC codes as excellent candidates for future versions of the telecommand uplink standard.

Journal ArticleDOI
TL;DR: This letter mainly determines the maximum designed distance of Hermitian dual-containing Bose-Chaudhuri-Hocquenghem (BCH) codes of length $n$ over $\Bbb F_{q^{2}}$ and obtains the dimensions of some non-primitive BCH codes.
Abstract: Let $q$ be a prime power and $m\ge 4$ an even integer. Suppose that $n = \frac {q^{2m}-1}{a}$ , where $a\ge 2$ is a positive integer and $m$ is the multiplicative order of $q^{2}$ modulo $n$ . This letter mainly determines the maximum designed distance of Hermitian dual-containing Bose-Chaudhuri-Hocquenghem (BCH) codes of length $n$ over $\Bbb F_{q^{2}}$ . Our results show that the designed distances of the non-primitive BCH codes in this letter are larger. Moreover, we obtain the dimensions of some non-primitive BCH codes.

Journal ArticleDOI
04 Jun 2021-Sensors
TL;DR: In this paper, the authors proposed a method to identify the types of FEC codes based on a Recurrent Neural Network (RNN) under the condition of non-cooperative communication.
Abstract: Forward error correction (FEC) coding is the most common form of channel coding, and recognizing the coding type is an important issue in non-cooperative communication. At present, the recognition of FEC codes is mainly concentrated in the field of semi-blind identification with known types of codes. However, the receiver cannot know the type of channel coding in advance in non-cooperative systems such as cognitive radio and remote sensing of communication. Therefore, it is important to recognize the error-correcting encoding type with no prior information. In this paper, we propose a novel method to identify the types of FEC codes based on a Recurrent Neural Network (RNN) under the condition of non-cooperative communication. The algorithm classifies the input data into Bose-Chaudhuri-Hocquenghem (BCH) codes, Low-density Parity-check (LDPC) codes, Turbo codes and convolutional codes. To train the RNN model for better performance, the weight initialization method is optimized, improving the network performance. The experimental results indicate that the average recognition rate of this model is 99% when the signal-to-noise ratio (SNR) ranges from 0 dB to 10 dB, which is in line with the requirements of engineering practice under the condition of non-cooperative communication. Moreover, the comparison of different parameters and models shows the effectiveness and practicability of the proposed algorithm.
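
A minimal sketch of an RNN classifier of the kind described: an LSTM reads a (soft or hard) bit stream and outputs one of four coding-type classes. The layer sizes, sequence length, and use of PyTorch are assumptions; the paper's exact architecture and its optimized weight initialization are not reproduced here.

import torch
import torch.nn as nn

CLASSES = ["BCH", "LDPC", "Turbo", "Convolutional"]

class CodeTypeRNN(nn.Module):
    def __init__(self, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, len(CLASSES))

    def forward(self, x):               # x: (batch, seq_len, 1)
        _, (h, _) = self.lstm(x)
        return self.fc(h[-1])           # logits over the four code types

# Toy forward pass on random "received bits" (stand-in for real training data).
model = CodeTypeRNN()
bits = torch.randint(0, 2, (8, 512, 1)).float()
print(model(bits).shape)                # torch.Size([8, 4])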