
Showing papers on "BCH code published in 2015"


Book
01 Jan 2015
TL;DR: This is the revised edition of Berlekamp's famous book, "Algebraic Coding Theory," originally published in 1968, wherein he introduced several algorithms which have subsequently dominated engineering practice in this field.
Abstract: This is the revised edition of Berlekamp's famous book, "Algebraic Coding Theory," originally published in 1968, wherein he introduced several algorithms which have subsequently dominated engineering practice in this field. One of these is an algorithm for decoding Reed-Solomon and Bose–Chaudhuri–Hocquenghem codes that subsequently became known as the Berlekamp–Massey Algorithm. Another is the Berlekamp algorithm for factoring polynomials over finite fields, whose later extensions and embellishments became widely used in symbolic manipulation systems. Other novel algorithms improved the basic methods for doing various arithmetic operations in finite fields of characteristic two. Other major research contributions in this book included a new class of Lee metric codes, and precise asymptotic results on the number of information symbols in long binary BCH codes. Selected chapters of the book became a standard graduate textbook. Both practicing engineers and scholars will find this book to be of great value.

2,912 citations


Book
26 Mar 2015
TL;DR: This is the revised edition of Berlekamp's famous book, "Algebraic Coding Theory", originally published in 1968, wherein he introduced several algorithms which have subsequently dominated engineering practice in this field.
Abstract: This is the revised edition of Berlekamp's famous book, "Algebraic Coding Theory", originally published in 1968, wherein he introduced several algorithms which have subsequently dominated engineering practice in this field. One of these is an algorithm for decoding Reed-Solomon and Bose–Chaudhuri–Hocquenghem codes that subsequently became known as the Berlekamp–Massey Algorithm. Another is the Berlekamp algorithm for factoring polynomials over finite fields, whose later extensions and embellishments became widely used in symbolic manipulation systems. Other novel algorithms improved the basic methods for doing various arithmetic operations in finite fields of characteristic two. Other major research contributions in this book included a new class of Lee metric codes, and precise asymptotic results on the number of information symbols in long binary BCH codes. Selected chapters of the book became a standard graduate textbook. Both practicing engineers and scholars will find this book to be of great value. Readership: Researchers in coding theory and cryptography, algebra and number theory, and software engineering.

111 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image, and that it combines the advantages and removes the disadvantages of the two transform techniques.
Abstract: In this paper, the effects of different error correction codes on the robustness and imperceptibility of a discrete wavelet transform and singular value decomposition based dual watermarking scheme are investigated. Text and image watermarks are embedded into a cover radiological image for their potential application in secure and compact medical data transmission. Four different error correcting codes, namely Hamming, Bose–Chaudhuri–Hocquenghem (BCH), Reed–Solomon, and a hybrid code (BCH and repetition code), are considered for encoding the text watermark in order to achieve additional robustness for sensitive text data such as the patient identification code. Performance of the proposed algorithm is evaluated against a number of signal processing attacks by varying the watermarking strength and the cover image modalities. The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image. This algorithm combines the advantages and removes the disadvantages of the two transform techniques. Of the three standalone error correcting codes tested, it has been found that Reed–Solomon shows the best performance. Further, a hybrid model concatenating two of the error correcting codes (BCH and repetition code) is implemented, and it is found that the hybrid code achieves better results in terms of robustness. This paper provides a detailed analysis of the obtained experimental results.
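
As a small illustration of the hybrid text-watermark coding described above, the sketch below shows only the repetition-code layer with majority-vote decoding; the BCH layer it would wrap in the paper's hybrid scheme is omitted, and the 8-bit ASCII watermark "ID42" is a made-up stand-in for a patient identification code.

```python
def repeat_encode(bits, r=3):
    """Repetition-code part of the hybrid scheme: repeat each bit r times."""
    return [b for b in bits for _ in range(r)]

def repeat_decode(bits, r=3):
    """Majority-vote decoding of an r-fold repetition code."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

# Toy text watermark (hypothetical patient ID) as an 8-bit ASCII bit string.
wm = "ID42"
bits = [int(b) for c in wm for b in format(ord(c), "08b")]
coded = repeat_encode(bits)           # in the paper this would wrap a BCH-coded stream
coded[5] ^= 1                         # simulate one bit error caused by an attack
assert repeat_decode(coded) == bits   # a single error per group is corrected
```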

103 citations


Journal ArticleDOI
TL;DR: The objective of this paper is to determine the Bose and minimum distances of a class of narrow-sense primitive BCH codes.
Abstract: Cyclic codes are an interesting class of linear codes due to their efficient encoding and decoding algorithms. Bose-Ray-Chaudhuri-Hocquenghem (BCH) codes form a subclass of cyclic codes and are very important in both theory and practice as they have good error-correcting capability and are widely used in communication systems, storage devices, and consumer electronics. However, the dimension and minimum distance of BCH codes are not known in general. The objective of this paper is to determine the Bose and minimum distances of a class of narrow-sense primitive BCH codes.
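
For reference, a narrow-sense primitive BCH code over $\mathrm{GF}(q)$ of length $n = q^m - 1$ with designed distance $\delta$ is the cyclic code whose generator polynomial collects the minimal polynomials of the first $\delta - 1$ powers of a primitive element $\alpha$:

$$ g(x) = \mathrm{lcm}\big(m_{\alpha}(x),\, m_{\alpha^2}(x),\, \ldots,\, m_{\alpha^{\delta-1}}(x)\big), \qquad d \ge \delta \quad \text{(BCH bound)}. $$

The Bose distance studied in the paper is the largest designed distance that yields the same code, so it always lies between $\delta$ and the true minimum distance $d$.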

83 citations


Journal ArticleDOI
TL;DR: The dimension and minimum distances of a subclass of the narrow-sense primitive BCH codes with design distance are studied, and some open problems are proposed in this paper.
Abstract: Because of their efficient encoding and decoding algorithms, cyclic codes—an interesting class of linear codes—are widely used in communication systems, storage devices, and consumer electronics. BCH codes form a special class of cyclic codes, and are usually among the best cyclic codes. A subclass of good BCH codes is the narrow-sense primitive BCH codes. However, the dimension and minimum distance of these codes are not known in general. The main objective of this paper is to study the dimension and minimum distances of a subclass of the narrow-sense primitive BCH codes with design distance $\delta =(q-\ell _{0})q^{m-\ell _{1}-1}-1$ for certain pairs $(\ell _{0}, \ell _{1})$ , where $0 \leq \ell _{0} \leq q-2$ and $0 \leq \ell _{1} \leq m-1$ . The parameters of other related classes of BCH codes are also investigated, and some open problems are proposed in this paper.
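
Since the dimension of a narrow-sense primitive BCH code is $n$ minus the size of the union of the $q$-cyclotomic cosets of $1, \ldots, \delta-1$ modulo $n$, it can be computed directly for small parameters. A minimal Python sketch (the parameters $q=2$, $m=6$, $\delta=7$ are chosen only as an example):

```python
def bch_dimension(q, m, delta):
    """Dimension of the narrow-sense primitive BCH code of length q^m - 1 and
    designed distance delta: n minus the number of exponents covered by the
    q-cyclotomic cosets of 1, ..., delta-1 modulo n."""
    n = q**m - 1
    covered = set()
    for i in range(1, delta):
        j = i % n
        while j not in covered:
            covered.add(j)
            j = (j * q) % n
    return n - len(covered)

# Example: the binary BCH code of length 63 with designed distance 7
# has dimension 45, i.e. parameters [63, 45, >= 7].
print(bch_dimension(2, 6, 7))   # -> 45
```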

75 citations


Proceedings ArticleDOI
15 Apr 2015
TL;DR: A video steganography algorithm with a high embedding payload is proposed based on BCH coding to improve the security of the algorithm, and it is compared to both the Least Significant Bit (LSB) and [1] algorithms.
Abstract: Video steganography has become a popular topic due to the significant growth of video data over the Internet. The performance of any steganography algorithm depends on two factors: embedding efficiency and embedding payload. In this paper, a video steganography algorithm with a high embedding payload is proposed based on BCH coding. To improve the security of the algorithm, the secret message is first encoded by BCH(n, k, t) coding. Then, it is embedded into the discrete wavelet transform (DWT) coefficients of video frames. As the DWT middle and high frequency regions are considered to be less sensitive data, the secret message is embedded only into the middle and high frequency DWT coefficients. The proposed algorithm is tested on two types of videos that contain slow and fast motion objects. The results of the proposed algorithm are compared to both the Least Significant Bit (LSB) and [1] algorithms, and demonstrate better performance for the proposed algorithm. The hiding ratio of the proposed algorithm is approximately 28%, which is evaluated as a high embedding payload with a minimal tradeoff in visual quality. The robustness of the proposed algorithm was tested under various attacks, and the results were consistent.

56 citations


Book
24 Jul 2015
TL;DR: This book discusses the development of decoding algorithms for non-binary LDPC codes and other error-correcting codes, and their VLSI architecture design.
Abstract: Preface List of Figures List of Tables Finite Field Arithmetic Definitions, Properties and Element Representations Finite Field Arithmetic Multiplications Using Basis Representations Inversions Using Basis Representations Mapping Between Finite Field Element Representations Mapping Between Standard Basis and Composite Field Representations Mapping Between Power and Standard Basis Representations Mapping Between Standard and Normal Basis Representations VLSI Architecture Design Fundamentals Definitions and Graph Representation Pipelining and Retiming Parallel Processing and Unfolding Folding Root Computations for Polynomials Over Finite Fields Root Computation for General Polynomials Root Computation for Linearized and Affine Polynomials Root Computation for Polynomials of Degree Two or Three Reed-Solomon Encoder and Hard-Decision and Erasure Decoder Architectures Reed-Solomon Codes Reed-Solomon Encoder Architectures Hard-Decision Reed-Solomon Decoding Algorithms and Architectures Peterson-Gorenstein-Zierler Algorithm Berlekamp-Massey Algorithm Reformulated Inversionless Berlekamp-Massey Algorithm and Architectures Syndrome, Error Location, and Magnitude Computation Architectures Pipelined Decoder Architecture Error-and-Erasure Reed-Solomon Decoders Algebraic Soft-Decision Reed-Solomon Decoder Architectures Algebraic Soft-Decision Decoding Algorithms Re-Encoded Algebraic Soft-Decision Decoder Re-Encoding Algorithms and Architectures Interpolation Algorithms and Architectures Kotter's Interpolation Algorithm and Architectures Lee-O'Sullivan Interpolation Algorithm and Architectures Kotter's and Lee-O'Sullivan Interpolation Comparisons Factorization Algorithm and Architectures Prediction-Based Factorization Architecture Partial-Parallel Factorization Architecture Interpolation-Based Chase and Generalized Minimum Distance Decoders Interpolation-Based Chase Decoder Backward-Forward Interpolation Algorithms and Architectures Eliminated Factorization Polynomial Selection Schemes Chien-Search-Based Codeword Recovery Systematic Re-Encoding Generalized Minimum Distance Decoder Kotter's One-Pass GMD Decoder Interpolation-Based One-Pass GMD Decoder BCH Encoder and Decoder Architectures BCH Codes BCH Encoder Architectures Hard-Decision BCH Decoding Algorithms and Architectures Peterson's Algorithm The Berlekamp's Algorithm and Implementation Architectures 3-Error-Correcting BCH Decoder Architectures Chase BCH Decoder Based on Berlekamp's Algorithm Interpolation-Based Chase BCH Decoder Architectures Binary LDPC Codes and Decoder Architectures LDPC Codes LDPC Decoding Algorithms Belief Propagation Algorithm Min-Sum Algorithm Majority-Logic and Bit-Flipping Algorithms Finite Alphabet Iterative Decoding Algorithm LDPC Decoder Architectures Scheduling Schemes VLSI Architectures for CNUs and VNUs Low-Power LDPC Decoder Design Non-Binary LDPC Decoder Architectures Non-Binary LDPC Codes and Decoding Algorithms Belief Propagation Decoding Algorithms Extended Min-Sum and Min-Max Algorithms Iterative Reliability-Based Majority-Logic Decoding Min-Max Decoder Architectures Forward-Backward Min-Max Check Node Processing Trellis-Based Path-Construction Min-Max Check Node Processing Simplified Min-Max Check Node Processing Syndrome-Based Min-Max Check Node Processing Basis-Construction Min-Max Check Node Processing Variable Node Unit Architectures Overall NB-LDPC Decoder Architectures Extended Min-Sum Decoder Architectures Extended Min-Sum Elementary Step Architecture Trellis-Based Path-Construction Extended 
Min-Sum Check Node Processing Iterative Majority-Logic Decoder Architectures IHRB Decoders for QCNB-LDPC Codes IHRB Decoders for Cyclic NB-LDPC Codes Enhanced IHRB Decoding Scheme and Architectures Bibliography Index

47 citations


Proceedings ArticleDOI
01 May 2015
TL;DR: A novel video steganography algorithm in the wavelet domain based on the KLT tracking algorithm and BCH codes is proposed, which has demonstrated a high embedding efficiency and a high embedding payload.
Abstract: Recently, video steganography has become a popular option for secret data communication. The performance of any steganography algorithm is based on the embedding efficiency, embedding payload, and robustness against attackers. In this paper, we propose a novel video steganography algorithm in the wavelet domain based on the KLT tracking algorithm and BCH codes. The proposed algorithm includes four different phases. First, the secret message is preprocessed, and BCH codes (n, k, t) are applied in order to produce an encoded message. Second, face detection and face tracking algorithms are applied on the cover videos in order to identify the facial regions of interest. Third, the process of embedding the encoded message into the high and middle frequency wavelet coefficients of all facial regions is performed. Fourth, the process of extracting the secret message from the high and middle frequency wavelet coefficients of each RGB component of all facial regions is accomplished. Experimental results of the proposed video steganography algorithm have demonstrated a high embedding efficiency and a high embedding payload.

41 citations


Journal ArticleDOI
TL;DR: The experimental results show that this new robust reversible data hiding algorithm achieves greater robustness, effectively averts intra-frame distortion drift, and maintains good visual quality.

38 citations


Journal ArticleDOI
TL;DR: The experimental results show that this new data hiding algorithm achieves greater robustness, effectively averts intra-frame distortion drift, and maintains high visual quality.

37 citations


Book ChapterDOI
23 Mar 2015
TL;DR: This paper considers, for the first time, the regime of arbitrary positive constant error probability e in combination with unbounded cardinality M of the message space, and proposes a novel constructive method based on symmetries of codes; this leads to an explicit construction based on certain BCH codes that improves the parameters of the polynomial construction, and to an efficient randomized construction of optimal AMD codes.
Abstract: Algebraic manipulation detection (AMD) codes, introduced at EUROCRYPT 2008, may, in some sense, be viewed as keyless combinatorial authentication codes that provide security in the presence of an oblivious, algebraic attacker. Their original applications included robust fuzzy extractors, secure message transmission and robust secret sharing. In recent years, however, a rather diverse array of additional applications in cryptography has emerged. In this paper we consider, for the first time, the regime of arbitrary positive constant error probability e in combination with unbounded cardinality M of the message space. There are several applications where this model makes sense. Adapting a known bound to this regime, it follows that the binary length ρ of the tag satisfies ρ ≥ log log M + Ω_e(1). In this paper, we call AMD codes meeting this lower bound optimal. Known constructions, notably a construction based on dedicated polynomial evaluation codes, are a multiplicative factor of 2 away from being optimal. By a generic enhancement using error-correcting codes, these parameters can be further improved but remain suboptimal. Reaching optimality efficiently turns out to be surprisingly nontrivial. We propose a novel constructive method based on symmetries of codes. This leads to an explicit construction based on certain BCH codes that improves the parameters of the polynomial construction, and to an efficient randomized construction of optimal AMD codes based on certain quasi-cyclic codes. In all our results, the error probability e can be chosen as an arbitrarily small positive real number.

Proceedings ArticleDOI
01 Jan 2015
TL;DR: MATLAB simulation results reveal that the improved PTS technique performs better at reducing PAPR than a conventional OFDM system and existing PTS techniques.
Abstract: Orthogonal frequency division multiplexing (OFDM), also known as a multi-carrier modulation scheme, is an attractive technology for upcoming communication systems, offering higher data rates, higher spectral efficiency, and better quality of service (QoS). In OFDM, the high-rate data stream is first divided into lower-rate data streams, which after modulation are transmitted together on orthogonal sub-carriers. Due to the orthogonal nature of the sub-carriers, the bandwidth is used efficiently and the cost of the OFDM communication system is reduced. A major challenge in OFDM systems is the high peak-to-average power ratio (PAPR), which degrades system efficiency: with a large PAPR, the high power amplifier (HPA) starts to operate in its non-linear region, introducing distortion into the transmitted data. In this paper, a combination of higher-order partitioned partial transmit sequence (PTS) and Bose–Chaudhuri–Hocquenghem (BCH) coding is proposed to diminish the PAPR significantly. The proposed scheme minimizes PAPR by choosing the signal with the lowest PAPR among many candidate signals. At the transmitter side, the scrambling process uses coset leaders of BCH codes, while syndrome decoding is used at the receiver side to recover the transmitted sequence. Finally, MATLAB (Matrix Laboratory) simulation results reveal that the improved PTS technique performs better at reducing PAPR than a conventional OFDM system and existing PTS techniques.
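
The core PTS idea in the abstract, selecting the phase combination with the lowest PAPR over partitioned sub-blocks, can be sketched in a few lines of NumPy. This is plain PTS only; the paper's BCH coset-leader scrambling and syndrome-decoding recovery of the side information are not modelled, and the parameter choices (256 QPSK subcarriers, 4 partitions, phase set {±1, ±j}) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, V = 256, 4                          # subcarriers and PTS partitions (assumed)
phases = np.array([1, -1, 1j, -1j])

# Random QPSK symbols on the subcarriers.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

# Interleaved partitioning into V disjoint sub-blocks, then partial IFFTs.
subblocks = np.zeros((V, N), dtype=complex)
for v in range(V):
    subblocks[v, v::V] = X[v::V]
x_sub = np.fft.ifft(subblocks, axis=1)

# Exhaustive search over phase factors; keep the candidate with the lowest PAPR.
best = None
for b in np.ndindex(*(len(phases),) * V):
    cand = phases[list(b)] @ x_sub     # phase-weighted sum of partial sequences
    if best is None or papr_db(cand) < papr_db(best):
        best = cand

print(f"original PAPR: {papr_db(np.fft.ifft(X)):.2f} dB")
print(f"PTS PAPR:      {papr_db(best):.2f} dB")
```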

Proceedings ArticleDOI
01 Sep 2015
TL;DR: Through some subtle manipulation on solving certain equations over finite fields, a conjecture proposed by Ding and Helleseth in 2013 about a class of optimal ternary cyclic codes C(1,e) for $e = 2(1+3^h)$ with parameters $[3^m-1,\ 3^m-1-2m,\ 4]$ is settled, where m > 1 is an odd integer and $0 \leq h \leq m-1$.
Abstract: Cyclic codes are an important class of linear codes and have been widely used in many areas such as consumer electronics, data storage and communication systems. Let C(1,e) denote the cyclic code with generator polynomial $m_{\alpha}(x)m_{\alpha^e}(x)$, where $\alpha$ is a primitive element of $\mathrm{F}_{3^m}$ and $m_{\alpha^i}(x)$ denotes the minimal polynomial of $\alpha^i$ over $\mathrm{F}_3$ for $1 \leq i \leq 3^m - 1$. In this paper, through some subtle manipulation on solving certain equations over finite fields, a conjecture proposed by Ding and Helleseth in 2013 about a class of optimal ternary cyclic codes C(1,e) for $e = 2(1+3^h)$ with parameters $[3^m-1,\ 3^m-1-2m,\ 4]$ is settled, where m > 1 is an odd integer and $0 \leq h \leq m-1$.

Journal ArticleDOI
TL;DR: A hardware implementation of a pipelined GC decoder is presented and the cell area, cycle counts as well as the timing constraints are investigated and the results are compared to a decoder for long BCH codes with similar error correction performance.
Abstract: This paper proposes a pipelined decoder architecture for generalised concatenated (GC) codes. These codes are constructed from inner binary Bose–Chaudhuri–Hocquenghem (BCH) and outer Reed–Solomon codes. The decoding of the component codes is based on hard decision syndrome decoding algorithms. The concatenated code consists of several small BCH codes. This enables a hardware architecture where the decoding of the component codes is pipelined. A hardware implementation of a GC decoder is presented and the cell area, cycle counts as well as the timing constraints are investigated. The results are compared to a decoder for long BCH codes with similar error correction performance. In comparison, the pipelined GC decoder achieves a higher throughput and has lower area consumption.

Proceedings ArticleDOI
14 Jun 2015
TL;DR: In this article, the authors proposed a rewriting model that combines rewriting and error correction for mitigating the reliability and the endurance problems in flash memory, where only the second write uses WOM codes.
Abstract: This paper constructs WOM codes that combine rewriting and error correction for mitigating the reliability and the endurance problems in flash memory. We consider a rewriting model that is of practical interest to flash applications where only the second write uses WOM codes. Our WOM code construction is based on binary erasure quantization with LDGM codes, where the rewriting uses message passing and has potential to share the efficient hardware implementations with LDPC codes in practice. We show that the coding scheme achieves the capacity of the rewriting model. Extensive simulations show that the rewriting performance of our scheme compares favorably with that of polar WOM codes in the rate region where high rewriting success probability is desired. We further augment our coding schemes with error correction capability. By drawing a connection to the conjugate code pairs studied in the context of quantum error correction, we develop a general framework for constructing error-correction WOM codes. Under this framework, we give an explicit construction of WOM codes whose codewords are contained in BCH codes.
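
For readers unfamiliar with the write-once memory (WOM) setting, the classic Rivest-Shamir code below stores 2 bits in 3 binary cells twice, with the second write only allowed to change 0s to 1s. This is only a textbook illustration of the rewriting constraint; it is not the LDGM/erasure-quantization construction, nor the BCH-based error-correcting WOM codes, described in the abstract.

```python
# First-write representations of the four 2-bit values in three cells.
FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
         (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}
# Second-write representations: the bitwise complements (weight >= 2).
SECOND = {v: tuple(1 - c for c in cells) for v, cells in FIRST.items()}

def wom_decode(cells):
    """Cell weight <= 1 means first-write table, weight >= 2 means second."""
    table = FIRST if sum(cells) <= 1 else SECOND
    return next(v for v, c in table.items() if c == cells)

def wom_write2(cells, value):
    """Second write: may only change 0-cells to 1 (flash-style constraint)."""
    if wom_decode(cells) == value:
        return cells                                 # value unchanged, no write
    new = SECOND[value]
    assert all(n >= c for n, c in zip(new, cells))   # never erases a 1
    return new

state = FIRST[(1, 0)]                  # first write stores the value 10
state = wom_write2(state, (0, 1))      # second write changes it to 01
assert wom_decode(state) == (0, 1)
```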

Journal ArticleDOI
TL;DR: A byte-reconfigurable cost-effective high-throughput QC-LDPC codec design for NAND Flash memory systems that is implemented in TSMC 90 nm technology and can save on-chip memory cost.
Abstract: The reliability of NAND Flash memory deteriorates due to multi-level cell techniques and advanced manufacturing technology. To deal with more errors, LDPC codes show superior performance to conventional BCH codes as the ECC of NAND Flash memory systems. However, LDPC codecs for NAND Flash memory systems face the problems of high redesign effort, high on-chip memory cost, and high-throughput demand. This paper presents a byte-reconfigurable, cost-effective, high-throughput QC-LDPC codec design for NAND Flash memory systems. A reconfigurable codec design is proposed to support various QC-LDPC codes for different Flash memories. To save on-chip memory cost, a shared-memory architecture and a rescheduling architecture are presented for the encoder and decoder, respectively. The shared-memory architecture saves 23% of the encoder area cost, and the rescheduling architecture reduces the decoder area cost by 15%. In addition, the proposed sub-iteration based early termination (SIB-ET) scheme reduces decoding iteration counts by 29.6% compared with the state-of-the-art early termination scheme when the raw BER of the Flash memory is $3\times 10^{-3}$ . Finally, the QC-LDPC codec for NAND Flash memory systems is implemented in TSMC 90 nm technology. The post-layout result shows that the core size is only 6.72 ${\rm mm}^{2}$ at a 222 MHz operating frequency.

Journal ArticleDOI
TL;DR: In this paper, an extension of polar codes with dynamic frozen symbols is proposed, which allows some of the frozen symbols to be data-dependent, and the proposed codes have higher minimum distance than classical polar codes, but still can be efficiently decoded using the successive cancellation algorithm.
Abstract: An extension of polar codes is proposed, which allows some of the frozen symbols, called dynamic frozen symbols, to be data-dependent. A construction of polar codes with dynamic frozen symbols, being subcodes of extended BCH codes, is proposed. The proposed codes have higher minimum distance than classical polar codes, but still can be efficiently decoded using the successive cancellation algorithm and its extensions. The codes with Arikan, extended BCH and Reed-Solomon kernel are considered. The proposed codes are shown to outperform LDPC and turbo codes, as well as polar codes with CRC.
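
To make the "dynamic frozen symbol" idea concrete: in a classical polar code every frozen position carries a fixed value, $u_i = 0$, whereas a dynamic frozen symbol carries a value that is a fixed linear function of earlier input symbols, for example

$$ u_i = \sum_{j < i} v_{i,j}\, u_j \pmod 2, $$

so a successive cancellation decoder can still compute $u_i$ on the fly once $u_1, \ldots, u_{i-1}$ have been decided; the coefficients $v_{i,j}$ are chosen so that the resulting codewords fall inside an extended BCH code. (The exact constraint matrices used in the paper are not reproduced here.)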

Journal ArticleDOI
TL;DR: A modification of the concatenated coding system of Chen et al. is proposed, which improves the error correcting capability in the waterfall region while keeping a low error floor.
Abstract: As adopting a very powerful error-correcting code gradually becomes a strategic demand for the endurance of today's flash memory, LDPC codes have recently been proposed due to their outstanding error correcting capability. However, the error floor phenomenon of LDPC codes might not meet the extremely low error rate requirement of flash memory applications. Thus, concatenation of BCH and LDPC codes, which strikes a balance between superb error correcting capability and a low error floor, becomes an alternative system structure. In this work, a modification of such a concatenated coding system in Chen et al. [IEEE Commun. Lett., vol. 17, no. 5, pp. 980–983, May 2013] is proposed. Compared with the previous concatenated coding system via simulations, our design improves the error correcting capability in the waterfall region while keeping a low error floor.

Journal ArticleDOI
Daesung Kim, Jeongseok Ha
TL;DR: This work proposes a novel design rule for block-wise concatenated Bose-Chaudhuri-Hocquenghem (BC-BCH) codes for storage devices using multi-level per cell (MLC) NAND flash memories, and introduces a novel collaborative decoding algorithm aimed at resolving dominant error patterns associated with the IHDD.
Abstract: In this work, we propose a novel design rule for block-wise concatenated Bose-Chaudhuri-Hocquenghem (BC-BCH) codes for storage devices using multi-level per cell (MLC) NAND flash memories. BC-BCH codes designed in accordance with the proposed design rule are called quasi-primitive BC-BCH codes, in which the constituent BCH codes are deliberately chosen so that their lengths are as close to those of primitive BCH codes as possible. It will be shown that such quasi-primitive BC-BCH codes can achieve significant improvements in error-correcting capability over existing BC-BCH codes when an iterative hard-decision based decoding (IHDD) is assumed. In addition, we propose a novel collaborative decoding algorithm aimed at resolving dominant error patterns associated with the IHDD. Error-rate performances of error-control systems with the proposed quasi-primitive BC-BCH and existing BC-BCH codes are compared. For more comprehensive performance comparisons, systems with a hypothetically long BCH code and a product code are also considered in the comparisons.

Proceedings ArticleDOI
09 Jul 2015
TL;DR: A spread-spectrum watermarking algorithm for embedding a text watermark into digital images in the discrete wavelet transform (DWT) domain is presented, and it is observed that the use of a BCH code improves performance by reducing the bit error rate (BER).
Abstract: This paper presents a spread-spectrum watermarking algorithm for embedding a text watermark into digital images in the discrete wavelet transform (DWT) domain. The algorithm is applied to embed a text file, represented as binary arrays using ASCII code, into a host digital radiological image for potential telemedicine applications. In order to enhance the robustness of text watermarks such as the patient identity code, a BCH (Bose, Ray-Chaudhuri, Hocquenghem) error correcting code (ECC) is applied to the ASCII representation of the text watermark before embedding. Performance of the algorithm is analysed by varying the gain factor, subband decomposition levels, and length of the watermark. Robustness of the scheme is tested against various attacks such as compression, filtering, noise, sharpening, scaling and histogram equalization. Simulation results show that the proposed method achieves imperceptible watermarking for string watermarks. It is also observed that the use of the BCH code improves performance by reducing the bit error rate (BER).
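
A minimal sketch of additive spread-spectrum embedding of ASCII watermark bits into transform coefficients, roughly in the spirit of the scheme above. The BCH encoding step, the DWT itself, and the attack models are omitted; the coefficient array, gain value, and block length are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def ss_embed(coeffs, bits, gain=5.0):
    """Additive spread-spectrum embedding: each bit (mapped to +/-1) modulates
    a pseudo-random +/-1 sequence that is added to one block of coefficients."""
    out = coeffs.copy()
    L = len(coeffs) // len(bits)
    pn = rng.choice([-1.0, 1.0], size=(len(bits), L))
    for i, b in enumerate(bits):
        out[i * L:(i + 1) * L] += gain * (2 * b - 1) * pn[i]
    return out, pn

def ss_extract(coeffs, pn, L):
    """Blind extraction: correlate each block with its PN sequence; the sign
    of the correlation recovers the embedded bit."""
    return [int(np.dot(coeffs[i * L:(i + 1) * L], pn[i]) > 0) for i in range(len(pn))]

# Stand-in for one DWT subband of the host image (random here, for illustration).
coeffs = rng.normal(0.0, 10.0, 1024)
bits = [int(b) for b in format(ord("A"), "08b")]           # one ASCII character
marked, pn = ss_embed(coeffs, bits)
print(ss_extract(marked, pn, 1024 // len(bits)) == bits)   # expect True
```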

Journal ArticleDOI
TL;DR: This paper constructs a decision statistic for SM based on the SU's receiver error count, and evaluates the detection probability of SM in the presence of interference from the PU signal, and derives closed-form formulas for channel utilization and detection delay using two Markov chain models.
Abstract: In-band spectrum sensing (SS) requires that secondary users (SUs) periodically interrupt their communication to detect the emergence of the primary users (PUs) in the channel. A new approach referred to as spectrum monitoring (SM) was proposed by Boyd et al. , which allows the SU to employ its receiver statistics to detect the emergence of the PU during its own communication periods. In this paper, we construct a decision statistic for SM based on the SU's receiver error count. We then evaluate the detection probability of SM in the presence of interference from the PU signal. Next, we derive closed-form formulas for channel utilization and detection delay using two Markov chain models. Upper and lower bounds on channel utilization and detection delay are derived, and an optimization problem is formulated and solved to maximize channel utilization with a constraint on detection delay. Numerical results from analysis are compared with simulation results obtained for a BCH code and a convolutional code, which show the accuracy of the analysis and the significant improvement of a hybrid SM/SS over SS alone.
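
The decision statistic in the paper is built from the SU receiver's error count. As a toy illustration of how such a count-based test behaves, the sketch below assumes the per-bit error probability jumps from p0 to p1 when the PU reappears (both numbers are made up, and the paper's exact statistic, interference model and Markov-chain analysis are not reproduced) and picks the smallest count threshold meeting a 1% false-alarm target.

```python
from math import comb

def binom_sf(k, n, p):
    """P[X > k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

# Hypothetical numbers: n coded bits per block, bit error probability p0 when
# the channel is free and p1 > p0 when the PU signal interferes.
n, p0, p1 = 1023, 1e-3, 2e-2

# Smallest error-count threshold keeping the false-alarm rate below 1%.
thr = next(k for k in range(n + 1) if binom_sf(k, n, p0) <= 0.01)
print("threshold:      ", thr)
print("false alarm:    ", binom_sf(thr, n, p0))
print("detection prob.:", binom_sf(thr, n, p1))
```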

Journal ArticleDOI
TL;DR: The methodology introduced herein offers a new perspective on the joint queueing-coding analysis of finite-state channels with memory, and it is supported by numerical simulations.
Abstract: This paper examines the queueing performance of communication systems that transmit encoded data over unreliable channels. A fading formulation suitable for wireless mobile applications is considered, where errors are caused by a discrete channel with correlated behavior over time. For carefully selected channel models and arrival processes, a tractable Markov structure composed of queue length and channel state is identified. This facilitates the analysis of the stationary behavior of the system, leading to evaluation criteria such as bounds on the probability of the queue exceeding a threshold. Specifically, this paper focuses on system models with scalable arrival profiles, which are based on Poisson processes, and finite-state channels with memory. These assumptions permit the rigorous comparison of system performance for codes with arbitrary block lengths and code rates. Based on the resulting characterizations, it is possible to select the best code parameters for delay-sensitive applications over various channels. Random codes and BCH codes are then employed as a means to study the relationship between code parameter selection and the queueing performance of point-to-point data links. The methodology introduced herein offers a new perspective on the joint queueing–coding analysis of finite-state channels with memory, and it is supported by numerical simulations.


Proceedings ArticleDOI
24 May 2015
TL;DR: The proposed multimode BCH encoder architecture also provides reconfigurable error correction capability for 1 ≤ t_sel ≤ 32, and the experimental results show that, in the case of BCH (8640, 8192, 32) codes, the total area of the SC modules is reduced by 96% compared to the previous re-encoding based SC module design.
Abstract: This paper presents a hybrid multimode Bose-Chaudhuri-Hocquenghem (BCH) encoder for reducing the input length of syndrome calculation (SC) based on a re-encoding approach. In previous re-encoding approaches, a conventional BCH encoder with a long generator polynomial is used as a remainder operator to reduce the input length of SC. However, the input length is still large since the long polynomial is used as the denominator of the remainder operator for re-encoding. In the proposed approach, several minimal polynomials are employed as the denominators of the remainder operators by utilizing the hardware of the hybrid multimode BCH encoder. As a result, the minimum input length for SC can be employed in the SC implementation through the re-encoding scheme, which leads to considerable area and latency reduction in SC module design. The proposed BCH encoder architecture and reduced SC modules are implemented using Samsung 65nm technology. The experimental results show that, in the case of BCH (8640, 8192, 32) codes, the total area of the SC modules is reduced by 96% compared to the previous re-encoding based SC module design, while the proposed multimode BCH encoder architecture also provides reconfigurable error correction capability for 1 ≤ t_sel ≤ 32.
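
The remainder operation that the re-encoding approach relies on is ordinary polynomial division over GF(2), which is exactly what a systematic BCH (or CRC) encoder's LFSR computes. A minimal software sketch with a toy divisor polynomial (not a real BCH generator or one of the minimal polynomials from the paper):

```python
def poly_rem_gf2(data_bits, gen_bits):
    """Remainder of data(x) * x^(deg g) divided by g(x) over GF(2).
    Both polynomials are bit lists, most significant coefficient first.
    This is the remainder a systematic encoder appends as parity."""
    deg = len(gen_bits) - 1
    reg = list(data_bits) + [0] * deg          # data(x) * x^deg
    for i in range(len(data_bits)):
        if reg[i]:
            for j, g in enumerate(gen_bits):
                reg[i + j] ^= g
    return reg[-deg:]                          # parity bits

# Toy example: g(x) = x^4 + x + 1 over GF(2).
g = [1, 0, 0, 1, 1]
msg = [1, 0, 1, 1, 0, 0, 1]
parity = poly_rem_gf2(msg, g)
codeword = msg + parity                        # systematic codeword
assert poly_rem_gf2(codeword, g) == [0] * 4    # codeword is divisible by g(x)
```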

30 Apr 2015
TL;DR: The pixels’ positions of the video frames’ components are randomly permuted by using a private key, and the bits’ locations of the secret message are also permuted using the same private key to protect the message from being read.
Abstract: In this paper, in order to improve the security and efficiency of the steganography algorithm, we propose an efficient video steganography algorithm based on binary BCH codes. First, the pixel positions of the video frame components are randomly permuted using a private key. Moreover, the bit positions of the secret message are also permuted using the same private key. Then, the secret message is encoded by applying BCH codes (n, k, t) and XORed with random numbers before the embedding process, in order to protect the message from being read. The selected embedding area in each Y, U, and V frame component is randomly chosen and differs from frame to frame. The embedding process is achieved by hiding each of the encoded blocks in the 3-2-2 least significant bits (LSB) of the selected YUV pixels. Experimental results have demonstrated that the proposed algorithm has a high embedding efficiency, a high embedding payload, and is resistant against hackers.
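
The 3-2-2 LSB step mentioned above is easy to sketch: each group of 7 message bits goes into the 3 least significant bits of a Y sample and 2 each of the corresponding U and V samples. The sketch below shows only that step; the key-based permutations, BCH encoding and XOR masking from the abstract are omitted, and the toy frame planes and message are made-up data.

```python
import numpy as np

LSB_SPLIT = (3, 2, 2)   # bits hidden per Y, U, V sample (the "3-2-2" scheme)

def embed_322(y, u, v, bits):
    """Hide bits in groups of 7: 3 LSBs of a Y sample, 2 of U, 2 of V."""
    y, u, v = (p.reshape(-1).copy() for p in (y, u, v))
    for i in range(len(bits) // 7):
        group, pos = bits[7 * i:7 * i + 7], 0
        for plane, n in zip((y, u, v), LSB_SPLIT):
            val = int("".join(str(b) for b in group[pos:pos + n]), 2)
            plane[i] = (int(plane[i]) & ~((1 << n) - 1)) | val
            pos += n
    return y, u, v

def extract_322(y, u, v, nbits):
    """Recover nbits bits (a multiple of 7) hidden by embed_322."""
    y, u, v = (p.reshape(-1) for p in (y, u, v))
    bits = []
    for i in range(nbits // 7):
        for plane, n in zip((y, u, v), LSB_SPLIT):
            bits += [int(c) for c in format(int(plane[i]) & ((1 << n) - 1), f"0{n}b")]
    return bits

# Toy frame planes and a 21-bit message, just to exercise the round trip.
rng = np.random.default_rng(0)
y, u, v = (rng.integers(0, 256, 16, dtype=np.uint8) for _ in range(3))
msg = rng.integers(0, 2, 21).tolist()
ye, ue, ve = embed_322(y, u, v, msg)
print(extract_322(ye, ue, ve, 21) == msg)   # expect True
```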

Proceedings ArticleDOI
01 Oct 2015
TL;DR: A novel method for blind recognition of BCH codes is proposed in this paper, based on the Galois Field Fourier Transform of the code polynomial, together with an optimum decision threshold derived under the minimum-error-probability rule.
Abstract: In this paper, a new method for blind recognition of BCH codes from an intercepted sequence of noise-affected codewords is proposed. The proposed method recovers the parameters of a BCH code by finding the roots of its generator polynomial. First, the Galois Field Fourier Transform (GFFT) is computed for each sequence of an estimated length. Then, the positions of spectral components that are zero in all of the sequences' GFFTs are found. If such positions exist, the corresponding estimated length is the code length and the roots of the underlying generator polynomial are found. Furthermore, a theoretical analysis of the proposed method is given in detail, and an optimal threshold is derived to minimize the sum of the false alarm and missed detection probabilities for distinguishing roots from non-roots of the generator polynomial. Simulation results show that the proposed method outperforms previous ones.
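
In the noise-free case the principle is easy to demonstrate: the GFFT component C_j = c(α^j) of a codeword is zero exactly when α^j is a root of the generator polynomial, so intersecting the zero positions over several codewords exposes those roots. A small self-contained sketch over GF(2^4) with the (15, 7) BCH code (the noisy case, length estimation, and the paper's optimal threshold are not modelled):

```python
import random

# GF(2^4) via an exp table, primitive polynomial x^4 + x + 1.
M, PRIM = 4, 0b10011
N = (1 << M) - 1                      # code length 15
EXP = [0] * N
x = 1
for i in range(N):
    EXP[i] = x
    x <<= 1
    if x & (1 << M):
        x ^= PRIM

def eval_at(c, j):
    """Evaluate the binary polynomial c(x) (LSB-first bit list) at alpha^j."""
    acc = 0
    for i, bit in enumerate(c):
        if bit:
            acc ^= EXP[(i * j) % N]
    return acc

def gfft(cw):
    """Galois Field Fourier Transform of a binary word: C_j = c(alpha^j)."""
    return [eval_at(cw, j) for j in range(N)]

def gf2_polymul(a, b):
    """Multiply two GF(2) polynomials given as LSB-first bit lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

# Generator of the (15, 7) double-error-correcting BCH code:
# g(x) = x^8 + x^7 + x^6 + x^4 + 1, LSB-first coefficients below.
G = [1, 0, 0, 0, 1, 0, 1, 1, 1]

random.seed(0)
zeros = set(range(N))
for _ in range(20):                   # 20 noise-free codewords m(x) * g(x)
    msg = [random.randint(0, 1) for _ in range(7)]
    spectrum = gfft(gf2_polymul(msg, G))
    zeros &= {j for j, s in enumerate(spectrum) if s == 0}

# The surviving positions are (with overwhelming probability, exactly) the
# exponents j with g(alpha^j) = 0, i.e. the cosets {1,2,4,8} and {3,6,9,12}.
print(sorted(zeros))
```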

Journal ArticleDOI
TL;DR: This work presents a methodology for deciding the optimal supply voltage with respect to standby power under radiation, and visualizes the methodology under the solar max/min galactic cosmic ray radiation environment of geosynchronous earth orbit and three error correction code (ECC) scenarios: a Hamming code, a double-error-correction (DEC) Bose-Chaudhuri-Hocquenghem (BCH) code, and a triple-error-correction (TEC) BCH code.
Abstract: In static random access memory (SRAM), standby power is the sum of the scrubbing and leakage power. In terrestrial environments, the leakage power is more dominant than the scrubbing power. Hence, the conventional methodology for reducing SRAM standby power is to lower ${V_{DD}}$ to the minimum possible voltage under process, voltage, and temperature variations. However, under severe radiation environments such as space, a high scrubbing rate is indispensable to prevent the accumulation of soft errors, making the scrubbing power a substantial portion of the total standby power. Since the soft-error rate becomes higher with ${V_{DD}}$ scaling, the conventional methodology may not be valid under radiation environments. We present a methodology to decide the optimal supply voltage with respect to standby power under radiation. We visualize our methodology under the solar max/min galactic cosmic ray radiation environment of geosynchronous earth orbit and three error correction code (ECC) scenarios: Hamming code, double-error-correction (DEC) Bose–Chaudhuri–Hocquenghem (BCH) code, and triple-error-correction (TEC) BCH code. In 65 nm CMOS, the Hamming code fails to deliver our target decoded bit-error-rate. Under the other ECCs, the proposed methodology shows that 0.97 V (for DEC BCH) and 0.8 V (for TEC BCH) are optimal. Here, we can obtain 30% (for DEC BCH) and 60% (for TEC BCH) standby power savings compared to the nominal voltage (= 1.2 V), respectively.


Journal ArticleDOI
12 Mar 2015-Entropy
TL;DR: An information hiding method that satisfies the IHC evaluation criteria uses the difference of the frequency coefficients derived from a discrete cosine transform or a discrete wavelet transform to find the best positions in the frequency domains for watermark insertion.
Abstract: In recent years, information hiding and its evaluation criteria have been developed by the IHC (Information Hiding and its Criteria) Committee of Japan. This committee was established in 2011 with the aim of establishing standard evaluation criteria for robust watermarks. In this study, we developed an information hiding method that satisfies the IHC evaluation criteria. The proposed method uses the difference of the frequency coefficients derived from a discrete cosine transform or a discrete wavelet transform. The algorithm employs a statistical analysis to find the best positions in the frequency domains for watermark insertion. In particular, we use the BCH (Bose-Chaudhuri-Hocquenghem) (511,31,109) code to error correct the watermark bits and the BCH (63,16,11) code as the sync signal to withstand JPEG (Joint Photographic Experts Group) compression and cropping attacks. Our experimental results showed that there were no errors in 10 HDTV-size areas after the second decompression. It should be noted that after the second compression, the file size should be less than 1/25 of the original size to satisfy the IHC evaluation criteria.

Proceedings ArticleDOI
03 Dec 2015
TL;DR: The optical transmittance and bit error rate (BER) of focused and collimated laser beams are experimentally examined in an underwater optical wireless communication link with different water types; results show that salt and maalox content decreases the transmittance and that convolutional codes have better BER performance than BCH codes under the same modulation scheme.
Abstract: In this paper, the optical transmittance and bit error rate (BER) of focused and collimated laser beams are experimentally examined in an underwater optical wireless communication link with different water types. The water types used are fresh water, salty water, and their variations with maalox in order to obtain turbid water. In the bit error rate (BER) analysis, on-off keying (OOK) is used together with Bose-Chaudhuri-Hocquenghem (BCH) and convolutional codes. Results show that salt and maalox content decreases the transmittance, that convolutional codes have better BER performance than BCH codes under the same modulation scheme (i.e., OOK), and that focusing improves both the transmittance and the BER performance as compared to collimated beams.