
Showing papers on "Error detection and correction" published in 2014


01 Jan 2014
TL;DR: In this paper, all error-detecting and error-correcting mechanisms are studied, and the best mechanism is selected on the basis of accuracy, complexity, and power consumption.
Abstract: In most communication systems, convolutional encoders are used, and AWGN introduces errors during transmission. In this paper, all error-detecting and error-correcting mechanisms are studied, and the best mechanism is selected on the basis of accuracy, complexity, and power consumption. Error detection and correction mechanisms are vital, and numerous techniques exist for reducing the effect of bit errors and trying to ensure that the receiver eventually gets an error-free version of the packet. One way to protect memories against MCUs as well as SEUs is to make use of advanced error-detecting and error-correcting codes that can correct more than one error per word. There should be a tradeoff between hardware complexity and power consumption in the decoder. Index Terms: error-correcting codes, error-detecting codes, Hamming codes, block codes. INTRODUCTION: Error detection and correction mechanisms are vital, and numerous techniques exist for reducing the effect of bit errors and trying to ensure that the receiver eventually gets an error-free version of the packet. The major techniques used are error detection with Automatic Repeat Request, Forward Error Correction, and hybrid forms. Forward Error Correction is the method of transmitting error correction information along with the message. To prevent soft errors from causing data corruption, memories are typically protected with error correction codes. Error-correcting codes (ECCs) are commonly used to protect against soft errors and thereby enhance system reliability and data integrity. Single-error-correcting, double-error-detecting codes are used for this purpose; these codes are able to correct single-bit errors and detect double-bit errors in a codeword. The code is often designed first with the goal of minimizing the gap from Shannon capacity and attaining the target error probability. ECC protects against undetected data corruption and is used in computers where such corruption is unacceptable, for example in some scientific and financial computing applications and in file servers. ECC also reduces the number of crashes, which are particularly unacceptable in multi-user server applications and maximum-availability systems. Error detection is most commonly realized using a suitable hash function (or checksum algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify the delivered message by recomputing the tag and comparing it with the one provided.
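
The tag-recompute-and-compare idea in the last two sentences is easy to make concrete; the following minimal Python sketch (illustrative only, using CRC-32 as the checksum, which is an assumption rather than anything from the paper) shows a receiver detecting a corrupted packet:

import zlib

def encode(message: bytes) -> bytes:
    # Append a fixed-length CRC-32 tag to the message.
    return message + zlib.crc32(message).to_bytes(4, "big")

def verify(packet: bytes) -> bool:
    # Recompute the tag over the payload and compare it with the received tag.
    payload, tag = packet[:-4], packet[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == tag

packet = encode(b"hello")
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]   # flip one bit of the payload
print(verify(packet), verify(corrupted))              # True False

Detection alone only tells the receiver to request a retransmission (ARQ); the forward-error-correction codes discussed above add enough redundancy to repair errors without a retransmission.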

309 citations


Journal ArticleDOI
TL;DR: High-fidelity parity detection of two code qubits via measurement of a third syndrome qubit is demonstrated and a measurement tomography protocol is developed to fully characterize this parity readout.
Abstract: Quantum error correction protocols aim at protecting quantum information from corruption due to decoherence and imperfect control. Using three superconducting transmon qubits, Chow et al. demonstrate necessary elements for the implementation of the surface error correction code on a two-dimensional lattice.

297 citations


Journal ArticleDOI
TL;DR: A half-quadratic (HQ) framework to solve the robust sparse representation problem is developed, and it is shown that the ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation.
Abstract: Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they are either for error correction or for error detection, and learning a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework is applicable to performing both error correction and error detection. More specifically, by using the additive form of HQ, we propose an ℓ1-regularized error correction method by iteratively recovering corrupted data from errors incurred by noises and outliers; by using the multiplicative form of HQ, we propose an ℓ1-regularized error detection method by learning from uncorrupted data iteratively. We also show that the ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
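
The soft-thresholding function mentioned in the abstract is the proximal operator of the ℓ1 penalty; a minimal sketch (variable names are assumptions, not the authors' code) is:

import numpy as np

def soft_threshold(x: np.ndarray, lam: float) -> np.ndarray:
    # Proximal operator of lam * ||x||_1: shrink every entry toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Entries smaller than lam in magnitude are zeroed; larger ones are shrunk by lam.
print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))   # [-1.5 -0.  0.  1. ]

Iterating this shrinkage on the residual between the data and the sparse reconstruction is the basic mechanism behind the ℓ1-regularized error correction described above.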

257 citations


Journal ArticleDOI
TL;DR: The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the SC algorithm for efficient decoding, and state-of-the-art decoding algorithms, such as the BP and some generalized SC decoding, are explained in a broad framework.
Abstract: Polar codes represent an emerging class of error-correcting codes with the power to approach the capacity of a discrete memoryless channel. This overview article aims to illustrate their principle, generation, and decoding techniques. Unlike the traditional capacity-approaching coding strategy that tries to make codes as random as possible, polar codes follow a different philosophy, also originated by Shannon, by creating a jointly typical set. Channel polarization, a concept central to polar codes, is intuitively elaborated by a Matthew effect in the digital world, followed by a detailed overview of construction methods for polar encoding. The butterfly structure of polar codes introduces correlation among source bits, justifying the use of the SC algorithm for efficient decoding. The SC decoding technique is investigated from conceptual and practical viewpoints. State-of-the-art decoding algorithms, such as BP and some generalized SC decoding, are also explained in a broad framework. Simulation results show that the performance of polar codes concatenated with CRC codes can outperform that of turbo or LDPC codes. Some promising research directions in practical scenarios are also discussed at the end.
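
For readers unfamiliar with the butterfly structure mentioned above, polar encoding is simply multiplication by the Kronecker power of a 2x2 kernel over GF(2); the sketch below is illustrative only and omits the bit-reversal permutation and the frozen-bit selection that the construction methods in the article address:

import numpy as np

KERNEL = np.array([[1, 0], [1, 1]], dtype=np.uint8)

def polar_encode(u: np.ndarray) -> np.ndarray:
    # Build the generator as a Kronecker power of the kernel, then encode over GF(2).
    G = KERNEL
    while G.shape[0] < len(u):
        G = np.kron(G, KERNEL)
    return (u @ G) % 2

# Encode a length-8 input; in a real polar code the unreliable (frozen) positions of u are fixed to 0.
print(polar_encode(np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)))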

207 citations


Journal ArticleDOI
TL;DR: An implementable sensing protocol is developed that incorporates error correction, and it is shown that measurement precision can be enhanced for both one-directional and general noise.
Abstract: The signal-to-noise ratio of quantum sensing protocols scales with the square root of the coherence time. Thus, increasing this time is a key goal in the field. By utilizing quantum error correction, we present a novel way of prolonging such coherence times beyond the fundamental limits of current techniques. We develop an implementable sensing protocol that incorporates error correction, and discuss the characteristics of these protocols in different noise and measurement scenarios. We examine the use of entangled versus unentangled states, and error correction's reach of the Heisenberg limit. The effects of error correction on coherence times are calculated, and we show that measurement precision can be enhanced for both one-directional and general noise.
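
The scaling in the first sentence can be written compactly (notation assumed here, not taken from the paper): $\mathrm{SNR} \propto \sqrt{T_{\mathrm{coh}}}$, so a protocol that prolongs the coherence time $T_{\mathrm{coh}}$ by a factor $k$, as the error-corrected scheme above aims to do, improves the signal-to-noise ratio by $\sqrt{k}$.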

178 citations


Journal ArticleDOI
TL;DR: A novel reformulation for the last stage of SC decoding is presented, which leads to a significant reduction in critical path and hardware complexity and allows 2 bits to be decoded simultaneously instead of 1 bit in the new decoder.
Abstract: Polar codes have emerged as important error correction codes due to their capacity-achieving property. The successive cancellation (SC) algorithm is viewed as a good candidate for hardware design of polar decoders due to its low complexity. However, for (n, k) polar codes, the long latency of the SC algorithm, (2n-2) cycles, is a bottleneck for designing high-throughput polar decoders. In this paper, we present a novel reformulation for the last stage of SC decoding. The proposed reformulation leads to two benefits. First, the critical path and hardware complexity in the last stage of the SC algorithm are significantly reduced. Second, 2 bits can be decoded simultaneously instead of 1 bit. As a result, this new decoder, referred to as the 2b-SC decoder, reduces latency from (2n-2) to (1.5n-2) without performance loss. Additionally, overlapped-scheduling, precomputation, and look-ahead techniques are used to design two additional decoders, referred to as the 2b-SC-Overlapped-scheduling decoder and the 2b-SC-Precomputation decoder, respectively. All three architectures offer significant advantages with respect to throughput and hardware efficiency. Compared to the prior least-latency SC decoder, the 2b-SC-Precomputation decoder has 25% less latency. Synthesis results show that the proposed (1024, 512) 2b-SC-Precomputation decoder can achieve at least a 4-times increase in throughput and a 40% increase in hardware efficiency.
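
A quick sanity check of the quoted latency figures (the cycle counts are from the abstract; the percentage is simple arithmetic):

n = 1024                          # block length of the (1024, 512) code
sc_latency  = 2 * n - 2           # conventional SC decoder: 2046 cycles
two_bit_sc  = 3 * n // 2 - 2      # 2b-SC decoder: 1534 cycles
print(sc_latency, two_bit_sc, 1 - two_bit_sc / sc_latency)   # 2046 1534 ~0.25

i.e., the last-stage reformulation by itself removes about a quarter of the conventional SC decoder's cycles.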

109 citations


Patent
18 Feb 2014
TL;DR: In this article, a read operation is performed with respect to a particular group of memory cells of a memory device and, if the read operation results in an uncorrectable error, it is determined whether to retire the group of memory cells in response to the status of an indicator corresponding to that group.
Abstract: The present disclosure includes methods, devices, and systems for error detection/correction based memory management. One embodiment includes performing a read operation with respect to a particular group of memory cells of a memory device and, if the read operation results in an uncorrectable error, determining whether to retire the particular group of memory cells in response to a status of an indicator corresponding to the particular group of memory cells, wherein the status of the indicator indicates whether the particular group of memory cells has a previous uncorrectable error associated therewith.
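
The retire-or-flag decision described in the claim can be sketched in a few lines (hypothetical names and data structures, not the patent's implementation):

# Per-group indicator: has this group of memory cells had a previous uncorrectable error?
uncorrectable_seen = {}

def handle_read(group_id: int, read_ok: bool) -> str:
    if read_ok:
        return "ok"                                # no uncorrectable error on this read
    if uncorrectable_seen.get(group_id, False):
        return "retire group"                      # repeated uncorrectable error: retire
    uncorrectable_seen[group_id] = True            # first uncorrectable error: set the indicator
    return "flag group, keep in service"

print(handle_read(7, read_ok=False))   # flag group, keep in service
print(handle_read(7, read_ok=False))   # retire group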

104 citations


Journal ArticleDOI
TL;DR: This paper gives a tutorial-style introduction to one class of commonly used codes, namely low-density parity-check (LDPC) codes, discusses new developments such as convolutional LDPC codes, and shows how they can be employed as potential candidates for future optical communication systems.
Abstract: Since the introduction of coherent transponders, forward error correction based on soft decision has become established in optical communication. In this paper, we give a tutorial-style introduction to one class of commonly used codes, namely low-density parity-check (LDPC) codes. We also discuss new developments such as convolutional LDPC codes and show how they can be employed as potential candidates for future optical communication systems.
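
To make the parity-check structure concrete for readers new to LDPC codes, the snippet below runs the basic syndrome test (H applied to the codeword equals zero over GF(2)) on a toy parity-check matrix; a (7, 4) Hamming matrix is used purely for illustration and is not an LDPC code from the paper:

import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(c: np.ndarray) -> np.ndarray:
    # A codeword satisfies every parity check, i.e. H @ c = 0 over GF(2).
    return (H @ c) % 2

c = np.array([1, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
print(syndrome(c))        # [0 0 0]  -> consistent with the code
c[2] ^= 1                 # flip one bit
print(syndrome(c))        # nonzero  -> the parity checks flag the error

Soft-decision LDPC decoding, as discussed in the paper, iteratively exchanges reliability messages between these check equations and the received bits instead of making a single hard decision.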

93 citations


Proceedings ArticleDOI
18 Oct 2014
TL;DR: These are the first explicit codes of constant rate in the C-split-state model for any C = o(n) that do not rely on any unproven assumptions; the error in the explicit non-malleable codes constructed in the bit-tampering model by Cheraghchi and Guruswami is also improved.
Abstract: Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs [1] as an elegant generalization of the classical notions of error detection, where the corruption of a codeword is viewed as a tampering function acting on it. Informally, a non-malleable code with respect to a family of tampering functions F consists of a randomized encoding function Enc and a deterministic decoding function Dec such that for any m, Dec(Enc(m)) = m. Further, for any tampering function f ∈ F and any message m, Dec(f(Enc(m))) is either m or is ε-close to a distribution D_f independent of m, where ε is called the error. Of particular importance are non-malleable codes in the C-split-state model. In this model, the codeword is partitioned into C equal-sized blocks and the tampering function family consists of functions (f_1, …, f_C) such that f_i acts on the i-th block. For C = 1 there cannot exist non-malleable codes. For C = 2, the best known explicit construction is by Aggarwal, Dodis and Lovett [2], who achieve rate = Ω(n^{-6/7}) and error = 2^{-Ω(n^{1/7})}, where n is the block length of the code. In our main result, we construct efficient non-malleable codes in the C-split-state model for C = 10 that achieve constant rate and error = 2^{-Ω(n)}. These are the first explicit codes of constant rate in the C-split-state model for any C = o(n) that do not rely on any unproven assumptions. We also improve the error in the explicit non-malleable codes constructed in the bit-tampering model by Cheraghchi and Guruswami [3]. Our constructions use an elegant connection found between seedless non-malleable extractors and non-malleable codes by Cheraghchi and Guruswami [4]. We explicitly construct such seedless non-malleable extractors for 10 independent sources and deduce our results on non-malleable codes based on this connection. Our constructions of extractors use encodings and a new variant of the sum-product theorem.
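
Restating the abstract's requirement in display form (standard notation, not copied from the paper):

\mathrm{Dec}(\mathrm{Enc}(m)) = m \quad \text{for all } m, \qquad
\mathrm{Dec}(f(\mathrm{Enc}(m))) = m \ \text{ or is } \varepsilon\text{-close to } D_f \quad \text{for all } f \in \mathcal{F},

where the distribution D_f may depend on the tampering function f but not on the message m, and ε is the error of the code.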

88 citations


Journal ArticleDOI
TL;DR: This paper addresses the prediction of error floors of low-density parity-check codes transmitted over the additive white Gaussian noise channel with modifications that account for the behavior of a nonsaturating decoder and presents the resulting error floor estimates for the Margulis code.
Abstract: This paper addresses the prediction of error floors of low-density parity-check codes transmitted over the additive white Gaussian noise channel. Using a linear state-space model to estimate the behavior of the sum-product algorithm (SPA) decoder in the vicinity of trapping sets (TSs), we study the performance of the SPA decoder in the log-likelihood ratio (LLR) domain as a function of the LLR saturation level. When applied to several widely studied codes, the model accurately predicts a significant decrease in the error floor as the saturation level is allowed to increase. For nonsaturating decoders, however, we find that the state-space model breaks down after a small number of iterations due to the strong correlation of LLR messages. We then revisit Richardson's importance-sampling methodology for estimating error floors due to TSs when those floors are too low for Monte Carlo simulation. We propose modifications that account for the behavior of a nonsaturating decoder and present the resulting error floor estimates for the Margulis code. These estimates are much lower, significantly steeper, and more sensitive to iteration count than those previously reported.

83 citations


Proceedings ArticleDOI
01 Dec 2014
TL;DR: The proposed rate-matching system is combined with channel interleaving and a bit-mapping procedure that preserves the polarization of the rate-compatible polar code family over bit-interleaved coded modulation systems.
Abstract: A design of rate-compatible polar codes suitable for HARQ communications is proposed in this paper. An important feature of the proposed design is that the puncturing order is chosen with low complexity on a base code of short length, which is then further polarized to the desired length. A practical rate-matching system that has the flexibility to choose any desired rate through puncturing or repetition while preserving the polarization is suggested. The proposed rate-matching system is combined with channel interleaving and a bit-mapping procedure that preserves the polarization of the rate-compatible polar code family over bit-interleaved coded modulation systems. Simulation results on AWGN and fast fading channels with different modulation orders show the robustness of the proposed rate-compatible polar code in both Chase combining and incremental redundancy HARQ communications.

Journal ArticleDOI
TL;DR: Two renormalization group decoders for qudit codes are introduced, their error correction thresholds and efficiency are estimated, and a comparative analysis of the performance of both approaches to error correction of qudit toric codes is provided.
Abstract: Qudit toric codes are a natural higher-dimensional generalization of the well-studied qubit toric code. However, standard methods for error correction of the qubit toric code are not applicable to them. Novel decoders are needed. In this paper we introduce two renormalization group decoders for qudit codes and analyse their error correction thresholds and efficiency. The first decoder is a generalization of a ‘hard-decisions’ decoder due to Bravyi and Haah (arXiv:1112.3252). We modify this decoder to overcome a percolation effect which limits its threshold performance for many-level quantum systems. The second decoder is a generalization of a ‘soft-decisions’ decoder due to Poulin and Duclos-Cianci (2010 Phys. Rev. Lett. 104 050504), with a small cell size to optimize the efficiency of implementation in the high-dimensional case. In each case, we estimate thresholds for the uncorrelated bit-flip error model and provide a comparative analysis of the performance of both these approaches to error correction of qudit toric codes.

Journal ArticleDOI
TL;DR: The theory of entanglement-assisted quantum error-correcting (EAQEC) codes was developed in this article, which is a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to preshared Entanglement.
Abstract: We develop the theory of entanglement-assisted quantum error-correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to preshared entanglement. Conventional stabilizer codes are equivalent to self-orthogonal symplectic codes. In contrast, EAQEC codes do not require self-orthogonality, which greatly simplifies their construction. We show how any classical binary or quaternary block code can be made into an EAQEC code. We provide a table of best known EAQEC codes with code length up to 10. With the self-orthogonality constraint removed, we see that the distance of an EAQEC code can be better than any standard quantum error-correcting code with the same fixed net yield. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes, which assume a subset of the qubits are noiseless. We also give an alternative construction of EAQEC codes by making classical entanglement-assisted codes coherent.

Journal ArticleDOI
TL;DR: An MCMC algorithm is presented that achieves significantly lower logical error rates than MWPM at the cost of a runtime complexity increased by a factor O(L^2); for depolarizing noise with error rate p, the logical error rate is suppressed as O((p/3)^{L/2}), an exponential improvement over all previously existing efficient algorithms.
Abstract: Minimum-weight perfect matching (MWPM) has been the primary classical algorithm for error correction in the surface code, since it is of low runtime complexity and achieves relatively low logical error rates [Phys. Rev. Lett. 108, 180501 (2012)]. A Markov chain Monte Carlo (MCMC) algorithm [Phys. Rev. Lett. 109, 160503 (2012)] is able to achieve lower logical error rates and higher thresholds than MWPM, but requires a classical runtime complexity which is super-polynomial in L, the linear size of the code. In this work we present an MCMC algorithm that achieves significantly lower logical error rates than MWPM at the cost of a runtime complexity increased by a factor O(L^2). This advantage is due to taking correlations between bit- and phase-flip errors (as they appear, for example, in depolarizing noise) as well as entropic factors (i.e., the numbers of likely error paths in different equivalence classes) into account. For depolarizing noise with error rate p, we present an efficient algorithm for which the logical error rate is suppressed as O((p/3)^{L/2}) as p → 0, an exponential improvement over all previously existing efficient algorithms. Our algorithm allows for tradeoffs between runtime and achieved logical error rates as well as for parallelization, and can also be used for correction in the case of imperfect stabilizer measurements.

Journal ArticleDOI
TL;DR: In this article, a framework for classifying rank-metric and matrix codes based on their structure and distance properties has been proposed, and the set of equivalence maps that fix the prominent class of rank-metric codes known as Gabidulin codes has been characterized.
Abstract: For a growing number of applications, such as cellular, peer-to-peer, and sensor networks, efficient error-free transmission of data through a network is essential. Toward this end, Kotter and Kschischang propose the use of subspace codes to provide error correction in the network coding context. The primary construction for subspace codes is the lifting of rank-metric or matrix codes, a process that preserves the structural and distance properties of the underlying code. Thus, to characterize the structure and error-correcting capability of these subspace codes, it is valuable to perform such a characterization of the underlying rank-metric and matrix codes. This paper lays a foundation for this analysis through a framework for classifying rank-metric and matrix codes based on their structure and distance properties. To enable this classification, we extend work by Berger on equivalence for rank-metric codes to define a notion of equivalence for matrix codes, and we characterize the group structure of the collection of maps that preserve such equivalence. We then compare the notions of equivalence for these two related types of codes and show that matrix equivalence is strictly more general than rank-metric equivalence. Finally, we characterize the set of equivalence maps that fix the prominent class of rank-metric codes known as Gabidulin codes. In particular, we give a complete characterization of the rank-metric automorphism group of Gabidulin codes, correcting work by Berger, and give a partial characterization of the matrix-automorphism group of the expanded matrix codes that arise from Gabidulin codes.

Journal ArticleDOI
TL;DR: It is shown that, for a large range of performance metrics, the data transmission efficiency of the ARQ schemes is determined by a set of parameters which are scheme-dependent and not metric-dependent.
Abstract: This paper investigates the performance of multiple-input-multiple-output (MIMO) systems in the presence of automatic repeat request (ARQ) feedback. We show that, for a large range of performance metrics, the data transmission efficiency of the ARQ schemes is determined by a set of parameters which are scheme-dependent and not metric-dependent. Then, the results are used to study different aspects of MIMO-ARQ such as the effect of nonlinear power amplifiers, large-scale MIMO-ARQ, adaptive power allocation and different data communication models. The results, which are valid for various forward and feedback channel models, show the efficiency of the MIMO-ARQ techniques in different conditions.

Journal ArticleDOI
TL;DR: This paper proposes a parameterization-based method that allows (semi-)closed-form expressions, linking optimized throughput, optimal rate, and mean SNR, to be derived for any ARQ and repetition redundancy-HARQ method even when a non-parameterized closed-form does not exist.
Abstract: This paper examines throughput performance, and its optimization, for lossless and truncated automatic repeat request (ARQ) schemes in Gaussian block fading channels. Specifically, ARQ, repetition redundancy, and in part also incremental redundancy-hybrid ARQ, are considered with various diversity schemes. We propose a parameterization-based method that allows (semi-)closed-form expressions, linking optimized throughput, optimal rate, and mean SNR, to be derived for any ARQ and repetition redundancy-HARQ method even when a non-parameterized closed-form does not exist. We derive numerous throughput and optimal throughput expressions for various ARQ schemes and diversity scenarios, potentially useful for benchmarking purposes or as design guidelines.

Proceedings ArticleDOI
06 May 2014
TL;DR: It is shown that the combination of PUFs with repetition code approaches is not without risk and must be approached carefully, and a conservative estimation of entropy loss based on the theoretical work of fuzzy extractors is recommended.
Abstract: One of the promising usages of Physically Unclonable Functions (PUFs) is to generate cryptographic keys from PUFs for secure storage of key material. This usage has attractive properties such as physical unclonability and enhanced resistance against hardware attacks. In order to extract a reliable cryptographic key from a noisy PUF response a fuzzy extractor is used to convert non-uniform random PUF responses into nearly uniform randomness. Bosch et al. in 2008 proposed a fuzzy extractor suitable for efficient hardware implementation using two-stage concatenated codes, where the inner stage is a conventional error correcting code and the outer stage is a repetition code. In this paper we show that the combination of PUFs with repetition code approaches is not without risk and must be approached carefully. For example, PUFs with min-entropy lower than 66% may yield zero leftover entropy in the generated key for some repetition code configurations. In addition, we find that many of the fuzzy extractor designs in the literature are too optimistic with respect to entropy estimation. For high security applications, we recommend a conservative estimation of entropy loss based on the theoretical work of fuzzy extractors and present parameters for generating 128-bit keys from memory based PUFs.
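
As background for the repetition-code stage discussed above, its decoder is just a majority vote over the noisy copies of each PUF response bit; a minimal sketch (the repetition length 5 is an arbitrary illustrative choice, not a parameter from the paper) is:

def repetition_decode(copies: list) -> int:
    # Majority vote over the n noisy copies of one response bit.
    return int(sum(copies) > len(copies) / 2)

# A 5-fold repetition corrects up to two flipped copies of the stored bit.
print(repetition_decode([1, 1, 0, 1, 0]))   # 1

The risk highlighted by the paper lies exactly in this stage: when the underlying PUF bits are biased (min-entropy below roughly 66%), the helper data of some repetition-code configurations can leak essentially all of that entropy, leaving zero leftover entropy in the derived key.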

Journal ArticleDOI
TL;DR: A rate-0.96 (68254, 65536) shortened Euclidean geometry low-density parity-check code and its VLSI implementation for high-throughput NAND Flash memory systems is presented and compared with a BCH (Bose-Chaudhuri-Hocquenghem) decoding circuit showing comparable error-correcting performance and throughput.
Abstract: The reliability of data stored in high-density Flash memory devices tends to decrease rapidly because of the reduced cell size and multilevel cell technology. Soft-decision error correction algorithms that use multiple-precision sensing for reading memory can solve this problem; however, they require very complex hardware for high-throughput decoding. In this paper, we present a rate-0.96 (68254, 65536) shortened Euclidean geometry low-density parity-check code and its VLSI implementation for high-throughput NAND Flash memory systems. The design employs the normalized a posteriori probability (APP)-based algorithm, serial schedule, and conditional update, which lead to simple functional units, halved decoding iterations, and low-power consumption, respectively. A pipelined-parallel architecture is adopted for high-throughput decoding, and memory-reduction techniques are employed to minimize the chip size. The proposed decoder is implemented in 0.13-μm CMOS technology, and the chip size and energy consumption of the decoder are compared with those of a BCH (Bose-Chaudhuri-Hocquenghem) decoding circuit showing comparable error-correcting performance and throughput.

Journal ArticleDOI
14 Jun 2014
TL;DR: It is found that the metrics considered and their various linear combinations are unable to adequately predict an instruction's vulnerability to SDCs, further motivating the use of Relyzer+GangES style techniques as valuable solutions for the hardware error resiliency evaluation problem.
Abstract: As technology scales, the hardware reliability challenge affects a broad computing market, rendering traditional redundancy based solutions too expensive. Software anomaly based hardware error detection has emerged as a low cost reliability solution, but suffers from Silent Data Corruptions (SDCs). It is crucial to accurately evaluate SDC rates and identify SDC producing software locations to develop software-centric low-cost hardware resiliency solutions. A recent tool, called Relyzer, systematically analyzes an entire application's resiliency to single bit soft-errors using a small set of carefully selected error injection sites. Relyzer provides a practical resiliency evaluation mechanism but still requires significant evaluation time, most of which is spent on error simulations. This paper presents a new technique called GangES (Gang Error Simulator) that aims to reduce error simulation time. GangES observes that a set or gang of error simulations that result in the same intermediate execution state (after their error injections) will produce the same error outcome; therefore, only one simulation of the gang needs to be completed, resulting in significant overall savings in error simulation time. GangES leverages program structure to carefully select when to compare simulations and what state to compare. For our workloads, GangES saves 57% of the total error simulation time with an overhead of just 1.6%. This paper also explores pure program analyses based techniques that could obviate the need for tools such as GangES altogether. The availability of Relyzer+GangES allows us to perform a detailed evaluation of such techniques. We evaluate the accuracy of several previously proposed program metrics. We find that the metrics we considered and their various linear combinations are unable to adequately predict an instruction's vulnerability to SDCs, further motivating the use of Relyzer+GangES style techniques as valuable solutions for the hardware error resiliency evaluation problem.

Journal ArticleDOI
TL;DR: Low-weight channel codes can be used to reduce the CER without compromising, or even while increasing, the achievable information rate, especially for the hard-receiver architecture; it is shown that there is an optimal code weight for which the information rate is maximized.

Journal Article
TL;DR: The first explicit constant-rate non-malleable codes in the C-split-state model for any C = o(n) are constructed, improving upon the C = 2 construction of Aggarwal, Dodis and Lovett discussed by the authors.
Abstract: Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs [1] as an elegant generalization of the classical notions of error detection, where the corruption of a codeword is viewed as a tampering function acting on it. Informally, a non-malleable code with respect to a family of tampering functions F consists of a randomized encoding function Enc and a deterministic decoding function Dec such that for any m, Dec(Enc(m)) = m. Further, for any tampering function f ∈ F and any message m, Dec(f(Enc(m))) is either m or is ε-close to a distribution D_f independent of m, where ε is called the error. Of particular importance are non-malleable codes in the C-split-state model. In this model, the codeword is partitioned into C equal-sized blocks and the tampering function family consists of functions (f_1, …, f_C) such that f_i acts on the i-th block. For C = 1 there cannot exist non-malleable codes. For C = 2, the best known explicit construction is by Aggarwal, Dodis and Lovett [2], who achieve rate = Ω(n^{-6/7}) and error = 2^{-Ω(n^{1/7})}, where n is the block length of the code. In our main result, we construct efficient non-malleable codes in the C-split-state model for C = 10 that achieve constant rate and error = 2^{-Ω(n)}. These are the first explicit codes of constant rate in the C-split-state model for any C = o(n) that do not rely on any unproven assumptions. We also improve the error in the explicit non-malleable codes constructed in the bit-tampering model by Cheraghchi and Guruswami [3]. Our constructions use an elegant connection found between seedless non-malleable extractors and non-malleable codes by Cheraghchi and Guruswami [4]. We explicitly construct such seedless non-malleable extractors for 10 independent sources and deduce our results on non-malleable codes based on this connection. Our constructions of extractors use encodings and a new variant of the sum-product theorem.

Journal ArticleDOI
TL;DR: By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations for the newly-standardized SHA-3 are realized.
Abstract: The secure hash algorithm (SHA)-3 was selected in 2012 and will be used to provide security to any application which requires hashing, pseudo-random number generation, and integrity checking. This algorithm was selected based on various benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in the hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to have acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity recomputing with rotated operands-based scheme which is a step forward toward reducing the hardware overhead of the proposed error detection approach. Moreover, we perform injection-based fault simulations and show that an error coverage of close to 100% is derived. Furthermore, we have designed the proposed scheme and, through ASIC analysis, it is shown that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations for the newly-standardized SHA-3 are realized.
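
The recomputing-with-rotated-operands principle can be illustrated on a toy round built only from bitwise operations, which commute with rotation; the word size, rotation amount, and round function below are stand-ins and deliberately much simpler than Keccak or the scheme proposed in the paper:

MASK = 0xFFFFFFFF

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (32 - r))) & MASK

def rotr(x: int, r: int) -> int:
    return ((x >> r) | (x << (32 - r))) & MASK

def toy_round(a: int, b: int) -> int:
    # Only bitwise operations, so toy_round(rotl(a, r), rotl(b, r)) == rotl(toy_round(a, b), r).
    return ((a ^ b) & ~(a & b)) & MASK

def checked_round(a: int, b: int, r: int = 7) -> int:
    normal = toy_round(a, b)
    # Recompute on rotated operands and rotate the result back; a mismatch
    # indicates a transient fault in one of the two computations.
    redundant = rotr(toy_round(rotl(a, r), rotl(b, r)), r)
    if normal != redundant:
        raise RuntimeError("transient error detected")
    return normal

print(hex(checked_round(0x12345678, 0x0F0F0F0F)))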

Journal ArticleDOI
TL;DR: This study suggests that degenerate stabilizer codes and self-complementary nonadditive codes are especially suitable for the error correction of the GAD noise model.
Abstract: We present analytic estimates of the performances of various approximate quantum error-correction schemes for the generalized amplitude-damping (GAD) qubit channel. Specifically, we consider both stabilizer and nonadditive quantum codes. The performance of such error-correcting schemes is quantified by means of the entanglement fidelity as a function of the damping probability and the nonzero environmental temperature. The recovery scheme employed throughout our work applies, in principle, to arbitrary quantum codes and is the analog of the perfect Knill-Laflamme recovery scheme adapted to the approximate quantum error-correction framework for the GAD error model. We also analytically recover and/or clarify some previously known numerical results in the limiting case of vanishing temperature of the environment, the well-known traditional amplitude-damping channel. In addition, our study suggests that degenerate stabilizer codes and self-complementary nonadditive codes are especially suitable for the error correction of the GAD noise model. Finally, comparing the properly normalized entanglement fidelities of the best performant stabilizer and nonadditive codes characterized by the same length, we show that nonadditive codes outperform stabilizer codes not only in terms of encoded dimension but also in terms of entanglement fidelity.

Proceedings ArticleDOI
18 Oct 2014
TL;DR: In this paper, the authors proposed a coding scheme for the standard setting which performs optimally in all three measures: maximum tolerable error rate, communication complexity, and computational complexity.
Abstract: We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any n-round interactive protocol using N rounds over an adversarial channel that corrupts up to ρN transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate ρ, communication complexity N, and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate ρ < 1/4 with a linear N = Θ(n) communication complexity. This improves over prior results [1]–[4] which each performed well in two of these measures. We also give results for other settings of interest, namely, the first computationally and communication efficient schemes that tolerate ρ < 2/7 adaptively, ρ < 1/3 if only one party is required to decode, and ρ < 1/2 if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near-linear computational and communication complexity. These results are obtained via two techniques: we give a general black-box reduction which reduces unique decoding, in various settings, to list decoding. We also show how to boost the computational and communication efficiency of any list decoder to become near-linear.

Journal ArticleDOI
TL;DR: A novel decimal matrix code (DMC) based on divide-symbol is proposed, which uses a decimal algorithm to obtain the maximum error detection capability, and the encoder-reuse technique (ERT) is proposed to minimize the area overhead of extra circuits without disturbing the whole encoding and decoding processes.
Abstract: Transient multiple cell upsets (MCUs) are becoming major issues in the reliability of memories exposed to radiation environments. To prevent MCUs from causing data corruption, more complex error correction codes (ECCs) are widely used to protect memory, but the main problem is that they would require higher delay overhead. Recently, matrix codes (MCs) based on Hamming codes have been proposed for memory protection. The main issue is that they are double error correction codes and their error correction capabilities are not improved in all cases. In this paper, a novel decimal matrix code (DMC) based on divide-symbol is proposed to enhance memory reliability with lower delay overhead. The proposed DMC utilizes a decimal algorithm to obtain the maximum error detection capability. Moreover, the encoder-reuse technique (ERT) is proposed to minimize the area overhead of extra circuits without disturbing the whole encoding and decoding processes. ERT uses the DMC encoder itself as part of the decoder. The proposed DMC is compared to well-known codes such as the existing Hamming, MC, and punctured difference set (PDS) codes. The obtained results show that the mean time to failure (MTTF) of the proposed scheme is 452.9%, 154.6%, and 122.6% of that of Hamming, MC, and PDS, respectively. At the same time, the delay overhead of the proposed scheme is 73.1%, 69.0%, and 26.2% of that of Hamming, MC, and PDS, respectively. The only drawback of the proposed scheme is that it requires more redundant bits for memory protection.

Proceedings ArticleDOI
24 Mar 2014
TL;DR: A detailed analysis demonstrates that the reduction in nonvolatility requirements afforded by strong error correction translates to significantly lower area for the memory array compared to simpler ECC schemes, even when accounting for the increased overhead of error correction.
Abstract: STT-MRAMs are prone to data corruption due to inadvertent bit flips. Traditional methods enhance robustness at the cost of area/energy by using larger cell sizes to improve the thermal stability of the MTJ cells. This paper employs multibit error correction with DRAM-style refreshing to mitigate errors and provides a methodology for determining the optimal level of correction. A detailed analysis demonstrates that the reduction in nonvolatility requirements afforded by strong error correction translates to significantly lower area for the memory array compared to simpler ECC schemes, even when accounting for the increased overhead of error correction.

Journal ArticleDOI
TL;DR: The proposed error-control systems achieve good tradeoffs between error performance and complexity as compared to traditional schemes and are also very favorable for implementation.
Abstract: In this work, we consider high-rate error-control systems for storage devices using multi-level per cell (MLC) NAND flash memories. Aiming at achieving a strong error-correcting capability, we propose error-control systems using block-wise parallel/serial concatenations of short Bose-Chaudhuri-Hocquenghem (BCH) codes with two iterative decoding strategies, namely, iterative hard-decision decoding (IHDD) and iterative reliability based decoding (IRBD). It will be shown that a simple but very efficient IRBD is possible by taking advantage of a unique feature of the block-wise concatenation. For tractable performance analysis and design of IHDD and IRBD at very low error rates, we derive semi-analytic approaches. The proposed error-control systems are compared with various error-control systems with well-known coding schemes such as a product code, multiple BCH codes, a single long BCH code, and low-density parity-check codes in terms of page error rates, which confirms our claim: the proposed error-control systems achieve good tradeoffs between error performance and complexity as compared to the traditional schemes and are also very favorable for implementation.

Book
02 May 2014
TL;DR: This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes.
Abstract: This book describes the fundamentals of cryptographic primitives based on quasi-cyclic low-density parity-check (QC-LDPC) codes, with a special focus on the use of these codes in public-key cryptosystems derived from the McEliece and Niederreiter schemes. In the first part of the book, the main characteristics of QC-LDPC codes are reviewed, and several techniques for their design are presented, while tools for assessing the error correction performance of these codes are also described. Some families of QC-LDPC codes that are best suited for use in cryptography are also presented. The second part of the book focuses on the McEliece and Niederreiter cryptosystems, both in their original forms and in some subsequent variants. The applicability of QC-LDPC codes in these frameworks is investigated by means of theoretical analyses and numerical tools, in order to assess their benefits and drawbacks in terms of system efficiency and security. Several examples of QC-LDPC code-based public key cryptosystems are presented, and their advantages over classical solutions are highlighted. The possibility of also using QC-LDPC codes in symmetric encryption schemes and digital signature algorithms is also briefly examined.

Patent
Ravi H. Motwani, Kiran Pangal
23 Sep 2014
TL;DR: In this article, a controller comprises logic to receive a read request from a host device to read a line of data from the memory device, wherein the data is spread across a plurality of dies and comprises an error correction code (ECC) spread across the plurality (N) of dies.
Abstract: Apparatus, systems, and methods for a recovery algorithm in memory are described. In one embodiment, a controller comprises logic to receive a read request from a host device to read a line of data from the memory device, wherein the data is spread across a plurality (N) of dies and comprises an error correction code (ECC) spread across the plurality (N) of dies, retrieve the line of data from the memory device, perform an error correction code (ECC) check on the line of data retrieved from the memory device, and invoke a recovery algorithm in response to an error in the ECC check on the line of data retrieved from the memory device. Other embodiments are also disclosed and claimed.
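
The claimed control flow, retrieve the line, run the ECC check, and invoke recovery on failure, can be sketched as follows (the parity check and recovery stub are placeholders; the patent does not specify these details):

def ecc_check(line):
    # Placeholder check standing in for the ECC spread across the dies.
    return sum(line) % 2 == 0

def recover(line):
    # Placeholder for the recovery algorithm invoked after an ECC error.
    return "recovered line"

def read_line(line):
    # Return the line if the ECC check passes; otherwise fall back to recovery.
    return line if ecc_check(line) else recover(line)

print(read_line([3, 5, 7, 1]))   # check passes -> line returned as-is
print(read_line([3, 5, 7, 2]))   # check fails  -> recovery algorithm invoked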