
Showing papers on "Error detection and correction" published in 2010


Journal ArticleDOI
TL;DR: The role of forward error correction has become of critical importance in fiber optic communications, as backbone networks increase in speed to 40 and 100 Gb/s, particularly as poor optical-signal-to-noise environments are encountered.
Abstract: The role of forward error correction has become of critical importance in fiber optic communications, as backbone networks increase in speed to 40 and 100 Gb/s, particularly as poor optical-signal-to-noise environments are encountered. Such environments become more commonplace in higher-speed environments, as more optical amplifiers are deployed in networks. Many generations of FEC have been implemented, including block codes and concatenated codes. Developers now have options to consider hard-decision and soft-decision codes. This article describes the advantages of each type in particular transmission environments.
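The distinction between hard-decision and soft-decision decoding can be illustrated with a toy repetition code over an AWGN channel. The sketch below is a minimal illustration only (it is not an FEC scheme used in optical transport); it shows how keeping the analog channel values, rather than slicing each sample to a bit first, improves the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_decision(received, n=3):
    # Hard decision: threshold each noisy sample to a bit, then majority-vote.
    bits = (received > 0).astype(int)
    return (bits.reshape(-1, n).sum(axis=1) > n // 2).astype(int)

def soft_decision(received, n=3):
    # Soft decision: combine the raw channel values before deciding.
    return (received.reshape(-1, n).sum(axis=1) > 0).astype(int)

# Toy repetition-code example, not an optical FEC.
data = rng.integers(0, 2, 10_000)
tx = np.repeat(2.0 * data - 1.0, 3)              # BPSK symbols, 3x repetition
rx = tx + rng.normal(0.0, 1.2, tx.size)          # AWGN channel

for decode in (hard_decision, soft_decision):
    print(decode.__name__, "BER =", np.mean(decode(rx) != data))
```

Running the sketch shows the soft-decision decoder achieving a noticeably lower bit-error rate at the same noise level, which is the basic advantage soft-decision FEC trades against decoder complexity.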

421 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a new syndrome coding scheme that limits the amount of information leaked by the PUF error-correcting codes; PUFs can be used in many security, protection, and digital rights management applications.
Abstract: Physical unclonable functions (PUFs) offer a promising mechanism that can be used in many security, protection, and digital rights management applications. One key issue is the stability of PUF responses that is often addressed by error correction codes. The authors propose a new syndrome coding scheme that limits the amount of leaked information by the PUF error-correcting codes.
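For context, the sketch below shows the classic code-offset construction that underlies many PUF key-generation schemes: a codeword masked by the enrollment response serves as public helper data, and a later noisy response is error-corrected back to the secret. It is a generic illustration with a toy repetition code, not the leakage-limiting syndrome coding scheme proposed by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_rep3(bits):
    return np.repeat(bits, 3)                       # toy 3x repetition code

def decode_rep3(word):
    return (word.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

secret = rng.integers(0, 2, 64)                     # key material to protect
puf_enroll = rng.integers(0, 2, 3 * secret.size)    # PUF response at enrollment
helper = encode_rep3(secret) ^ puf_enroll           # public helper data

# Later, the PUF is re-evaluated with a small amount of bit noise.
noise = (rng.random(puf_enroll.size) < 0.02).astype(int)
puf_regen = puf_enroll ^ noise

# helper ^ puf_regen = codeword ^ noise, so decoding recovers the secret w.h.p.
recovered = decode_rep3(helper ^ puf_regen)
print("secret recovered:", np.array_equal(recovered, secret))
```

The helper data is what leaks information about the response; limiting that leakage is exactly the problem the paper's new syndrome coding scheme addresses.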

342 citations


Journal ArticleDOI
TL;DR: A voice conversion approach using an ANN model to capture speaker-specific characteristics of a target speaker is proposed, and it is demonstrated that such a voice conversion approach can perform monolingual as well as cross-lingual voice conversion of an arbitrary source speaker.
Abstract: In this paper, we use artificial neural networks (ANNs) for voice conversion and exploit the mapping abilities of an ANN model to perform mapping of spectral features of a source speaker to that of a target speaker. A comparative study of voice conversion using an ANN model and the state-of-the-art Gaussian mixture model (GMM) is conducted. The results of voice conversion, evaluated using subjective and objective measures, confirm that an ANN-based VC system performs as well as a GMM-based VC system, and that the transformed speech is intelligible and possesses the characteristics of the target speaker. In this paper, we also address the issue of dependency of voice conversion techniques on parallel data between the source and the target speakers. While there have been efforts to use nonparallel data and speaker adaptation techniques, it is important to investigate techniques which capture speaker-specific characteristics of a target speaker, and avoid any need for source speaker's data either for training or for adaptation. In this paper, we propose a voice conversion approach using an ANN model to capture speaker-specific characteristics of a target speaker and demonstrate that such a voice conversion approach can perform monolingual as well as cross-lingual voice conversion of an arbitrary source speaker.
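As a rough illustration of the spectral-mapping step, the sketch below fits a small neural network to map source-speaker feature frames to target-speaker frames. The data, feature dimensionality, and network size are placeholders, not the configuration used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical time-aligned parallel data: 25-dim spectral feature frames
# (e.g., mel-cepstra) for a source speaker and the corresponding target frames.
src_frames = rng.normal(size=(5000, 25))
tgt_frames = 0.8 * src_frames + 0.2 * rng.normal(size=(5000, 25))  # stand-in data

mapper = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                      max_iter=500, random_state=0)
mapper.fit(src_frames, tgt_frames)            # learn the source -> target mapping

converted = mapper.predict(src_frames[:10])   # map new source frames
print(converted.shape)                        # (10, 25) converted feature frames
```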

269 citations


Proceedings ArticleDOI
Larkhoon Leem1, Hyungmin Cho1, Jason Bau1, Quinn Jacobson2, Subhasish Mitra1 
08 Mar 2010
TL;DR: Error Resilient System Architecture (ERSA) is presented: a low-cost robust system architecture for emerging probabilistic "killer" applications such as Recognition, Mining and Synthesis (RMS), which may also be adapted for general-purpose applications that are less resilient to errors.
Abstract: There is a growing concern about the increasing vulnerability of future computing systems to errors in the underlying hardware. Traditional redundancy techniques are expensive for designing energy-efficient systems that are resilient to high error rates. We present Error Resilient System Architecture (ERSA), a low-cost robust system architecture for emerging killer probabilistic applications such as Recognition, Mining and Synthesis (RMS) applications. While resilience of such applications to errors in low-order bits of data is well-known, execution of such applications on error-prone hardware significantly degrades output quality (due to high-order bit errors and crashes). ERSA achieves high error resilience to high-order bit errors and control errors (in addition to low-order bit errors) using a judicious combination of 3 key ideas: (1) asymmetric reliability in many-core architectures, (2) error-resilient algorithms at the core of probabilistic applications, and (3) intelligent software optimizations. Error injection experiments on a multi-core ERSA hardware prototype demonstrate that, even at very high error rates of 20,000 errors/second/core or 2×10⁻⁴ errors/cycle/core (with errors injected in architecturally-visible registers), ERSA maintains 90% or better accuracy of output results, together with minimal impact on execution time, for probabilistic applications such as K-Means clustering, LDPC decoding and Bayesian networks. Moreover, we demonstrate the effectiveness of ERSA in tolerating high rates of static memory errors that are characteristic of emerging challenges such as Vccmin problems and erratic bit errors. Using the concept of configurable reliability, ERSA platforms may also be adapted for general-purpose applications that are less resilient to errors (but at higher costs).

246 citations


Journal ArticleDOI
TL;DR: In this paper, a detailed theoretical analysis of these (fully) absorbing sets for the class of C(p, γ) array-based LDPC codes is provided, including the characterization of all minimal absorbing sets, and moreover, the development of techniques to enumerate them exactly.
Abstract: The class of low-density parity-check (LDPC) codes is attractive, since such codes can be decoded using practical message-passing algorithms, and their performance is known to approach the Shannon limits for suitably large block lengths. For the intermediate block lengths relevant in applications, however, many LDPC codes exhibit a so-called "error floor," corresponding to a significant flattening in the curve that relates signal-to-noise ratio (SNR) to the bit-error rate (BER) level. Previous work has linked this behavior to combinatorial substructures within the Tanner graph associated with an LDPC code, known as (fully) absorbing sets. These fully absorbing sets correspond to a particular type of near-codewords or trapping sets that are stable under bit-flipping operations, and exert the dominant effect on the low BER behavior of structured LDPC codes. This paper provides a detailed theoretical analysis of these (fully) absorbing sets for the class of C(p, γ) array-based LDPC codes, including the characterization of all minimal (fully) absorbing sets for the array-based LDPC codes for γ = 2, 3, 4, and moreover, it provides the development of techniques to enumerate them exactly. Theoretical results of this type provide a foundation for predicting and extrapolating the error floor behavior of LDPC codes.

239 citations


Proceedings ArticleDOI
19 Jun 2010
TL;DR: The significant impact of variations on refresh time and cache power consumption for large eDRAM caches is shown and Hi-ECC, a technique that incorporates multi-bit error-correcting codes to significantly reduce refresh rate, is proposed.
Abstract: Technology advancements have enabled the integration of large on-die embedded DRAM (eDRAM) caches. eDRAM is significantly denser than traditional SRAMs, but must be periodically refreshed to retain data. Like SRAM, eDRAM is susceptible to device variations, which play a role in determining refresh time for eDRAM cells. Refresh power potentially represents a large fraction of overall system power, particularly during low-power states when the CPU is idle. Future designs need to reduce cache power without incurring the high cost of flushing cache data when entering low-power states. In this paper, we show the significant impact of variations on refresh time and cache power consumption for large eDRAM caches. We propose Hi-ECC, a technique that incorporates multi-bit error-correcting codes to significantly reduce refresh rate. Multi-bit error-correcting codes usually have a complex decoder design and high storage cost. Hi-ECC avoids the decoder complexity by using strong ECC codes to identify and disable sections of the cache with multi-bit failures, while providing efficient single-bit error correction for the common case. Hi-ECC includes additional optimizations that allow us to amortize the storage cost of the code over large data words, providing the benefit of multi-bit correction at the same storage cost as a single-bit error-correcting (SECDED) code (2% overhead). Our proposal achieves a 93% reduction in refresh power vs. a baseline eDRAM cache without error correcting capability, and a 66% reduction in refresh power vs. a system using SECDED codes.

231 citations


Journal ArticleDOI
TL;DR: An efficient implementation of the Extended Min-Sum (EMS) decoder is proposed which reduces the order of complexity to O(nm log2 nm), low enough to compete with binary decoders.
Abstract: In this paper, we propose a new implementation of the Extended Min-Sum (EMS) decoder for non-binary LDPC codes. A particularity of the new algorithm is that it takes into account the memory problem of the non-binary LDPC decoders, together with a significant complexity reduction per decoding iteration. The key feature of our decoder is to truncate the vector messages of the decoder to a limited number nm of values in order to reduce the memory requirements. Using the truncated messages, we propose an efficient implementation of the EMS decoder which reduces the order of complexity to O(nm log2 nm). This complexity starts to be low enough to compete with binary decoders. The performance of the low complexity algorithm with proper compensation is quite good with respect to the important complexity reduction, which is shown both with a simulated density evolution approach and actual simulations.
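The core memory-saving device is the truncation of each non-binary message to its nm most reliable entries; sorting those entries is where the nm log2 nm factor comes from. Below is a minimal sketch of that truncation step with illustrative costs, not the full EMS check-node processing.

```python
import numpy as np

def truncate_message(costs, n_m):
    """Keep only the n_m most reliable candidate symbols of a non-binary
    message, stored as (symbol, cost) pairs sorted by increasing cost."""
    order = np.argsort(costs)[:n_m]
    return [(int(s), float(costs[s])) for s in order]

rng = np.random.default_rng(2)
message = rng.exponential(scale=2.0, size=16)     # toy costs over GF(16)
print(truncate_message(message, n_m=4))           # only 4 entries kept in memory
```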

225 citations


Journal ArticleDOI
TL;DR: It is shown that polar codes asymptotically achieve the whole capacity-equivocation region for the wiretap channel when the wiretapper's channel is degraded with respect to the main channel, and the weak secrecy notion is used.
Abstract: We show that polar codes asymptotically achieve the whole capacity-equivocation region for the wiretap channel when the wiretapper's channel is degraded with respect to the main channel, and the weak secrecy notion is used. Our coding scheme also achieves the capacity of the physically degraded receiver-orthogonal relay channel. We show simulation results for moderate block length for the binary erasure wiretap channel, comparing polar codes and two edge type LDPC codes.

216 citations


Patent
01 Jul 2010
TL;DR: In this paper, statistical methods are used to arrive at expected values for the collected data and the data is compared to the expected value and must meet one or more acceptance criteria (e.g., be within a prescribed range) to be considered valid.
Abstract: Methods and apparatus for collection, validation, analysis, and automated error correction of data regarding user interaction with content. In one embodiment, statistical methods are used to arrive at expected values for the collected data. The data is compared to the expected value and must meet one or more acceptance criteria (e.g., be within a prescribed range) to be considered valid. The prescribed range is determined by the network operator, or a computer program adapted to generate this value. The invention enables a network operator to assess a large volume of data without requiring significant amounts of manual monitoring and/or error correction. The ability to collect, validate and analyze data across multiple platforms is also provided. Still further, an automated system capable of learning evaluation and error correction patterns is disclosed.
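A minimal sketch of the kind of acceptance test described: a new measurement is compared against an expected value derived from historical data and must fall within a prescribed range. The specific statistic (a z-score) and the range are assumptions, not taken from the patent.

```python
from statistics import mean, stdev

def is_valid(sample, history, max_sigma=3.0):
    """Accept a new measurement only if it falls within a prescribed range
    (here an illustrative max_sigma standard deviations) of the expected
    value computed from historical data."""
    expected = mean(history)
    spread = stdev(history) or 1e-9      # guard against zero variance
    return abs(sample - expected) <= max_sigma * spread

history = [102, 98, 105, 97, 101, 99, 103, 100]
print(is_valid(101, history))            # plausible value -> considered valid
print(is_valid(480, history))            # outlier -> flagged for review/correction
```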

213 citations


Journal ArticleDOI
TL;DR: A novel approach, termed Reptile, is presented for error correction in short-read data from next-generation sequencing; it outperforms previous methods in the percentage of errors removed from the data and in the accuracy of true base assignment, while achieving a significant reduction in run time and memory usage.
Abstract: MOTIVATION Error correction is critical to the success of next-generation sequencing applications, such as resequencing and de novo genome sequencing. It is especially important for high-throughput short-read sequencing, where reads are much shorter and more abundant, and errors more frequent than in traditional Sanger sequencing. Processing massive numbers of short reads with existing error correction methods is both compute and memory intensive, yet the results are far from satisfactory when applied to real datasets. RESULTS We present a novel approach, termed Reptile, for error correction in short-read data from next-generation sequencing. Reptile works with the spectrum of k-mers from the input reads, and corrects errors by simultaneously examining: (i) Hamming distance-based correction possibilities for potentially erroneous k-mers; and (ii) neighboring k-mers from the same read for correct contextual information. By not needing to store input data, Reptile has the favorable property that it can handle data that does not fit in main memory. In addition to sequence data, Reptile can make use of available quality score information. Our experiments show that Reptile outperforms previous methods in the percentage of errors removed from the data and the accuracy in true base assignment. In addition, a significant reduction in run time and memory usage have been achieved compared with previous methods, making it more practical for short-read error correction when sampling larger genomes. AVAILABILITY Reptile is implemented in C++ and is available through the link: http://aluru-sun.ece.iastate.edu/doku.php?id=software CONTACT aluru@iastate.edu.
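The sketch below illustrates the general k-mer-spectrum idea used by this family of methods: k-mers that are rare in the read set are treated as untrusted, and single-base (Hamming distance 1) substitutions that restore a trusted k-mer are accepted as corrections. It is a simplified illustration, not Reptile's actual algorithm or data structures.

```python
from collections import Counter

BASES = "ACGT"

def kmer_spectrum(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def correct_read(read, spectrum, k, min_count=3):
    """Walk the read; where a k-mer is untrusted, try single-base substitutions
    (Hamming distance 1) that turn it into a trusted k-mer.
    Simplified illustration of the k-mer-spectrum idea only."""
    read = list(read)
    for i in range(len(read) - k + 1):
        kmer = "".join(read[i:i + k])
        if spectrum[kmer] >= min_count:
            continue
        for j in range(k):
            for base in BASES:
                if base == read[i + j]:
                    continue
                candidate = kmer[:j] + base + kmer[j + 1:]
                if spectrum[candidate] >= min_count:
                    read[i + j] = base      # accept the correction
                    break
            else:
                continue
            break
    return "".join(read)

reads = ["ACGTACGTAC"] * 3 + ["ACGTACTTAC"]        # last read has one error
spectrum = kmer_spectrum(reads, k=5)
print(correct_read("ACGTACTTAC", spectrum, k=5))   # -> ACGTACGTAC
```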

180 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed block codes for asymmetric limited-magnitude errors over q-ary channels, where the number of errors is bounded by t and the error magnitudes are bounded by l. The construction of these codes is performed over the small alphabet whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code.
Abstract: Several physical effects that limit the reliability and performance of multilevel flash memories induce errors that have low magnitudes and are dominantly asymmetric. This paper studies block codes for asymmetric limited-magnitude errors over q-ary channels. We propose code constructions and bounds for such channels when the number of errors is bounded by t and the error magnitudes are bounded by l. The constructions utilize known codes for symmetric errors, over small alphabets, to protect large-alphabet symbols from asymmetric limited-magnitude errors. The encoding and decoding of these codes are performed over the small alphabet whose size depends only on the maximum error magnitude and is independent of the alphabet size of the outer code. Moreover, the size of the codes is shown to exceed the sizes of known codes (for related error models), and asymptotic rate-optimality results are proved. Extensions of the construction are proposed to accommodate variations on the error model and to include systematic codes as a benefit to practical implementation.
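A minimal sketch of the decoding idea described above: reduce the received symbols modulo l+1, decode the residues with a code over the small alphabet, and subtract the implied error magnitudes. The inner "code" here is a trivial repetition stand-in, chosen only to make the example runnable.

```python
import numpy as np

def decode_alm(received, residue_decoder, ell):
    """Correct asymmetric errors of magnitude at most ell:
    1) reduce the received symbols modulo ell + 1;
    2) decode the residues with a code over the small alphabet Z_{ell+1};
    3) subtract the implied error magnitudes from the received symbols."""
    q_small = ell + 1
    residues = received % q_small
    errors = (residues - residue_decoder(residues)) % q_small
    return received - errors

# Stand-in inner code over Z_4 (ell = 3): all residues equal, decoded by
# majority vote -- just enough structure to exercise the outer construction.
def majority_residue_decoder(residues):
    values, counts = np.unique(residues, return_counts=True)
    return np.full_like(residues, values[np.argmax(counts)])

ell = 3
codeword = np.array([9, 13, 5, 17, 21])           # every symbol is 1 mod 4
noisy = codeword + np.array([0, 2, 0, 3, 0])       # asymmetric errors, magnitude <= 3
print(decode_alm(noisy, majority_residue_decoder, ell))   # -> [ 9 13  5 17 21]
```

Note how the decoder never works with the large alphabet directly: the residues live in Z_{l+1}, which is why the construction's complexity depends only on the maximum error magnitude.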

Proceedings Article
01 Jan 2010
TL;DR: Non-malleable codes, as introduced in this paper, relax the notions of error correction and error detection; non-malleability can be achieved for very rich classes of modifications, such as functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword.
Abstract: We introduce the notion of “non-malleable codes” which relaxes the notion of error correction and error detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. In contrast to error correction and error detection, non-malleability can be achieved for very rich classes of modifications. We construct an efficient code that is non-malleable with respect to modifications that affect each bit of the codeword arbitrarily (i.e., leave it untouched, flip it, or set it to either 0 or 1), but independently of the value of the other bits of the codeword. Using the probabilistic method, we also show a very strong and general statement: there exists a non-malleable code for every “small enough” family F of functions via which codewords can be modified. Although this probabilistic method argument does not directly yield efficient constructions, it gives us efficient non-malleable codes in the random-oracle model for very general classes of tampering functions—e.g., functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword. As an application of non-malleable codes, we show that they provide an elegant algorithmic solution to the task of protecting functionalities implemented in hardware (e.g., signature cards) against “tampering attacks.” In such attacks, the secret state of a physical system is tampered, in the hopes that future interaction with the modified system will reveal some secret information. This problem was previously studied in the work of Gennaro et al. in 2004 under the name “algorithmic tamper proof security” (ATP). We show that non-malleable codes can be used to achieve important improvements over the prior work. In particular, we show that any functionality can be made secure against a large class of tampering attacks, simply by encoding the secret state with a non-malleable code while it is stored in memory.

Patent
19 Nov 2010
TL;DR: In this article, the authors describe a method and a controller for performing a copy-back command from at least one flash memory device to a host to another host to a flash memory devices.
Abstract: The embodiments described herein provide a method and controller for performing a copy-back command. In one embodiment, a controller receives the data and error correction code associated with a copy-back operation from at least one flash memory device. The controller determines if the error correction code indicates there is an error in the data. If the error correction code does not indicate there is an error in the data, the controller sends a destination address and copy-back program command received from a host to the at least one flash memory device. If the error correction code indicates there is an error in the data, the controller corrects the data and sends the destination address, the corrected data, and a program command to the at least one flash memory device. Additional embodiments relate to modifying data during the copy-back operation.

Journal ArticleDOI
TL;DR: Bounds on the size of error-correcting codes for charge-constrained errors in the rank-modulation scheme are shown, and metric-embedding techniques are used to give constructions which translate a wealth of knowledge of codes in the Lee metric to codes over permutations in Kendall's τ-metric.
Abstract: We investigate error-correcting codes for the rank-modulation scheme with an application to flash memory devices. In this scheme, a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. The resulting scheme eliminates the need for discrete cell levels, overcomes overshoot errors when programming cells (a serious problem that reduces the writing speed), and mitigates the problem of asymmetric errors. In this paper, we study the properties of error-correcting codes for charge-constrained errors in the rank-modulation scheme. In this error model the number of errors corresponds to the minimal number of adjacent transpositions required to change a given stored permutation to another erroneous one, a distance measure known as Kendall's τ-distance. We show bounds on the size of such codes, and use metric-embedding techniques to give constructions which translate a wealth of knowledge of codes in the Lee metric to codes over permutations in Kendall's τ-metric. Specifically, the one-error-correcting codes we construct are at least half the size of the ball-packing upper bound.
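Kendall's τ-distance between two permutations equals the number of pairwise inversions, which is also the minimal number of adjacent transpositions needed to turn one permutation into the other. A small sketch of that distance:

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Number of pairwise inversions between permutations p and q, which equals
    the minimal number of adjacent transpositions turning p into q."""
    position_in_q = {v: i for i, v in enumerate(q)}
    r = [position_in_q[v] for v in p]              # p expressed in q's order
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

print(kendall_tau_distance([2, 1, 3, 4], [1, 2, 3, 4]))   # 1: one adjacent swap
print(kendall_tau_distance([4, 3, 2, 1], [1, 2, 3, 4]))   # 6: maximum for n = 4
```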

Journal ArticleDOI
TL;DR: A comprehensive survey of error-correcting codes for channels corrupted by synchronization errors and potential applications as well as the obstacles that need to be overcome before such codes can be used in practical systems are presented.
Abstract: We present a comprehensive survey of error-correcting codes for channels corrupted by synchronization errors. We discuss potential applications as well as the obstacles that need to be overcome before such codes can be used in practical systems.

Patent
Kuljit S. Bains1
28 Jun 2010
TL;DR: In this paper, a memory device includes a memory core having a first portion to store data bits and a second portion to storing error correction code (ECC) bits corresponding to the data bits.
Abstract: Embodiments of the invention are generally directed to improving the reliability, availability, and serviceability of a memory device. In some embodiments, a memory device includes a memory core having a first portion to store data bits and a second portion to store error correction code (ECC) bits corresponding to the data bits. The memory device may also include error correction logic on the same die as the memory core. In some embodiments, the error correction logic enables the memory device to compute ECC bits and to compare the stored ECC bits with the computed ECC bits.

Journal ArticleDOI
TL;DR: A novel class of bit-flipping algorithms for decoding low-density parity-check (LDPC) codes is presented, which exhibits better decoding performance than known BF algorithms, such as the weighted BF algorithm or the modified weighted BF algorithm, for several LDPC codes.
Abstract: A novel class of bit-flipping (BF) algorithm for decoding low-density parity-check (LDPC) codes is presented. The proposed algorithms, which are referred to as gradient descent bit flipping (GDBF) algorithms, can be regarded as simplified gradient descent algorithms. The proposed algorithms exhibit better decoding performance than known BF algorithms, such as the weighted BF algorithm or the modified weighted BF algorithm for several LDPC codes.
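A minimal single-flip sketch of the gradient descent bit-flipping idea: variables are kept in bipolar form, an inversion function combines each bit's channel value with its adjacent parity checks, and the least reliable bit is flipped each iteration. The code and parameters below are illustrative, not the exact algorithm variants studied in the paper.

```python
import numpy as np

def gdbf_decode(H, y, max_iter=100):
    """Illustrative single-flip gradient descent bit flipping.
    H: parity-check matrix (0/1), y: received soft values (bipolar signs)."""
    x = np.sign(y)                                   # hard-decision start (+/-1)
    x[x == 0] = 1
    checks = [np.flatnonzero(row) for row in H]      # bit indices per check
    bits = [np.flatnonzero(col) for col in H.T]      # check indices per bit
    for _ in range(max_iter):
        syn = np.array([np.prod(x[c]) for c in checks])   # +1 satisfied, -1 not
        if np.all(syn == 1):
            break
        # Inversion (local gradient) function for each bit.
        delta = np.array([x[k] * y[k] + syn[bits[k]].sum() for k in range(len(y))])
        x[np.argmin(delta)] *= -1                    # flip the least reliable bit
    return (x < 0).astype(int)                       # back to 0/1 bits

# Toy example: (7,4) Hamming parity-check matrix, one flipped bit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
tx = np.zeros(7, dtype=int)
y = 1.0 - 2.0 * tx                                   # BPSK mapping 0 -> +1
y[2] *= -1                                           # channel flips one bit
print(gdbf_decode(H, y))                             # -> all-zeros codeword
```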

Patent
21 Sep 2010
TL;DR: In this paper, an error detection method is presented in which single-bit errors in a memory module are identified as random or repeat errors and counted over a time interval, with an alert generated when either count reaches its threshold; the repeat-error threshold is set lower than the random-error threshold.
Abstract: One embodiment provides an error detection method wherein single-bit errors in a memory module are detected and identified as being a random error or a repeat error. Each identified random error and each identified repeat error occurring in a time interval is counted. An alert is generated in response to a number of identified random errors reaching a random-error threshold or a number of identified repeat errors reaching a repeat-error threshold during the predefined interval. The repeat-error threshold is set lower than the random-error threshold. A hashing process may be applied to the memory address of each detected error to map the location of the error in the memory system to a corresponding location in an electronic table.
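A small sketch of the counting-and-threshold logic described in the abstract. The threshold values, and the use of a plain dictionary in place of the hashed electronic table, are illustrative assumptions.

```python
from collections import Counter

class ErrorMonitor:
    """Count single-bit errors in an interval; an address seen more than once
    counts as a repeat error. Threshold values are illustrative."""

    def __init__(self, random_threshold=10, repeat_threshold=1):
        self.random_threshold = random_threshold
        self.repeat_threshold = repeat_threshold    # deliberately lower
        self.seen = Counter()
        self.random_errors = 0
        self.repeat_errors = 0

    def record(self, address):
        self.seen[address] += 1
        if self.seen[address] > 1:
            self.repeat_errors += 1
        else:
            self.random_errors += 1
        if self.repeat_errors >= self.repeat_threshold:
            return "ALERT: repeat-error threshold reached"
        if self.random_errors >= self.random_threshold:
            return "ALERT: random-error threshold reached"
        return None

monitor = ErrorMonitor()
for address in (0x1F00, 0x2A10, 0x1F00):            # same address fails twice
    alert = monitor.record(address)
print(alert)                                         # repeat-error alert
```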

Proceedings ArticleDOI
01 Dec 2010
TL;DR: For the joint channel-decoding and network-encoding task at the relay, a generalized Sum-Product Algorithm (SPA) is developed, which outperforms other recently proposed schemes, as demonstrated by simulation results.
Abstract: In this paper, a physical-layer network coded two-way relay system applying Low-Density Parity-Check (LDPC) codes for error correction is considered, where two sources A and B desire to exchange information with each other with the help of a relay R. The critical process in such a system is the calculation of the network-coded transmit word at the relay on the basis of the superimposed channel-coded words of the two sources. For this joint channel-decoding and network-encoding task a generalized Sum-Product Algorithm (SPA) is developed. This novel iterative decoding approach outperforms other recently proposed schemes as demonstrated by simulation results.

Patent
Andrew Tomlin1
08 Nov 2010
TL;DR: In this article, the authors present methods for improving data relocation operations by selecting whether to check ECC based on predetermined selection criteria, and if ECC checking is not selected, causing the memory to perform an on-chip copy the data from a first location to a second location.
Abstract: The present invention presents methods for improving data relocation operations. In one aspect, rather than check the quality of the data based on its associated error correction code (ECC) in every relocation operation, it is determined whether to check ECC based on predetermined selection criteria, and if ECC checking is not selected, causing the memory to perform an on-chip copy the data from a first location to a second location. If ECC checking is selected, the data is transferred to the controller and checked; when an error is found, a correction operation is performed and when no error is found, an on-chip copy is performed. The predetermined selection criteria may comprise a sampling mechanism, which may be random based or deterministic. In another aspect, data transfer flags are introduced to indicate data has been corrected and should be transferred back to the memory.

Journal ArticleDOI
TL;DR: This paper presents SYNAPSE++, a system for over the air reprogramming of wireless sensor networks (WSNs), which adopts a more sophisticated error recovery approach exploiting rateless fountain codes (FCs), allowing it to scale considerably better in dense networks and to better cope with noisy environments.
Abstract: This paper presents SYNAPSE++, a system for over the air reprogramming of wireless sensor networks (WSNs). In contrast to previous solutions, which implement plain negative acknowledgment-based ARQ strategies, SYNAPSE++ adopts a more sophisticated error recovery approach exploiting rateless fountain codes (FCs). This allows it to scale considerably better in dense networks and to better cope with noisy environments. In order to speed up the decoding process and decrease its computational complexity, we engineered the FC encoding distribution through an original genetic optimization approach. Furthermore, novel channel access and pipelining techniques have been jointly designed so as to fully exploit the benefits of fountain codes, mitigate the hidden terminal problem and reduce the number of collisions. All of this makes it possible for SYNAPSE++ to recover data over multiple hops through overhearing by limiting, as much as possible, the number of explicit retransmissions. We finally created new bootloader and memory management modules so that SYNAPSE++ could disseminate and load program images written using any language. At the end of this paper, the effectiveness of SYNAPSE++ is demonstrated through experimental results over actual multihop deployments, and its performance is compared with that of Deluge, the de facto standard protocol for code dissemination in WSNs. The TinyOS 2 code of SYNAPSE++ is available at http://dgt.dei.unipd.it/download.
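For intuition, the sketch below produces one rateless (LT-style) encoded symbol by XOR-ing randomly chosen source blocks of the program image; the degree distribution shown is a placeholder, not the genetically optimized distribution used in SYNAPSE++.

```python
import random

def lt_encode_symbol(blocks, degree_dist, rng=random):
    """Emit one rateless encoded symbol: draw a degree from the degree
    distribution, pick that many source blocks at random, XOR them together."""
    degrees, probabilities = zip(*degree_dist)
    d = rng.choices(degrees, weights=probabilities, k=1)[0]
    chosen = rng.sample(range(len(blocks)), d)
    payload = bytes(len(blocks[0]))
    for i in chosen:
        payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
    return sorted(chosen), payload                   # neighbour set + payload

program_image = [bytes([i] * 16) for i in range(8)]      # 8 source blocks of 16 bytes
degree_dist = [(1, 0.1), (2, 0.5), (3, 0.3), (8, 0.1)]   # placeholder distribution
print(lt_encode_symbol(program_image, degree_dist))
```

Because any sufficiently large set of such symbols lets a receiver recover the image, nodes can decode from whatever mix of transmissions and overheard packets they happen to collect, which is what makes the approach attractive for lossy multihop dissemination.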

Journal ArticleDOI
TL;DR: An 11-bit 160-MS/s four-channel time-interleaved double-sampled pipelined ADC implemented in a 0.35-μm CMOS process is described and digital calibration is used to correct mismatch errors between channels as well as the memory errors that arise from the use of double sampling.
Abstract: An 11-bit 160-MS/s four-channel time-interleaved double-sampled pipelined ADC implemented in a 0.35-μm CMOS process is described. Digital calibration is used to correct mismatch errors between channels as well as the memory errors that arise from the use of double sampling. The signal-to-noise-and-distortion ratio is improved from 45 to 62 dB after calibration with an 8.7-MHz input. The spurious-free dynamic range is increased from 47 dB to 79 dB.

Journal ArticleDOI
TL;DR: It is shown that the optimal recovery fidelity can be predicted exactly from a dual optimization problem on the environment causing the noise, and an estimate of the optimal recovery fidelity is obtained.
Abstract: We derive necessary and sufficient conditions for the approximate correctability of a quantum code, generalizing the Knill-Laflamme conditions for exact error correction. Our measure of success of the recovery operation is the worst-case entanglement fidelity. We show that the optimal recovery fidelity can be predicted exactly from a dual optimization problem on the environment causing the noise. We use this result to obtain an estimate of the optimal recovery fidelity as well as a way of constructing a class of near-optimal recovery channels that work within twice the minimal error. In addition to standard subspace codes, our results hold for subsystem codes and hybrid quantum-classical codes.

Journal ArticleDOI
01 Feb 2010
TL;DR: A novel metric based on the computation of BCI Utility is proposed, which can accurately predict the overall performance of a BCI system, as it takes into account both the classifier and the control interface characteristics.
Abstract: A relevant issue in a brain-computer interface (BCI) is the capability to efficiently convert user intentions into correct actions, and how to properly measure this efficiency. Usually, the evaluation of a BCI system is approached through the quantification of the classifier performance, which is often measured by means of the information transfer rate (ITR). A shortcoming of this approach is that the control interface design is neglected, and hence a poor description of the overall performance is obtained for real systems. To overcome this limitation, we propose a novel metric based on the computation of BCI Utility. The new metric can accurately predict the overall performance of a BCI system, as it takes into account both the classifier and the control interface characteristics. It is therefore suitable for design purposes, where we have to select the best options among different components and different parameters setup. In the paper, we compute Utility in two scenarios, a P300 speller and a P300 speller with an error correction system (ECS), for different values of accuracy of the classifier and recall of the ECS. Monte Carlo simulations confirm that Utility predicts the performance of a BCI better than ITR.

Journal ArticleDOI
TL;DR: An asynchronous 6 bit 1 GS/s ADC is achieved by time interleaving two ADCs based on the binary successive approximation (SA) algorithm using a series capacitive ladder as mentioned in this paper.
Abstract: An asynchronous 6 bit 1 GS/s ADC is achieved by time interleaving two ADCs based on the binary successive approximation (SA) algorithm using a series capacitive ladder. The semi-closed loop asynchronous technique eliminates the high internal clocks and significantly speeds up the SA algorithm. A key feature to reduce the power in this design involves relaxing the comparator requirements using an error correction technique, which can be viewed as an extension of the SA algorithm to remove degradation due to metastability. Fabricated in 65 nm CMOS with an active area of 0.11 mm2, it achieves a peak SNDR of 31.5 dB at 1GS/s sampling rate and has a total power consumption of 6.7 mW.

Proceedings ArticleDOI
28 Apr 2010
TL;DR: Maranello is the first partial packet recovery design to be implemented in commonly available firmware; the authors compare it to alternative recovery protocols using a trace-driven simulation and to 802.11 using a live implementation under various channel conditions.
Abstract: Partial packet recovery protocols attempt to repair corrupted packets instead of retransmitting them in their entirety. Recent approaches have used physical layer confidence estimates or additional error detection codes embedded in each transmission to identify corrupt bits, or have applied forward error correction to repair without such explicit knowledge. In contrast to these approaches, our goal is a practical design that simultaneously: (a) requires no extra bits in correct packets, (b) reduces recovery latency, except in rare instances, (c) remains compatible with existing 802.11 devices by obeying timing and backoff standards, and (d) can be incrementally deployed on widely available access points and wireless cards. In this paper, we design, implement, and evaluate Maranello, a novel partial packet recovery mechanism for 802.11. In Maranello, the receiver computes checksums over blocks in corrupt packets and bundles these checksums into a negative acknowledgment sent when the sender expects to receive an acknowledgment. The sender then retransmits only those blocks for which the checksum is incorrect, and repeats this partial retransmission until it receives an acknowledgment. Successful transmissions are not burdened by additional bits and the receiver need not infer which bits were corrupted. We implemented Maranello using OpenFWWF (open source firmware for Broadcom wireless cards) and deployed it in a small testbed. We compare Maranello to alternative recovery protocols using a trace-driven simulation and to 802.11 using a live implementation under various channel conditions. To our knowledge, Maranello is the first partial packet recovery design to be implemented in commonly available firmware.
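A minimal sketch of the block-checksum exchange: the receiver checksums the corrupt frame and bundles the results into a negative acknowledgment, and the sender retransmits only the blocks whose checksums disagree with its own copy. CRC32 and the 64-byte block size are illustrative choices, not necessarily the ones Maranello uses.

```python
import zlib

BLOCK = 64  # bytes per checksummed block (an illustrative size)

def block_checksums(frame):
    return [zlib.crc32(frame[i:i + BLOCK]) for i in range(0, len(frame), BLOCK)]

# Receiver: checksum the corrupt frame and bundle the results into a NACK.
def build_nack(corrupt_frame):
    return block_checksums(corrupt_frame)

# Sender: retransmit only the blocks whose checksums disagree with the NACK.
def blocks_to_retransmit(original_frame, nack_checksums):
    ours = block_checksums(original_frame)
    return [i for i, (a, b) in enumerate(zip(ours, nack_checksums)) if a != b]

sent = bytes(range(256)) * 2                        # 512-byte frame
received = bytearray(sent)
received[100] ^= 0xFF                               # corruption lands in block 1
print(blocks_to_retransmit(sent, build_nack(bytes(received))))   # -> [1]
```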

Journal ArticleDOI
TL;DR: Experimental results demonstrate that precise distortion estimation enables the proposed transmission system to achieve a significantly higher average video peak signal-to-noise ratio compared to a conventional content independent system.
Abstract: Efficient bit stream adaptation and resilience to packet losses are two critical requirements in scalable video coding for transmission over packet-lossy networks. Various scalable layers have highly distinct importance, measured by their contribution to the overall video quality. This distinction is especially more significant in the scalable H.264/advanced video coding (AVC) video, due to the employed prediction hierarchy and the drift propagation when quality refinements are missing. Therefore, efficient bit stream adaptation and unequal protection of these layers are of special interest in the scalable H.264/AVC video. This paper proposes an algorithm to accurately estimate the overall distortion of decoder reconstructed frames due to enhancement layer truncation, drift/error propagation, and error concealment in the scalable H.264/AVC video. The method recursively computes the total decoder expected distortion at the picture-level for each layer in the prediction hierarchy. This ensures low computational cost since it bypasses highly complex pixel-level motion compensation operations. Simulation results show an accurate distortion estimation at various channel loss rates. The estimate is further integrated into a cross-layer optimization framework for optimized bit extraction and content-aware channel rate allocation. Experimental results demonstrate that precise distortion estimation enables our proposed transmission system to achieve a significantly higher average video peak signal-to-noise ratio compared to a conventional content independent system.

Journal ArticleDOI
TL;DR: An optimized scheme is introduced which combines a multibit error-correcting BCH code with Hamming codes in a hierarchical manner to give an average latency as low as that of the single-bit correcting Hamming decoder.
Abstract: This paper presents multibit error-correction schemes for NOR Flash used specifically for execute-in-place applications. As architectures advance to accommodate more bits/cell and geometries decrease to structures that are smaller than 32 nm, single-bit error-correction codes (ECCs) are unable to compensate for the increasing array bit error rates, making it imperative to use 2-b ECC. However, 2-b ECC algorithms are complex and add a timing overhead on the memory read access time. This paper proposes low-latency multibit ECC schemes. Starting with the binary Bose-Chaudhuri-Hocquenghem (BCH) codes, an optimized scheme is introduced which combines a multibit error-correcting BCH code with Hamming codes in a hierarchical manner to give an average latency as low as that of the single-bit correcting Hamming decoder. A Hamming algorithm with 2-b error-correcting capacity for very small block sizes (< 1 B) is another low-latency multibit ECC algorithm that is discussed. The viability of these methods and algorithms with respect to latency and die area is proved vis-à-vis software and hardware implementations.
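The latency claim follows from the fact that the slow multi-bit BCH stage is invoked only for the rare blocks the fast Hamming stage cannot handle, so the expected read latency stays close to the Hamming-only figure. A back-of-the-envelope illustration with made-up numbers:

```python
# Average read latency of a hierarchical Hamming + BCH scheme: the fast
# Hamming decoder handles the common case, and the slower multi-bit BCH
# decoder is invoked only when the Hamming stage detects an uncorrectable
# pattern. All numbers below are illustrative, not taken from the paper.
t_hamming = 5e-9          # seconds, single-bit Hamming decode
t_bch = 80e-9             # seconds, multi-bit BCH decode
p_multi_bit = 1e-4        # probability a block needs the BCH stage

avg_latency = t_hamming + p_multi_bit * t_bch
print(f"average decode latency ~ {avg_latency * 1e9:.2f} ns")   # ~5.01 ns
```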

Journal ArticleDOI
TL;DR: Three error-correcting architectures, named whole-page, sector-pipelined, and multistrip, are proposed, and the VLSI design applies both algorithmic and architectural-level optimizations that include parallel algorithm transformation, resource sharing, and time multiplexing.
Abstract: Bit-error correction is crucial for realizing cost-effective and reliable NAND Flash-memory-based storage systems. In this paper, low-power and high-throughput error-correction circuits have been developed for multilevel cell (MLC) NAND Flash memories. The developed circuits employ the Bose-Chaudhuri-Hocquenghem code to correct multiple random bit errors. The error-correcting codes for them are designed based on the bit-error characteristics of MLC NAND Flash memories for solid-state drives. To trade off code rate, circuit complexity, and power consumption, three error-correcting architectures, named whole-page, sector-pipelined, and multistrip, are proposed. The VLSI design applies both algorithmic and architectural-level optimizations that include parallel algorithm transformation, resource sharing, and time multiplexing. The chip area, power consumption, and throughput results for these three architectures are presented.

Journal ArticleDOI
TL;DR: An error correction procedure for the toric and planar codes is described, based on polynomial-time graph matching techniques, which is efficiently implementable as the classical feed-forward processing step in a real quantum computer.
Abstract: The planar code scheme for quantum computation features a 2d array of nearest-neighbor coupled qubits yet claims a threshold error rate approaching 1% [1]. This result was obtained for the toric code, from which the planar code is derived, and surpasses all other known codes restricted to 2d nearest-neighbor architectures by several orders of magnitude. We describe in detail an error correction procedure for the toric and planar codes, which is based on polynomial-time graph matching techniques and is efficiently implementable as the classical feed-forward processing step in a real quantum computer. By applying one and two qubit depolarizing errors of equal probability p, we determine the threshold error rates for the two codes (differing only in their boundary conditions) for both ideal and non-ideal syndrome extraction scenarios. We verify that the toric code has an asymptotic threshold of pth = 15.5% under ideal syndrome extraction, and pth = 7.8×10⁻³ for the non-ideal case, in agreement with [1]. Simulations of the planar code indicate that the threshold is close to that of the toric code.
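The matching step pairs up the syndrome defects so that the total length of the implied correction chains is small. The sketch below does this with a generic graph-matching routine (networkx) on a toy torus; it is a stand-in for the minimum-weight perfect-matching procedure described in the paper, with illustrative defect positions.

```python
import itertools
import networkx as nx

def torus_distance(a, b, size):
    """Shortest Manhattan distance between two defects on an L x L torus."""
    dx = min(abs(a[0] - b[0]), size - abs(a[0] - b[0]))
    dy = min(abs(a[1] - b[1]), size - abs(a[1] - b[1]))
    return dx + dy

def match_defects(defects, size):
    """Pair up syndrome defects with small total correction length using a
    polynomial-time matching routine (stand-in for min-weight perfect matching)."""
    g = nx.Graph()
    for a, b in itertools.combinations(defects, 2):
        g.add_edge(a, b, weight=-torus_distance(a, b, size))  # negate for max-weight
    return nx.max_weight_matching(g, maxcardinality=True)

defects = [(0, 0), (0, 3), (5, 5), (5, 6)]          # example -1 stabilizer outcomes
print(match_defects(defects, size=8))               # pairs nearby defects together
```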