
Showing papers on "Error detection and correction published in 2015"


Journal ArticleDOI
05 Mar 2015-Nature
TL;DR: The protection of classical states from environmental bit-flip errors is reported and the suppression of these errors with increasing system size is demonstrated, motivating further research into the many challenges associated with building a large-scale superconducting quantum computer.
Abstract: Quantum computing becomes viable when a quantum state can be protected from environment-induced error. If quantum bits (qubits) are sufficiently reliable, errors are sparse and quantum error correction (QEC) is capable of identifying and correcting them. Adding more qubits improves the preservation of states by guaranteeing that increasingly larger clusters of errors will not cause logical failure, a key requirement for large-scale systems. Using QEC to extend the qubit lifetime remains one of the outstanding experimental challenges in quantum computing. Here we report the protection of classical states from environmental bit-flip errors and demonstrate the suppression of these errors with increasing system size. We use a linear array of nine qubits, which is a natural step towards the two-dimensional surface code QEC scheme, and track errors as they occur by repeatedly performing projective quantum non-demolition parity measurements. Relative to a single physical qubit, we reduce the failure rate in retrieving an input state by a factor of 2.7 when using five of our nine qubits and by a factor of 8.5 when using all nine qubits after eight cycles. Additionally, we tomographically verify preservation of the non-classical Greenberger-Horne-Zeilinger state. The successful suppression of environment-induced errors will motivate further research into the many challenges associated with building a large-scale superconducting quantum computer.
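
The scaling behavior reported here can be illustrated with a purely classical toy model. The sketch below is not the authors' code; it assumes uncorrelated bit-flip noise and replaces the repeated parity measurements and minimum-weight decoding with a single final majority vote, which is enough to show why the failure rate drops as more bits are used:

```python
import random

def run_trial(n_bits, p_flip, n_cycles):
    """Classical analogue of bit-flip protection: an n_bits repetition
    code exposed to n_cycles rounds of independent bit-flip noise,
    decoded by a final majority vote."""
    bits = [0] * n_bits                      # encoded logical 0
    for _ in range(n_cycles):
        bits = [b ^ (random.random() < p_flip) for b in bits]
    return sum(bits) > n_bits // 2           # True = logical failure

def failure_rate(n_bits, p_flip=0.01, n_cycles=8, trials=100_000):
    return sum(run_trial(n_bits, p_flip, n_cycles)
               for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (1, 5, 9):                      # mirrors the 1/5/9-qubit comparison
        print(n, failure_rate(n))
```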

979 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: A low-latency generic accuracy configurable adder supports variable approximation modes and provides a higher number of potential configurations than the state of the art, enabling a high degree of design flexibility and trade-off between performance and output quality.
Abstract: High performance approximate adders typically comprise multiple smaller sub-adders, carry prediction units and error correction units. In this paper, we present a low-latency generic accuracy configurable adder to support variable approximation modes. It provides a higher number of potential configurations compared to the state of the art, thus enabling a high degree of design flexibility and trade-off between performance and output quality. An error correction unit is integrated to provide accurate results for cases where high accuracy is required. Furthermore, an associated scheme for error probability estimation allows convenient comparison of different approximate adder configurations without the need to numerically simulate the adder. Our experimental results validate the developed error model and also the lower latency of our generic accuracy configurable adder over state-of-the-art approximate adders. For functional verification and prototyping, we have used a Xilinx Virtex-6 FPGA. Our adder model and synthesizable RTL are made open-source.
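
To make the architecture concrete, here is a hedged Python sketch of a block-based approximate adder in the spirit described above; the block/prediction parameters and function names are illustrative, not the paper's exact design:

```python
def approx_add(a, b, width=32, block=8, predict=4, correct=False):
    """Block-based approximate adder sketch: each `block`-bit sub-adder
    guesses its carry-in from only the `predict` preceding bits.
    With correct=True, exact carries ripple between blocks instead,
    emulating the integrated error correction unit."""
    mask = (1 << block) - 1
    result, carry = 0, 0
    for i in range(0, width, block):
        ai, bi = (a >> i) & mask, (b >> i) & mask
        if i == 0:
            cin = 0
        elif correct:
            cin = carry                        # exact carry: accurate mode
        else:
            pmask = (1 << predict) - 1         # carry prediction from a few
            pa = (a >> (i - predict)) & pmask  #   low-order preceding bits
            pb = (b >> (i - predict)) & pmask
            cin = (pa + pb) >> predict
        s = ai + bi + cin
        carry = s >> block
        result |= (s & mask) << i
    return result & ((1 << width) - 1)

assert approx_add(123456, 654321, correct=True) == 123456 + 654321
```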

274 citations


Journal ArticleDOI
TL;DR: In this article, the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors were realized using a five-qubit superconducting processor.
Abstract: Quantum data are susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction to actively protect against both. In the smallest error correction codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Here using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. While increased physical qubit coherence times and shorter quantum error correction blocks are required to actively safeguard the quantum information, this demonstration is a critical step towards larger codes based on multiple parity measurements.
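
As a concrete illustration of how the two parity checks pinpoint a bit-flip without touching the encoded information, here is a classical syndrome lookup for the three-qubit repetition code (a sketch of the code's logic, not the processor's control software):

```python
# Syndrome lookup for the 3-qubit bit-flip repetition code: the two
# parity checks are Z0Z1 and Z1Z2; a single flipped qubit gives a
# unique syndrome without revealing the encoded logical state.
SYNDROME_TO_FLIP = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # qubit 0 flipped (only Z0Z1 trips)
    (1, 1): 1,     # qubit 1 flipped (both parities trip)
    (0, 1): 2,     # qubit 2 flipped (only Z1Z2 trips)
}

def correct(bits):
    s = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    flip = SYNDROME_TO_FLIP[s]
    if flip is not None:
        bits[flip] ^= 1
    return bits

assert correct([0, 1, 0]) == [0, 0, 0]
assert correct([1, 0, 1]) == [1, 1, 1]   # corrects toward logical 1
```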

268 citations


Journal ArticleDOI
06 May 2015-Neuron
TL;DR: Recordings of grid cells in behaving rodents making long trajectories across an open arena found that error accumulates relative to time and distance traveled since the animal last encountered a boundary, suggesting that border cells serve as a neural substrate for error correction.

217 citations


Journal ArticleDOI
TL;DR: The efficiency of starcode is attributable to the poucet search, a novel implementation of the Needleman–Wunsch algorithm performed on the nodes of a trie.
Abstract: Motivation: The increasing throughput of sequencing technologies offers new applications and challenges for computational biology. In many of those applications, sequencing errors need to be corrected. This is particularly important when sequencing reads from an unknown reference such as random DNA barcodes. In this case, error correction can be done by performing a pairwise comparison of all the barcodes, which is a computationally complex problem. Results: Here we address this challenge and describe an exact algorithm to determine which pairs of sequences lie within a given Levenshtein distance. For error correction or redundancy reduction purposes, matched pairs are then merged into clusters of similar sequences. The efficiency of starcode is attributable to the poucet search, a novel implementation of the Needleman-Wunsch algorithm performed on the nodes of a trie. On the task of matching random barcodes, starcode outperforms sequence clustering algorithms in both speed and precision. Availability and implementation: The C source code is available at http://github.com/gui11aume/starcode.
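
The pairwise-matching task starcode solves can be sketched naively as follows; this quadratic-time Python stand-in for the trie-based poucet search uses illustrative function names and a band-limited edit distance:

```python
def levenshtein(a, b, d_max):
    """Edit distance with early abort: returns d_max + 1 as soon as
    the distance is guaranteed to exceed d_max."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        if min(cur) > d_max:
            return d_max + 1
        prev = cur
    return prev[-1]

def cluster_barcodes(barcodes, d_max=2):
    """Union-find merge of all barcode pairs within distance d_max."""
    parent = list(range(len(barcodes)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(len(barcodes)):
        for j in range(i + 1, len(barcodes)):
            if levenshtein(barcodes[i], barcodes[j], d_max) <= d_max:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(barcodes)):
        clusters.setdefault(find(i), []).append(barcodes[i])
    return list(clusters.values())
```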

160 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of a quantum annealer on hard random Ising optimization problems is substantially improved using a quantum annealing correction (QAC) strategy tailored to the D-Wave Two device.
Abstract: We demonstrate that the performance of a quantum annealer on hard random Ising optimization problems can be substantially improved using quantum annealing correction (QAC). Our error correction strategy is tailored to the D-Wave Two device. We find that QAC provides a statistically significant enhancement in the performance of the device over a classical repetition code, improving as a function of problem size as well as hardness. Moreover, QAC provides a mechanism for overcoming the precision limit of the device, in addition to correcting calibration errors. Performance is robust even to missing qubits. We present evidence for a constructive role played by quantum effects in our experiments by contrasting the experimental results with the predictions of a classical model of the device. Our work demonstrates the importance of error correction in appropriately determining the performance of quantum annealers.

115 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image, and combines the advantages and removes the disadvantages of the two transform techniques.
Abstract: In this paper, the effects of different error correction codes on the robustness and imperceptibility of a discrete wavelet transform and singular value decomposition based dual watermarking scheme are investigated. Text and image watermarks are embedded into a cover radiological image for their potential application in secure and compact medical data transmission. Four error correcting codes, namely Hamming, Bose-Chaudhuri-Hocquenghem (BCH), Reed-Solomon, and a hybrid error correcting code (BCH and repetition code), are considered for encoding of the text watermark in order to achieve additional robustness for sensitive text data such as the patient identification code. Performance of the proposed algorithm is evaluated against a number of signal processing attacks by varying the strength of watermarking and cover image modalities. The experimental results demonstrate that this algorithm provides better robustness without affecting the quality of the watermarked image. This algorithm combines the advantages and removes the disadvantages of the two transform techniques. Out of the three standalone error correcting codes tested, Reed-Solomon shows the best performance. Further, a hybrid model of two of the error correcting codes (BCH and repetition code) is concatenated and implemented; it achieves better results in terms of robustness. This paper provides a detailed analysis of the obtained experimental results.
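
As a minimal illustration of the encoding step applied to the text watermark bits, here is the weakest of the four compared codes, Hamming(7,4), in Python (a generic textbook implementation, not the paper's code):

```python
# Hamming(7,4): 4 data bits + 3 parity bits, corrects any single-bit error.
def hamming74_encode(d):                     # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3               # 1-based index of flipped bit
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

cw = hamming74_encode([1, 0, 1, 1])
cw[4] ^= 1                                   # single-bit channel error
assert hamming74_decode(cw) == [1, 0, 1, 1]
```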

103 citations


Journal ArticleDOI
TL;DR: It is shown that univariate multiplicity codes of rate R over fields of prime order can be list-decoded from a (1 − R − ε) fraction of errors in polynomial time (for constant R and ε).
Abstract: We study the list-decodability of multiplicity codes. These codes, which are based on evaluations of high-degree polynomials and their derivatives, have rate approaching 1 while simultaneously allowing for sublinear-time error correction. In this paper, we show that multiplicity codes also admit powerful list-decoding and local list-decoding algorithms that work even in the presence of a large error fraction. In other words, we give algorithms for recovering a polynomial given several evaluations of it and its derivatives, where possibly many of the given evaluations are incorrect. Our first main result shows that univariate multiplicity codes over fields of prime order can be list-decoded up to the so-called "list-decoding capacity." Specifically, we show that univariate multiplicity codes of rate R over fields of prime order can be list-decoded from a (1 − R − ε) fraction of errors in polynomial time (for constant R, ε). This resembles the behavior of the "Folded Reed-Solomon Codes" of Guruswami and Rudra (Trans. Info. Theory 2008). The list-decoding algorithm is based on constructing a differential equation of which the desired codeword is a solution; this differential equation is then solved using a power-series approach (a variation of Hensel lifting) along with other algebraic ideas. Our second main result is a list-decoding algorithm for decoding multivariate multiplicity codes up to their Johnson radius. The key ingredient of this algorithm is the construction of a special family of "algebraically-repelling" curves passing through the points of F^m; no moderate-degree multivariate polynomial over F^m can simultaneously vanish on all these curves.

96 citations


Journal ArticleDOI
TL;DR: A proof is provided that the minimum weight perfect matching problem associated with running a particular class of topological quantum error correction codes on a 2-D square array of qubits can be exactly solved with a 2-D square array of classical computing devices.
Abstract: Consider a 2-D square array of qubits of extent L × L. We provide a proof that the minimum weight perfect matching problem associated with running a particular class of topological quantum error correction codes on this array can be exactly solved with a 2-D square array of classical computing devices, each of which is nominally associated with a fixed number N of qubits, in constant average time per round of error detection independent of L, provided physical error rates are below fixed nonzero values and other physically reasonable assumptions hold. This proof is applicable to the fully fault-tolerant case only, not the case of perfect stabilizer measurements.
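
The matching subproblem being distributed here can be stated compactly: pair up syndrome defects at minimum total weight. Below is a hedged, centralized sketch using networkx (assumed available); the paper's contribution is solving this with local devices in constant average time, which this sketch does not attempt:

```python
import itertools
import networkx as nx  # assumed available; any MWPM solver would do

def match_defects(defects):
    """Pair up syndrome defects (given as (row, col) coordinates) so the
    total Manhattan distance -- a stand-in for error-chain weight -- is
    minimized. This is the central subproblem of surface-code decoding."""
    g = nx.Graph()
    for (i, a), (j, b) in itertools.combinations(enumerate(defects), 2):
        dist = abs(a[0] - b[0]) + abs(a[1] - b[1])
        g.add_edge(i, j, weight=-dist)   # negate: max-weight == min-cost
    return nx.max_weight_matching(g, maxcardinality=True)

print(match_defects([(0, 0), (0, 1), (3, 3), (3, 5)]))
# expected pairing: defects 0-1 and 2-3, up to ordering
```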

91 citations


Journal ArticleDOI
TL;DR: A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching pattern can be viewed as block designs, and it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff and is asymptotically optimal at high rate.
Abstract: A new class of exact-repair regenerating codes is constructed by stitching together shorter erasure correction codes, where the stitching pattern can be viewed as block designs. The proposed codes have the help-by-transfer property where the helper nodes simply transfer part of the stored data directly, without performing any computation. This embedded error correction structure makes the decoding process straightforward, and in some cases the complexity is very low. We show that this construction is able to achieve performance better than space-sharing between the minimum storage regenerating codes and the minimum repair-bandwidth regenerating codes, and it is the first class of codes to achieve this performance. In fact, it is shown that the proposed construction can achieve a nontrivial point on the optimal functional-repair tradeoff, and it is asymptotically optimal at high rate, i.e., it asymptotically approaches the minimum storage and the minimum repair-bandwidth simultaneously.

85 citations


Journal ArticleDOI
TL;DR: A novel robust nuclear norm regularized regression (RNR) method for face recognition with occlusion, which integrates error detection and error support into one regression model and provides the complexity analysis and convergence analysis of RNR.

Journal ArticleDOI
Gang Xu, Mengdao Xing, Xiang-Gen Xia, Qian-qian Chen, Lei Zhang, Zheng Bao
TL;DR: A novel algorithm for high-resolution ISAR imaging and scaling from SA data is presented, which effectively incorporates the translational motion phase error and MTRC corrections.
Abstract: In high-resolution radar imaging, the rotational motion of targets generally produces migration through resolution cells (MTRC) in inverse synthetic aperture radar (ISAR) images. Usually, it is a challenge to realize accurate MTRC correction on sparse aperture (SA) data, which tends to degrade the performance of translational motion compensation and SA-imaging. In this paper, we present a novel algorithm for high-resolution ISAR imaging and scaling from SA data, which effectively incorporates the translational motion phase error and MTRC corrections. In this algorithm, the ISAR image formation is converted into a sparsity-driven optimization via maximum a posteriori (MAP) estimation, where the statistics of an ISAR image are modeled as a complex Laplace distribution to provide a sparse prior. The translational motion phase error compensation and cross-range MTRC correction are modeled as joint range-invariant and range-variant phase error corrections in the range-compressed phase history domain. Our proposed imaging approach is performed by a two-step process: 1) the range-invariant and range-variant phase error estimations using a metric of minimum entropy are employed and solved by using a coordinate descent method to realize a coarse phase error correction. Meanwhile, the rotational motion can be obtained from the estimation of range-variant phase errors, which is used for ISAR scaling in the cross-range dimension; 2) under a two-dimensional (2-D) Fourier-based dictionary by involving the slant-range MTRC, joint MTRC-corrected ISAR imaging and accurate phase adjustment are realized by solving the sparsity-driven optimization with SA data, where the residual phase errors are treated as model error and removed to achieve a fine correction. Finally, some experiments based on simulated and measured data are performed to confirm the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: A new puncturing algorithm is proposed, and an algorithm for finding good extending sequences for polar codes from any arbitrary punctured rate is developed, with the goal of improving the throughput as much as possible.
Abstract: We construct polar codes for the specific purpose of incremental redundancy hybrid automatic repeat request (IR-HARQ) schemes. The rate compatibility of our scheme is ensured by both puncturing and extending of the code. A new puncturing algorithm for polar codes is proposed, and we develop an algorithm for finding good extending sequences for polar codes from any arbitrary punctured rate, with the goal of improving the throughput as much as possible. Simulation results for different types of puncturing and extending algorithms are presented. We show how the proposed extending algorithm, when properly operated with a good puncturing algorithm and a well-chosen puncturing rate, yields IR-HARQ coding schemes which can operate within 1 dB of Shannon capacity over a very wide range of signal-to-noise ratios.
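
For reference, the two primitives the scheme combines, the polar transform and puncturing, can be sketched as follows; these are generic textbook versions, and the puncturing pattern shown is hypothetical, not the paper's optimized sequence:

```python
def polar_transform(u):
    """Arikan's polar transform over GF(2): multiplies u by the m-fold
    Kronecker power of F = [[1,0],[1,1]] via the usual butterfly
    network; len(u) must be a power of two."""
    x = list(u)
    n, step = len(x), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x

def puncture(codeword, punct_positions):
    """Rate-compatible puncturing: simply do not transmit the coded bits
    at punct_positions; the decoder treats them as erasures. Extension
    (the IR-HARQ retransmission) sends additional coded bits later."""
    return [b for i, b in enumerate(codeword) if i not in punct_positions]

cw = polar_transform([0, 1, 0, 1, 1, 0, 0, 1])
sent = puncture(cw, {0, 3})   # hypothetical puncturing pattern
```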

Journal ArticleDOI
TL;DR: A new error correction method for quantum computing memories, based on local computing elements, relies on parallel cellular operations within the topological quantum memory, so that local operations replace the need for a complex system-wide scheme.
Abstract: We introduce a new framework for constructing topological quantum memories, by recasting error recovery as a dynamical process on a field generating cellular automaton. We envisage quantum systems controlled by classical hardware composed of small local memories, communicating with neighbours and repeatedly performing identical simple update rules. This approach does not require any global operations or complex decoding algorithms. Our cellular automata draw inspiration from classical field theories, with a Coulomb-like potential naturally emerging from the local dynamics. For a 3D automaton coupled to a 2D toric code, we present evidence of an error correction threshold above 6.1% for uncorrelated noise. A 2D automaton equipped with a more complex update rule yields a threshold above 8.2%. Our framework provides decisive new tools in the quest for realising a passive dissipative quantum memory. A new error correction method for quantum computing memories is based on local computing elements. Michael Herold from the Freie Universität Berlin in Germany, with colleagues in Germany, Denmark and the UK, sought to address the challenge of maintaining information stored in topological quantum memories. Without a stable memory, delicate quantum states can decay quickly, introducing errors in stored information. Error correction is an important process in stabilizing topological memories, but was previously conceived as a system-wide process. The proposed practical error correction mechanism relies on parallel cellular operations within the topological quantum memory, so that the local operations replace the need for a complex system-wide scheme. The concept has the further benefit of being compatible with classical hardware, and it is easily scalable.

Patent
17 Aug 2015
TL;DR: In this article, a memory system includes a link having at least one signal line and a controller, and a first error protection generator coupled to the transmitter, which dynamically adds an error detection code to at least a portion of the first data.
Abstract: A memory system includes a link having at least one signal line and a controller. The controller includes at least one transmitter coupled to the link to transmit first data, and a first error protection generator coupled to the transmitter. The first error protection generator dynamically adds an error detection code to at least a portion of the first data. At least one receiver is coupled to the link to receive second data. A first error detection logic determines if the second data received by the controller contains at least one error and, if an error is detected, asserts a first error condition. The system includes a memory device having at least one memory device transmitter coupled to the link to transmit the second data. A second error protection generator coupled to the memory device transmitter dynamically adds an error detection code to at least a portion of the second data.
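
The core mechanism claimed, dynamically appending an error detection code at the transmitter and asserting an error condition at the receiver, can be sketched as follows; CRC-32 is used as a stand-in, since the patent does not mandate a specific code:

```python
import zlib

def add_error_detection(payload: bytes) -> bytes:
    """Transmitter side: dynamically append an error detection code
    (here CRC-32) to the outgoing data."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check(frame: bytes) -> bytes:
    """Receiver side: recompute the code and assert an error condition
    (here an exception) on mismatch, e.g. to trigger a retry."""
    payload, received = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != received:
        raise ValueError("error condition asserted")
    return payload

frame = add_error_detection(b"write burst")
assert check(frame) == b"write burst"
```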

Journal ArticleDOI
TL;DR: In this paper, a localized, robust and efficient a-posteriori error estimation of the localized reduced basis multi-scale (LRBMS) method for parametric elliptic problems with possibly heterogeneous diffusion coefficient is considered.
Abstract: In this contribution we consider localized, robust and efficient a-posteriori error estimation of the localized reduced basis multi-scale (LRBMS) method for parametric elliptic problems with possibly heterogeneous diffusion coefficient. The numerical treatment of such parametric multi-scale problems is characterized by a high computational complexity, arising from the multi-scale character of the underlying differential equation and the additional parameter dependence. The LRBMS method can be seen as a combination of numerical multi-scale methods and model reduction using reduced basis (RB) methods to efficiently reduce the computational complexity with respect to the multi-scale as well as the parametric aspect of the problem, simultaneously. In contrast to the classical residual based error estimators currently used in RB methods, we are considering error estimators that are based on conservative flux reconstruction and provide an efficient and rigorous bound on the full error with respect to the weak solution. In addition, the resulting error estimator is localized and can thus be used in the on-line phase to adaptively enrich the solution space locally where needed. The resulting certified LRBMS method with adaptive on-line enrichment thus guarantees the quality of the reduced solution during the on-line phase, given any (possibly insufficient) reduced basis that was generated during the offline phase. Numerical experiments are given to demonstrate the applicability of the resulting algorithm with online enrichment to single phase flow in heterogeneous media.

Journal ArticleDOI
TL;DR: Trivial solutions for error-correcting and authenticating data streams either suffer from a long delay at the receiver’s end or cannot perform well when the communication channel is noisy.
Abstract: Error correction and message authentication are well studied in the literature, and various efficient solutions have been suggested and analyzed. This is however not the case for data streams in which the message is very long, possibly infinite, and not known in advance to the sender. Trivial solutions for error-correcting and authenticating data streams either suffer from a long delay at the receiver’s end or cannot perform well when the communication channel is noisy.

Proceedings ArticleDOI
24 Aug 2015
TL;DR: The benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay are explored, and numerical results help show the gains and trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate.
Abstract: Reducing the in-order delivery, or playback, delay of reliable transport layer protocols over error prone networks can significantly improve application layer performance. This is especially true for applications that have time sensitive constraints such as streaming services. We explore the benefits of a coded generalization of selective repeat ARQ for minimizing the in-order delivery delay. An analysis of the delay's first two moments is provided so that we can determine when and how much redundancy should be added to meet a user's requirements. Numerical results help show the gains over selective repeat ARQ, as well as the trade-offs between meeting the user's delay constraints and the costs inflicted on the achievable rate. Finally, the analysis is compared with experimental results to help illustrate how our work can be used to help inform system decisions.
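
A minimal sketch of the idea of coding within an ARQ stream (a toy single-parity version, far simpler than the scheme analyzed in the paper): after every k data packets the sender injects one XOR parity packet, so a single loss in the window is repaired in place without waiting a round-trip for a retransmission.

```python
import functools

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def code_stream(packets, k=4):
    """Insert one XOR parity packet after every k equal-length data
    packets; a receiver rebuilds one lost packet per window as the
    XOR of the packets it did receive."""
    out = []
    for i in range(0, len(packets), k):
        window = packets[i:i + k]
        out.extend(window)
        out.append(functools.reduce(xor, window))
    return out
```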

Journal ArticleDOI
01 Nov 2015
TL;DR: Numerical analysis provides us with usable checking computations for the solution of initial-value problems in ODEs and PDEs, arguably the most common problems in computational science.
Abstract: Errors due to hardware or low-level software problems, if detected, can be fixed by various schemes, such as recomputation from a checkpoint. Silent errors are errors in application state that have escaped low-level error detection. At extreme scale, where machines can perform astronomically many operations per second, silent errors threaten the validity of computed results. We propose a new paradigm for detecting silent errors at the application level. Our central idea is to frequently compare computed values to those provided by a cheap checking computation, and to build error detectors based on the difference between the two output sequences. Numerical analysis provides us with usable checking computations for the solution of initial-value problems in ODEs and PDEs, arguably the most common problems in computational science. Here, we provide, optimize, and test methods based on Runge-Kutta and linear multistep methods for ODEs, and on implicit and explicit finite difference schemes for PDEs. We take the heat equation and Navier-Stokes equations as examples. In tests with artificially injected errors, this approach effectively detects almost all meaningful errors, without significant slowdown.
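
A hedged sketch of this paradigm for a scalar ODE: the main integrator is classical RK4, and the checking computation is a cheap trapezoidal step reusing the main step's endpoint, one illustrative choice among the Runge-Kutta and multistep variants the paper optimizes.

```python
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def detect_silent_errors(f, y0, t0, t1, h, tol):
    """Run the main integrator and a cheap checking computation side by
    side; flag a silent error whenever they disagree by more than `tol`,
    chosen above the expected truncation gap between the two methods."""
    t, y = t0, y0
    while t < t1:
        y_main = rk4_step(f, t, y, h)
        y_check = y + h/2 * (f(t, y) + f(t + h, y_main))  # cheap checker
        if abs(y_main - y_check) > tol:
            print(f"possible silent error near t={t + h:.3f}")
        t, y = t + h, y_main

# example: dy/dt = -y, exact solution exp(-t); no errors injected here
detect_silent_errors(lambda t, y: -y, 1.0, 0.0, 1.0, 0.01, 1e-4)
```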

Journal ArticleDOI
TL;DR: In this paper, the authors explore a variety of techniques for leakage-resilient, fault-tolerant error correction in topological codes and present a leakage model that is physically motivated and efficient to simulate.
Abstract: Quantum codes excel at correcting local noise but fail to correct leakage faults that excite qubits to states outside the computational space. Aliferis and Terhal [1] have shown that an accuracy threshold exists for leakage faults using gadgets called leakage reduction units (LRUs). However, these gadgets reduce the accuracy threshold and increase overhead and experimental complexity, and these costs have not been thoroughly understood. We explore a variety of techniques for leakage-resilient, fault-tolerant error correction in topological codes. Our contributions are threefold. First, we develop a leakage model that is physically motivated and efficient to simulate. Second, we use Monte-Carlo simulations to survey several syndrome extraction circuits. Third, given the capability to perform 3-outcome measurements, we present a dramatically improved syndrome processing algorithm. Our simulations show that simple circuits with one extra CNOT per check operator and no additional ancillas reduce the accuracy threshold by less than a factor of 4 when leakage and depolarizing noise rates are comparable. This becomes a factor of 2 when the decoder uses 3-outcome measurements. Finally, when the physical error rate is less than 2 × 10⁻⁴, placing LRUs after every gate may achieve the lowest logical error rates of all of the circuits we considered. We anticipate that the closely related planar codes might exhibit the same accuracy thresholds and that the ideas may generalize naturally to other topological codes.

Journal ArticleDOI
TL;DR: New codes that can correct triple adjacent errors and 3-bit burst errors are presented and have been implemented using a 45-nm library and compared with previous proposals, showing that these codes have better error protection with a moderate overhead and low redundancy.
Abstract: Static random access memories (SRAMs) are key in electronic systems. They are used not only as standalone devices, but also embedded in application specific integrated circuits. One key challenge for memories is their susceptibility to radiation-induced soft errors that change the value of memory cells. Error correction codes (ECCs) are commonly used to ensure correct data despite soft errors effects in semiconductor memories. Single error correction/double error detection (SEC-DED) codes have been traditionally the preferred choice for data protection in SRAMs. During the last decade, the percentage of errors that affect more than one memory cell has increased substantially, mainly due to multiple cell upsets (MCUs) caused by radiation. The bits affected by these errors are physically close. To mitigate their effects, ECCs that correct single errors and double adjacent errors have been proposed. These codes, known as single error correction/double adjacent error correction (SEC-DAEC), require the same number of parity bits as traditional SEC-DED codes and a moderate increase in the decoder complexity. However, MCUs are not limited to double adjacent errors, because they affect more bits as technology scales. In this brief, new codes that can correct triple adjacent errors and 3-bit burst errors are presented. They have been implemented using a 45-nm library and compared with previous proposals, showing that our codes have better error protection with a moderate overhead and low redundancy.
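
For context, the traditional SEC-DED baseline these codes extend can be sketched as an extended Hamming code: the (7,4) code plus one overall parity bit, which separates correctable single errors from detectable-only double errors (a textbook sketch, not the proposed SEC-DAEC/TAEC construction):

```python
def secded_encode(d):                 # d = 4 data bits
    p1, p2, p3 = d[0]^d[1]^d[3], d[0]^d[2]^d[3], d[1]^d[2]^d[3]
    cw = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return cw + [sum(cw) % 2]         # overall parity bit -> DED capability

def secded_decode(c):
    cw, overall = c[:7], c[7]
    s = (cw[0]^cw[2]^cw[4]^cw[6]) \
        + 2*(cw[1]^cw[2]^cw[5]^cw[6]) \
        + 4*(cw[3]^cw[4]^cw[5]^cw[6])
    if s and (sum(cw) % 2) == overall:
        raise ValueError("double error detected")   # DED, no correction
    if s:
        cw[s - 1] ^= 1                              # SEC: fix single error
    return [cw[2], cw[4], cw[5], cw[6]]

c = secded_encode([1, 0, 1, 1])
c[1] ^= 1; c[4] ^= 1                  # two upsets in one word
try:
    secded_decode(c)
except ValueError as e:
    print(e)                          # -> double error detected
```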

Patent
Yoshimichi Tanizawa1
13 Feb 2015
TL;DR: In this paper, a quantum key distribution device includes a key sharing unit, a correcting unit, a compressor, and a controller; the correcting unit is configured to generate a corrected bit string through an error correction process with respect to the shared bit string.
Abstract: According to an embodiment, a quantum key distribution device includes a key sharing unit, a correcting unit, a compressor, and a controller. The key sharing unit is configured to generate a shared bit string by using quantum key distribution performed with another quantum key distribution device via a quantum communication channel. The correcting unit is configured to generate a corrected bit string through an error correction process with respect to the shared bit string. The compressor is configured to generate an encryption key through a key compression process with respect to the corrected bit string. The controller is configured to perform a restraining operation in which the total number of bits of encryption keys generated per unit time by the compressor is smaller than the total number of bits of the encryption keys generated per unit time by the compressor in the case of not performing the restraining operation.

Journal ArticleDOI
TL;DR: A novel differential spatial modulation scheme for amplitude phase shift keying (APSK) modulation is developed, which can either improve throughput or performance over DSM for PSK, and the impact of time-varying fading on DSM is investigated.
Abstract: We develop a novel differential spatial modulation (DSM) scheme for amplitude phase shift keying (APSK) modulation, which can either improve throughput or performance over DSM for PSK. Then we investigate the impact of time-varying fading on DSM. We find performance degrades if the fading is too fast due to differential detection. The impact of a long outer error control code (ECC) is also considered. Its performance is limited by the slowly varying channel required for differential detection. We consider using reconfigurable antennas to periodically change the channel conditions and hence significantly improve coded performance for DSM systems.

Journal ArticleDOI
Sunghwan Kim1
TL;DR: An adaptive forward error correction scheme based on low-density parity-check (LDPC) codes is proposed to efficiently adjust dimming values in visible light communication systems, with significant advantages.
Abstract: In this letter, we propose an adaptive forward error correction scheme based on low-density parity-check (LDPC) codes to efficiently adjust dimming values in visible light communication (VLC) systems. The proposed code is a quasi-cyclic LDPC code, in which extension and puncturing methods are used to maintain dimming control. The significant advantages of the proposed method are due to the lower number of codes required to support codes with different coding rates and the relatively small performance degradation for dimming control. Simulation results show that our proposed method efficiently maintains dimming control in VLC systems with LDPC codes.

Patent
21 Jul 2015
TL;DR: In this paper, the authors used combinations of various methods, including transmitting data symbols by weighing or modulating a family of time shifted and frequency shifted waveforms bursts, pilot symbol methods, error detection methods, MIMO methods, and other methods, to automatically determine the structure of a data channel, and automatically compensate for signal distortions caused by various structural aspects of the data channel.
Abstract: Computerized wireless transmitter/receiver system that automatically uses combinations of various methods, including transmitting data symbols by weighting or modulating a family of time shifted and frequency shifted waveform bursts, pilot symbol methods, error detection methods, MIMO methods, and other methods, to automatically determine the structure of a data channel, and automatically compensate for signal distortions caused by various structural aspects of the data channel, as well as changes in channel structure. Often the data channel is a two or three dimensional space in which various wireless transmitters, receivers and signal reflectors are moving. The invention's modulation methods detect locations and speeds of various reflectors and other channel impairments. Error detection schemes, variation of modulation methods, and MIMO techniques further detect and compensate for impairments. The invention can automatically optimize its operational parameters, and produce a deterministic non-fading signal in environments in which other methods would likely degrade.

Journal ArticleDOI
TL;DR: A novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers is proposed.
Abstract: Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models (ROMs) can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based ROMs, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full-order model is recovered to within discretization error. A parallel version of the resulting method can be used on supercomputers to generate proper orthogonal decomposition-based ROMs, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
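
A hedged sketch of the accept/reject logic follows; it uses a greedy Gram-Schmidt criterion as a stand-in for the paper's incremental-SVD machinery, and the tolerance value is illustrative:

```python
import numpy as np

def select_snapshots(states, tol=1e-3):
    """Error-controlled snapshot selection sketch: keep a snapshot only
    when its projection error onto the span of the snapshots kept so far
    exceeds `tol`, mimicking the accept/reject logic of an adaptive ODE
    time-stepper."""
    basis = []                        # orthonormal vectors kept so far
    kept = []
    for i, u in enumerate(states):
        r = u.copy()
        for q in basis:
            r -= (q @ u) * q          # project out the current basis
        err = np.linalg.norm(r) / max(np.linalg.norm(u), 1e-30)
        if err > tol:
            basis.append(r / np.linalg.norm(r))
            kept.append(i)
    return kept
```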

Proceedings ArticleDOI
15 Jul 2015
TL;DR: In this method, an individual Luenberger observer is designed from each sensor, and the state estimates from each of the observers are combined through a scheme motivated by error correction techniques, which results in estimation resiliency against sensor attacks under a mild condition on the system observability.
Abstract: This paper presents a secure and robust state estimation scheme for continuous-time linear dynamical systems. The method is secure in that it correctly estimates the states under sensor attacks by exploiting sensing redundancy, and it is robust in that it guarantees a bounded estimation error despite measurement noises and process disturbances. In this method, an individual Luenberger observer (of possibly smaller size) is designed from each sensor. Then, the state estimates from each of the observers are combined through a scheme motivated by error correction techniques, which results in estimation resiliency against sensor attacks under a mild condition on the system observability. Moreover, in the state estimates combining stage, our method reduces the search space of a minimization problem to a finite set, which substantially reduces the required computational effort.
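
The combining stage can be caricatured with a coordinate-wise median, which already exhibits the resiliency property; note this is a simplification, as the paper's actual scheme reduces a minimization problem to a finite candidate set rather than taking a median:

```python
import numpy as np

def combine_estimates(estimates):
    """With enough sensing redundancy, a coordinate-wise median over the
    per-sensor observer estimates discards values corrupted by a
    minority of attacked sensors."""
    return np.median(np.stack(estimates), axis=0)

# three honest observers near the true state, one attacked sensor
est = [np.array([1.0, 2.0]), np.array([1.1, 1.9]),
       np.array([0.9, 2.1]), np.array([50.0, -40.0])]
print(combine_estimates(est))   # stays near [1, 2]
```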

Journal ArticleDOI
TL;DR: Owing to its significantly increased parallelism, the proposed algorithm facilitates throughputs and latencies that are up to 6.86 times superior to those of the state-of-the-art algorithm, when employed for the LTE and WiMAX turbo codes, but at the cost of a moderately increased computational complexity and resource requirement.
Abstract: This paper proposes a novel alternative to the Logarithmic Bahl-Cocke-Jelinek-Raviv (Log-BCJR) algorithm for turbo decoding, yielding significantly improved processing throughput and latency. While the Log-BCJR processes turbo-encoded bits in a serial forwards-backwards manner, the proposed algorithm operates in a fully-parallel manner, processing all bits in both components of the turbo code at the same time. The proposed algorithm is compatible with all turbo codes, including those of the LTE and WiMAX standards. These standardized codes employ odd-even interleavers, facilitating a novel technique for reducing the complexity of the proposed algorithm by 50%. More specifically, odd-even interleavers allow the proposed algorithm to alternate between processing the odd-indexed bits of the first component code at the same time as the even-indexed bits of the second component, and vice-versa. Furthermore, the proposed fully-parallel algorithm is shown to converge to the same error correction performance as the state-of-the-art turbo decoding algorithm. Owing to its significantly increased parallelism, the proposed algorithm facilitates throughputs and latencies that are up to 6.86 times superior to those of the state-of-the-art algorithm, when employed for the LTE and WiMAX turbo codes. However, this is achieved at the cost of a moderately increased computational complexity and resource requirement.

Journal ArticleDOI
TL;DR: This work develops a fault-tolerant decoder for the surface code, capable of efficient operation for qubits and qudits of any dimension, generalizing the decoder first introduced by Bravyi and Haah.
Abstract: The surface code is one of the most promising candidates for combating errors in large scale fault-tolerant quantum computation. A fault-tolerant decoder is a vital part of the error correction process---it is the algorithm which computes the operations needed to correct or compensate for the errors according to the measured syndrome, even when the measurement itself is error prone. Previously decoders based on minimum-weight perfect matching have been studied. However, these are not immediately generalizable from qubit to qudit codes. In this work, we develop a fault-tolerant decoder for the surface code, capable of efficient operation for qubits and qudits of any dimension, generalizing the decoder first introduced by Bravyi and Haah [Phys. Rev. Lett. 111, 200501 (2013)]. We study its performance when both the physical qudits and the syndrome measurements are subject to generalized uncorrelated bit-flip noise (and the higher-dimensional equivalent). We show that, with appropriate enhancements to the decoder and a high enough qudit dimension, a threshold at an error rate of more than 8% can be achieved.

Journal ArticleDOI
TL;DR: Pollux is a general-purpose error corrector that corrects errors introduced by Illumina, Ion Torrent, and Roche 454 sequencing technologies and can be applied to single- or mixed-genome data.
Abstract: Second-generation sequencers generate millions of relatively short, but error-prone, reads. These errors make sequence assembly and other downstream projects more challenging. Correcting these errors improves the quality of assemblies and projects which benefit from error-free reads. We have developed a general-purpose error corrector that corrects errors introduced by Illumina, Ion Torrent, and Roche 454 sequencing technologies and can be applied to single- or mixed-genome data. In addition to correcting substitution errors, we locate and correct insertion, deletion, and homopolymer errors while remaining sensitive to low coverage areas of sequencing projects. Using published data sets, we correct 94% of Illumina MiSeq errors, 88% of Ion Torrent PGM errors, and 85% of Roche 454 GS Junior errors. Introduced errors are 20 to 70 times more rare than successfully corrected errors. Furthermore, we show that the quality of assemblies improves when reads are corrected by our software. Pollux is highly effective at correcting errors across platforms, and is consistently able to perform as well or better than currently available error correction software. Pollux provides general-purpose error correction and may be used in applications with or without assembly.
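
A toy k-mer-spectrum corrector conveys the substitution-correction part of the approach; the parameters are illustrative, and Pollux additionally handles insertions, deletions, and homopolymer errors:

```python
from collections import Counter

def correct_reads(reads, k=15, weak=2):
    """k-mers seen fewer than `weak` times are assumed erroneous; for an
    untrusted read, try single-base substitutions until every
    overlapping k-mer is trusted."""
    counts = Counter(r[i:i+k] for r in reads for i in range(len(r) - k + 1))
    def trusted(s):
        return all(counts[s[i:i+k]] >= weak for i in range(len(s) - k + 1))
    corrected = []
    for r in reads:
        if not trusted(r):
            for i in range(len(r)):
                for base in "ACGT":
                    cand = r[:i] + base + r[i+1:]
                    if cand != r and trusted(cand):
                        r = cand
                        break
                if trusted(r):
                    break
        corrected.append(r)
    return corrected
```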