
Showing papers on "Error detection and correction" published in 2018


Journal ArticleDOI
TL;DR: In this paper, a joint source and channel coding (JSCC) technique was proposed for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols.
Abstract: We propose a joint source and channel coding (JSCC) technique for wireless image transmission that does not rely on explicit codes for either compression or error correction; instead, it directly maps the image pixel values to the complex-valued channel input symbols. We parameterize the encoder and decoder functions by two convolutional neural networks (CNNs), which are trained jointly, and can be considered as an autoencoder with a non-trainable layer in the middle that represents the noisy communication channel. Our results show that the proposed deep JSCC scheme outperforms digital transmission concatenating JPEG or JPEG2000 compression with a capacity-achieving channel code at low signal-to-noise ratio (SNR) and channel bandwidth values in the presence of additive white Gaussian noise (AWGN). More strikingly, deep JSCC does not suffer from the "cliff effect", and it provides a graceful performance degradation as the channel SNR varies with respect to the SNR value assumed during training. In the case of a slow Rayleigh fading channel, deep JSCC learns noise resilient coded representations and significantly outperforms separation-based digital communication at all SNR and channel bandwidth values.
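
The non-trainable channel layer described above can be sketched in a few lines. Below is a minimal illustration (assuming PyTorch; the layer sizes, power normalization, and the 10 dB training SNR are illustrative choices, not the architecture reported in the paper): an encoder CNN maps the image to channel symbols, an AWGN layer perturbs them, and a decoder CNN reconstructs the image, with the whole chain trained end-to-end on a reconstruction loss.

import torch
import torch.nn as nn

class AWGNChannel(nn.Module):
    """Non-trainable layer: normalize to unit average power, then add Gaussian noise."""
    def __init__(self, snr_db):
        super().__init__()
        self.snr_db = snr_db
    def forward(self, z):
        z = z / z.pow(2).mean().sqrt()
        return z + 10 ** (-self.snr_db / 20) * torch.randn_like(z)

class DeepJSCC(nn.Module):
    def __init__(self, c=16, snr_db=10.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(32, c, 5, stride=2, padding=2))
        self.channel = AWGNChannel(snr_db)      # noisy, non-trainable middle layer
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(c, 32, 5, stride=2, padding=2, output_padding=1), nn.PReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid())
    def forward(self, img):
        return self.decoder(self.channel(self.encoder(img)))

model = DeepJSCC()
img = torch.rand(1, 3, 32, 32)
loss = nn.functional.mse_loss(model(img), img)   # trained end-to-end on reconstruction error
loss.backward()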

246 citations


Journal ArticleDOI
TL;DR: It is shown that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors, and that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
Abstract: We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
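
The biased noise model is simple to state in code. A small sketch (numpy assumed; the bias convention eta = pZ/(pX + pY) follows the description above, so eta = 1/2 recovers depolarizing noise and eta -> infinity is pure dephasing):

import numpy as np

def sample_biased_pauli(n_qubits, p, eta, seed=0):
    """Sample an n-qubit Pauli error with total per-qubit error rate p and bias
    eta = pZ / (pX + pY); eta = 1/2 is depolarizing, eta -> infinity is pure dephasing."""
    pZ = p * eta / (eta + 1.0)
    pX = pY = p / (2.0 * (eta + 1.0))
    rng = np.random.default_rng(seed)
    return rng.choice(["I", "X", "Y", "Z"], size=n_qubits, p=[1.0 - p, pX, pY, pZ])

print(sample_biased_pauli(10, p=0.3, eta=10.0))   # mostly Z errors at bias 10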

178 citations


Proceedings ArticleDOI
17 Jun 2018
TL;DR: In this paper, the authors proposed a scheme that, in addition to using error correction to distribute mixed jobs across nodes, is also able to exploit the work completed by all nodes, including stragglers.
Abstract: In cloud computing systems, slow processing nodes, often referred to as “stragglers”, can significantly extend the computation time. Recent results have shown that error correction coding can be used to reduce the effect of stragglers. In this work we introduce a scheme that, in addition to using error correction to distribute mixed jobs across nodes, is also able to exploit the work completed by all nodes, including stragglers. We first consider vector-matrix multiplication and apply maximum distance separable (MDS) codes to small blocks of sub-matrices. The worker nodes process blocks sequentially, working block-by-block, transmitting partial per-block results to the master as they are completed. Sub-blocking allows a more continuous completion process, which thereby allows us to exploit the work of a much broader spectrum of processors and reduces computation time. We then apply this technique to matrix-matrix multiplication using product codes. In this case, we show that the order of computing sub-tasks is a new degree of design freedom that can be exploited to reduce computation time further. We propose a novel approach to analyze the finishing time, which is different from typical order statistics. Simulation results show that the expected computation time decreases by a factor of at least two compared to previous methods.
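
The underlying MDS idea, recovering the full product from any k of n worker results, can be sketched as follows (numpy assumed; a Vandermonde matrix stands in for a generic MDS code, and the block-by-block transmission of partial results that lets the scheme exploit stragglers is not shown):

import numpy as np

rng = np.random.default_rng(0)
m, d, k, n = 8, 6, 4, 6                 # k data blocks encoded into n >= k coded blocks
A = rng.standard_normal((m, d))
x = rng.standard_normal(d)

blocks = np.split(A, k)                                    # k row-blocks of A
V = np.vander(np.arange(1.0, n + 1), k, increasing=True)   # n x k Vandermonde (MDS) encoder

# Worker i holds the coded block sum_j V[i, j] * blocks[j] and returns its product with x
partials = {i: sum(V[i, j] * blocks[j] for j in range(k)) @ x for i in range(n)}

done = [0, 2, 3, 5]                      # any k finished workers suffice; the rest straggle
Y = np.stack([partials[i] for i in done])
B = np.linalg.solve(V[done, :], Y)       # rows of B are blocks[j] @ x
assert np.allclose(B.reshape(-1), A @ x)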

120 citations


Journal ArticleDOI
22 Mar 2018
TL;DR: A study of implicit sensorimotor adaptation using task-irrelevant clamped visual feedback finds that learning is constrained primarily by the size of the error correction rather than sensitivity to error, which presents a challenge to current models of adaptation.
Abstract: Implicit sensorimotor adaptation is traditionally described as a process of error reduction, whereby a fraction of the error is corrected for with each movement. Here, in our study of healthy human participants, we characterize two constraints on this learning process: the size of adaptive corrections is only related to error size when errors are smaller than 6°, and learning functions converge to a similar level of asymptotic learning over a wide range of error sizes. These findings are problematic for current models of sensorimotor adaptation, and point to a new theoretical perspective in which learning is constrained by the size of the error correction, rather than sensitivity to error.
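
A minimal trial-by-trial sketch of the constraint described above (all parameters are illustrative assumptions, not fits from the paper): with clamped visual feedback the per-trial correction saturates above roughly 6 degrees, so sufficiently large errors drive adaptation to a common asymptote.

import numpy as np

retention, cap_deg, rate = 0.98, 6.0, 0.05   # illustrative retention, saturation point, learning rate

def simulate(clamp_deg, n_trials=400):
    """Hand-angle adaptation to a fixed (behaviour-independent) visual error clamp."""
    x, trace = 0.0, []
    for _ in range(n_trials):
        correction = rate * np.sign(clamp_deg) * min(abs(clamp_deg), cap_deg)
        x = retention * x + correction       # state-space model with saturated error correction
        trace.append(x)
    return np.array(trace)

for clamp in (2.0, 8.0, 30.0):
    print(clamp, round(simulate(clamp)[-1], 2))   # the 8 and 30 degree clamps reach a similar asymptote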

116 citations


Journal ArticleDOI
TL;DR: This research proposes an innovative hybrid model based on optimal feature extraction, a deep learning algorithm, and an error correction strategy for multi-step wind speed prediction that consistently has the smallest statistical errors and outperforms other benchmark methods.

109 citations


Journal ArticleDOI
TL;DR: Three experimental results show that: the error correction is effective in decreasing the prediction error, and the proposed models with error correction are suitable for short-term wind speed forecasting; the ICEEMDAN method is more powerful than other variants of empirical mode decomposition in performing non-stationary decomposition, and

106 citations


Journal ArticleDOI
TL;DR: In this article, a generalized successive cancellation flip (SCFlip) decoding of polar codes is proposed, where one or several positions are flipped from the standard SC decoding to correct the trajectory of the SC decoding.
Abstract: This paper proposes a generalization of the recently introduced successive cancellation flip (SCFlip) decoding of polar codes, characterized by a number of extra decoding attempts, where one or several positions are flipped from the standard SC decoding. To make such an approach effective, we first introduce the concept of higher order bit flips and propose a new metric to determine the bit flips that are more likely to correct the trajectory of the SC decoding. We then propose a generalized SCFlip decoding algorithm, referred to as dynamic-SCFlip (D-SCFlip), which dynamically builds a list of candidate bit flips, while guaranteeing that the next attempt has the highest probability of success among the remaining ones. Simulation results show that D-SCFlip is an effective alternative to SC-list decoding of polar codes, by providing very good error correcting performance, with an average computational complexity close to that of the SC decoder.

103 citations


Proceedings ArticleDOI
01 Feb 2018
TL;DR: A new error correction scheme for analog neural network accelerators based on arithmetic codes is examined; it encodes the data through multiplication by an integer, which preserves addition operations through the distributive property, and reduces the respective misclassification rates by 1.5x and 1.1x.
Abstract: Deep neural networks (DNNs) have attracted substantial interest in recent years due to their superior performance on many classification and regression tasks as compared to other supervised learning models. DNNs often require a large amount of data movement, resulting in performance and energy overheads. One promising way to address this problem is to design an accelerator based on in-situ analog computing that leverages the fundamental electrical properties of memristive circuits to perform matrix-vector multiplication. Recent work on analog neural network accelerators has shown great potential in improving both the system performance and the energy efficiency. However, detecting and correcting the errors that occur during in-memory analog computation remains largely unexplored. The same electrical properties that provide the performance and energy improvements make these systems especially susceptible to errors, which can severely hurt the accuracy of the neural network accelerators. This paper examines a new error correction scheme for analog neural network accelerators based on arithmetic codes. The proposed scheme encodes the data through multiplication by an integer, which preserves addition operations through the distributive property. Error detection and correction are performed through a modulus operation and a correction table lookup. This basic scheme is further improved by data-aware encoding to exploit the state dependence of the errors, and by knowledge of how critical each portion of the computation is to overall system accuracy. By leveraging the observation that a physical row that contains fewer 1s is less susceptible to an error, the proposed scheme increases the effective error correction capability with less than 4.5% area and less than 4.7% energy overheads. When applied to a memristive DNN accelerator performing inference on the MNIST and ILSVRC-2012 datasets, the proposed technique reduces the respective misclassification rates by 1.5x and 1.1x.
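
The arithmetic (AN) coding idea described above, encoding by multiplication with an integer so that addition is preserved and a nonzero residue flags an error, can be sketched in a few lines (the parameters A = 19 and 9-bit codewords are illustrative assumptions chosen so that every single bit flip leaves a unique residue; they are not the constants used in the paper):

A, NBITS = 19, 9        # illustrative choice: 9-bit codewords, data values up to (2**NBITS - 1) // A

# For single-flip correction, every arithmetic error +/- 2^i must leave a unique nonzero residue mod A
syndromes = {}
for i in range(NBITS):
    for sign in (+1, -1):
        s = (sign * (1 << i)) % A
        assert s != 0 and s not in syndromes, "these A/NBITS cannot correct single bit flips"
        syndromes[s] = sign * (1 << i)

def encode(x):
    return A * x                         # multiplication by A; a sum of codewords is again a codeword

def decode(c):
    s = c % A
    if s:                                # nonzero residue: locate and undo the single bit flip
        c -= syndromes[s]
    return c // A

x, y = 7, 11
assert decode(encode(x) + encode(y)) == x + y          # the code commutes with addition
assert decode(encode(x) ^ (1 << 4)) == x               # a flipped codeword bit is corrected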

102 citations


Journal ArticleDOI
08 Feb 2018
TL;DR: In this paper, an example fault-tolerant error correction (EC) protocol using flag qubits is presented; it is applicable to stabilizer codes of arbitrary distance that satisfy a set of conditions and uses fewer qubits than other schemes, such as Shor, Steane, and Knill error correction.
Abstract: Fault-tolerant error correction (EC) is desirable for performing large quantum computations. In this work, example fault-tolerant EC protocols are presented that use flag circuits, which signal when errors resulting from v faults have weight greater than v. General constructions for these circuits (also referred to as flag qubits) for measuring arbitrary-weight stabilizers are also given. The example flag EC protocol is applicable to stabilizer codes of arbitrary distance that satisfy a set of conditions and uses fewer qubits than other schemes, such as Shor, Steane, and Knill error correction. Examples of infinite code families that satisfy these conditions are given, and the behaviour of distance-three and distance-five examples is analyzed numerically. Using fewer resources than Shor EC, the example flag EC protocols can be used in low-overhead fault-tolerant EC protocols with large low-density parity-check quantum codes.

100 citations


Journal ArticleDOI
Nan Chi, Yingjun Zhou, Shangyu Liang, Fumin Wang, Jiehui Li, Yiguang Wang
TL;DR: This work successfully implements a high-speed CAP32, CAP64, and CAP128 VLC experimental system over 1-m free space transmission with a bit error rate under the hard-decision forward error correction threshold of 3.8 × 10⁻³.
Abstract: High-speed light emitting diode (LED) based visible light communication (VLC) systems are restricted by the limited LED bandwidth, low detector sensitivity, and linear and nonlinear distortions. Thus, single- and two-cascaded constant-resistance symmetrical bridged-T amplitude hardware pre-equalizers, carrierless amplitude and phase (CAP) modulation, and a three-stage hybrid postequalizer are investigated to increase the transmission data rate of a high-speed LED-based VLC system. The schemes utilized by the hybrid postequalizer are a modified cascaded multimodulus algorithm, a Volterra series based nonlinear compensation algorithm, and a decision-directed least mean square algorithm. With these technologies, we successfully implement high-speed CAP32, CAP64, and CAP128 VLC experimental systems over 1-m free space transmission with bit error rates under the hard-decision forward error correction threshold of 3.8 × 10⁻³. The system performance improvement achieved by these key technologies is also validated through experimental demonstration. System performance improves with the superposition of each stage of the hybrid postequalizer, and the two-cascaded hardware pre-equalizer provides more accurate channel compensation than a single pre-equalizer.
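
Of the three post-equalizer stages listed above, the decision-directed LMS stage is the most generic; a stand-alone sketch of DD-LMS on a toy QPSK-like signal is shown below (numpy assumed; the channel taps, step size, and the short training preamble used to initialize the taps are illustrative, and this is not the authors' CAP-specific hybrid equalizer):

import numpy as np

rng = np.random.default_rng(1)
sym = (np.array([1, -1])[rng.integers(0, 2, 4000)]
       + 1j * np.array([1, -1])[rng.integers(0, 2, 4000)]) / np.sqrt(2)   # QPSK symbols
channel = np.array([1.0, 0.35 + 0.2j, 0.1])                               # short ISI channel (assumed)
rx = np.convolve(sym, channel)[:len(sym)]
rx += 0.02 * (rng.standard_normal(len(rx)) + 1j * rng.standard_normal(len(rx)))

ntaps, mu, ntrain = 7, 0.01, 500
w = np.zeros(ntaps, complex); w[0] = 1.0       # single-spike initialisation
out = np.zeros_like(rx)
for n in range(ntaps, len(rx)):
    x = rx[n - ntaps + 1:n + 1][::-1]          # newest sample first
    y = np.dot(w, x)
    ref = sym[n] if n < ntrain else (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)
    w += mu * (ref - y) * np.conj(x)           # training first, then decision-directed updates
    out[n] = y

print("post-equalizer MSE:", np.mean(np.abs(out[1000:] - sym[1000:]) ** 2))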

90 citations


Journal ArticleDOI
TL;DR: The results on the test set show that the correction, which accounts for the inherent errors of numerical techniques, can effectively integrate physical and statistical information and indeed enhance the forecast accuracy.

Journal ArticleDOI
TL;DR: This paper presents a 28-nm system-on-chip (SoC) for Internet of things (IoT) applications with a programmable accelerator design that implements a powerful fully connected deep neural network (DNN) classifier that exploits data sparsity by completely eliding unnecessary computation and data movement.
Abstract: This paper presents a 28-nm system-on-chip (SoC) for Internet of things (IoT) applications with a programmable accelerator design that implements a powerful fully connected deep neural network (DNN) classifier. To reach the required low energy consumption, we exploit the key properties of neural network algorithms: parallelism, data reuse, small/sparse data, and noise tolerance. We map the algorithm to a very large scale integration (VLSI) architecture based around a single-instruction, multiple-data data path with hardware support to exploit data sparsity by completely eliding unnecessary computation and data movement. This approach exploits sparsity without compromising the parallel computation. We also exploit the inherent algorithmic noise-tolerance of neural networks by introducing circuit-level timing violation detection to allow worst case voltage guard-bands to be minimized. The resulting intermittent timing violations may result in logic errors, which conventionally need to be corrected. However, in lieu of explicit error correction, we cope with this by accentuating the noise tolerance of neural networks. The measured test chip achieves high classification accuracy (98.36% for the MNIST test set), while tolerating aggregate timing violation rates > 10⁻¹. The accelerator achieves a minimum energy of 0.36 μJ/inference at 667 MHz; maximum throughput at 1.2 GHz and 0.57 μJ/inference; or a 10% margined operating point at 1 GHz and 0.58 μJ/inference.
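
The sparsity-eliding computation described above amounts to skipping every operand whose activation is zero. A functional sketch (numpy assumed; the hardware performs this per-operand skipping in the SIMD datapath, here it is only modelled in software):

import numpy as np

def sparsity_eliding_mv(W, a):
    """Matrix-vector product that touches only the columns with non-zero activations,
    modelling the computation and data movement elided by the hardware."""
    y = np.zeros(W.shape[0])
    for j in np.flatnonzero(a):
        y += W[:, j] * a[j]
    return y

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
a = rng.standard_normal(512) * (rng.random(512) < 0.1)     # ~90% of activations are zero
assert np.allclose(W @ a, sparsity_eliding_mv(W, a))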

Journal ArticleDOI
TL;DR: In this paper, a quasi-cyclic code construction for multi-edge LDPC codes was proposed for hardware-accelerated decoding on a graphics processing unit (GPU), achieving an information throughput of 7.16 Kbit/s on a single NVIDIA GeForce GTX 1080 GPU.
Abstract: The speed at which two remote parties can exchange secret keys in continuous-variable quantum key distribution (CV-QKD) is currently limited by the computational complexity of key reconciliation. Multi-dimensional reconciliation using multi-edge low-density parity-check (LDPC) codes with low code rates and long block lengths has been shown to improve error-correction performance and extend the maximum reconciliation distance. We introduce a quasi-cyclic code construction for multi-edge codes that is highly suitable for hardware-accelerated decoding on a graphics processing unit (GPU). When combined with an 8-dimensional reconciliation scheme, our LDPC decoder achieves an information throughput of 7.16 Kbit/s on a single NVIDIA GeForce GTX 1080 GPU, at a maximum distance of 142 km with a secret key rate of 6.64 × 10⁻⁸ bits/pulse for a rate 0.02 code with block length of 10⁶ bits. The LDPC codes presented in this work can be used to extend the previous maximum CV-QKD distance of 100 km to 142 km, while delivering up to 3.50× higher information throughput over the tight upper bound on secret key rate for a lossy channel. Improvements in the post-processing algorithms for quantum cryptography can extend the secure transmission distance by over 40%. Quantum key distribution protocols rely on the transmission of quantum states, but also on classical post-processing to eliminate errors introduced by imperfect equipment or the interference of an attacker. Over long distances, the requirements of this classical 'reconciliation' processing can become the bottleneck for key exchange. Mario Milicevic and colleagues from the University of Toronto and the University of British Columbia in Canada have developed a high-throughput error correction scheme that increases the potential operating range for quantum key distribution from 100 to 143 km. Their method is fast enough that the rate of key distribution is instead limited by the physical properties of the communication channel.
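
The quasi-cyclic structure means the parity-check matrix is specified by a small base matrix of circulant shifts, which is what makes the decoder regular enough for GPU parallelism. A toy expansion routine (numpy assumed; the base matrix and lifting size below are arbitrary illustrations, not the multi-edge codes constructed in the paper):

import numpy as np

def expand_qc(base, Z):
    """Expand a quasi-cyclic base matrix into a binary parity-check matrix.
    base[i][j] = -1 denotes an all-zero Z x Z block; otherwise a cyclic shift of I_Z."""
    I = np.eye(Z, dtype=np.uint8)
    return np.vstack([
        np.hstack([np.zeros((Z, Z), np.uint8) if s < 0 else np.roll(I, s, axis=1) for s in row])
        for row in base])

base = [[0, 1, -1, 2],        # tiny illustrative base matrix, not a code from the paper
        [3, -1, 0, 1]]
H = expand_qc(base, Z=8)
print(H.shape)                # (16, 32); the block structure maps naturally onto GPU threads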

Journal ArticleDOI
TL;DR: A soft-decision decoder for quantum error correction and detection by teleportation is proposed that can achieve almost optimal performance for the depolarizing channel and dramatically improve Knill's C4/C6 scheme for fault-tolerant quantum computation.
Abstract: Fault-tolerant quantum computation with quantum error-correcting codes has been considerably developed over the past decade. However, there are still difficult issues, particularly on the resource requirement. For further improvement of fault-tolerant quantum computation, here we propose a soft-decision decoder for quantum error correction and detection by teleportation. This decoder can achieve almost optimal performance for the depolarizing channel. Applying this decoder to Knill's C4/C6 scheme for fault-tolerant quantum computation, which is one of the best schemes so far and relies heavily on error correction and detection by teleportation, we dramatically improve its performance. This leads to substantial reduction of resources.

Journal ArticleDOI
TL;DR: In this article, the authors explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions.
Abstract: We explore the feasibility of implementing a small surface code with 9 data qubits and 8 ancilla qubits, commonly referred to as surface-17, using a linear chain of 171Yb+ ions. Two-qubit gates can be performed between any two ions in the chain with gate time increasing linearly with ion distance. Measurement of the ion state by fluorescence requires that the ancilla qubits be physically separated from the data qubits to avoid errors on the data due to scattered photons. We minimize the time required to measure one round of stabilizers by optimizing the mapping of the two-dimensional surface code to the linear chain of ions. We develop a physically motivated Pauli error model that allows for fast simulation and captures the key sources of noise in an ion trap quantum computer including gate imperfections and ion heating. Our simulations showed a consistent requirement of a two-qubit gate fidelity of > 99.9% for logical memory to have a better fidelity than physical two-qubit operations. Finally, we perform an analysis on the error subsets from the importance sampling method used to approximate the logical error rates in this paper to gain insight into which error sources are particularly detrimental to error correction.

Journal ArticleDOI
31 Jul 2018
TL;DR: In this article, the authors introduce several decoding algorithms complemented by deep neural decoders and apply them to analyze several fault-tolerant error correction protocols such as the surface code as well as Steane and Knill error correction.
Abstract: Finding efficient decoders for quantum error correcting codes adapted to realistic experimental noise in fault-tolerant devices represents a significant challenge. In this paper we introduce several decoding algorithms complemented by deep neural decoders and apply them to analyze several fault-tolerant error correction protocols such as the surface code as well as Steane and Knill error correction. Our methods require no knowledge of the underlying noise model afflicting the quantum device, making them appealing for real-world experiments. Our analysis is based on a full circuit-level noise model. It considers both distance-three and distance-five codes, and is performed near the codes' pseudo-threshold regime. Training deep neural decoders in low noise rate regimes appears to be a challenging machine learning endeavour. We provide a detailed description of our neural network architectures and training methodology. We then discuss both the advantages and limitations of deep neural decoders. Lastly, we provide a rigorous analysis of the decoding runtime of trained deep neural decoders and compare our methods with anticipated gate times in future quantum devices. Given the broad applications of our decoding schemes, we believe that the methods presented in this paper could have practical applications for near term fault-tolerant experiments.

Journal ArticleDOI
TL;DR: In this paper, the error control problem in settings where the information is stored/transmitted in the form of multisets of symbols from a given finite alphabet was studied, and several constructions of error-correcting codes for this channel were described, and bounds on the size of optimal codes correcting any given number of errors were derived.
Abstract: Motivated by communication channels in which the transmitted sequences are subjected to random permutations, as well as by certain DNA storage systems, we study the error control problem in settings where the information is stored/transmitted in the form of multisets of symbols from a given finite alphabet. A general channel model is assumed in which the transmitted multisets are potentially impaired by insertions, deletions, substitutions, and erasures of symbols. Several constructions of error-correcting codes for this channel are described, and bounds on the size of optimal codes correcting any given number of errors are derived. The construction based on the notion of Sidon sets in finite Abelian groups is shown to be optimal, in the sense of the asymptotic scaling of code redundancy, for any error radius and alphabet size. It is also shown to be optimal in the stronger sense of maximal code cardinality in various cases.
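
As an illustration of the key ingredient named above, a Sidon set in a finite abelian group is simply a set whose pairwise differences are all distinct. A greedy construction in Z_n (illustrative only; the paper's optimal constructions use specific Sidon sets rather than greedy search):

def greedy_sidon(n):
    """Greedily grow a Sidon (B2) set in the cyclic group Z_n:
    all differences of distinct elements must be distinct."""
    S, diffs = [], set()
    for x in range(n):
        new = {(x - s) % n for s in S} | {(s - x) % n for s in S}
        if 0 not in new and not (new & diffs) and len(new) == 2 * len(S):
            S.append(x)
            diffs |= new
    return S

print(greedy_sidon(31))   # a Sidon set in Z_31, starting 0, 1, 3, 7, 12, ...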

Journal ArticleDOI
TL;DR: In this paper, a large-scale simulation of quantum error correction protocols based on the surface code in the presence of coherent noise was performed and it was shown that coherent effects do not significantly change the error correcting threshold of surface codes.
Abstract: Surface codes are building blocks of quantum computing platforms based on 2D arrays of qubits responsible for detecting and correcting errors. The error suppression achieved by the surface code is usually estimated by simulating toy noise models describing random Pauli errors. However, Pauli noise models fail to capture coherent processes such as systematic unitary errors caused by imperfect control pulses. Here we report the first large-scale simulation of quantum error correction protocols based on the surface code in the presence of coherent noise. We observe that the standard Pauli approximation provides an accurate estimate of the error threshold but underestimates the logical error rate in the sub-threshold regime. We find that for large code size the logical-level noise is well approximated by random Pauli errors even though the physical-level noise is coherent. Our work demonstrates that coherent effects do not significantly change the error correcting threshold of surface codes. This gives more confidence in the viability of the fault-tolerance architecture pursued by several experimental groups. Coherent effects are shown not to play a significant role in error correction with quantum surface codes. To build a quantum computer, the quantum bit (qubit) has to be protected from external noise and steps have to be taken to detect and correct for errors. Surface codes are a type of quantum code that can correct for such errors. However, the models used to study such codes often fail to capture quantum coherent processes, which could play an important role. By performing large-scale simulations, Robert König from the Technical University of Munich and an international team of collaborators show that coherent effects do not significantly impact the error correction in surface codes, giving confidence in the viability of this approach for developing fault-tolerant quantum computing architectures.

Journal ArticleDOI
TL;DR: This work provides a first step to discriminate between discretization error and modeling error by providing a robust quantification of discretization error during simulations.
Abstract: Objective: To present the first a posteriori error-driven adaptive finite element approach for real-time simulation, and to demonstrate the method on a needle insertion problem. Methods: We use corotational elasticity and a frictional needle/tissue interaction model. The problem is solved using finite elements within SOFA. For simulating soft tissue deformation, the refinement strategy relies upon a hexahedron-based finite element method, combined with a posteriori error estimation driven local $h$-refinement. Results: We control the local and global error level in the mechanical fields (e.g., displacement or stresses) during the simulation. We show the convergence of the algorithm on academic examples, and demonstrate its practical usability on a percutaneous procedure involving needle insertion in a liver. For the latter case, we compare the force–displacement curves obtained from the proposed adaptive algorithm with those obtained from a uniform refinement approach. Conclusions: Error control guarantees that a tolerable error level is not exceeded during the simulations. Local mesh refinement accelerates simulations. Significance: Our work provides a first step to discriminate between discretization error and modeling error by providing a robust quantification of discretization error during simulations.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: For the 5G polar code of length 1024, the error-correction performance of the proposed decoder is more than 0.25 dB better than that of the BP decoder with the same number of random permutations at the frame error rate of 0.0001.
Abstract: Polar codes are a channel coding scheme for the next generation of wireless communications standard (5G). The belief propagation (BP) decoder allows for parallel decoding of polar codes, making it suitable for high throughput applications. However, the error-correction performance of polar codes under BP decoding is far from the requirements of 5G. It has been shown that the error-correction performance of BP can be improved if the decoding is performed on multiple permuted factor graphs of polar codes. However, a different BP decoding scheduling is required for each factor graph permutation, which results in the design of a different decoder for each permutation. Moreover, the selection of the different factor graph permutations is random, which prevents the decoder from achieving a desirable error correction performance with a small number of permutations. In this paper, we first show that the permutations on the factor graph can be mapped into suitable permutations on the codeword positions. As a result, we can make use of a single decoder for all the permutations. In addition, we introduce a method to construct a set of predetermined permutations which can provide the correct codeword if the decoding fails on the original permutation. We show that for the 5G polar code of length 1024, the error-correction performance of the proposed decoder is more than 0.25 dB better than that of the BP decoder with the same number of random permutations at the frame error rate of 0.0001.
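
The mapping from factor-graph permutations to codeword-position permutations mentioned above can be made concrete: for a polar code of length N = 2^n, permuting the n decoding stages corresponds to permuting the n index bits of each codeword position. A small sketch (numpy assumed; the bit-order convention is an assumption and depends on the decoder implementation):

import numpy as np

def stage_perm_to_position_perm(perm, n):
    """Map a permutation of the n factor-graph stages of a length-2^n polar code to the
    equivalent permutation of codeword positions (permute the n index bits)."""
    N = 1 << n
    pos = np.empty(N, dtype=int)
    for i in range(N):
        pos[i] = sum(((i >> perm[b]) & 1) << b for b in range(n))
    return pos

perm = [2, 0, 1]                              # an example stage permutation for N = 8
print(stage_perm_to_position_perm(perm, 3))   # [0 2 4 6 1 3 5 7]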

Journal ArticleDOI
TL;DR: iRazor is a lightweight error detection and correction approach to suppress the cycle time margin that is traditionally added to very large scale integration systems to tolerate process, voltage, and temperature variations, and is compared to other popular techniques that mitigate the impact of variations.
Abstract: This paper presents iRazor, a lightweight error detection and correction approach, to suppress the cycle time margin that is traditionally added to very large scale integration systems to tolerate process, voltage, and temperature variations. iRazor is based on a novel current-based detector, which is embedded in flip-flops on potentially critical paths. The proposed iRazor flip-flop requires only three additional transistors, yielding only 4.3% area penalty over a standard D flip-flop. The proposed scheme is implemented in an ARM Cortex-R4 microprocessor in 40 nm through an automated iRazor flip-flop insertion flow. To gain an insight into the effectiveness of the proposed scheme, iRazor is compared to other popular techniques that mitigate the impact of variations, through the analysis of the worst case margin in 40 silicon dies. To the best of the authors’ knowledge, this is the first paper that compares the measured cycle time margin and the power efficiency improvements offered by frequency binning and various canary approaches. Results show that iRazor achieves 26%–34% performance gain and 33%–41% energy reduction compared to a baseline design across the 0.6- to 1-V voltage range, at the cost of 13.6% area overhead.

Journal ArticleDOI
TL;DR: This work presents an experimental demonstration of high-speed error correction with multi-edge type low-density parity check (MET-LDPC) codes based on a graphics processing unit (GPU), and shows that the GPU-based decoding algorithm greatly improves the error correction speed.
Abstract: Error correction is a significant step in the postprocessing of a continuous-variable quantum key distribution system, which is used to make two distant legitimate parties share identical corrected keys. We present an experimental demonstration of high-speed error correction with multi-edge type low-density parity check (MET-LDPC) codes based on a graphics processing unit (GPU). The GPU computes the messages of the MET-LDPC codes simultaneously and decodes multiple codewords in parallel. We optimize the memory structure of the parity check matrix and the belief propagation decoding algorithm to reduce computational complexity. Our results show that the GPU-based decoding algorithm greatly improves the error correction speed. For three typical code rates, i.e., 0.1, 0.05, and 0.02, when the block length is 10⁶ and the iteration numbers are 100, 150, and 200, respectively, the average error correction speed reaches 30.39 Mbit/s (over three times faster than previous demonstrations), 21.23 Mbit/s, and 16.41 Mbit/s with 64 codewords decoded in parallel, which supports a high-speed real-time continuous-variable quantum key distribution system.

Journal ArticleDOI
04 Jan 2018
TL;DR: In this paper, the authors consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, either caused by measurement latencies or slow decoding algorithms.
Abstract: We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, either caused by measurement latencies or slow decoding algorithms. Our scheme offers a few improvements over previously existing solutions, for instance it does not require active error correction and results in a reduced error-correction overhead when error diagnostics is much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest as it can increase error thresholds and could find applications in other areas of quantum computation.
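
One standard way to avoid active correction when diagnostics are slow is to track the pending correction in software (a Pauli frame) and push it through subsequent Clifford gates, reinterpreting later measurement results instead of touching the qubits. A minimal sketch of that bookkeeping (illustrative only; the paper's Clifford-frame protocol generalizes this to corrections chosen from the full Clifford group):

# Pending corrections are stored per qubit as [x, z], meaning X^x Z^z.
frame = {}

def record_correction(q, pauli):
    x, z = frame.setdefault(q, [0, 0])
    frame[q] = [x ^ (pauli in "XY"), z ^ (pauli in "YZ")]

def apply_h(q):                                 # H swaps X and Z
    x, z = frame.setdefault(q, [0, 0])
    frame[q] = [z, x]

def apply_cnot(c, t):                           # CNOT copies X control->target and Z target->control
    xc, zc = frame.setdefault(c, [0, 0])
    xt, zt = frame.setdefault(t, [0, 0])
    frame[t][0] = xt ^ xc
    frame[c][1] = zc ^ zt

def z_measurement_result(q, raw):
    return raw ^ frame.get(q, [0, 0])[0]        # a pending X flips the Z-basis outcome

record_correction(0, "X")    # the decoder requests an X correction on qubit 0 (never applied physically)
apply_cnot(0, 1)             # the computation keeps running while diagnostics are slow
print(z_measurement_result(1, 0))   # prints 1: the tracked X has propagated to qubit 1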

Journal ArticleDOI
01 Jan 2018
TL;DR: In this article, the authors examined the accuracy of the Pauli approximation for coherent errors on data qubits under the repetition code and found that coherent errors result in logical errors that are partially coherent and therefore non-Pauli.
Abstract: Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for coherent errors on data qubits under the repetition code. We analytically evaluate the logical error as a function of concatenation level and code distance. We find that coherent errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than $\epsilon^{-(d-1)}$ error correction cycles, where $\epsilon \ll 1$ is the rotation angle error per cycle for a single physical qubit and $d$ is the code distance. These results lend support to the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation.
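
The reason the Pauli approximation can be optimistic is that coherent rotations add in amplitude rather than in probability. A one-qubit sketch of the gap (numpy assumed; the rotation angle and cycle count are illustrative, and this ignores the repetition-code encoding analyzed in the paper):

import numpy as np

theta, cycles = 0.01, 200                       # small Z over-rotation per cycle (illustrative)

# Coherent: the rotation angles add, so the error probability grows quadratically in time
coherent_err = np.sin(cycles * theta / 2.0) ** 2

# Pauli (twirled) approximation: an independent Z flip with p = sin^2(theta/2) per cycle
p = np.sin(theta / 2.0) ** 2
pauli_err = 0.5 * (1.0 - (1.0 - 2.0 * p) ** cycles)   # probability of an odd number of flips

print(f"coherent: {coherent_err:.3f}, Pauli approximation: {pauli_err:.5f}")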

Journal ArticleDOI
TL;DR: This mapping connects the error correction threshold of the quantum code to a phase transition in the statistical mechanical model, and allows any existing method for finding phase transitions, such as Monte Carlo simulations, to be applied to approximate the threshold of any such code, without having to perform optimal decoding.
Abstract: We give a broad generalisation of the mapping, originally due to Dennis, Kitaev, Landahl and Preskill, from quantum error correcting codes to statistical mechanical models. We show how the mapping can be extended to arbitrary stabiliser or subsystem codes subject to correlated Pauli noise models, including models of fault tolerance. This mapping connects the error correction threshold of the quantum code to a phase transition in the statistical mechanical model. Thus, any existing method for finding phase transitions, such as Monte Carlo simulations, can be applied to approximate the threshold of any such code, without having to perform optimal decoding. By way of example, we numerically study the threshold of the surface code under mildly correlated bit-flip noise, showing that noise with bunching correlations causes the threshold to drop to $p_{\textrm{corr}}=10.04(6)\%$, from its known iid value of $p_{\text{iid}}=10.917(3)\%$. Complementing this, we show that the mapping also allows us to utilise any algorithm which can calculate/approximate partition functions of classical statistical mechanical models to perform optimal/approximately optimal decoding. Specifically, for 2D codes subject to locally correlated noise, we give a linear-time tensor network-based algorithm for approximate optimal decoding which extends the MPS decoder of Bravyi, Suchara and Vargo.

Journal ArticleDOI
TL;DR: This work investigates a simple accuracy-configurable adder design that contains no redundancy or error detection/correction circuitry and uses very simple carry prediction, and it proposes a delay-adaptive self-configuration technique to further improve the accuracy-delay-power tradeoff.
Abstract: Approximate computing is a promising approach for low-power IC design and has recently received considerable research attention. To accommodate dynamic levels of approximation, a few accuracy-configurable adder (ACA) designs have been developed in the past. However, these designs tend to incur large area overheads as they rely on either redundant computing or complicated carry prediction. Some of these designs include error detection and correction circuitry, which further increases the area. In this paper, we investigate a simple ACA design that contains no redundancy or error detection/correction circuitry and uses very simple carry prediction. The simulation results show that our design dominates the latest previous work on the accuracy-delay-power tradeoff while using 39% lower area. In the best case, the iso-delay power of our design is only 16% of that of the accurate adder, regardless of the degradation in accuracy. One variant of this design provides finer-grained and larger tunability than that of the previous works. Moreover, we propose a delay-adaptive self-configuration technique to further improve the accuracy-delay-power tradeoff. The advantages of our method are confirmed by the applications in multiplication and discrete cosine transform computing.
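
A minimal sketch of the kind of design discussed above: a segmented adder whose per-segment carry-in is speculated from a few upper bits of the previous segment, with no redundancy and no error detection/correction circuitry (the segment width and lookahead bits are illustrative assumptions, not the exact design from the paper):

def approx_add(a, b, width=16, seg=4, look=2):
    """Segmented approximate adder: each segment's carry-in is speculated from only the
    top `look` bits of the previous segment, so the long carry chain is never formed."""
    mask = (1 << seg) - 1
    result, carry = 0, 0
    for base in range(0, width, seg):
        sa, sb = (a >> base) & mask, (b >> base) & mask
        result |= ((sa + sb + carry) & mask) << base
        carry = 1 if ((sa >> (seg - look)) + (sb >> (seg - look))) >= (1 << look) else 0
    return result

for a, b in ((0x3A7C, 0x1234), (0x00FF, 0x0001)):
    print(hex(approx_add(a, b)), hex((a + b) & 0xFFFF))   # exact in the first case, approximate in the second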

Journal ArticleDOI
TL;DR: A novel visual secret sharing scheme based on QR code (VSSQR) with (k, n) threshold is investigated, which can visually reveal the secret image through stacking and XOR decryptions and allows every shadow image to be scanned by a QR code reader.
Abstract: In this paper, a novel visual secret sharing (VSS) scheme based on QR code (VSSQR) with (k, n) threshold is investigated. Our VSSQR exploits the error correction mechanism in the QR code structure to generate the bits corresponding to shares (shadow images) by VSS from a secret bit during QR encoding. Each output share is a valid QR code that can be scanned and decoded by a QR code reader, which may reduce the likelihood of attracting the attention of potential attackers. Due to different application scenarios, two different ways of recovering the secret image are given. The proposed VSS scheme based on QR code can visually reveal the secret image through stacking or XOR decryption, and every shadow image, i.e., a QR code, can still be scanned by a QR code reader. The secret image can be revealed by the human visual system, without any computation, based on stacking when no lightweight computation device is available. On the other hand, if a lightweight computation device is available, the secret image can be revealed with better visual quality based on the XOR operation and can be losslessly revealed when sufficient shares are collected. In addition, the scheme can assist alignment for VSS recovery. The experimental results show the effectiveness of our scheme.

Journal ArticleDOI
TL;DR: A geometric locality-preserving map is introduced whose stabilizers correspond to products of Majorana operators on closed paths of the fermionic hopping graph; it can correct all single-qubit errors on a 2-dimensional square lattice, and it is demonstrated that the MLSC is compatible with state-of-the-art algorithms for simulating quantum chemistry and can offer the same error-correction properties without additional asymptotic overhead.
Abstract: Fermion-to-qubit mappings that preserve geometric locality are especially useful for simulating lattice fermion models (e.g., the Hubbard model) on a quantum computer. They avoid the overhead associated with geometric non-local parity terms in mappings such as the Jordan-Wigner transformation and the Bravyi-Kitaev transformation. As a result, they often provide quantum circuits with lower depth and gate complexity. In such encodings, fermionic states are encoded in the common +1 eigenspace of a set of stabilizers, akin to stabilizer quantum error-correcting codes. Here, we discuss several known geometric locality-preserving mappings and their abilities to correct/detect single-qubit errors. We introduce a geometric locality-preserving map, whose stabilizers correspond to products of Majorana operators on closed paths of the fermionic hopping graph. We show that our code, which we refer to as the Majorana loop stabilizer code (MLSC) can correct all single-qubit errors on a 2-dimensional square lattice, while previous geometric locality-preserving codes can only detect single-qubit errors on the same lattice. Compared to existing codes, the MLSC maps the relevant fermionic operators to lower-weight qubit operators despite having higher code distance. Going beyond lattice models, we demonstrate that the MLSC is compatible with state-of-the-art algorithms for simulating quantum chemistry, and can offer those simulations the same error-correction properties without additional asymptotic overhead. These properties make the MLSC a promising candidate for error-mitigated quantum simulations of fermions on near-term devices.

Journal ArticleDOI
TL;DR: Non-malleable codes, as discussed by the authors, relax the notion of error correction and error detection; non-malleability can be achieved for very rich classes of modifications, such as functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword.
Abstract: We introduce the notion of “non-malleable codes” which relaxes the notion of error correction and error detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. In contrast to error correction and error detection, non-malleability can be achieved for very rich classes of modifications. We construct an efficient code that is non-malleable with respect to modifications that affect each bit of the codeword arbitrarily (i.e., leave it untouched, flip it, or set it to either 0 or 1), but independently of the value of the other bits of the codeword. Using the probabilistic method, we also show a very strong and general statement: there exists a non-malleable code for every “small enough” family F of functions via which codewords can be modified. Although this probabilistic method argument does not directly yield efficient constructions, it gives us efficient non-malleable codes in the random-oracle model for very general classes of tampering functions—e.g., functions where every bit in the tampered codeword can depend arbitrarily on any 99% of the bits in the original codeword. As an application of non-malleable codes, we show that they provide an elegant algorithmic solution to the task of protecting functionalities implemented in hardware (e.g., signature cards) against “tampering attacks.” In such attacks, the secret state of a physical system is tampered, in the hopes that future interaction with the modified system will reveal some secret information. This problem was previously studied in the work of Gennaro et al. in 2004 under the name “algorithmic tamper proof security” (ATP). We show that non-malleable codes can be used to achieve important improvements over the prior work. In particular, we show that any functionality can be made secure against a large class of tampering attacks, simply by encoding the secret state with a non-malleable code while it is stored in memory.

Journal ArticleDOI
TL;DR: An automated error correction code is experimentally realized, the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer is demonstrated, and the investigated code is generalized to the maximally entangled n-qudit case.
Abstract: Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states in the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which could both detect and automatically correct any arbitrary phase-change error, or any phase-flip error, or any bit-flip error, or a combined error of all these types.
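
For context on the correction step, a classical statevector sketch of the standard three-qubit bit-flip code is shown below (numpy assumed; this is a textbook illustration of syndrome-based automatic correction, not the authors' five-qubit GHZ-discrimination circuit):

import numpy as np

def ket(bits):
    v = np.zeros(8); v[int(bits, 2)] = 1.0
    return v

def apply_x(state, qubit):
    """Bit-flip (X) on one qubit; qubit 0 is the leftmost bit of the basis label."""
    out = np.zeros_like(state)
    for i in range(8):
        out[i ^ (1 << (2 - qubit))] = state[i]
    return out

a, b = 0.6, 0.8
logical = a * ket("000") + b * ket("111")        # encoded logical qubit a|000> + b|111>
noisy = apply_x(logical, 1)                      # a bit flip strikes qubit 1

def syndrome(state):
    """Parities Z0Z1 and Z1Z2; every basis state of a single-flip error gives the same pair."""
    s = (0, 0)
    for i in np.flatnonzero(np.abs(state) > 1e-12):
        b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
        s = (b0 ^ b1, b1 ^ b2)
    return s

lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}   # syndrome -> qubit to flip back
flip = lookup[syndrome(noisy)]
corrected = noisy if flip is None else apply_x(noisy, flip)
assert np.allclose(corrected, logical)           # the superposition is restored automatically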