
Showing papers on "Error detection and correction published in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors implemented the smallest viable instance, capable of repeatedly detecting any single error using seven superconducting qubits (four data qubits and three ancilla qubits) with an average logical fidelity of 96.1%.
Abstract: The realization of quantum error correction is an essential ingredient for reaching the full potential of fault-tolerant universal quantum computation. Using a range of different schemes, logical qubits that are resistant to errors can be redundantly encoded in a set of error-prone physical qubits. One such scalable approach is based on the surface code. Here we experimentally implement its smallest viable instance, capable of repeatedly detecting any single error using seven superconducting qubits—four data qubits and three ancilla qubits. Using high-fidelity ancilla-based stabilizer measurements, we initialize the cardinal states of the encoded logical qubit with an average logical fidelity of 96.1%. We then repeatedly check for errors using the stabilizer readout and observe that the logical quantum state is preserved with a lifetime and a coherence time longer than those of any of the constituent qubits when no errors are detected. Our demonstration of error detection with its resulting enhancement of the conditioned logical qubit coherence times is an important step, indicating a promising route towards the realization of quantum error correction in the surface code. In a surface code consisting of four data and three ancilla qubits, repeated error detection is demonstrated. The lifetime and coherence time of the logical qubit are enhanced over those of any of the constituent qubits when no errors are detected.
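The detection-only character of this smallest surface code can be illustrated with a purely classical toy model. The sketch below (an illustration, not the experiment's control software) assumes the distance-2 surface code stabilizers X1X2X3X4, Z1Z3 and Z2Z4, and checks that any single bit-flip on the four data qubits flips at least one Z-type parity, while some weight-2 errors pass undetected, which is why distance 2 detects but cannot correct:

```python
# Toy classical model of error *detection* in the distance-2 surface code.
# Assumed stabilizers: X1X2X3X4, Z1Z3, Z2Z4 (data qubits indexed 0..3).

Z_CHECKS = [(0, 2), (1, 3)]  # qubit pairs measured by the two Z stabilizers

def syndrome(bitflips):
    """Parities of the Z checks given the set of bit-flipped data qubits."""
    return tuple(sum(q in bitflips for q in pair) % 2 for pair in Z_CHECKS)

assert syndrome(set()) == (0, 0)   # no error: trivial syndrome
for q in range(4):
    assert syndrome({q}) != (0, 0)  # every single bit-flip is detected
assert syndrome({0, 2}) == (0, 0)  # a weight-2 error can hide: distance 2
```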

167 citations


Journal ArticleDOI
TL;DR: A teleportation-based error correction scheme that allows recoveries to be tracked entirely in software and a scheme for fault-tolerant, universal quantum computing based on concatenation of number-phase codes and Bacon-Shor subsystem codes are presented.
Abstract: Bosonic rotation codes, introduced here, are a broad class of bosonic error-correcting codes based on phase-space rotation symmetry. We present a universal quantum computing scheme applicable to a subset of this class-number-phase codes-which includes the well-known cat and binomial codes, among many others. The entangling gate in our scheme is code agnostic and can be used to interface different rotation-symmetric encodings. In addition to a universal set of operations, we propose a teleportation-based error-correction scheme that allows recoveries to be tracked entirely in software. Focusing on cat and binomial codes as examples, we compute average gate fidelities for error correction under simultaneous loss and dephasing noise and show numerically that the error-correction scheme is close to optimal for error-free ancillae and ideal measurements. Finally, we present a scheme for fault-tolerant, universal quantum computing based on the concatenation of number-phase codes and Bacon-Shor subsystem codes.

130 citations


Proceedings ArticleDOI
01 Jul 2020
TL;DR: A novel neural architecture is proposed to address the issue of Chinese spelling error correction, which consists of a network for error detection and a network for error correction based on BERT, with the former connected to the latter by what is called the soft-masking technique.
Abstract: Spelling error correction is an important yet challenging task because a satisfactory solution essentially requires human-level language understanding ability. Without loss of generality, we consider Chinese spelling error correction (CSC) in this paper. A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model. The accuracy of the method can be sub-optimal, however, because BERT does not have sufficient capability to detect whether there is an error at each position, apparently due to the way it is pre-trained with masked language modeling. In this work, we propose a novel neural architecture to address this issue, which consists of a network for error detection and a network for error correction based on BERT, with the former connected to the latter by what we call the soft-masking technique. Our method using 'Soft-Masked BERT' is general, and it may be employed in other language detection-correction problems. Experimental results on two datasets, including one large dataset which we created and plan to release, demonstrate that the performance of our proposed method is significantly better than the baselines, including the one solely based on BERT.
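The soft-masking connection between the two networks can be sketched in a few lines. The names and shapes below are illustrative, not the paper's code; the point is that the correction network receives, for each token, a convex combination of the token embedding and the [MASK] embedding, weighted by the detection network's error probability:

```python
# Minimal sketch of soft-masking (illustrative names and shapes): a detection
# network assigns each token an error probability p_i; the input passed to
# the correction network interpolates between the token embedding (p_i = 0)
# and the [MASK] embedding (p_i = 1).

def soft_mask(embeddings, mask_embedding, error_probs):
    """embeddings: list of vectors; error_probs: one probability per token."""
    return [[p * m + (1.0 - p) * e for e, m in zip(emb, mask_embedding)]
            for emb, p in zip(embeddings, error_probs)]
```

A token the detector is sure is wrong is thus fully masked, while a token it trusts passes through unchanged, letting the corrector focus its capacity on likely errors.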

110 citations


Journal ArticleDOI
TL;DR: In this paper, a bias-preserving controlled-NOT (CX) gate is realized with biased-noise stabilized cat qubits in driven nonlinear oscillators, overcoming the obstacle that ordinary CX gates do not commute with the dominant error and therefore unbias the noise channel.
Abstract: The code capacity threshold for error correction using biased-noise qubits is known to be higher than with qubits without such structured noise. However, realistic circuit-level noise severely restricts these improvements. This is because gate operations, such as a controlled-NOT (CX) gate, which do not commute with the dominant error, unbias the noise channel. Here, we overcome the challenge of implementing a bias-preserving CX gate using biased-noise stabilized cat qubits in driven nonlinear oscillators. This continuous-variable gate relies on nontrivial phase space topology of the cat states. Furthermore, by following a scheme for concatenated error correction, we show that the availability of bias-preserving CX gates with moderately sized cats improves a rigorous lower bound on the fault-tolerant threshold by a factor of two and decreases the overhead in logical Clifford operations by a factor of five. Our results open a path toward high-threshold, low-overhead, fault-tolerant codes tailored to biased-noise cat qubits.

98 citations


Journal ArticleDOI
07 Dec 2020
TL;DR: In this article, it was shown that the error rates for an arbitrary set of s Pauli errors can be estimated to a relative precision ϵ using O(ϵ⁻⁴ log s · log(s/ϵ)) measurements.
Abstract: Pauli channels are ubiquitous in quantum information, both as a dominant noise source in many computing architectures and as a practical model for analyzing error correction and fault tolerance. Here, we prove several results on efficiently learning Pauli channels and more generally the Pauli projection of a quantum channel. We first derive a procedure for learning a Pauli channel on n qubits with high probability to a relative precision ϵ using O(ϵ⁻² n 2ⁿ) measurements, which is efficient in the Hilbert space dimension. The estimate is robust to state preparation and measurement errors, which, together with the relative precision, makes it especially appropriate for applications involving characterization of high-accuracy quantum gates. Next, we show that the error rates for an arbitrary set of s Pauli errors can be estimated to a relative precision ϵ using O(ϵ⁻⁴ log s · log(s/ϵ)) measurements. Finally, we show that when the Pauli channel is given by a Markov field with at most k-local correlations, we can learn an entire n-qubit Pauli channel to relative precision ϵ with only O_k(ϵ⁻² n² log n) measurements, which is efficient in the number of qubits. These results enable a host of applications beyond just characterizing noise in a large-scale quantum system: they pave the way to tailoring quantum codes, optimizing decoders, and customizing fault tolerance procedures to suit a particular device.
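The structure such protocols exploit can be seen in a single-qubit toy simulation: a Pauli channel's error rates are the (symplectic) Walsh-Hadamard transform of its Pauli eigenvalues, so estimating the eigenvalues from samples and inverting recovers the rates. The rates and sample count below are arbitrary choices for illustration:

```python
import random

# Toy single-qubit illustration: estimate the Pauli eigenvalues of a Pauli
# channel from samples, then invert the (single-qubit) Walsh-Hadamard
# transform to recover the error rates. Rates here are made up.

RATES = {"I": 0.85, "X": 0.08, "Y": 0.02, "Z": 0.05}
PAULIS = ["I", "X", "Y", "Z"]

def anticommute(p, q):
    """1 if the single-qubit Paulis p and q anticommute, else 0."""
    return int(p != "I" and q != "I" and p != q)

random.seed(0)
samples = random.choices(PAULIS, weights=[RATES[p] for p in PAULIS], k=200_000)

# Empirical eigenvalues: lambda_P = E[(-1)^(P anticommutes with the error)]
lam = {p: sum((-1) ** anticommute(p, e) for e in samples) / len(samples)
       for p in PAULIS}

# Invert the transform: p_Q = (1/4) * sum_P (-1)^<P,Q> lambda_P
est = {q: sum((-1) ** anticommute(p, q) * lam[p] for p in PAULIS) / 4
       for q in PAULIS}
```

With enough samples the estimated rates converge to the true ones; the learning results above quantify exactly how many measurements this takes on n qubits.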

98 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed model outperforms all the other considered models without double decomposition, and that applying variational mode decomposition to the error series improves the error correction strategy.

93 citations


Journal ArticleDOI
Jarrod R. McClean, Zhang Jiang, Nicholas C. Rubin, Ryan Babbush, Hartmut Neven
TL;DR: In this article, the authors consider the idea of post-processing error decoders using existing quantum codes, which mitigate errors on logical qubits using postprocessing without explicit syndrome measurements or additional qubits beyond the encoding overhead.
Abstract: With rapid developments in quantum hardware comes a push towards the first practical applications. While fully fault-tolerant quantum computers are not yet realized, there may exist intermediate forms of error correction that enable practical applications. In this work, we consider the idea of post-processing error decoders using existing quantum codes, which mitigate errors on logical qubits using post-processing without explicit syndrome measurements or additional qubits beyond the encoding overhead. This greatly simplifies the experimental exploration of quantum codes on real, near-term devices, removing the need for locality of syndromes or fast feed-forward. We develop the theory of the method and demonstrate it on an example with the perfect [[5, 1, 3]] code, which exhibits a pseudo-threshold of p ≈ 0.50 under a single-qubit depolarizing channel applied to all qubits. We also provide a demonstration of improved performance on an unencoded hydrogen molecule. Fault-tolerant quantum computation is still far off, but there could be ways in which quantum error correction could improve currently available devices. Here, the authors show how to exploit existing quantum codes through only post-processing and random measurements in order to mitigate errors in NISQ devices.

89 citations


Posted Content
TL;DR: A dissipative map designed for physically realistic finite GKP codes is introduced which performs quantum error correction of a logical qubit implemented in the motion of a single trapped ion, achieving an increase in logical lifetime of a factor of three.
Abstract: Stabilization of encoded logical qubits using quantum error correction is key to the realization of reliable quantum computers. While qubit codes require many physical systems to be controlled, oscillator codes offer the possibility to perform error correction on a single physical entity. One powerful encoding for oscillators is the grid state or GKP encoding, which allows small displacement errors to be corrected. Here we introduce and implement a dissipative map designed for physically realistic finite GKP codes which performs quantum error correction of a logical qubit implemented in the motion of a single trapped ion. The correction cycle involves two rounds, which correct small displacements in position and momentum respectively. Each consists of first mapping the finite GKP code stabilizer information onto an internal electronic state ancilla qubit, and then applying coherent feedback and ancilla repumping. We demonstrate the extension of logical coherence using both square and hexagonal GKP codes, achieving an increase in logical lifetime of a factor of three. The simple dissipative map used for the correction can be viewed as a type of reservoir engineering, which pumps into the highly non-classical GKP qubit manifold. These techniques open new possibilities for quantum state control and sensing alongside their application to scaling quantum computing.

58 citations


Journal ArticleDOI
Xiangyu Kong, Chuang Li, Chengshan Wang, Zhang Yusen, Jian Zhang
TL;DR: Based on load data from different regions, this paper selects several performance indicators, such as MAPE, MAE, RMSE, variance and direction accuracy (DA), to show that the proposed method is both accurate and stable.

56 citations


Journal ArticleDOI
Huang Lingchen, Huazi Zhang, Rong Li, Yiqun Ge, Jun Wang
TL;DR: This paper employs a constructor-evaluator framework, in which the code constructor can be realized by various AI algorithms and the code evaluator provides code performance metric measurements, and shows that performance comparable to existing codes can be achieved.
Abstract: In this paper, we investigate an artificial-intelligence (AI) driven approach to design error correction codes (ECC). Classic error-correction code design based upon coding-theoretic principles typically strives to optimize some performance-related code property such as minimum Hamming distance, decoding threshold, or subchannel reliability ordering. In contrast, AI-driven approaches, such as reinforcement learning (RL) and genetic algorithms, rely primarily on optimization methods to learn the parameters of an optimal code within a certain code family. We employ a constructor-evaluator framework, in which the code constructor can be realized by various AI algorithms and the code evaluator provides code performance metric measurements. The code constructor keeps improving the code construction to maximize code performance that is evaluated by the code evaluator. As examples, we focus on RL and genetic algorithms to construct linear block codes and polar codes. The results show that comparable code performance can be achieved with respect to the existing codes. It is noteworthy that our method can provide superior performances to classic constructions in certain cases (e.g., list decoding for polar codes).
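A hypothetical miniature of the constructor-evaluator loop: a genetic algorithm (the constructor) searches over generator matrices of a small (7, 3) binary linear code, and the evaluator scores each candidate by its minimum Hamming distance. All sizes and hyperparameters below are invented for illustration:

```python
import itertools
import random

# Hypothetical miniature of the constructor-evaluator framework: a genetic
# algorithm searches for a (7, 3) binary linear code; the evaluator is the
# code's minimum Hamming distance.

N, K = 7, 3

def min_distance(gen_rows):
    """Minimum Hamming weight over all nonzero codewords of the row span."""
    best = N + 1
    for coeffs in itertools.product([0, 1], repeat=K):
        if not any(coeffs):
            continue
        word = [0] * N
        for c, row in zip(coeffs, gen_rows):
            if c:
                word = [a ^ b for a, b in zip(word, row)]
        w = sum(word)
        if w == 0:          # linearly dependent rows -> not a valid (7,3) code
            return 0
        best = min(best, w)
    return best

def mutate(g):
    """Flip one random bit of a copied generator matrix."""
    g = [row[:] for row in g]
    g[random.randrange(K)][random.randrange(N)] ^= 1
    return g

random.seed(1)
pop = [[[random.randint(0, 1) for _ in range(N)] for _ in range(K)]
       for _ in range(30)]
for _ in range(200):  # evolve: keep the fittest, mutate them to refill
    pop.sort(key=min_distance, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]

best = max(min_distance(g) for g in pop)
```

For (7, 3) the Griesmer bound caps the minimum distance at 4, attained by the simplex code, which makes the evaluator easy to sanity-check.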

55 citations


Journal ArticleDOI
TL;DR: This study presents a novel hybrid model, which includes decomposition module with real-time decomposition strategy, forecasting module and error correction module, which outperforms the compared conventional models in short-term wind speed forecasting.

Journal ArticleDOI
TL;DR: It is shown that using continuous-variable error correction codes can enhance the robustness of sensing protocols against imperfections and reinstate Heisenberg scaling up to moderate values of $M$.
Abstract: A distributed sensing protocol uses a network of local sensing nodes to estimate a global feature of the network, such as a weighted average of locally detectable parameters. In the noiseless case, continuous-variable multipartite entanglement shared by the nodes can improve the precision of parameter estimation relative to the precision attainable by a network without shared entanglement; for an entangled protocol, the root-mean-square estimation error scales like $1/M$ with the number $M$ of sensing nodes, the so-called Heisenberg scaling, while for protocols without entanglement, the error scales like $1/\sqrt{M}$. However, in the presence of loss and other noise sources, although multipartite entanglement still has some advantages for sensing displacements and phases, the scaling of the precision with $M$ is less favorable. In this paper, we show that using continuous-variable error correction codes can enhance the robustness of sensing protocols against imperfections and reinstate Heisenberg scaling up to moderate values of $M$. Furthermore, while previous distributed sensing protocols could measure only a single quadrature, we construct a protocol in which both quadratures can be sensed simultaneously. Our work demonstrates the value of continuous-variable error correction codes in realistic sensing scenarios.

Journal ArticleDOI
TL;DR: In this paper, the authors present a categorization and review of long-read error correction methods and provide a comprehensive evaluation of the corresponding tools, including the effects of trimming and long-read sequencing depth.
Abstract: Third-generation single molecule sequencing technologies can sequence long reads, which is advancing the frontiers of genomics research. However, their high error rates prohibit accurate and efficient downstream analysis. This difficulty has motivated the development of many long read error correction tools, which tackle this problem through sampling redundancy and/or leveraging accurate short reads of the same biological samples. Existing studies to asses these tools use simulated data sets, and are not sufficiently comprehensive in the range of software covered or diversity of evaluation measures used. In this paper, we present a categorization and review of long read error correction methods, and provide a comprehensive evaluation of the corresponding long read error correction tools. Leveraging recent real sequencing data, we establish benchmark data sets and set up evaluation criteria for a comparative assessment which includes quality of error correction as well as run-time and memory usage. We study how trimming and long read sequencing depth affect error correction in terms of length distribution and genome coverage post-correction, and the impact of error correction performance on an important application of long reads, genome assembly. We provide guidelines for practitioners for choosing among the available error correction tools and identify directions for future research. Despite the high error rate of long reads, the state-of-the-art correction tools can achieve high correction quality. When short reads are available, the best hybrid methods outperform non-hybrid methods in terms of correction quality and computing resource usage. When choosing tools for use, practitioners are suggested to be careful with a few correction tools that discard reads, and check the effect of error correction tools on downstream analysis. Our evaluation code is available as open-source at https://github.com/haowenz/LRECE .

Journal ArticleDOI
TL;DR: In this paper, the authors propose a reinforcement-learning-based likelihood function learning method for multiple-input multiple-output (MIMO) systems with one-bit analog-to-digital converters.
Abstract: The use of one-bit analog-to-digital converters (ADCs) at a receiver is a power-efficient solution for future wireless systems operating with a large signal bandwidth and/or a massive number of receive radio frequency chains. This solution, however, induces high channel estimation error and therefore makes it difficult to perform the optimal data detection that requires perfect knowledge of likelihood functions at the receiver. In this paper, we propose a likelihood function learning method for multiple-input multiple-output (MIMO) systems with one-bit ADCs using a reinforcement learning approach. The key idea is to exploit input-output samples obtained from data detection, to compensate for the mismatch in the likelihood function. The underlying difficulty of this idea is a label uncertainty in the samples caused by a data detection error. To resolve this problem, we define a Markov decision process (MDP) to maximize the accuracy of the likelihood function learned from the samples. We then develop a reinforcement learning algorithm that efficiently finds the optimal policy by approximating the transition function and the optimal state of the MDP. Simulation results demonstrate that the proposed method provides significant performance gains for data detection methods that suffer from the mismatch in the likelihood function.

Journal ArticleDOI
TL;DR: The error correction and error detection performance of the 3GPP NR polar codes in the uplink, broadcast and downlink control channels is comprehensively characterized.
Abstract: Since their inception in 2008, polar codes have been shown to offer near-capacity error correction performance across a wide range of block lengths and coding rates. Owing to this, polar codes have been selected to provide channel coding in the control channels of Third Generation Partnership Project (3GPP) New Radio (NR). The operation of the 3GPP NR polar codes is specified in the 3GPP standard TS 38.212, together with schemes for code block segmentation, Cyclic Redundancy Check (CRC) attachment, CRC scrambling, CRC interleaving, frozen and parity check bit insertion, sub-block interleaving, bit selection, channel interleaving and code block concatenation. The configuration of these components is different for the uplink, broadcast and downlink control channels. However, the lack of visualisations and diagrammatic explanations in TS 38.212 limits the accessibility of the standard to new readers. This motivates the aims of the paper, which provides detailed tutorials on the operation and motivation of the components of the 3GPP NR polar codes, as well as surveys of the 3GPP discussions that led to their specification. Furthermore, we comprehensively characterize the error correction and error detection performance of the 3GPP NR polar codes in the uplink, broadcast and downlink control channels.
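CRC attachment itself is plain polynomial long division over GF(2). The sketch below uses CRC-8 with generator x^8 + x^2 + x + 1 purely for illustration; the actual 3GPP NR polynomials, scrambling and interleaving steps specified in TS 38.212 differ:

```python
# Illustrative CRC attachment (CRC-8, generator x^8 + x^2 + x + 1; the 3GPP
# NR control channels use different polynomials and extra processing steps).

def crc(bits, poly=0b100000111, width=8):
    """Remainder of (message * x^width) divided by poly, bitwise."""
    reg = 0
    for b in bits + [0] * width:
        reg = (reg << 1) | b
        if reg >> width:         # top bit set: subtract (XOR) the generator
            reg ^= poly
    return reg

msg = [1, 0, 1, 1, 0, 0, 1]
checksum = crc(msg)
# Appending the checksum (MSB first) yields a codeword divisible by poly,
# so the receiver's division leaves a zero remainder for error-free words.
codeword = msg + [(checksum >> i) & 1 for i in range(7, -1, -1)]
assert crc(codeword) == 0
```

Any single flipped bit changes the received polynomial by a monomial, which the generator (having a nonzero constant term) never divides, so the remainder becomes nonzero and the error is detected.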

Posted Content
TL;DR: A classical bit-flip correction method is developed that can be applied to any operator, any number of qubits, and any realistic bit-flip probability to mitigate measurement errors on quantum computers.
Abstract: We develop a classical bit-flip correction method to mitigate measurement errors on quantum computers. This method can be applied to any operator, any number of qubits, and any realistic bit-flip probability. We first demonstrate the successful performance of this method by correcting the noisy measurements of the ground-state energy of the longitudinal Ising model. We then generalize our results to arbitrary operators and test our method both numerically and experimentally on IBM quantum hardware. As a result, our correction method reduces the measurement error on the quantum hardware by up to one order of magnitude. We finally discuss how to pre-process the method and extend it to other errors sources beyond measurement errors. For local Hamiltonians, the overhead costs are polynomial in the number of qubits, even if multi-qubit correlations are included.
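The classical core of such readout mitigation can be shown on a single bit (the probabilities below are illustrative): the observed outcome distribution is the true one multiplied by a known confusion matrix, so inverting that matrix undoes the average effect of bit flips:

```python
# Single-bit sketch of classical readout-error mitigation: invert the 2x2
# confusion matrix relating true and observed outcome probabilities.
# (The flip probabilities used in the test are made up for illustration.)

def mitigate(observed0, observed1, p01, p10):
    """Recover true outcome probabilities from observed ones.

    p01: probability a true 0 is read as 1; p10: probability a true 1 is read as 0.
    The forward model is:
        [obs0]   [1-p01   p10 ] [true0]
        [obs1] = [ p01   1-p10] [true1]
    """
    det = (1 - p01) * (1 - p10) - p01 * p10
    true0 = ((1 - p10) * observed0 - p10 * observed1) / det
    true1 = (-p01 * observed0 + (1 - p01) * observed1) / det
    return true0, true1
```

For n bits under independent flips the confusion matrix factorizes as a tensor product of such 2x2 blocks, which is what keeps the overhead polynomial for local observables.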

Posted Content
TL;DR: Results show that fault-tolerant quantum systems are currently capable of logical primitives with error rates lower than their constituent parts, and with the future addition of intermediate measurements, the full power of scalable quantum error-correction can be achieved.
Abstract: Quantum error correction protects fragile quantum information by encoding it into a larger quantum system. These extra degrees of freedom enable the detection and correction of errors, but also increase the operational complexity of the encoded logical qubit. Fault-tolerant circuits contain the spread of errors while operating the logical qubit, and are essential for realizing error suppression in practice. While fault-tolerant design works in principle, it has not previously been demonstrated in an error-corrected physical system with native noise characteristics. In this work, we experimentally demonstrate fault-tolerant preparation, measurement, rotation, and stabilizer measurement of a Bacon-Shor logical qubit using 13 trapped ion qubits. When we compare these fault-tolerant protocols to non-fault tolerant protocols, we see significant reductions in the error rates of the logical primitives in the presence of noise. The result of fault-tolerant design is an average state preparation and measurement error of 0.6% and a Clifford gate error of 0.3% after error correction. Additionally, we prepare magic states with fidelities exceeding the distillation threshold, demonstrating all of the key single-qubit ingredients required for universal fault-tolerant operation. These results demonstrate that fault-tolerant circuits enable highly accurate logical primitives in current quantum systems. With improved two-qubit gates and the use of intermediate measurements, a stabilized logical qubit can be achieved.

Journal ArticleDOI
03 Sep 2020
TL;DR: In this article, a qubit-efficient error correction scheme applicable to arbitrary codes is developed, which opens up an opportunity to engineer machines that tackle problems of increased complexity.
Abstract: A qubit-efficient error correction scheme applicable to arbitrary codes is developed, opening up an opportunity to engineer machines that tackle problems of increased complexity.

Journal ArticleDOI
TL;DR: This work benchmarked a weighted variant of the union-find decoder on the toric code under circuit-level depolarizing noise, which preserves the almost-linear time complexity of the original while significantly increasing the performance in the fault-tolerance setting.
Abstract: Quantum error correction requires decoders that are both accurate and efficient. To this end, union-find decoding has emerged as a promising candidate for error correction on the surface code. In this work, we benchmark a weighted variant of the union-find decoder on the toric code under circuit-level depolarizing noise. This variant preserves the almost-linear time complexity of the original while significantly increasing the performance in the fault-tolerance setting. In this noise model, weighting the union-find decoder increases the threshold from 0.38% to 0.62%, compared to an increase from 0.65% to 0.72% when weighting a matching decoder. Further assuming quantum nondemolition measurements, weighted union-find decoding achieves a threshold of 0.76% compared to the 0.90% threshold when matching. We additionally provide comparisons of timing as well as low error rate behavior.
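The data structure that gives the decoder its almost-linear complexity is ordinary union-find with union by size and path compression; the decoder-specific cluster growth and weighting are omitted in this sketch:

```python
# Union-find with union by size and path halving: the primitive underlying
# union-find decoders. Syndrome defects become elements; merging clusters
# and testing membership both run in near-constant amortized time.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        """Return the representative of x's cluster, compressing the path."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the clusters of a and b, attaching the smaller to the larger."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

In the decoder, clusters of syndrome defects are grown and merged with these operations until each cluster can be corrected locally, which is why the overall runtime stays almost linear in the number of defects.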

Posted Content
TL;DR: This work considers surface code error correction, which is the most popular family of error correcting codes for quantum computing, and designs a decoder micro-architecture for the Union-Find decoding algorithm that significantly speeds up the decoder.
Abstract: Quantum computation promises significant computational advantages over classical computation for some problems. However, quantum hardware suffers from much higher error rates than in classical hardware. As a result, extensive quantum error correction is required to execute a useful quantum algorithm. The decoder is a key component of the error correction scheme whose role is to identify errors faster than they accumulate in the quantum computer and that must be implemented with minimum hardware resources in order to scale to the regime of practical applications. In this work, we consider surface code error correction, which is the most popular family of error correcting codes for quantum computing, and we design a decoder micro-architecture for the Union-Find decoding algorithm. We propose a three-stage fully pipelined hardware implementation of the decoder that significantly speeds up the decoder. Then, we optimize the amount of decoding hardware required to perform error correction simultaneously over all the logical qubits of the quantum computer. By sharing resources between logical qubits, we obtain a 67% reduction of the number of hardware units and the memory capacity is reduced by 70%. Moreover, we reduce the bandwidth required for the decoding process by a factor at least 30x using low-overhead compression algorithms. Finally, we provide numerical evidence that our optimized micro-architecture can be executed fast enough to correct errors in a quantum computer.

Journal ArticleDOI
01 Jul 2020
TL;DR: A new error correction system is presented, Baran, which provides a unifying abstraction for integrating multiple error corrector models that can be pretrained and updated in the same way and, because of the underlying context-aware data representation, achieves high precision.
Abstract: Traditional error correction solutions leverage handmade rules or master data to find the correct values. Both are often unavailable in real-world scenarios. Therefore, it is desirable to additionally learn corrections from a limited number of example repairs. To effectively generalize example repairs, it is necessary to capture the entire context of each erroneous value. A context comprises the value itself, the co-occurring values inside the same tuple, and all values that define the attribute type. Typically, an error corrector based on any one of these types of context information undergoes an individual process of operations that is not always easy to integrate with other types of error correctors. In this paper, we present a new error correction system, Baran, which provides a unifying abstraction for integrating multiple error corrector models that can be pretrained and updated in the same way. Because of the holistic nature of our approach, we generate more correction candidates than the state of the art and, because of the underlying context-aware data representation, we achieve high precision. We show that, by pretraining our models on Wikipedia revisions, our system can further improve its overall precision and recall. In our experiments, Baran significantly outperforms state-of-the-art error correction systems in terms of effectiveness and human involvement, requiring only 20 labeled tuples.
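A hypothetical miniature of learning corrections from example repairs with context (Baran's actual models are far richer): each example repair is indexed both by the bare erroneous value and by (co-occurring value, erroneous value) pairs, and context-specific evidence is preferred when available:

```python
# Hypothetical mini-corrector illustrating context-aware repair learning.
# Every name here is invented for illustration; Baran combines many richer
# pretrained corrector models under one abstraction.

class MiniCorrector:
    def __init__(self):
        self.by_value = {}    # erroneous value                 -> correction
        self.by_context = {}  # (co-occurring value, erroneous) -> correction

    def learn(self, tup, column, correction):
        """Index one example repair by value and by tuple context."""
        error = tup[column]
        self.by_value[error] = correction
        for i, v in enumerate(tup):
            if i != column:
                self.by_context[(v, error)] = correction

    def correct(self, tup, column):
        """Prefer context-specific evidence; fall back to the value model."""
        error = tup[column]
        for i, v in enumerate(tup):
            if i != column and (v, error) in self.by_context:
                return self.by_context[(v, error)]
        return self.by_value.get(error, error)
```

Even this toy shows why context helps: a repair learned in one tuple generalizes to other tuples that share either the erroneous value or its surrounding values.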

Journal ArticleDOI
TL;DR: Flag qubits, recently proposed for syndrome extraction circuits to detect high-weight errors arising from fewer faults, allow the construction of fault-tolerant protocols with the fewest ancillas known to date.
Abstract: Flag qubits have recently been proposed in syndrome extraction circuits to detect high-weight errors arising from fewer faults. The use of flag qubits allows the construction of fault-tolerant protocols with the fewest number of ancillas known to date. In this work, we prove some critical properties of Calderbank-Shor-Steane (CSS) codes constructed from classical cyclic codes that enable the construction of a flag fault-tolerant error correction scheme. We then develop fault-tolerant protocols as well as a family of circuits for flag fault-tolerant error correction and operator measurement, requiring only four ancilla qubits and applicable to cyclic CSS codes of distance 3. The measurement protocol can be further used for logical Clifford gate implementation via quantum gate teleportation. We also provide examples of cyclic CSS codes with large encoding rates.

Journal ArticleDOI
TL;DR: A novel semi-supervised autoencoder-based machine learning approach improves the ranging accuracy of ultra-wideband localization beyond the limitations of current approaches while targeting a small memory footprint, and an edge inference architecture enables online UWB ranging error correction.
Abstract: Indoor localization knows many applications, such as industry 4.0, warehouses, healthcare, drones, etc., where high accuracy becomes more critical than ever. Recent advances in ultra-wideband localization systems allow high accuracies for multiple active users in line-of-sight environments, while they still introduce errors above 300 mm in non-line-of-sight environments due to multi-path effects. Current work tries to improve the localization accuracy of ultra-wideband through offline error correction approaches using popular machine learning techniques. However, these techniques are still limited to simple environments with few multi-path effects and focus on offline correction. With the upcoming demand for high accuracy and low latency indoor localization systems, there is a need to deploy (online) efficient error correction techniques with fast response times in dynamic and complex environments. To address this, we propose (i) a novel semi-supervised autoencoder-based machine learning approach for improving ranging accuracy of ultra-wideband localization beyond the limitations of current improvements while aiming for performance improvements and a small memory footprint and (ii) an edge inference architecture for online UWB ranging error correction. As such, this paper allows the design of accurate localization systems by using machine learning for low-cost edge devices. Compared to a deep neural network (as state-of-the-art, with a baseline error of 75 mm) the proposed autoencoder achieves a 29% higher accuracy. The proposed approach leverages robust and accurate ultra-wideband localization, which reduces the errors from 214 mm without correction to 58 mm with correction. Validation of edge inference using the proposed autoencoder on a NVIDIA Jetson Nano demonstrates significant uplink bandwidth savings and allows up to 20 rapidly ranging anchors per edge GPU.

Journal ArticleDOI
TL;DR: The proposed algorithm, termed optimized sparse fractional Fourier transform (OSFrFT), reduces computational complexity while guaranteeing sufficient robustness and estimation accuracy; a successful application of OSFrFT to continuous-wave radar signal processing is demonstrated.

Journal ArticleDOI
TL;DR: Experimental simulation results show that the proposed improved algorithm achieves higher real-time localization accuracy and robustness than the standard EKF algorithm.
Abstract: To address the large errors of the standard extended Kalman filter (EKF) algorithm in Unmanned Aerial Vehicle (UAV) multi-sensor fusion localization, this paper proposes a multi-sensor fusion localization method based on an adaptive error-correction EKF algorithm. First, a multi-sensor navigation and localization system is constructed using gyroscopes, acceleration sensors, magnetic sensors, and mileage sensors. The information detected by the sensors is then compared and adjusted to reduce the influence of error on the estimated value. The nonlinear observation equation is linearized by Taylor expansion, and the Gaussian assumption is applied in both the prediction and correction steps. Finally, the system noise and measurement noise covariance parameters of the EKF are optimized using the evolutionary iteration mechanism of a genetic algorithm: the fitness is obtained from the absolute value of the difference between the EKF estimate and the true value, and the evaluation results for individual EKF parameter sets serve as the selection criterion across iterations, yielding the optimal EKF parameter values. Experimental simulation results show that the proposed improved algorithm achieves higher real-time localization accuracy and robustness than the standard EKF algorithm.
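The idea of tuning filter noise covariances with a genetic algorithm can be illustrated with a scalar stand-in. This sketch uses a linear Kalman filter on a simulated random walk instead of the paper's UAV EKF; the population size, mutation scale, and fitness definition (negative mean absolute error, matching the abstract's use of the absolute estimate-truth difference) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_kf(zs, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: returns position estimates."""
    x, p, est = x0, p0, []
    for z in zs:
        p = p + q                  # predict (state modeled as a random walk)
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # correct with measurement z
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

# Simulated ground truth and noisy sensor readings.
truth = np.cumsum(rng.normal(0, 0.1, 200))
zs = truth + rng.normal(0, 0.5, 200)

def fitness(params):
    q, r = params
    return -np.mean(np.abs(run_kf(zs, q, r) - truth))   # higher is better

# Tiny genetic algorithm over (q, r): elitist selection + Gaussian mutation.
pop = rng.uniform(0.01, 1.0, (20, 2))
for _ in range(30):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]             # keep the best half
    children = parents + rng.normal(0, 0.05, parents.shape)
    pop = np.clip(np.vstack([parents, children]), 1e-4, 2.0)

best = pop[np.argmax([fitness(p) for p in pop])]        # tuned (q, r)
```

With the covariances tuned this way, the filtered track should have a lower mean absolute error than the raw sensor readings, which is the effect the paper exploits at full EKF scale.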

Journal ArticleDOI
TL;DR: The results show that, compared with the state-of-the-art SC flip decoders, the proposed EBPF decoder exhibits 30% throughput improvement under comparable error correction performance.
Abstract: Owing to its parallel message-propagation property, belief propagation (BP) polar decoding can achieve high throughput and has drawn increasing attention. However, BP decoding does not match successive cancellation list (SCL) decoding in error correction performance. In this brief, two BP flip (BPF) decoding algorithms are proposed. Compared with existing BPF decoding, the generalized BPF (GBPF) decoding identifies error-prone bits more efficiently through a redefinition of bit-flipping. Furthermore, the GBPF decoding is optimized by halving the search range, leading to the enhanced BPF (EBPF) decoding with reduced complexity and improved performance for 5G polar codes. The hardware architecture is provided and implemented in SMIC 65 nm CMOS technology. The results show that, compared with state-of-the-art SC flip decoders, the proposed EBPF decoder achieves a 30% throughput improvement with comparable error correction performance.
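The core flip-decoding loop, stripped of polar-code specifics, can be sketched as: take a hard decision, and if a validity check fails, retry with the least reliable bits flipped one at a time. In this toy, a single even-parity check stands in for the CRC that real SC-/BP-flip decoders use, and the 4-bit "codeword" and LLR values are purely illustrative.

```python
import numpy as np

def parity_ok(bits):
    """Stand-in for the CRC check used by real SC-/BP-flip decoders."""
    return bits.sum() % 2 == 0

def hard_decision(llrs):
    return (llrs < 0).astype(int)            # negative LLR -> bit 1

def flip_decode(llrs, max_flips=3):
    """Generic flip decoding: if the first attempt fails the check,
    retry with the least reliable bits flipped one at a time."""
    bits = hard_decision(llrs)
    if parity_ok(bits):
        return bits, 0
    order = np.argsort(np.abs(llrs))         # least reliable first
    for t, i in enumerate(order[:max_flips], start=1):
        trial = bits.copy()
        trial[i] ^= 1
        if parity_ok(trial):
            return trial, t
    return bits, -1                          # give up after max_flips tries

# Even-parity codeword 1,0,1,0 was sent; bit 3 arrives with a weak,
# wrong-sign LLR -- a channel error the flip pass should repair.
llrs = np.array([-4.0, 0.3, -3.5, -0.2])
decoded, flips = flip_decode(llrs)
```

The GBPF/EBPF contributions in the abstract amount to building a better `order` (which bits to try) and a smaller `max_flips` search range than earlier BPF schemes.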

Proceedings ArticleDOI
30 May 2020
TL;DR: In this paper, the authors propose an approximate quantum error correction (AQEC) scheme that increases the computational power of near-term quantum computers by adapting protocols used in quantum error correction to implement approximate error decoding.
Abstract: Quantum computers are growing in size, and design decisions are being made now that attempt to squeeze more computation out of these machines. In this spirit, we design a method to boost the computational power of near-term quantum computers by adapting protocols used in quantum error correction to implement "Approximate Quantum Error Correction (AQEC)": by approximating fully-fledged error correction mechanisms, we can increase the compute volume (qubits × gates, or "Simple Quantum Volume (SQV)") of near-term machines. The crux of our design is a fast hardware decoder that can approximately decode detected error syndromes rapidly. Specifically, we demonstrate a proof of concept that approximate error decoding can be accomplished online in near-term quantum systems by designing and implementing a novel algorithm in superconducting Single Flux Quantum (SFQ) logic technology. This avoids a critical decoding backlog, hidden in all offline decoding schemes, that leads to idle time exponential in the number of T gates in a program [58]. Our design utilizes one SFQ processing module per physical qubit. Employing state-of-the-art SFQ synthesis tools, we show that the circuit area, power, and latency are within the constraints of typical, contemporary quantum system designs. Under a pure dephasing error model, the proposed accelerator and AQEC solution expand SQV by factors between 3,402 and 11,163 on expected near-term machines. The decoder achieves a 5% accuracy threshold as well as pseudo-thresholds of approximately 5%, 4.75%, 4.5%, and 3.5% physical error rates for code distances 3, 5, 7, and 9, respectively. Decoding solutions are achieved in at most ~20 nanoseconds on the largest code distances studied. By avoiding the exponential idle time of offline decoders, we achieve a 10x reduction in the code distance required to match the logical performance of alternative designs.
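The principle behind a fast syndrome decoder, mapping each measured syndrome to a precomputed correction with no search at decode time, can be shown in software with the 3-qubit bit-flip repetition code. This is far simpler than the surface codes and SFQ hardware the paper targets, but the constant-time table lookup is the same idea the hardware accelerates.

```python
# Precomputed syndrome -> correction table for the 3-qubit bit-flip code.
# The two parity checks compare qubits (0,1) and (1,2); each of the four
# possible syndromes identifies at most one flipped data qubit.
TABLE = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # flip data qubit 0
    (1, 1): 1,      # flip data qubit 1
    (0, 1): 2,      # flip data qubit 2
}

def syndrome(data):
    return (data[0] ^ data[1], data[1] ^ data[2])

def decode(data):
    """Constant-time decoding: measure the syndrome, look up the fix."""
    fix = TABLE[syndrome(data)]
    if fix is not None:
        data = list(data)
        data[fix] ^= 1
    return tuple(data)
```

A real surface-code decoder cannot enumerate all syndromes this way, which is why the paper accepts *approximate* decoding in exchange for keeping the lookup-style latency.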

Journal ArticleDOI
TL;DR: This work implements and compares two NN-based decoders, a low-level decoder and a high-level decoder, and studies how different NN parameters affect their decoding performance and execution time, concluding that the high-level decoder based on a recurrent NN shows a better balance between decoding performance and execution time.
Abstract: Matching algorithms can be used for identifying errors in quantum systems, the most famous being the Blossom algorithm. Recent works have shown that small-distance quantum error correction codes can be efficiently decoded by employing machine learning techniques based on neural networks (NNs). Various NN-based decoders have been proposed to enhance decoding performance and decoding time. Their implementations differ in how the decoding is performed, at the logical or the physical level, as well as in several neural-network-related parameters. In this work, we implement and compare two NN-based decoders, a low-level decoder and a high-level decoder, and study how different NN parameters affect their decoding performance and execution time. Crucial parameters such as the size of the training dataset, the structure and type of the neural network, and the learning rate used during training are discussed. After performing this comparison, we conclude that the high-level decoder based on a recurrent NN shows a better balance between decoding performance and execution time and is much easier to train. We then test its decoding performance for different code distances and probability datasets, under the depolarizing and circuit error models.
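A "low-level decoder" in miniature: a small neural network trained to map syndromes directly to the physical qubit in error. The sketch below uses the 3-qubit bit-flip code rather than the surface code, a hand-rolled NumPy MLP rather than the architectures compared in the paper, and illustrative hyperparameters throughout.

```python
import numpy as np

rng = np.random.default_rng(42)

# Training data: all syndromes of the 3-qubit bit-flip code, labeled with
# the index of the flipped data qubit (class 3 = "no error").
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
y = np.array([3, 0, 1, 2])
Y = np.eye(4)[y]                               # one-hot targets

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)

def forward(X):
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True) # softmax probabilities

lr = 0.5
for _ in range(3000):                          # full-batch gradient descent
    h, p = forward(X)
    g = (p - Y) / len(X)                       # softmax cross-entropy gradient
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)             # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = forward(X)[1].argmax(axis=1)            # decoded error locations
```

The paper's high-level decoder instead predicts only the *logical* correction from the syndrome history, which is why a recurrent network (consuming repeated syndrome measurements over time) fits that formulation naturally.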

Journal ArticleDOI
23 Mar 2020
TL;DR: In this paper, it is shown how to add a continuous Abelian group of transversal logical gates to any error-correcting code to circumvent the no-go theorem of Eastin and Knill.
Abstract: Following the introduction of the task of reference frame error correction, we show how, by using reference frame alignment with clocks, one can add a continuous Abelian group of transversal logical gates to any error-correcting code. With this we further explore a way of circumventing the no-go theorem of Eastin and Knill, which states that if local errors are correctable, the group of transversal gates must be of finite order. We are able to do this by introducing a small error into the decoding procedure that decreases with the dimension of the frames used. Furthermore, we show that there is a direct relationship between how small this error can be and how accurate quantum clocks can be: the more accurate the clock, the smaller the error; and the no-go theorem would be violated if time could be measured perfectly in quantum mechanics. The asymptotic scaling of the error is studied under a number of scenarios for reference frames and error models. The scheme is also extended to errors at unknown locations, and we show how to achieve this with simple majority-voting error correction schemes on the reference frames. In the Outlook, we discuss our results in relation to the AdS/CFT correspondence and the Page-Wootters mechanism.
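The majority-voting step for frames at unknown error locations reduces, in its simplest classical form, to picking the most common value among redundant copies. A toy sketch follows; the quantum reference-frame machinery the paper actually works with is elided entirely.

```python
from collections import Counter

def majority_vote(readings):
    """Return the most common value among redundant noisy copies.

    This is the repetition-style correction applied per frame: as long as
    fewer than half of the copies are corrupted, the vote recovers the
    original value without knowing *which* copies were hit.
    """
    return Counter(readings).most_common(1)[0][0]

# Five copies of a frame reading, two corrupted at unknown positions.
recovered = majority_vote([1, 0, 1, 1, 0])
```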

Proceedings ArticleDOI
25 Oct 2020
TL;DR: A novel augmented variant of the Transformer model that encodes both the word and phoneme sequence of an entity, and attends to phoneme information in addition to word-level information during decoding to correct mistranscribed named entities is proposed.
Abstract: Domain-agnostic Automatic Speech Recognition (ASR) systems suffer from mistranscribing domain-specific words, which leads to failures in downstream tasks. In this paper, we present a post-editing ASR error correction method using the Transformer model for entity mention correction and retrieval. Specifically, we propose a novel augmented variant of the Transformer model that encodes both the word and phoneme sequences of an entity, and attends to phoneme information in addition to word-level information during decoding to correct mistranscribed named entities. We evaluate our method on both the ASR error correction task and the downstream retrieval task. Our method achieves a 48.08% entity error rate (EER) reduction on the ASR error correction task and a 26.74% mean reciprocal rank (MRR) improvement on the retrieval task. In addition, our augmented Transformer model significantly outperforms the vanilla Transformer, with a 17.89% EER reduction and a 1.98% MRR increase, demonstrating the effectiveness of incorporating phoneme information in the correction model.
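How a decoder can attend to phoneme information in addition to word-level information is easiest to see in a single attention step. The sketch below is not the paper's model: the dimensions, sequence lengths, equal 50/50 fusion of the two contexts, and random stand-ins for the encoder outputs are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
D = 16   # model dimension (illustrative)

def attend(q, keys, values):
    """Scaled dot-product attention for a single query (single head)."""
    scores = keys @ q / np.sqrt(D)
    w = np.exp(scores - scores.max())
    w /= w.sum()                       # softmax over source positions
    return w @ values

# Hypothetical encoder outputs for an entity mention: one memory from the
# word encoder, one from the phoneme encoder -- the paper's augmentation
# is that the decoder sees both.
word_mem = rng.normal(size=(5, D))     # 5 word-piece states
phone_mem = rng.normal(size=(12, D))   # 12 phoneme states
q = rng.normal(size=D)                 # decoder query at one output step

# Fuse the two attention contexts so phonetically similar entities stay
# distinguishable while correcting the transcript.
ctx = 0.5 * attend(q, word_mem, word_mem) + 0.5 * attend(q, phone_mem, phone_mem)
```

In a trained model the fusion weights (and separate key/value projections per source) would be learned rather than fixed at 0.5.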