Topic
Sequential decoding
About: Sequential decoding is a research topic. Over its lifetime, 8667 publications have been published on this topic, receiving 204271 citations.
Papers published on a yearly basis
Papers
30 Sep 2010
TL;DR: A decoding algorithm for topological codes that is faster than previously known algorithms, applies to a wider class of topological codes, and makes use of two methods inspired by statistical physics: renormalization groups and mean-field approximations.
Abstract: Topological quantum error-correcting codes are defined by geometrically local checks on a two-dimensional lattice of quantum bits (qubits), making them particularly well suited for fault-tolerant quantum information processing. Here, we present a decoding algorithm for topological codes that is faster than previously known algorithms and applies to a wider class of topological codes. Our algorithm makes use of two methods inspired by statistical physics: renormalization groups and mean-field approximations. First, the topological code is approximated by a concatenated block code that can be efficiently decoded. To improve this approximation, additional consistency conditions are imposed between the blocks, and are solved by a belief propagation algorithm.
54 citations
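The renormalization idea above — decoding small blocks locally and feeding the results up to a coarser level — can be illustrated, very loosely, with a classical repetition code decoded by recursive majority voting. This is only an analogy to the coarse-graining step, not the topological-code algorithm itself:

```python
def majority(bits):
    # Majority vote over a small block of 0/1 bits.
    return int(sum(bits) > len(bits) // 2)

def rg_decode(bits, block=3):
    # Recursively coarse-grain: decode each length-`block` cell by
    # majority vote, then repeat on the coarser lattice of results.
    if len(bits) == 1:
        return bits[0]
    coarse = [majority(bits[i:i + block]) for i in range(0, len(bits), block)]
    return rg_decode(coarse, block)

# A length-9 repetition codeword of 1 with two isolated bit flips:
received = [1, 0, 1, 1, 1, 1, 0, 1, 1]
print(rg_decode(received))  # recovers 1
```

Each recursion level plays the role of one renormalization step; the paper's algorithm additionally passes soft (belief-propagation) messages between blocks rather than hard majority votes.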
27 Jun 2004
TL;DR: It is shown that the error probability for decoding interleaved Reed-Solomon codes with the decoder found by Bleichenbacher et al. is upper bounded by O(1/q), independently of n.
Abstract: We show that the error probability for decoding interleaved Reed-Solomon codes with the decoder found by Bleichenbacher et al. (Ref. 1) is upper bounded by O(1/q), independently of n. The decoding algorithm presented here is similar to that of standard RS codes: it involves computing the error-locator polynomials, which are found by computing the right kernel of a certain matrix. The correct solution always lies in the right kernel, so we can decode correctly whenever the right kernel is one-dimensional.
54 citations
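The success criterion above — decoding works when the right kernel of the decoding matrix is one-dimensional — can be sketched with a small Gaussian-elimination routine over GF(p). The matrix and prime below are toy placeholders, not the interleaved-RS decoding matrix itself:

```python
def kernel_basis_mod_p(A, p):
    # Basis of the right kernel of A over GF(p), p prime,
    # via row reduction to reduced row-echelon form.
    rows, cols = len(A), len(A[0])
    M = [row[:] for row in A]
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # Fermat inverse (p prime)
        M[r] = [(x * inv) % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivots]
    basis = []
    for fc in free:                            # one basis vector per free column
        v = [0] * cols
        v[fc] = 1
        for i, pc in enumerate(pivots):
            v[pc] = (-M[i][fc]) % p
        basis.append(v)
    return basis

# Rank-2 matrix with 3 columns over GF(7): kernel dimension is 1,
# so (in the paper's terms) decoding would succeed.
A = [[1, 0, 1], [0, 1, 1]]
print(len(kernel_basis_mod_p(A, 7)))  # prints 1
```

A kernel dimension greater than one would mean several candidate error-locator solutions, which is exactly the failure event whose probability the paper bounds by O(1/q).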
TL;DR: Error analysis and simulation results indicate that for the additive white Gaussian noise (AWGN) channel, convolutional lattice codes with computationally reasonable decoders can achieve low error rate close to the channel capacity.
Abstract: The coded modulation scheme proposed in this paper has a simple construction: an integer sequence, representing the information, is convolved with a fixed, continuous-valued, finite impulse response (FIR) filter to generate the codeword - a lattice point. Due to power constraints, the code construction includes a shaping mechanism inspired by precoding techniques such as the Tomlinson-Harashima filter. We naturally term these codes “convolutional lattice codes” or alternatively “signal codes” due to the signal processing interpretation of the code construction. Surprisingly, properly chosen short FIR filters can generate good codes with large minimal distance. Decoding can be done efficiently by sequential decoding or for better performance by bidirectional sequential decoding. Error analysis and simulation results indicate that for the additive white Gaussian noise (AWGN) channel, convolutional lattice codes with computationally reasonable decoders can achieve low error rate close to the channel capacity.
54 citations
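The encoding step described above — an integer information sequence convolved with a fixed, finite impulse response filter to produce a lattice point — can be sketched directly. The filter taps and data below are arbitrary placeholders, and the paper's Tomlinson-Harashima-style shaping step is omitted:

```python
import numpy as np

def encode_signal_code(integers, fir_taps):
    # Codeword = (integer data sequence) convolved with the fixed FIR
    # filter; the set of all such outputs forms the code lattice.
    return np.convolve(integers, fir_taps)

data = np.array([1, -2, 3, 0, 1])     # integer information sequence
taps = np.array([1.0, 0.5, -0.25])    # placeholder FIR taps, not from the paper
codeword = encode_signal_code(data, taps)
print(codeword)                        # length len(data) + len(taps) - 1
```

Decoding then amounts to searching for the integer sequence whose filtered image is closest to the received signal, which is where the sequential (or bidirectional sequential) decoder mentioned in the abstract comes in.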
06 Jan 2010
TL;DR: A decoding method is proposed that performs a first decoding method and, when the first fails, a second decoding method; the first method updates multiple variable nodes and multiple check nodes using probability values of the received data.
Abstract: A decoding method includes performing a first decoding method and, when the first decoding method fails, performing a second decoding method. The first decoding method includes updating multiple variable nodes and multiple check nodes using probability values of received data. The second decoding method includes selecting at least one variable node from among the multiple variable nodes; correcting the probability values of the data received at the selected variable node(s); updating the variable nodes and the check nodes using the corrected probability values; and determining whether the second decoding method is successful.
54 citations
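The two-stage control flow above — run a first decoder, and on failure correct the most suspect variable node and retry — can be sketched with a hard-decision bit-flipping stand-in for the patent's soft, probability-based node updates. The parity-check matrix is a toy example:

```python
def syndrome(H, word):
    # Each parity check: row of H dotted with the word, mod 2.
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def two_stage_decode(H, received, max_flips=3):
    word = list(received)
    # First stage: accept the word if every check is already satisfied.
    if not any(syndrome(H, word)):
        return word, True
    # Second stage: select the variable node involved in the most
    # unsatisfied checks, correct (flip) it, and re-check.
    for _ in range(max_flips):
        s = syndrome(H, word)
        scores = [sum(s[i] for i in range(len(H)) if H[i][j])
                  for j in range(len(word))]
        word[scores.index(max(scores))] ^= 1
        if not any(syndrome(H, word)):
            return word, True
    return word, False

# Toy parity-check matrix; received word has a single error at position 0.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
decoded, ok = two_stage_decode(H, [1, 0, 0, 0, 0, 0])
print(ok, decoded)  # True [0, 0, 0, 0, 0, 0]
```

In the patent, both stages are soft-decision (belief-propagation-style) updates and the "correction" adjusts probability values rather than flipping bits, but the fall-back structure is the same.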
16 Dec 1994
TL;DR: In this article, a coding device has an encoder used to generate code-word information in response to data, and a reordering unit whose run-count reordering sub-unit arranges code words in coding sequence and whose binary-digit arrangement sub-unit aggregates variable-length code words into fixed-length words and submits these fixed-length words in the sequence required by the decoding device.
Abstract: FIELD: coding devices designed for data-compacting systems incorporating a decoding device for decoding information generated by the coding device. SUBSTANCE: the coding device has an encoder used to generate code-word information in response to data. The coding device also incorporates a reordering unit generating a stream of coded data in response to code-word information arriving from the encoder. The reordering unit has a run-count reordering sub-unit designed to arrange code words in coding sequence, and a binary-digit arrangement sub-unit for aggregating variable-length code words into fixed-length words and for submitting these fixed-length words in the sequence required by the decoding device. EFFECT: provision for precise recovery of original data. 121 cl, 33 dwgr
54 citations
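The binary-digit arrangement step above — aggregating variable-length code words into fixed-length words for the decoder — can be sketched as simple bit packing. The word size and code words below are placeholders, not values from the patent:

```python
def pack_codewords(codewords, word_bits=8):
    # Concatenate variable-length code words (given as bit strings)
    # and emit fixed-length words, zero-padding the final partial word.
    stream = "".join(codewords)
    if len(stream) % word_bits:
        stream += "0" * (word_bits - len(stream) % word_bits)
    return [stream[i:i + word_bits] for i in range(0, len(stream), word_bits)]

# Three variable-length code words packed into 8-bit words:
print(pack_codewords(["101", "0", "110011"]))
# ['10101100', '11000000']
```

The patent's reordering unit additionally controls the *order* in which these fixed-length words are emitted so that the decoder receives them in the sequence it needs; that scheduling is omitted here.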