Topic
Sequential decoding
About: Sequential decoding is a research topic. Over its lifetime, 8667 publications have been published on this topic, receiving 204271 citations.
Papers
TL;DR: Large classes of T-user codes for the binary adder channel are shown to be easily constructed by generalizing some previous results in two basic ways: via an iterative procedure and via an equivalence relation on such codes.
Abstract: Large classes of T-user codes for the binary adder channel are shown to be easily constructed by generalizing some previous results in two basic ways. The first is an iterative procedure; the second is by introducing an equivalence relation on such codes. Decoding algorithms are presented in both cases. The number of codes in the equivalence class of some previously defined codes is determined exactly.
35 citations
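The binary adder channel above outputs the componentwise integer sum of the users' codewords, so a code pair is uniquely decodable exactly when all such sums are distinct. A minimal sketch of that check, using an assumed textbook example pair (not a construction from the paper):

```python
# Illustrative sketch: 2-user binary adder channel. The receiver sees
# the componentwise integer sum of both users' codewords; the pair
# (C1, C2) is uniquely decodable iff every (x, y) gives a distinct sum.
from itertools import product

def sums(c1, c2):
    """All channel outputs over every pair of transmitted codewords."""
    return [tuple(a + b for a, b in zip(x, y)) for x, y in product(c1, c2)]

def uniquely_decodable(c1, c2):
    s = sums(c1, c2)
    return len(s) == len(set(s))

# Assumed classic example pair, for illustration only.
C1 = [(0, 0), (1, 1)]
C2 = [(0, 0), (0, 1), (1, 0)]
```

Brute-force enumeration like this is only feasible for tiny codes; the constructions in the paper avoid it by building uniquely decodable classes directly.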
••
TL;DR: An approach for dealing with bursts in data storage systems is proposed that, similarly to Berlekamp and Tong, exploits the dependency between interleaved byte-error correcting codes to enhance decoding performance.
Abstract: A common way to deal with bursts in data storage systems is to interleave byte-error correcting codes. In the decoding of each of these byte-error correcting codes, one normally does not make use of the information obtained from the previous or subsequent code, while the bursty character of the channel indicates a dependency. Berlekamp and Tong (1985) exploited such a dependency to enhance the decoding performance. Here a different, but similar approach is proposed.
35 citations
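Interleaving, as described in the abstract above, transmits codewords column-by-column so that a channel burst is spread across many codewords. A minimal sketch (generic illustration, not the paper's scheme):

```python
# Illustrative sketch: depth-d byte interleaving. A burst of length L
# on the channel then touches at most ceil(L/d) symbols of any single
# codeword, which each byte-error correcting code can handle alone.

def interleave(codewords):
    """Transmit column-by-column: codewords[i][j] -> stream position j*d + i."""
    d = len(codewords)
    n = len(codewords[0])
    return [codewords[i][j] for j in range(n) for i in range(d)]

def deinterleave(stream, d):
    """Recover the d codewords from the interleaved stream."""
    n = len(stream) // d
    return [[stream[j * d + i] for j in range(n)] for i in range(d)]
```

The paper's point is that standard decoders treat each deinterleaved codeword independently, even though a burst that corrupts one codeword's symbol at position j likely also corrupted its neighbors at the same position.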
01 Aug 1995
TL;DR: A new simple encoding technique is introduced which allows the design of a wide variety of linear block codes; the trellises of the designed codes are shown to be similar to the trellises of coset codes and to allow low-complexity soft maximum likelihood decoding.
Abstract: The authors introduce a new simple encoding technique which allows the design of a wide variety of linear block codes. They present a number of examples in which the most widely used codes (Reed-Muller, Hamming, Golay, optimum etc.) have been designed. They also introduce a novel trellis design procedure for the proposed codes. It is shown that the trellises of the designed codes are similar to the trellises of coset codes and allow low complexity soft maximum likelihood decoding.
35 citations
25 Aug 2013
TL;DR: A novel way to compute confidence scores from character lattices produced during a single Viterbi decoding pass using only the "filler" model is presented; it obtains essentially the same spotting results as the classical approach while requiring between one and two orders of magnitude less query computing time.
Abstract: The so-called filler or garbage Hidden Markov Models (HMM) are among the most widely used models for lexicon-free, query by string key word spotting in the fields of speech recognition and (lately) handwritten text recognition. An important drawback of this approach is the large computational cost of the keyword-specific HMM Viterbi decoding process needed to obtain the confidence scores of each word to be spotted. This paper presents a novel way to compute such confidence scores, directly from character lattices produced during a single Viterbi decoding process using only the "filler" model (i.e. no explicit keyword-specific decoding is needed). Experiments show that, as compared with the classical HMM-filler approach, the proposed method obtains essentially the same spotting results, while requiring between one and two orders of magnitude less query computing time.
35 citations
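The costly step the paper avoids repeating per keyword is HMM Viterbi decoding. A minimal log-domain Viterbi sketch over a toy discrete-output HMM (a generic illustration with assumed parameters, not the paper's character-lattice system):

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state sequence for a discrete-output HMM, log domain."""
    # Initialization with the first observation.
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            # Best predecessor for state s at this time step.
            p = max(states, key=lambda q: V[-1][q] + log_trans[q][s])
            row[s] = V[-1][p] + log_trans[p][s] + log_emit[s][o]
            ptr[s] = p
        V.append(row)
        back.append(ptr)
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Running one such pass with only the filler model, and reading keyword confidences off the resulting character lattice, is what lets the method skip a separate keyword-specific decoding per query.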
14 Apr 1991
TL;DR: A new class of A* algorithms for Viterbi phonetic decoding subject to lexical constraints is presented; these algorithms can be made to run substantially faster than the Viterbi algorithm in an isolated word recognizer and run very quickly on a 60000-word recognition task.
Abstract: The authors present a new class of A* algorithms for Viterbi phonetic decoding subject to lexical constraints. They show that this type of algorithm can be made to run substantially faster than the Viterbi algorithm in an isolated word recognizer having a vocabulary of 1600 words and that it runs very quickly on a 60000-word recognition task. In addition, multiple recognition hypotheses can be generated on demand and the search can be constrained to respect conditions on phone durations in such a way that computational requirements are substantially reduced.
35 citations
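The speedup in the paper above comes from best-first A* search, which with an admissible heuristic finds a cheapest path while expanding far fewer hypotheses than exhaustive dynamic programming. A minimal generic A* sketch (graph, costs, and heuristic are assumptions for illustration, not the paper's decoder):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Cheapest path from start to goal; expands nodes in f = g + h order.
    With an admissible h (never overestimating), the first pop of goal
    is optimal."""
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")
```

In the lexically constrained setting, the nodes would be (phone, lexicon-prefix) hypotheses and h an estimate of the remaining acoustic cost; the sketch above only shows the search skeleton.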