Error bounds for convolutional codes and an asymptotically optimum decoding algorithm

Abstract
The probability of error in decoding an optimal convolutional code transmitted over a memoryless channel is bounded from above and below as a function of the constraint length of the code. For all but pathological channels the bounds are asymptotically (exponentially) tight for rates above R_0, the computational cutoff rate of sequential decoding. As a function of constraint length the performance of optimal convolutional codes is shown to be superior to that of block codes of the same length, the relative improvement increasing with rate. The upper bound is obtained for a specific probabilistic nonsequential decoding algorithm which is shown to be asymptotically optimum for rates above R_0 and whose performance bears certain similarities to that of sequential decoding algorithms.

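The "specific probabilistic nonsequential decoding algorithm" of the abstract is what is now known as the Viterbi algorithm: a maximum-likelihood search over the code trellis that, at each step, keeps only the best-metric survivor path into each encoder state. As a rough illustration only (the paper analyzes the algorithm over general memoryless channels and does not prescribe an implementation), the Python sketch below decodes the standard rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5), using hard decisions and a Hamming branch metric; the function names, the test message, and the single-error example are illustrative assumptions, not material from the paper.

    # Minimal hard-decision Viterbi decoder sketch (illustrative, not from the paper).
    # Rate-1/2, constraint length K=3, generators (7, 5) octal.

    G = [0b111, 0b101]            # generator polynomials
    K = 3                         # constraint length
    NUM_STATES = 1 << (K - 1)     # state = last K-1 input bits


    def encode(bits):
        """Encode a bit sequence with the (7, 5) convolutional code."""
        state = 0
        out = []
        for b in bits:
            reg = (b << (K - 1)) | state                      # new input + memory
            out.extend((bin(reg & g).count("1") & 1) for g in G)
            state = reg >> 1                                  # shift the register
        return out


    def viterbi_decode(received):
        """Maximum-likelihood decoding by survivor selection on the trellis."""
        INF = float("inf")
        metrics = [0] + [INF] * (NUM_STATES - 1)              # start in state 0
        paths = [[] for _ in range(NUM_STATES)]
        n = len(G)
        for i in range(0, len(received), n):
            symbol = received[i:i + n]
            new_metrics = [INF] * NUM_STATES
            new_paths = [None] * NUM_STATES
            for state in range(NUM_STATES):
                if metrics[state] == INF:                     # unreachable state
                    continue
                for b in (0, 1):                              # hypothesized input bit
                    reg = (b << (K - 1)) | state
                    expected = [bin(reg & g).count("1") & 1 for g in G]
                    branch = sum(e != r for e, r in zip(expected, symbol))
                    nxt = reg >> 1
                    m = metrics[state] + branch               # accumulated path metric
                    if m < new_metrics[nxt]:                  # keep the survivor only
                        new_metrics[nxt] = m
                        new_paths[nxt] = paths[state] + [b]
            metrics, paths = new_metrics, new_paths
        best = min(range(NUM_STATES), key=lambda s: metrics[s])
        return paths[best]


    if __name__ == "__main__":
        msg = [1, 0, 1, 1, 0, 0]          # includes K-1 = 2 flush zeros
        coded = encode(msg)
        coded[3] ^= 1                     # inject one channel error
        assert viterbi_decode(coded) == msg

Because only one survivor per state is retained at each trellis depth, the work per decoded bit is fixed by the constraint length rather than by the channel noise, which is the sense in which the algorithm is nonsequential.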
