
Showing papers on "Word error rate" published in 1968


Journal ArticleDOI
Raimo Bakis1, Noel M. Herbst1, George Nagy1
TL;DR: The recognition of hand-printed numerals is studied on a broad experimental basis within the constraints imposed by a raster scanner generating binary video patterns, a mixed measurement set, and a statistical decision function.
Abstract: The recognition of hand-printed numerals is studied on a broad experimental basis within the constraints imposed by a raster scanner generating binary video patterns, a mixed measurement set, and a statistical decision function. A computer-controlled scanner is used to acquire the characters, to adjust the raster resolution and registration, and to monitor the black-white threshold of the quantizer. The dimensionality of the decision problem is reduced by a hybrid system of measurements. In the measurement design, three types of measurements are generated: a set of "topological" measurements, a set of logical "n-tuples," both designed by hand, and a large set of n-tuples machine generated at random under special constraints. The final set of 100 measurements is selected automatically by a programmed algorithm that attempts to minimize the maximum expected error rate over all character pairs. Computer simulation experiments show the effectiveness of the selection procedure, the contribution of the different types of measurements, the effect of the number of measurements selected on recognition, and the desirability of size and shear normalization. The final system is tested on four data sets printed under different degrees of control on the writers. Each data set consists of approximately 10 000 characters. For this comparison, a first-order maximum likelihood function with weights quantized to 100 levels is used. Error versus reject curves are given on several combinations of training and test sets.

59 citations
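
The selection step described above lends itself to a greedy sketch. Below is a minimal illustration of choosing measurements so as to help the worst-separated character pair first; the pool size, the additive scoring, and the random separation values are assumptions for illustration, not the authors' actual statistics.

```python
# Hypothetical sketch of greedy measurement selection targeting the
# worst-separated character pair, loosely following the abstract's idea.
# sep[m][p] is an assumed additive "separation" score that measurement m
# contributes to class pair p (an assumption, not the authors' method).
import itertools
import random

NUM_CLASSES = 10          # digits 0-9
NUM_CANDIDATES = 500      # pool of candidate n-tuple measurements
BUDGET = 100              # final measurement set size, as in the paper

pairs = list(itertools.combinations(range(NUM_CLASSES), 2))
random.seed(0)
sep = [[random.random() for _ in pairs] for _ in range(NUM_CANDIDATES)]

selected = []
pair_score = [0.0] * len(pairs)   # accumulated separation per pair
for _ in range(BUDGET):
    worst = min(range(len(pairs)), key=lambda p: pair_score[p])
    # Pick the unused measurement that most helps the worst pair.
    best_m = max((m for m in range(NUM_CANDIDATES) if m not in selected),
                 key=lambda m: sep[m][worst])
    selected.append(best_m)
    for p in range(len(pairs)):
        pair_score[p] += sep[best_m][p]

print("selected", len(selected), "measurements;",
      "worst pair score: %.2f" % min(pair_score))
```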


Journal ArticleDOI
TL;DR: This paper describes a PMU technique which is applicable to a wide class of digital modulation methods, and details are given for the cases of noncoherent frequency shift keying, differentially coherent phase shift keying, and coherent PSK receivers operating over a fading channel.
Abstract: There is a need in many digital communication systems to estimate the current digital error rate of the receiver without the use of special transmissions and without interrupting traffic flow. Devices for performing this function have been termed performance monitor units (PMU). As an element of an adaptive communication system, the PMU could be used to determine when adaptive change is needed and, by comparing the error rates which would result from the various available choices of adaptation, to select the best change to be made at any time. This paper describes a PMU technique which is applicable to a wide class of digital modulation methods. Details are given for the cases of noncoherent frequency shift keying (FSK), differentially coherent phase shift keying (PSK), and coherent PSK receivers operating over a fading channel.

39 citations
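
One well-known way to monitor error rate without test transmissions is pseudo-error counting: decisions landing near the threshold are counted and extrapolated to the true error rate. The sketch below illustrates that general idea for binary antipodal signaling in Gaussian noise; the link parameters and the Gaussian extrapolation are assumptions, not necessarily this paper's exact PMU technique.

```python
# Minimal pseudo-error monitor sketch: near-threshold decisions are an
# observable event whose rate can be extrapolated to the true error rate.
import random
import math

random.seed(1)
AMPLITUDE, SIGMA, OFFSET = 1.0, 0.5, 0.6   # assumed link parameters
N = 200_000

pseudo = errors = 0
for _ in range(N):
    bit = random.choice((-1, 1))
    y = AMPLITUDE * bit + random.gauss(0.0, SIGMA)
    if (y > 0) != (bit > 0):
        errors += 1          # true error (known only in simulation)
    if abs(y) < OFFSET:
        pseudo += 1          # observable event: near-threshold decision

# Gaussian extrapolation from the offset band to the true threshold.
q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
factor = q(AMPLITUDE / SIGMA) / (q((AMPLITUDE - OFFSET) / SIGMA)
                                 - q((AMPLITUDE + OFFSET) / SIGMA))
print("true BER  %.2e" % (errors / N))
print("estimate  %.2e" % (pseudo / N * factor))
```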


Journal ArticleDOI
TL;DR: A system under development which will perform correction of errors made in a phonemic recognizer by comparison of the received phoneme sequence with the syntax and dictionary of "the language being spoken" is described.
Abstract: The use of contextual constraints in speech recognition has been contemplated by many authors. This paper describes a system under development which will perform correction of errors made in a phonemic recognizer by comparison of the received phoneme sequence with the syntax and dictionary of "the language being spoken." The language syntax is stored in a relatively efficient manner, being essentially in Backus-Naur Form. The error correction procedure can best be described as a sequential decoding on the tree of syntax generated by the tables, as traversed by a syntactical parser. The system is currently being implemented in a form such that it will recognize a "spoken FORTRAN" language. Some initial results of its application to certain error-containing inputs are described.

25 citations
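
A toy rendering of the idea: tokens that the grammar permits at the current position are matched against the noisy recognizer output, and the closest legal token wins. The miniature "spoken FORTRAN" grammar and the similarity scoring below are invented for illustration; the actual system performs sequential decoding over a BNF syntax tree.

```python
# Toy syntax-guided correction: candidate tokens legal under the grammar
# at this position are scored against the received chunk; best match wins.
from difflib import SequenceMatcher

# Hypothetical grammar: statement := KEYWORD IDENT
KEYWORDS = ["GOTO", "CALL", "READ", "PRINT", "RETURN"]
IDENTS = ["X", "Y", "SUM", "LOOP"]

def best_match(received, candidates):
    score = lambda c: SequenceMatcher(None, received, c).ratio()
    return max(candidates, key=score)

received = ["GOTTO", "LOP"]          # recognizer output with errors
corrected = [best_match(received[0], KEYWORDS),
             best_match(received[1], IDENTS)]
print(corrected)                     # ['GOTO', 'LOOP']
```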


Journal ArticleDOI
TL;DR: Nearest-neighbor classification is used to explain the high error rates obtained by general statistical procedures, and the minimum human error rate is estimated, and suggested as a performance standard.
Abstract: The results of three experiments with Highleyman's hand-printed characters are reported. Nearest-neighbor classification is used to explain the high error rates (42 to 60 percent) obtained by general statistical procedures. An error rate of 32 percent is obtained by preceding piecewise-linear classification with edge-detecting preprocessing. The minimum human error rate is estimated and suggested as a performance standard.

21 citations
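
For the flavor of the nearest-neighbor analysis, here is a minimal sketch: binary rasters compared by Hamming distance, with the label of the closest training pattern winning. The data is a random stand-in, not Highleyman's character set.

```python
# Minimal nearest-neighbor classifier over binary rasters.
import random

random.seed(2)
RASTER_BITS = 12 * 12

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_neighbor(x, train):
    # train: list of (bit-tuple, label) pairs.
    return min(train, key=lambda t: hamming(x, t[0]))[1]

# Stand-in "characters": each class is a random prototype plus bit noise.
protos = {c: tuple(random.getrandbits(1) for _ in range(RASTER_BITS))
          for c in "0123456789"}
noisy = lambda p: tuple(b ^ (random.random() < 0.05) for b in p)
train = [(noisy(protos[c]), c) for c in protos for _ in range(5)]
test = [(noisy(protos[c]), c) for c in protos for _ in range(20)]

errors = sum(nearest_neighbor(x, train) != c for x, c in test)
print("error rate: %.1f%%" % (100 * errors / len(test)))
```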


Journal ArticleDOI
I. B. Oldham1, R. T. Chien1, D. T. Tang1
TL;DR: An error-correction system has been implemented for data stored in the IBM Photo-Digital Storage System where a Reed-Solomon code is used to obtain a very low error rate in spite of flaws affecting the recorded bits.
Abstract: An error-correction system has been implemented for data stored in the IBM Photo-Digital Storage System. Hardware is used for encoding and error detection, and a processor-controller is used, on a time-sharing basis, for error correction. A Reed-Solomon code is used to obtain a very low error rate in spite of flaws affecting the recorded bits. This approach is applicable to systems which require complex codes and have a data processor available on a time-sharing basis.

21 citations
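
A functional sketch of the flaw-tolerant storage idea, assuming the third-party reedsolo Python package is available; the IBM system used dedicated encoding hardware and a time-shared processor-controller, so this only illustrates the Reed-Solomon encode/corrupt/decode cycle, not their implementation.

```python
# Reed-Solomon round trip: encode a record, hit a few stored bytes with
# simulated film flaws, and recover the original on decode.
from reedsolo import RSCodec

rsc = RSCodec(10)        # 10 parity bytes: corrects up to 5 byte errors

record = b"photo-digital storage test record"
stored = bytearray(rsc.encode(record))

# Simulate flaws hitting a few recorded bytes.
for pos in (3, 11, 20):
    stored[pos] ^= 0xFF

# decode() returns (payload, codeword, error positions) in reedsolo >= 1.5.
decoded, _, errata = rsc.decode(bytes(stored))
print(bytes(decoded) == record, "corrected positions:", list(errata))
```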


Journal ArticleDOI
TL;DR: An adaptive filter, similar to that used in automatic equalization, for use as a predictor in data compression systems, is suggested and some of the applications of this adaptive predictor in digital data transmission are discussed.
Abstract: This paper suggests an adaptive filter, similar to that used in automatic equalization, for use as a predictor in data compression systems. It discusses some of the applications of this adaptive predictor in digital data transmission. In the event of redundant data input to the system, the predictor could be used to lower the transmitted power required for a given error rate or to decrease the error rate while maintaining constant transmitted power. The action of these redundancy-removal and restoration systems is analyzed in simple cases involving Markov inputs.

15 citations
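
A minimal sketch of the scheme, assuming an LMS-style update: both ends run the same adaptive predictor, so only the prediction residual need be sent, and with redundant (here, Markov-like) input the residual power drops well below the input power. Filter length and step size are illustrative.

```python
# LMS-style adaptive predictor for redundancy removal: transmit the
# prediction error; the receiver, running the identical predictor,
# reconstructs each sample exactly as pred + residual.
import random

TAPS, MU = 4, 0.05

def lms_predictor_demo(signal):
    w = [0.0] * TAPS                 # adaptive tap weights
    hist = [0.0] * TAPS              # recent reconstructed samples
    residuals, recon = [], []
    for x in signal:
        pred = sum(wi * hi for wi, hi in zip(w, hist))
        e = x - pred                 # the residual is what gets sent
        residuals.append(e)
        y = pred + e                 # receiver rebuilds the sample exactly
        recon.append(y)
        # LMS update, identical at transmitter and receiver.
        w = [wi + MU * e * hi for wi, hi in zip(w, hist)]
        hist = [y] + hist[:-1]
    return residuals, recon

# First-order Markov-like input: each sample close to the previous one.
random.seed(3)
sig, x = [], 0.0
for _ in range(500):
    x = 0.95 * x + random.gauss(0, 0.1)
    sig.append(x)

res, rec = lms_predictor_demo(sig)
power = lambda s: sum(v * v for v in s) / len(s)
print("input power %.4f, residual power %.4f" % (power(sig), power(res)))
```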


Journal ArticleDOI
TL;DR: The pulse stuffing technique for rate equalization of digital channels is extended in this article to the stuffing of a sequence of pulses (a word), which can be coded.
Abstract: The pulse stuffing technique for rate equalization of digital channels is extended in this article to the stuffing of a sequence of pulses (a word), which can be coded. The extra capacity needed for signaling the stuffed word decreases exponentially with the number of pulses in it, and may, in fact, be eliminated at a negligible increase in the error rate of the channel.

8 citations
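
One way to read the exponential claim: if the stuffed word is a fixed n-pulse pattern, random data mimics it with probability 2^-n, so the capacity reserved to flag it (or the error-rate penalty of leaving it unflagged) falls off exponentially with word length. A quick check:

```python
# Probability that random data mimics a fixed n-pulse stuff word.
for n in (1, 2, 4, 8, 16):
    print("word length %2d: mimic probability 2^-%d = %.6f" % (n, n, 2.0 ** -n))
```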


ReportDOI
01 May 1968
TL;DR: The performance of interleaved cyclic codes is demonstrated to be sufficient to correct all types of measured HF error patterns, and it is shown that only the total bit interleaving is important in achieving error correction.
Abstract: In previous papers, the technique of error correction of digital data through the use of interleaved cyclic codes and a set of probability functions for the evaluation of error patterns have been presented. In this paper the previous results are extended to a wide range of BCH and symbol codes. A set of simple equations is presented for the description of an interleaved cyclic code and its associated delay, and a method is presented which allows for a significant increase in error rate improvement at a reduction in the delay time introduced into the channel. It is demonstrated that the performance of interleaved cyclic codes is sufficient to correct all types of measured HF error patterns; that, using delay as a basis of comparison, only the total bit interleaving is important in achieving error correction; and that it is possible to get almost 100 percent error correction for delays under 3 seconds for all channel conditions measured.

2 citations
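
The interleaving mechanism is easy to demonstrate: writing codewords as rows and transmitting by columns spreads a channel burst across many codewords, leaving each within its correction power. Depth, code length, and burst length below are illustrative assumptions.

```python
# Block interleaver: a 12-bit channel burst lands as at most 2 errors in
# any one of the 8 interleaved codewords, within reach of a modest code.
DEPTH, N_CODE = 8, 15          # 8 interleaved codewords of length 15

rows = [[(r, c) for c in range(N_CODE)] for r in range(DEPTH)]
tx = [rows[r][c] for c in range(N_CODE) for r in range(DEPTH)]  # column order

burst = set(range(40, 52))     # a 12-bit burst on the channel
errors_per_codeword = [0] * DEPTH
for i in burst:
    r, _ = tx[i]
    errors_per_codeword[r] += 1

print(errors_per_codeword)     # e.g. [2, 2, 2, 2, 1, 1, 1, 1]
```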


Journal ArticleDOI
01 Jan 1968
TL;DR: It is found that the error rate of linear pattern classifiers based on the mean-square error criterion is uniquely determined by the distance between the means of the two pattern classes on the real line when the two pattern classes have normal distributions with the same covariance matrices.
Abstract: It is found that the error rate of linear pattern classifiers based on the mean-square error criterion is uniquely determined by the distance between the means of the two pattern classes on the real line when the two pattern classes have normal distributions with the same covariance matrices.
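
The claim can be checked numerically in one dimension: with equal variances, the classifier's error rate depends only on the distance delta between the projected class means, through the normal tail Phi(-delta/2). A Monte Carlo sketch (sample sizes and deltas are illustrative):

```python
# Error rate of a midpoint threshold between two unit-variance normals
# separated by delta: theory says Phi(-delta/2), independent of all else.
import math
import random

random.seed(4)
phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))

for delta in (1.0, 2.0, 3.0):
    mid = delta / 2                  # threshold halfway between the means
    n, errs = 100_000, 0
    for _ in range(n):
        if random.random() < 0.5:
            errs += random.gauss(0.0, 1.0) > mid       # class at mean 0
        else:
            errs += random.gauss(delta, 1.0) < mid     # class at mean delta
    print("delta %.1f: theory %.4f, simulated %.4f"
          % (delta, phi(-delta / 2), errs / n))
```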

Proceedings ArticleDOI
01 Dec 1968
TL;DR: A sequential algorithm for designing piecewise-linear classification functions without a priori knowledge of pattern class distributions is described, which combines, under control of a performance criterion, adaptive error-correcting linear classifier design procedures and clustering techniques.
Abstract: A sequential algorithm for designing piecewise-linear classification functions without a priori knowledge of pattern class distributions is described. The algorithm combines, under control of a performance criterion, adaptive error-correcting linear classifier design procedures and clustering techniques. An error rate criterion is used to constrain the classification function structure so as to minimize design calculations and to increase recognition throughput for many classification problems. Examples from the literature are used to evaluate this approach relative to other classification algorithms.
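
A compact sketch in the spirit of the algorithm: perceptron-style error-correction training, with a crude splitting step standing in for the clustering component when a single hyperplane stalls. The data, the error threshold, and the split rule are illustrative assumptions, not the paper's procedure.

```python
# Error-correction training plus a split step yields a piecewise-linear
# classifier for a class that a single hyperplane cannot separate.
import random
random.seed(5)

def train_linear(data, epochs=50):
    w = [0.0, 0.0, 0.0]                      # weights + bias
    for _ in range(epochs):
        for (x1, x2), y in data:             # y in {-1, +1}
            s = w[0]*x1 + w[1]*x2 + w[2]
            if y * s <= 0:                   # error-correction update
                w = [w[0]+y*x1, w[1]+y*x2, w[2]+y]
    return w

def err_rate(w, data):
    return sum(y * (w[0]*x1 + w[1]*x2 + w[2]) <= 0
               for (x1, x2), y in data) / len(data)

# Class +1 is two separate clumps: not linearly separable from class -1.
pos = ([((random.gauss(-3, .4), random.gauss(0, .4)), 1) for _ in range(50)] +
       [((random.gauss(3, .4), random.gauss(0, .4)), 1) for _ in range(50)])
neg = [((random.gauss(0, .4), random.gauss(0, .4)), -1) for _ in range(100)]
data = pos + neg

w = train_linear(data)
print("single hyperplane error: %.2f" % err_rate(w, data))

# Error rate too high: split class +1 by sign of x1 (a stand-in for the
# clustering step) and train one error-correcting unit per piece.
pieces = [[d for d in data if d[1] == -1 or d[0][0] < 0],
          [d for d in data if d[1] == -1 or d[0][0] >= 0]]
ws = [train_linear(p) for p in pieces]

def classify(x1, x2):
    # Accept a point as +1 if any piece's unit says so.
    return 1 if any(w[0]*x1 + w[1]*x2 + w[2] > 0 for w in ws) else -1

pw_err = sum(classify(x1, x2) != y for (x1, x2), y in data) / len(data)
print("piecewise-linear error: %.2f" % pw_err)
```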