
Showing papers on "Human error assessment and reduction technique published in 1961"


Journal ArticleDOI
W. Bennett, F. Froehlich
TL;DR: It is concluded that some of the codes considered are capable of reducing the error rates in digital communication considerably. However, most error-control methods are highly sensitive to the particular error statistics found on the transmission channel.
Abstract: In recent years considerable attention has been directed toward digital data communication among machines utilizing the telephone network. Such communication raises many questions concerning the accuracy of transmission attainable. This paper discusses the effectiveness of several error-control techniques. First, the transmission channel is described by three parameters which permit correlation among the errors. A number of error-correcting methods are evaluated by a computer simulation technique using the parameters of several hypothetical transmission channels which might be representative of telephone circuits. Emphasis has been placed on the recurrent burst-correcting codes. The performance of these "recurrent" codes, when subjected to actual errors collected from Data-Phone test calls, is compared to the performance of these codes using the assumed channel parameters. Graphs relating the average final error rate after correction to the transmission channel error statistics are shown. The difficulty of extracting channel parameters from real error data is discussed, and a method is presented which might permit such calculations. It is concluded that some of the codes considered are capable of reducing the error rates in digital communication considerably. However, most error-control methods are highly sensitive to the particular error statistics found on the transmission channel. An analytical description of the errors found on a transmission channel is needed for the proper evaluation of error-control methods.
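The abstract describes simulating channels whose errors are correlated (bursty) rather than independent. The paper's own three-parameter channel model is not reproduced here; as a minimal sketch of the same idea, the two-state Gilbert burst-error model below (an assumption, not the authors' model) generates correlated error patterns of the kind such simulations feed into candidate codes. All parameter names and values are illustrative.

```python
import random

def gilbert_channel(n_bits, p_gb, p_bg, p_err_bad, seed=0):
    """Simulate a two-state Gilbert burst-error channel.

    The channel alternates between a 'good' state (no errors) and a
    'bad' state, where each bit is flipped with probability p_err_bad.
    p_gb / p_bg are the per-bit good->bad and bad->good transition
    probabilities. Returns a list of 0/1 error indicators per bit.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible run
    errors = []
    bad = False
    for _ in range(n_bits):
        if bad:
            errors.append(1 if rng.random() < p_err_bad else 0)
            if rng.random() < p_bg:
                bad = False
        else:
            errors.append(0)
            if rng.random() < p_gb:
                bad = True
    return errors

# Illustrative mildly bursty channel: errors cluster while in the bad state.
errs = gilbert_channel(100_000, p_gb=0.001, p_bg=0.1, p_err_bad=0.3)
print("raw bit error rate:", sum(errs) / len(errs))
```

A burst-correcting code would then be evaluated by applying it to error patterns like `errs` and comparing the residual error rate against the raw rate, which mirrors the paper's simulation-based comparison.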

6 citations


Journal ArticleDOI
TL;DR: It is concluded that there is a substantial advantage in the use of large (500-bit) rather than small (50-bit) blocks for detecting errors, and that the practical evidence of error patterns which remain undetected should enable more effective detection designs to be produced.
Abstract: The factors influencing the speed at which data can be transmitted, and those determining the error rate, are summarized as an introduction to the error rates actually measured. Different applications for data transmission are considered in relation to the amount of error correction which may be justified. Various forms of error detection and correction are illustrated, and the probability of undetected errors is considered from a theoretical point of view for a random distribution of errors. These results are then compared with records obtained by practical tests using various redundancy arrangements. It is concluded that there is a substantial advantage in the use of large (500-bit) rather than small (50-bit) blocks for detecting errors. It is also suggested that the practical evidence of error patterns which remain undetected should enable more effective detection designs to be produced.
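The block-size comparison in the abstract can be illustrated with a rough back-of-envelope model (an assumption for illustration, not the paper's redundancy arrangements): with a fixed-size r-bit check per block, larger blocks cut the redundancy overhead per data bit, while an errored block is idealized as slipping past the check with probability about 2^-r. All figures below are hypothetical.

```python
def p_block_errored(n, p):
    """P(a block of n bits contains >= 1 error), assuming independent
    bit errors at rate p (the 'random distribution' case)."""
    return 1.0 - (1.0 - p) ** n

def undetected_rate(n, r, p):
    """Rough undetected-block probability: an errored n-bit block is
    assumed to evade an r-bit check with probability ~2**-r
    (idealized random-check model, not a specific code)."""
    return p_block_errored(n, p) * 2.0 ** -r

# Hypothetical figures: 16-bit check, bit error rate 1e-4.
p, r = 1e-4, 16
for n in (50, 500):
    print(f"n={n}: check overhead={r / n:.1%}, "
          f"P(undetected block)={undetected_rate(n, r, p):.2e}")
```

Under this simplified model the 500-bit block carries a tenth of the per-bit check overhead of the 50-bit block, at the cost of a higher chance that any given block is errored; the paper's measured results weigh this trade-off against real, non-random error patterns.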

4 citations