
# David A. Huffman

Bio: David A. Huffman is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research on topics including electronic circuits and optical burst switching. The author has an h-index of 5, co-authored 5 publications, and received 8,817 citations.

##### Papers



01 Sep 1952

TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,221 citations
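The construction behind this abstract is what is now called Huffman coding: repeatedly merge the two least-probable members of the ensemble until a single code tree remains. A minimal Python sketch (taking frequencies from an input string and using a tie-breaking counter are implementation choices, not part of the paper):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a minimum-redundancy (Huffman) code for the symbols of `text`."""
    freq = Counter(text)
    # Heap entries: (weight, tie-breaker, {symbol: codeword-built-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # least probable subtree
        w2, _, c2 = heapq.heappop(heap)   # second least probable
        # Merging: prefix '0'/'1' onto the codewords of each subtree.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
encoded = "".join(code[s] for s in "abracadabra")
```

Since "a" occurs 5 times out of 11 symbols, it gets the shortest codeword, and the total encoded length (23 bits here) matches the optimum for these frequencies — exactly the minimum average number of coding digits per message that the paper proves.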



TL;DR: In this article, an orderly procedure is developed by which the requirements of a sequential switching circuit (one with memory) can be reduced to requirements of several combinational switching circuits (those without memory).

Abstract: An orderly procedure is developed by which the requirements of a sequential switching circuit (one with memory) can be reduced to the requirements of several combinational switching circuits (those without memory). Important in this procedure are: (1) the flow table, a tabular means by which the requirements of a sequential switching circuit may be stated precisely and by which redundancy in these requirements may be recognized and eliminated; and (2) the transition index, a new variable which indicates the stability (or lack of stability) of a switching device. The role of those switching devices which are not directly controlled by the input of a sequential switching circuit is investigated thoroughly. The resulting philosophy, which is exploited in synthesis procedures for circuits using either relay or vacuum-tube switching devices, is valid for circuits using other devices as well.

399 citations
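As a rough illustration of the two ideas named above (not Huffman's own tables or notation), a flow table can be held as a mapping from internal state and input combination to next state; a state is stable under an input exactly when it maps to itself, which is the condition the transition index tracks:

```python
# Hypothetical flow table for a set-reset latch. Rows are internal
# states, columns are the input pairs (S, R); each entry gives the next
# internal state. (Illustrative only; not taken from the paper.)
FLOW_TABLE = {
    "reset": {(0, 0): "reset", (1, 0): "set", (0, 1): "reset"},
    "set":   {(0, 0): "set",   (1, 0): "set", (0, 1): "reset"},
}

def step(state, inputs):
    """One transition of the sequential circuit."""
    return FLOW_TABLE[state][inputs]

def is_stable(state, inputs):
    """True when `state` is stable under `inputs` (next state == state),
    i.e. the condition a transition index is meant to flag."""
    return step(state, inputs) == state
```

Redundancy elimination in the paper's sense amounts to merging rows of such a table that impose compatible requirements.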


TL;DR: It is shown here that it is always possible to synthesize a combinational switching network which behaves ideally even though its individual components do not, and the technique proposed here does not involve the use of a new switching algebra.

Abstract: When combinational switching networks of relay contacts or of electronic gate elements are analyzed or synthesized, the network components are usually idealized in such a way that they may be adequately described by Boolean algebra. It is usually postulated [1] that, at all times, all the normally open (or normally closed) contacts on a given relay open and close in synchronism with each other, and that each normally open contact is open (closed) when the normally closed contacts are closed (open), and vice versa. Similarly, in a network of electronic gate elements it is usually postulated [2] that an input variable can affect the network output with no intervening time lag. These assumptions, when put to use in synthesis procedures, lead to networks which behave correctly for steady-state situations but which may not for transient conditions (during which a network input variable is changing from one of its binary states to the other). When combinational networks which do not behave ideally (in a sense to be considered below) during changes of input state are incorporated into larger switching networks which have sequential action (that is, which act as if they had memories), a hazard [3] exists and the sequential circuit may not operate as the designer intended. The significance of a network hazard can be substantially reduced and sometimes eliminated by "smoothing" of the network output [4]. In a relay circuit, for example, the contact networks are used to control relays which in turn contribute contacts to the various other contact networks. If the response time of a given relay is increased so as to increase its smoothing action, the effect of a hazard in its controlling network may be eliminated. But contacts from the given relay which appear in other networks of the circuit may then behave even less ideally than they did originally, thus creating new hazards in those networks.
Moreover, in both relay and electronic circuits, the stratagem of smoothing at critical points in the circuit will always increase the reaction time of the circuit. In most applications this is undesirable. This paper suggests a method for the elimination of hazards without resort to signal smoothing. It is shown here that it is always possible to synthesize a combinational switching network which behaves ideally even though its individual components do not. The technique proposed here does not involve the use of a new switching algebra. The terminal behavior of the resulting hazard-free net-

104 citations
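The classic single-variable example of such a hazard (a standard textbook case, not reproduced from the paper) is f = a·b + ¬a·c with b = c = 1: when a falls, the a·b term drops immediately while a slow inverter has not yet raised ¬a, so a minimal two-level cover momentarily glitches to 0. Adding the redundant consensus term b·c removes the hazard, which is the flavor of hazard-free synthesis the abstract describes:

```python
def f_minimal(a, not_a, b, c):
    # Minimal cover f = a*b + (not a)*c, with the inverter output passed
    # in separately so it can lag behind `a` during a transition.
    return bool((a and b) or (not_a and c))

def f_hazard_free(a, not_a, b, c):
    # Same function plus the consensus term b*c, which holds the output
    # at 1 while `a` and its slow complement momentarily disagree.
    return bool((a and b) or (not_a and c) or (b and c))

# With b = c = 1 the output should stay 1 as `a` falls 1 -> 0.
before = f_minimal(1, 0, 1, 1)      # True: a = 1, inverter output still 0
glitch = f_minimal(0, 0, 1, 1)      # False: static-1 hazard mid-transition
after  = f_minimal(0, 1, 1, 1)      # True: inverter has caught up
fixed  = f_hazard_free(0, 0, 1, 1)  # True: consensus term covers the gap
```

The consensus term is logically redundant in the steady state, which is why Boolean-algebra-only synthesis misses it; the paper's point is that such ideal transient behavior can always be obtained.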


##### Cited by



01 Jan 1998

TL;DR: An introduction to a Transient World and an Approximation Tour of Wavelet Packet and Local Cosine Bases.

Abstract: Introduction to a Transient World. Fourier Kingdom. Discrete Revolution. Time Meets Frequency. Frames. Wavelet Zoom. Wavelet Bases. Wavelet Packet and Local Cosine Bases. An Approximation Tour. Estimations are Approximations. Transform Coding. Appendix A: Mathematical Complements. Appendix B: Software Toolboxes.

17,693 citations


TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, and reviews deep supervised learning, unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

Abstract: In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations


TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.

Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.

9,057 citations
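The intra- versus inter-cluster edge balance in this definition is what the Newman–Girvan modularity Q quantifies, one of the central measures such reviews discuss. A small sketch on a toy graph of two triangles joined by a single bridge (the graph and labels are made up for illustration):

```python
# Toy graph: triangle {0,1,2} and triangle {3,4,5} joined by edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
labels = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}

def modularity(edges, labels):
    """Newman-Girvan modularity Q = sum_c (l_c/m - (d_c/2m)^2), where
    l_c is the number of edges inside community c, d_c is the sum of
    degrees of its vertices, and m is the total number of edges."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in set(labels.values()):
        l_c = sum(1 for u, v in edges if labels[u] == labels[v] == c)
        d_c = sum(d for n, d in deg.items() if labels[n] == c)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

q_split = modularity(edges, labels)                      # two communities
q_trivial = modularity(edges, {n: "A" for n in labels})  # everything in one
```

The two-community split scores Q = 5/14 ≈ 0.357, while lumping all vertices into one community scores 0, reflecting that most edges here (6 of 7) join vertices of the same cluster.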



TL;DR: The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable- to-block codes designed to match a completely specified source.

Abstract: A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.

5,844 citations
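The abstract describes the 1977 Lempel–Ziv scheme. As a minimal illustration of the dictionary-growing idea behind universal sequential compression — using the simpler LZ78-style parse from the authors' follow-up work, not the exact algorithm of this paper:

```python
def lz78_compress(text):
    """Parse `text` into (dictionary index, next symbol) pairs, growing
    the phrase dictionary as the input is scanned."""
    dictionary = {"": 0}
    phrase, out = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch            # extend the longest known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                      # flush a trailing, already-known phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    """Rebuild the text by replaying the dictionary construction."""
    phrases, out = [""], []
    for idx, ch in pairs:
        phrases.append(phrases[idx] + ch)
        out.append(phrases[-1])
    return "".join(out)
```

Repetitive inputs compress well because ever-longer phrases enter the dictionary: lz78_compress("aaaaaa") needs only three pairs. No probabilities are assumed anywhere, which is the sense in which such codes are universal.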