
Papers in IEEE Transactions on Electronic Computers, 1963


Journal ArticleDOI
TL;DR: The Pattern Articulation Unit is the first modular parallel processor capable of more reliable visual identification than part-analog/part-digital preprocessors of much less generality and potential virtuosity, and it can serve as a prototype for a new generation of parallel computers that will capitalize upon thin-film and integrated semiconductor circuitry of the immediate future.
Abstract: This report describes the system design of an all-digital computer for visual recognition. One processor, the Pattern Articulation Unit (PAU), has been singled out for detailed discussion. Other units, in particular the Arithmetic Unit and the Taxicrinic Unit, are treated in reports listed in the bibliography. The PAU has been shown to be a processor of fundamentally new design: its logical organization has no analog in the central processing unit of existing computers. The PAU is the first modular parallel processor which, because of its digital organization, is capable of more reliable visual identification than part-analog/part-digital preprocessors of much less generality and potential virtuosity; is faster than any presently suggested alternative realizable today at comparable cost; and can serve as a prototype for a new generation of parallel computers that will capitalize upon thin-film and integrated semiconductor circuitry of the immediate future.

164 citations



Journal ArticleDOI
TL;DR: A notion of equivalence of definite events is introduced and the uniqueness of the minimal automaton defining an event in an equivalence class is proved.
Abstract: A definite automaton is, roughly speaking, an automaton (sequential circuit) with the property that for some fixed integer k its action depends only on the last k inputs. The notion of a definite event introduced by Kleene, as well as the related concepts of definite automata and tables, are studied here in detail. Basic results relating to the minimum number of states required for synthesizing an automaton of a given degree of definiteness are proved. We give a characterization of all k-definite events definable by (k+1)-state automata. Various decision problems pertaining to definite automata are effectively solved. We also solve effectively the problem of synthesizing a minimal automaton defining a given definite event. The solutions of decision and synthesis problems given here are practical in the sense that if the problem is presented by n units of information, then the algorithm in question requires about n^3 steps of a very elementary nature (rather than requiring about 2^n steps as some algorithms for automata do, which puts them beyond the capacity of the largest computers even for relatively small values of n). A notion of equivalence of definite events is introduced and the uniqueness of the minimal automaton defining an event in an equivalence class is proved.

147 citations
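
The defining property above (action depends only on the last k inputs) can be illustrated directly: a recognizer for a k-definite event needs to remember nothing but a sliding window of the last k symbols. The sketch below illustrates only that property, not the paper's state-minimization or synthesis algorithms; the event and alphabet are made up for the example.

```python
from collections import deque

def make_definite_recognizer(k, accepted_suffixes):
    """Build a recognizer for a k-definite event: acceptance depends only on
    the last k input symbols, so a sliding window of length k is the only
    'state' required.  (Words shorter than k are simply rejected here.)"""
    accepted = {s for s in accepted_suffixes if len(s) == k}

    def recognize(word):
        window = deque(maxlen=k)          # remembers only the last k symbols
        for symbol in word:
            window.append(symbol)
        return "".join(window) in accepted

    return recognize

# Example: the 2-definite event "input sequences ending in 01" over {0, 1}.
ends_in_01 = make_definite_recognizer(2, {"01"})
print(ends_in_01("1101"))   # True
print(ends_in_01("1100"))   # False
```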


Journal ArticleDOI
TL;DR: It is shown that high-speed circuitry must be miniaturized and the implications are discussed.
Abstract: By way of worked examples in typical but somewhat idealized cases, the effect of circuit interconnections on circuit speed is studied. The source, calculation, and minimization of interconnection crosstalk are also discussed. It is shown that high-speed circuitry must be miniaturized, and the implications are discussed.

129 citations


Journal ArticleDOI
TL;DR: A survey of the learning circuits which became known as learning matrices and some of their possible technological applications is given.
Abstract: The paper gives a survey of the learning circuits which became known as learning matrices and of some of their possible technological applications. The first section describes the principle of learning matrices. So-called conditioned connections between the characteristics of an object and the meaning of an object are formed in the learning phase. During the operation of connecting the characteristics of an object with its meaning (EB operation of the knowing phase), upon presenting the object characteristics the associated most similar meaning is realized in the form of a signal by maximum-likelihood decoding. Conversely, in operation from the meaning of an object to its characteristics (BE operation), the associated object characteristics are obtained as signals by parallel reading upon application of an object meaning. Depending on the characteristic signals processed (binary or analog), a distinction must be made between binary and nonbinary learning matrices. In the case of the binary learning matrix the conditioned connections are a statistical measure of the frequency of the coordination of object characteristics and object meaning; in the case of the nonbinary learning matrix they are a measure of an analog value proportional to a characteristic. Both types of matrices allow the characteristic sets applied during EB operation to be unsystematically disturbed within limits. Moreover, the nonbinary learning matrix is invariant to systematic deviations between presented and learned characteristic sets (invariance to affine transformation, translation, and rotated skewness).

121 citations
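
As a rough illustration of the EB and BE operations described above, the toy model below accumulates co-occurrence counts between binary characteristics and meanings and recalls by a best-match rule. It is a software stand-in, not the matrix hardware of the paper, and the scoring rule is a simplified substitute for the maximum-likelihood decoding mentioned in the abstract.

```python
import numpy as np

class BinaryLearningMatrix:
    """Toy learning matrix: rows are object meanings, columns are binary
    characteristics.  Learning accumulates co-occurrence counts; the EB
    operation recalls the meaning whose row best matches a presented
    characteristic vector, and the BE operation reads a meaning's stored
    characteristic pattern back out in parallel."""

    def __init__(self, n_meanings, n_characteristics):
        self.counts = np.zeros((n_meanings, n_characteristics))

    def learn(self, meaning, characteristics):
        self.counts[meaning] += np.asarray(characteristics)

    def eb(self, characteristics):
        # characteristics -> meaning: largest agreement score wins
        # (a simplified stand-in for maximum-likelihood decoding).
        return int(np.argmax(self.counts @ np.asarray(characteristics)))

    def be(self, meaning):
        # meaning -> characteristics: threshold the row at half its peak.
        row = self.counts[meaning]
        if row.max() == 0:
            return np.zeros(row.shape, dtype=int)
        return (row >= 0.5 * row.max()).astype(int)

# Two meanings, four binary characteristics.
m = BinaryLearningMatrix(2, 4)
m.learn(0, [1, 1, 0, 0]); m.learn(0, [1, 1, 0, 1])
m.learn(1, [0, 0, 1, 1])
print(m.eb([1, 1, 0, 0]))   # 0  (recalls the most similar learned meaning)
print(m.be(1))              # [0 0 1 1]
```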


Journal ArticleDOI
TL;DR: This paper describes the organization, programming, and hardware of a variable structure computer system presently under construction at UCLA.
Abstract: Pragmatic problem studies predict gains in computation speed for a variety of computational tasks when they are executed on appropriate problem-oriented configurations of the variable structure computer. The economic feasibility of the system is based on utilization of essentially the same hardware in a variety of special-purpose structures. This capability is achieved by programmed or physical restructuring of a part of the hardware. The existence of important classes of problems which the variable structure computer system promises to render practically computable, as well as the use of the system for experiments in computer organization and for the evaluation of new circuits and devices, warrants construction of a variable structure computer. This paper describes the organization, programming, and hardware of a variable structure computer system presently under construction at UCLA.

121 citations


Journal ArticleDOI
Leo Hellerman
TL;DR: This report gives a complete catalog of minimal NOR circuits and minimal NAND circuits, assuming complements not available, for all logic functions of three variables.
Abstract: This report gives a complete catalog of minimal NOR circuits and minimal NAND circuits, assuming complements not available, for all logic functions of three variables. Minimal circuits for a function are those that satisfy these conditions: 1) The number of logic blocks of the circuit is least possible for performing the function; 2) The number of connections in the circuit (total number of inputs) is least possible, subject to the condition that the circuit satisfies the first condition. In addition, the circuits satisfy certain reasonable restrictions on fan-in and fan-out.

113 citations
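
The catalog's two minimality criteria (fewest logic blocks, then fewest connections) can be made concrete with a small sketch that evaluates a NOR-only network and reports its costs. The network encoding and the AND example below are illustrative; this is not the enumeration procedure used to build the catalog.

```python
from itertools import product

def eval_nor_network(gates, inputs):
    """Evaluate a NOR-only network.  `gates` is a list; each gate lists its
    fan-in signals, where a signal is either a primary-input name or the
    index of an earlier gate.  The last gate is the output."""
    values = dict(inputs)
    for i, fan_in in enumerate(gates):
        values[i] = int(not any(values[s] for s in fan_in))   # NOR
    return values[len(gates) - 1]

def cost(gates):
    # The catalog's two criteria: number of logic blocks first, then the
    # total number of gate-input connections.
    return len(gates), sum(len(g) for g in gates)

# AND(a, b) built from NORs with no complemented inputs available:
# gate 0 = NOR(a) = NOT a, gate 1 = NOR(b) = NOT b, gate 2 = NOR(gate0, gate1).
and_net = [["a"], ["b"], [0, 1]]
for a, b in product([0, 1], repeat=2):
    assert eval_nor_network(and_net, {"a": a, "b": b}) == (a and b)
print(cost(and_net))   # (3, 4): three blocks, four connections
```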


Journal ArticleDOI
TL;DR: It is shown that the methods of signal flow graph theory, with the proper interpretation, apply to state diagrams of sequential circuits, and their use leads to a simple algorithm for obtaining a regular expression describing the behavior of a sequential circuit directly from its state diagram.
Abstract: This paper considers the application of signal flow graph techniques to the problem of characterizing sequential circuits by regular expressions. It is shown that the methods of signal flow graph theory, with the proper interpretation, apply to state diagrams of sequential circuits. The use of these methods leads to a simple algorithm for obtaining a regular expression describing the behavior of a sequential circuit directly from its state diagram.

112 citations
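
The flow-graph-style reduction can be sketched as the familiar state-elimination construction: interior states are removed one at a time and their traffic is rerouted through composite edge labels. The code below is a generic version of that idea, not the paper's specific signal-flow-graph formulation; the two-state example (an odd number of 1s) is made up.

```python
def dfa_to_regex(states, start, accepts, edges):
    """Convert a state diagram to a regular expression by state elimination:
    repeatedly remove an interior state and reroute its traffic through
    composite edge labels.  `edges` maps (src, dst) -> label."""
    R = dict(edges)
    R[("S", start)] = ""                      # fresh start state
    for a in accepts:
        R[(a, "F")] = ""                      # fresh final state

    def union(x, y):
        if x is None:
            return y
        if y is None:
            return x
        return x if x == y else f"({x}|{y})"

    for q in states:                          # eliminate every original state
        loop = R.get((q, q))
        star = f"({loop})*" if loop else ""
        ins = [(u, r) for (u, v), r in R.items() if v == q and u != q]
        outs = [(v, r) for (u, v), r in R.items() if u == q and v != q]
        for u, rin in ins:
            for v, rout in outs:
                R[(u, v)] = union(R.get((u, v)), rin + star + rout)
        R = {(u, v): r for (u, v), r in R.items() if q not in (u, v)}

    return R.get(("S", "F")) or "ε"

# Example: a two-state circuit accepting binary strings with an odd number of 1s.
print(dfa_to_regex(
    states=["even", "odd"], start="even", accepts=["odd"],
    edges={("even", "even"): "0", ("even", "odd"): "1",
           ("odd", "odd"): "0", ("odd", "even"): "1"}))
# prints (0)*1((0|1(0)*1))*
```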


Journal ArticleDOI
TL;DR: A numerical experiment on a problem from the area of the x-ray diffraction analysis of crystal structures indicates that the assignment procedure is computationally practical, and demonstrates that execution of the problem in a variable structure system leads to a considerable gain over a system of three modern high-speed general purpose computers.
Abstract: The problem of optimal assignment of subcomputations of a computational task to autonomous computing structures of a variable structure computing system is investigated. In particular, it is desired to determine which computing structures should be constructed from the hardware inventory of the variable structure computing system and which subcomputations should be executed on which computing structures, and in what sequence, so as to minimize the total cost of computation (the cost of restructuring the system plus the cost of the actual execution time). A successive approximation assignment procedure is formulated. The procedure requires representation of the computational task by a directed graph and an estimate of the number of traversals of each computational loop, as well as the branching probabilities of each conditional branching operation. Computer programs for automatic execution of the assignment procedure have been written. A numerical experiment on a problem from the area of the x-ray diffraction analysis of crystal structures indicates that the procedure is computationally practical, and also demonstrates that execution of the problem in a variable structure system leads to a considerable gain over a system of three modern high-speed general purpose computers.

76 citations
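
The cost being minimized can be illustrated with a toy version of the model in the abstract: execution time weighted by loop traversal counts, plus a restructuring charge whenever consecutive subcomputations run on different structures. All task names, structure names, and numbers below are hypothetical, and the sketch only evaluates candidate assignments; it is not the paper's successive-approximation procedure.

```python
def total_cost(assignment, exec_time, traversals, restructure_cost):
    """Toy version of the cost model in the abstract: per-traversal execution
    time of each subcomputation on its assigned structure, plus a fixed
    restructuring charge whenever consecutive subcomputations run on
    different structures."""
    run = sum(traversals[task] * exec_time[task][struct]
              for task, struct in assignment)
    switches = sum(restructure_cost
                   for (_, s1), (_, s2) in zip(assignment, assignment[1:])
                   if s1 != s2)
    return run + switches

# Hypothetical numbers: three subcomputations, two candidate structures.
exec_time = {"setup":  {"general": 5,  "special": 9},
             "inner":  {"general": 40, "special": 6},
             "reduce": {"general": 4,  "special": 8}}
traversals = {"setup": 1, "inner": 100, "reduce": 1}

all_general = [("setup", "general"), ("inner", "general"), ("reduce", "general")]
mixed       = [("setup", "general"), ("inner", "special"), ("reduce", "general")]
print(total_cost(all_general, exec_time, traversals, restructure_cost=50))  # 4009
print(total_cost(mixed, exec_time, traversals, restructure_cost=50))        # 709
```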


Journal ArticleDOI
TL;DR: This paper describes the final design of the SOLOMON computer from a total system viewpoint: its system organization, its functional capabilities (the instruction set), and its physical realization (circuitry and packaging techniques).
Abstract: Several papers have been written and published on various aspects of the SOLOMON computer. This paper describes the final design of the computer, from a total system viewpoint. The paper consists of three major portions: a brief description of the system organization, a description of the functional capabilities of the computer, i.e., the instruction set, and a description of the physical system, i.e., type of circuitry and packaging techniques utilized.

63 citations


Journal ArticleDOI
TL;DR: A diode-resistor network technique for simulating functions of any number of variables is described, and it is shown that the analog logic can be described by a distributive lattice, closely related to Boolean algebra.
Abstract: A diode-resistor network technique for simulating functions of any number of variables is described. The function is approximated by a polyhedron, or its N-dimensional equivalent, and this polyhedral model is generated directly by the circuit. The circuit is formed of two cascaded sections: the first, using resistive networks, generates voltages representing each of the faces of the polyhedron; the second section, using analog diode logic, selects the appropriate voltage as the output. The analog diode logic uses basic circuits similar to those used in digital logic. It is shown that the analog logic can be described by a distributive lattice, closely related to Boolean algebra. Methods of logic synthesis are developed. Circuit design is discussed, and it is shown how the effect on the logic section of the finite source impedance of the resistive networks which feed it can be turned to practical advantage. The specific example of the multiplication of two variables is studied in some detail, providing a basis of comparison with the known techniques such as the quarter-squares and log-antilog methods. Finally, test results on a simple three-variable function generator are given. The method is not incremental in nature, and differs from current techniques in this regard. Thus the setting up of a function on a generator is simplified, owing to the noninteraction between segments. Also, the area of the input space to be covered is adjustable to correspond with the input domain of the function.
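
The two-section structure described above (affine "face" voltages followed by analog diode selection) amounts to evaluating a max/min combination of planes. The sketch below shows that idea for a convex function, where the outer max alone suffices; the grid of tangent planes is an arbitrary illustration, not a circuit design from the paper.

```python
def polyhedral_approx(faces, x, y):
    """Evaluate a piecewise-linear (polyhedral) surface as max over groups of
    min over planes; each plane is an affine function a*x + b*y + c.  The
    inner min and outer max play the role of the analog diode AND/OR logic
    that selects which face voltage reaches the output."""
    return max(min(a * x + b * y + c for (a, b, c) in group) for group in faces)

# Illustration: a convex function needs only the outer max.  Approximate
# f(x, y) = x^2 + y^2 by tangent planes at a few grid points
# (the tangent at (u, v) is 2u*x + 2v*y - (u^2 + v^2)).
grid = [(u, v) for u in (-1, 0, 1) for v in (-1, 0, 1)]
faces = [[(2 * u, 2 * v, -(u * u + v * v))] for (u, v) in grid]

for (x, y) in [(1.0, 1.0), (0.5, -0.5), (-0.3, 0.8)]:
    print(f"f({x}, {y}) = {x*x + y*y:.2f}, "
          f"polyhedral approximation = {polyhedral_approx(faces, x, y):.2f}")
```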

Journal ArticleDOI
TL;DR: This paper describes the method and the system investigated to solve the problems encountered in the automatic recognition of speech sounds: a monosyllable recognition system in which the phoneme is used as the basic recognition unit.
Abstract: This paper describes the method and the system investigated to solve the problems encountered in the automatic recognition of speech sounds. From research on the automatic analysis of speech sounds, a monosyllable recognition system was constructed in which the phoneme is used as the basic recognition unit. Recently this system has been developed to accept conversational speech with unlimited vocabulary. The mechanical recognition of conversational speech requires two basic operations. One is the segmentation of the continuous speech sound into several discrete intervals (or segments), each of which may be thought to correspond to a phoneme, and the other is the pattern recognition of such segments. For segmentation, by defining two criteria, "stability" and "distance," the properties of the time pattern obtained by the analysis of the input speech sound may be examined. The principle of the recognition is based on the mechanism of articulation in the human speech organs. Corresponding to this, the machine has functions called phoneme classification, vowel analysis, and consonant analysis. A conversational speech recognition system with a phonetic contextual approach is also applied to vowel recognition, in which the time pattern of the input speech is matched with stored standard patterns that take the phonetic contextual effects into consideration. The time pattern, which has great variety, may be effectively expressed by the new representations of "sequential pattern" and "weighting pattern."

Journal ArticleDOI
TL;DR: A canonical form is derived for ?-symmetric functions which leads to synthesis procedures that improve upon results of Shannon.
Abstract: Symmetric and partially symmetric functions are studied from an algebraic point of view. Tests are given for detecting these properties. A more general approach involving the concept of ?-symmetric functions is given. A canonical form is derived for ?-symmetric functions which leads to synthesis procedures that improve upon results of Shannon.
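
For total symmetry the defining test is simple: the function value may depend only on the number of 1s among the arguments. The brute-force check below illustrates that criterion from a truth table; the paper's algebraic tests and its more general notion of symmetry are more economical and are not reproduced here.

```python
from itertools import product

def is_totally_symmetric(f, n):
    """A Boolean function is totally symmetric iff its value depends only on
    the number of 1s among its arguments; this checks that directly from the
    truth table."""
    by_weight = {}
    for bits in product([0, 1], repeat=n):
        w = sum(bits)                     # number of 1s in this input
        v = f(*bits)
        if by_weight.setdefault(w, v) != v:
            return False                  # two inputs of equal weight disagree
    return True

# Examples: majority and parity are symmetric, "x1 AND NOT x2" is not.
print(is_totally_symmetric(lambda a, b, c: (a + b + c) >= 2, 3))   # True
print(is_totally_symmetric(lambda a, b, c: a ^ b ^ c, 3))          # True
print(is_totally_symmetric(lambda a, b, c: a and not b, 3))        # False
```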


Journal ArticleDOI
TL;DR: It is shown that an average saving of 30 per cent in majority elements is achieved relative to trees of majority elements which individually produce "AND" and "OR" functions.
Abstract: A method is developed for the synthesis of arbitrary combinational logical functions using three-input majority elements. The networks that result are in the form of modified trees. It is shown that an average saving of 30 per cent in majority elements is achieved relative to trees of majority elements which individually produce "AND" and "OR" functions. Several variations of the basic method are developed, and the results are compared with the algebraic methods of Cohn and Lindaman.
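
The reason a three-input majority element can replace AND/OR trees is that fixing one input to a constant yields either gate. The sketch below verifies that identity exhaustively; it is not the paper's tree-synthesis method.

```python
def maj(a, b, c):
    """Three-input majority element: output is 1 iff at least two inputs are 1."""
    return int(a + b + c >= 2)

# Fixing one input to a constant turns the element into AND or OR, which is
# why a majority-element tree can absorb the AND/OR gates of an ordinary
# tree realization.
AND = lambda a, b: maj(a, b, 0)
OR = lambda a, b: maj(a, b, 1)

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b) and OR(a, b) == (a | b)
print("maj(a, b, 0) = AND and maj(a, b, 1) = OR verified for all inputs")
```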

Journal ArticleDOI
Robert H. Kohr
TL;DR: The method is extended to systems which contain several nonlinear elements, and it is assumed that the system may be excited by a specified periodic input and that neither the input nor the output of the system is significantly corrupted by noise.
Abstract: A method is described for the establishment of a nonlinear differential equation which acts as a model for an actual physical system. It is assumed initially that the actual system contains only a single nonlinear element and that this element may be represented in the differential equation model by a single-valued nonlinear function of a single variable. The method is then extended to systems which contain several nonlinear elements. It is assumed that the system may be excited by a specified periodic input and that neither the input nor the output of the system is significantly corrupted by noise. An experimental criterion is given for the range of periodic inputs which permit satisfactory determination of the differential equation model.

Journal ArticleDOI
TL;DR: The perceptual superiority of the human visual system over automata is outlined by comparing the properties of both systems, and the existing classification methods are outlined and discussed with regard to adaptive systems.
Abstract: The perceptual superiority of the human visual system over automata is outlined by comparing the properties of both systems. The most effective properties with regard to pattern recognition are internal adaptability and the ability to abstract. Both are well developed in human beings. A mechanical perceptor for complex pattern recognition must also have these capabilities. The use of adaptation for pattern recognition is discussed. The realization of these properties by machines is difficult, especially the development of an adequate feature generator which provides the internal adaptability and thus solves the problem of identification-criteria invariance of patterns. This is assumed to be the main task in pattern recognition research. External teaching processes may be accomplished by adaptive categorizers. The existing classification methods are outlined and discussed with regard to adaptive systems. Adaptive categorizers of the learning matrix type and the perceptron type are compared as to structure, linear classification performance, and training routine. It is assumed, however, that the somewhat passive external adaptation of categorizers must be supplemented by a more active adaptation by the system itself.

Journal ArticleDOI
J. Sklansky
TL;DR: A synthesis procedure is described which generates all tributary networks (TRIBs) realizing a given truth function when no a priori assignment of the variables to input terminals is specified.
Abstract: A synthesis procedure is described which generates all tributary networks (TRIBs) realizing a given truth function when no a priori assignment of the variables to input terminals is specified. If the truth function is not known to be realizable by a TRIB structure, the synthesis procedure provides a convenient test for TRIB realizability. If the variables are preassigned to input terminals, the synthesis and test are still applicable and correspondingly shorter. A major tool of the procedure is the matrix of binary representations of the minterms of the truth function, the so-called "minterm matrix." The procedure is illustrated by a numerical example.
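
The minterm matrix itself is easy to construct: one row per minterm, each row the binary representation of that minterm over the function's variables. A minimal sketch, with an arbitrary example function, follows.

```python
def minterm_matrix(minterms, n_vars):
    """The 'minterm matrix' of a truth function: one row per minterm, giving
    the binary representation of that minterm over n_vars variables
    (most significant variable first)."""
    return [[(m >> (n_vars - 1 - i)) & 1 for i in range(n_vars)]
            for m in sorted(minterms)]

# Example: a function of three variables with minterms {1, 3, 6}.
for row in minterm_matrix({1, 3, 6}, 3):
    print(row)
# [0, 0, 1]
# [0, 1, 1]
# [1, 1, 0]
```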



Journal ArticleDOI
W. H. Hanson
TL;DR: In this article, a ternary threshold logic is proposed to represent all three-valued functions, and two methods of synthesizing these functions from their truth tables are given; one of the methods produces a normal form that is analogous to the disjunctive normal form of Boolean algebra.
Abstract: A new logical algebra, ternary threshold logic, is defined and developed. The system is shown to be capable of representing all three-valued functions, and two methods of synthesizing these functions from their truth tables are given. One of the methods produces a normal form that is analogous to the disjunctive normal form of Boolean algebra. A list of single-threshold-operator equivalents of certain normal forms is given. It is pointed out that magnetic film parametrons may be made to exhibit the logical properties of the operators of the system. Designs for ternary adder and comparator stages are given that could be constructed out of magnetic film parametrons.
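
As a rough illustration, a ternary threshold element can be modeled as a weighted sum over the values {-1, 0, +1} quantized by a symmetric threshold pair. The operator below is a generic stand-in chosen for this sketch; the paper's own operator, normal forms, and parametron realization are not reproduced here.

```python
def ternary_threshold(inputs, weights, t):
    """Generic ternary threshold element over {-1, 0, +1}: quantize the
    weighted input sum against the symmetric thresholds +t and -t."""
    s = sum(w * x for w, x in zip(weights, inputs))
    if s >= t:
        return 1
    if s <= -t:
        return -1
    return 0

# With unit weights and threshold 1 the element behaves like a ternary
# majority vote: the sign of the input sum wins, and a balanced sum gives 0.
print(ternary_threshold([1, 1, -1], [1, 1, 1], 1))    # 1
print(ternary_threshold([1, -1, 0], [1, 1, 1], 1))    # 0
print(ternary_threshold([-1, -1, 1], [1, 1, 1], 1))   # -1
```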

Journal ArticleDOI
K. Maling, E. L. Allen
TL;DR: An approach to the maintenance problem of central processors which minimizes the human role is described, consisting of a programming system which computes the diagnostic program from the design-automation tape, and a novel organization of a part of the controls of an experimental computer.
Abstract: This paper describes an approach to the maintenance problem of central processors which minimizes the human role. This approach consists of a combination of 1) a programming system which computes the diagnostic program from the design-automation tape, and 2) a novel organization of a part of the controls of an experimental computer. Problems which require solution are listed and reasons are given as to why they were solved by the methods described. An account of the programming system and hardware as implemented is given, and improvements in them are considered in the light of difficulties encountered.


Journal ArticleDOI
TL;DR: The importance of solving the measurement problem in character recognition systems before or collaterally with the decision problem is stressed, and the role of the computer in recognition-logic design is discussed.
Abstract: The importance of solving the measurement problem in character recognition systems before or collaterally with the decision problem is stressed. Measurements themselves are decision processes. In the system discussed here the measurements constitute decisions on the probable presence and most likely orientation of the edges of the character strokes in each elementary scanning area. This permits an effective and economical sequential feature detection in which the statistical variations due to printing and scanning variations are taken into account. The article concludes with a discussion of the role of the computer in recognition-logic design.

Journal ArticleDOI
TL;DR: The balanced tree provides a stratagem to effect fast information retrieval with a limited amount of serialized scanning; algorithms for storing in and retrieving from the balanced tree are outlined.
Abstract: To translate descriptors into memory locations, a memory organization scheme called the balanced tree is introduced. The descriptors that describe the information to be stored or retrieved constitute quasi-inputs to the tree, while the outputs are lists on which the information identified by the descriptors is stored. The balanced tree thus provides a stratagem to effect fast information retrieval with a limited amount of serialized scanning. The algorithms for storing in and retrieving from the balanced tree are outlined. While in a randomly growing tree the shape of the tree depends on the order of the input, the balanced tree is independent of this order. The expected number of rearrangement steps to keep the tree balanced was derived from combinatorial considerations. Numerical results were obtained by machine computations and are presented in this paper.
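
The storage and retrieval pattern described above (descriptor in, list of filed items out, with only a logarithmic amount of scanning) can be sketched with a modern stand-in for the balanced tree. The code below uses a sorted key array for the lookup; the paper's rebalancing algorithm and rearrangement-count analysis are not reproduced.

```python
import bisect

class DescriptorIndex:
    """Stand-in for the balanced tree: descriptors are kept in sorted order
    so a lookup needs only a logarithmic number of comparisons, and each
    descriptor leads to the list holding the items it describes."""

    def __init__(self):
        self.keys = []          # sorted descriptors
        self.lists = []         # lists[i] holds the items filed under keys[i]

    def store(self, descriptor, item):
        i = bisect.bisect_left(self.keys, descriptor)
        if i < len(self.keys) and self.keys[i] == descriptor:
            self.lists[i].append(item)
        else:
            self.keys.insert(i, descriptor)
            self.lists.insert(i, [item])

    def retrieve(self, descriptor):
        i = bisect.bisect_left(self.keys, descriptor)
        if i < len(self.keys) and self.keys[i] == descriptor:
            return self.lists[i]
        return []

# Hypothetical descriptors and item names.
idx = DescriptorIndex()
idx.store("crystallography", "paper-17")
idx.store("adders", "paper-42")
idx.store("crystallography", "paper-90")
print(idx.retrieve("crystallography"))   # ['paper-17', 'paper-90']
```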

Journal ArticleDOI
TL;DR: The notable properties of the iterative structure are augmented by the inclusion of these features, resulting in a machine with iterative computational structure that includes a form of control or interpretation unit.
Abstract: A multilayer iterative circuit computer (I.C.C.) is described, which is capable of dealing with problems involving spatial relationships between the variables, in addition to the inherent multiprogramming capabilities of this type of machine organization. Some of the novel features presented are: 1) A path-building method which retains the short-time access characteristic of the common bus system, while still permitting the simultaneous operation of several paths in the network without mutual interference. Furthermore, the connecting method allows communication between the modules in a one-to-one, one-to-many, or many-to-many way. 2) A specialization in the functions of the individual layers, separating the flow of control signals from the flow of information. This step-by-step treatment of the instructions makes it possible to preinterpret them before the actual execution, thus permitting the inclusion of instructions acting on many-to-many operands. 3) Three-phase operation, with each phase active simultaneously in each layer, and operating on different instructions. Due to the overlapping of the phases in time, the total effective time per instruction remains the same. Once an instruction has been executed, the partially processed results are transferred to the next layer, and the now vacant layer starts the processing of the next instruction. The notable properties of the iterative structure are thus augmented by the inclusion of these features, resulting in a machine with iterative computational structure that includes a form of control or interpretation unit.

Journal ArticleDOI
TL;DR: This work yields the necessary methods to detect the existence of a decomposition of machines into component machines so that the most "serious" errors of the computation can occur only in an isolated component machine.
Abstract: The object of this paper is to study feedback in sequential machines, to classify (according to their seriousness) and analyze errors which arise in the state transitions of machines, and to establish some relations between feedback and errors. It is shown that the previously developed algebraic methods [1], [2] supply the necessary tools and a rigorous basis for this theory, and these new results are related to previously obtained results about the structure of sequential machines. For example, this work yields the necessary methods to detect the existence of a decomposition of machines into component machines so that the most "serious" errors of the computation can occur only in an isolated component machine. This leads to the possibility of imposing selectively different reliability conditions on the component machines to achieve high over-all reliability of the realizations.

Journal ArticleDOI
George Nagy
TL;DR: A number of possible approaches to this problem, ranging from the slow and reliable electromechanical systems to the many forms of charge and flux integration, are reviewed, and the suitability of each device for various fields of application is briefly discussed.
Abstract: Widespread and persistent interest in the implementation of multilevel logic, conditional probability computers, learning machines, and brain models has created a need for an inexpensive analog or quasi-digital storage element. A number of possible approaches to this problem, ranging from the slow and reliable electromechanical systems to the many forms of charge and flux integration, are reviewed, and the suitability of each device for various fields of application is briefly discussed.

Journal ArticleDOI
M. Y. Hsiao, F. F. Sellers
TL;DR: This paper exhibits a checking scheme called "carry-dependent sum add" which is based on the parity prediction method and assures single-fault detection without duplication of the carry circuit.
Abstract: The correct operation of addition in a digital computer is very important. In this paper, the authors exhibit a checking scheme called "carry-dependent sum add" which is based on the parity prediction method. This scheme assures single-fault detection without duplication of the carry circuit. Examples of a binary adder and a decimal adder using this scheme are included.
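
The parity-prediction idea the scheme rests on can be shown in a few lines: because each sum bit is s_i = a_i XOR b_i XOR c_i, the parity of the sum must equal the combined parity of the operands and the carry vector. The sketch below demonstrates that identity in software; the paper's carry-dependent hardware realization, which avoids duplicating the carry circuit, is not modeled.

```python
def parity(x):
    """Parity (XOR of all bits) of a nonnegative integer."""
    return bin(x).count("1") & 1

def checked_add(a, b, width=8):
    """Ripple-carry add with a parity-prediction check: each sum bit is
    s_i = a_i XOR b_i XOR c_i (c_i = carry into bit i), so parity(sum) must
    equal parity(a) XOR parity(b) XOR parity(carry vector).  A mismatch
    would flag a fault in the sum logic.  Operands are assumed to fit in
    `width` bits."""
    s, carries, c = 0, 0, 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        carries |= c << i                 # record the carry entering bit i
        s |= (ai ^ bi ^ c) << i
        c = (ai & bi) | (ai & c) | (bi & c)
    predicted = parity(a) ^ parity(b) ^ parity(carries)
    return s, predicted == parity(s)

print(checked_add(100, 73))    # (173, True)
print(checked_add(200, 100))   # (44, True) -- sum is taken modulo 2**width
```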