Journal ArticleDOI

A Survey of Analog Memory Devices

George Nagy1
01 Aug 1963-IEEE Transactions on Electronic Computers (IEEE)-Vol. 12, Iss: 4, pp 388-393
TL;DR: A number of possible approaches to the problem of inexpensive analog or quasi-digital storage, ranging from the slow and reliable electromechanical systems to the many forms of charge and flux integration, are reviewed, and the suitability of each device for various fields of application is briefly discussed.
Abstract: Widespread and persistent interest in the implementation of multilevel logic, conditional probability computers, learning machines, and brain models has created a need for an inexpensive analog or quasi-digital storage element. A number of possible approaches to this problem, ranging from the slow and reliable electromechanical systems to the many forms of charge and flux integration, are reviewed, and the suitability of each device for various fields of application is briefly discussed.
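
The survey groups several of its devices under charge and flux integration. As a rough illustration of what a charge-integration storage element does (a toy model with arbitrary parameter values, not a circuit from the paper), the sketch below integrates small current pulses onto a leaky capacitor, so the stored voltage behaves as an incrementally adjustable, slowly decaying analog weight.

# Toy model of a charge-integration analog memory cell (illustrative only;
# parameter values are arbitrary and not taken from the survey).
import numpy as np

def simulate_cell(pulses, dt=1e-3, c=1e-6, r_leak=1e6, i_pulse=1e-6):
    """Integrate charge pulses on a capacitor with leakage.

    pulses : array of +1/-1/0 per time step (increment, decrement, hold)
    Returns the stored voltage trace, i.e. the analog 'weight'.
    """
    v = 0.0
    trace = []
    for p in pulses:
        i = p * i_pulse                    # small charge increment per step
        dv = (i - v / r_leak) * dt / c     # dV/dt = (I - V/R_leak) / C
        v += dv
        trace.append(v)
    return np.array(trace)

# Raise the weight for 200 steps, hold it for 300, then lower it slightly.
pulses = np.concatenate([np.ones(200), np.zeros(300), -np.ones(50)])
trace = simulate_cell(pulses)
print(f"after write: {trace[199]:.3f} V, after hold: {trace[499]:.3f} V, after decrement: {trace[-1]:.3f} V")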
Citations
Book ChapterDOI
01 Jan 1969
TL;DR: The invention of the stored-program digital computer during the Second World War made it possible to replace the lower-level mental processes of man with electronic data processing in machines, but we still lack the "steam engine" or "digital computer" that will provide the necessary technology for learning and pattern recognition by machines.
Abstract: The invention of the steam engine in the late eighteenth century made it possible to replace the muscle-power of men and animals by the motive power of machines. The invention of the stored-program digital computer during the second world war made it possible to replace the lower-level mental processes of man, such as arithmetic computation and information storage, by electronic data-processing in machines. We are now coming to the stage where it is reasonable to contemplate replacing some of the higher mental processes of man, such as the ability to recognize patterns and to learn, with similar capabilities in machines. However, we lack the “steam engine” or “digital computer” which will provide the necessary technology for learning and pattern recognition by machines.

668 citations

Proceedings ArticleDOI
18 Apr 1967
TL;DR: The Stochastic Computer was developed as part of a program of research on the structure, realization, and application of advanced automatic controllers in the form of Learning Machines; the main problem was to design an active storage element whose stored value is stable over long periods, can be varied in small increments, and whose output can act as a 'weight' multiplying other variables.
Abstract: The Stochastic Computer was developed as part of a program of research on the structure, realization and application of advanced automatic controllers in the form of Learning Machines. Although algorithms for search, identification, policy-formation and the integration of these activities, could be established and tested by simulation on conventional digital computers, there was no hardware available which would make construction of the complex computing structure required in a Learning Machine feasible. The main problem was to design an active storage element in which the stored value was stable over long periods, could be varied by small increments, and whose output could act as a 'weight' multiplying other variables. Since large numbers of these elements would be required in any practical system it was also necessary that they be small and of low cost. Conventional analog integrators and multipliers do not fulfill requirements of stability and low cost, and unconventional elements such as electro-chemical stores and transfluxors are unreliable or require sophisticated external circuitry to make them usable. Semiconductor integrated circuits have advantages in speed, stability, size and cost, and it was decided to design a computing element based on standard gates and flip-flops which would be amenable to large-scale integration.

239 citations
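
The "weight multiplying other variables" that the abstract above calls for is the core operation of stochastic computing: a quantity in [0, 1] is represented as the probability that a bit in a random stream is 1, and a single AND gate then multiplies two such quantities. The snippet below is a minimal sketch of that representation trick only, not the authors' storage element or circuit; the values 0.3 and 0.8 stand in for a stored weight and an incoming variable.

# Minimal sketch of stochastic multiplication: values in [0, 1] become
# Bernoulli bit streams, and a bitwise AND of two independent streams has a
# 1-probability equal to the product of the inputs.
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, n):
    """Encode a value p in [0, 1] as a length-n random bit stream."""
    return (rng.random(n) < p).astype(np.uint8)

def from_stream(bits):
    """Decode a bit stream back to a value: the fraction of 1s."""
    return bits.mean()

n = 100_000
weight, signal = 0.3, 0.8                                     # stored weight, incoming variable
product_bits = to_stream(weight, n) & to_stream(signal, n)    # AND gate acts as the multiplier
print(from_stream(product_bits))                              # ~0.24 = 0.3 * 0.8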

Journal ArticleDOI
TL;DR: It is shown that the two-category classifier derived by least-mean-square-error adaption using an equal number of sample patterns from each category is equivalent to the optimal statistical classifier if the patterns are multivariate Gaussian random variables having the same covariance matrix for both pattern categories.
Abstract: This paper develops a relationship between two traditional statistical methods of pattern classifier design, and an adaption technique involving minimization of the mean-square error in the output of a linear threshold device. It is shown that the two-category classifier derived by least-mean-square-error adaption using an equal number of sample patterns from each category is equivalent to the optimal statistical classifier if the patterns are multivariate Gaussian random variables having the same covariance matrix for both pattern categories. It is also shown that the classifier is always equivalent to the classifier derived by R. A. Fisher. A simple modification of the least-mean-square-error adaption procedure enables the adaptive structure to converge to a nearly-optimal classifier, even though the numbers of sample patterns are not equal for the two categories. The use of minimization of mean-square error as a technique for designing classifiers has the added advantage that it leads to the optimal classifier for patterns even when the covariance matrix is singular.

116 citations
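
The equivalence stated in the abstract above can be checked numerically: for two equal-sized Gaussian samples with a shared covariance, the least-squares (minimum mean-square-error) weight vector is parallel to Fisher's direction S_w^-1(m1 - m0). The sketch below uses synthetic data and arbitrary parameters purely as a sanity check, not as a reproduction of the paper's derivation.

# Numerical check that the least-mean-square-error linear classifier and
# Fisher's linear discriminant point in the same direction for two
# equal-sized Gaussian classes with a shared covariance matrix.
import numpy as np

rng = np.random.default_rng(1)
n, d = 5000, 3
cov = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.5]])
mu0, mu1 = np.zeros(d), np.array([1.0, -0.5, 2.0])
x0 = rng.multivariate_normal(mu0, cov, n)
x1 = rng.multivariate_normal(mu1, cov, n)

# Least-squares fit with +/-1 targets and a bias term.
X = np.vstack([x0, x1])
X_aug = np.hstack([X, np.ones((2 * n, 1))])
y = np.concatenate([-np.ones(n), np.ones(n)])
w_lms, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

# Fisher direction from the pooled within-class covariance.
sw = 0.5 * (np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False))
w_fisher = np.linalg.solve(sw, x1.mean(axis=0) - x0.mean(axis=0))

# The two weight vectors should be (nearly) parallel.
cos = w_lms[:d] @ w_fisher / (np.linalg.norm(w_lms[:d]) * np.linalg.norm(w_fisher))
print(f"cosine similarity: {cos:.4f}")   # close to 1.0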

Journal ArticleDOI
Vinton G. Cerf1
TL;DR: It is arguable that in this second decade of the 21st century we are starting to see serious opportunities for rethinking how we may compute, and that the limitations of the conventional use of silicon technology may be overcome with new materials and new architectural designs, as is beginning to be apparent with the new IBM Neural chip.
Abstract: On top of that, in the last couple of years, IBM has demonstrated two remarkable achievements: the Watson Artificial Intelligence system and the August 8, 2014 cover story of Science entitled "Brain Inspired Chip." The TrueNorth chipset and the programming language it uses have demonstrated remarkable power efficiency compared to more conventional processing elements. What all of these topics have in common for me is the prospect of increasingly unconventional computing methods that may naturally force us to rethink how we analyze problems for purposes of getting computers to solve them for us. I consider this to be a refreshing development, challenging the academic, research, and practitioner communities to abandon or adapt past practices and to consider new ones that can take advantage of new technologies and techniques. It has always been my experience that half the battle in problem solving is to express the problem in such a way that the solution may suggest itself. In mathematics, it is often the case that a change of variables can dramatically restructure the way in which the problem or formula is presented, leading one to find related problems whose solutions may be more readily applied. Changing from Cartesian to polar coordinates, for example, often dramatically simplifies an expression: a Cartesian equation for a circle centered at (0,0) is X² + Y² = Z², but the polar version is simply r(φ) = a for some value of a.

It may prove to be the case that the computational methods for solving problems with quantum computers, neural chips, and Watson-like systems will admit very different strategies and tactics than those applied in more conventional architectures. The use of graphics processing units (GPUs) to solve problems, rather than generating textured triangles at high speed, has already forced programmers to think differently about the way in which they express and compute their results. The parallelism of the GPUs and their ability to process many small "programs" at once has made them attractive for evolutionary or genetic programming, for example.

One question is: Where will these new technologies take us? We have had experiences in the past with unusual designs. The Connection Machine designed by Danny Hillis was one of the first really large-scale computing machines (65K one-bit processors) hyperconnected together. LISP was one of the programming languages used for the Connection Machines, along with URDU, among others. This brings to mind the earlier LISP machines made by Symbolics and LISP Machines, Inc., among others. The rapid advance in speed of more conventional processors largely overtook the advantage of special-purpose, potentially language-oriented computers. This was particularly evident with the rise of the so-called RISC (Reduced Instruction Set Computing) machines developed by John Hennessy (the MIPS system) and David Patterson (Berkeley RISC and Sun Microsystems SPARC), among many others. David E. Shaw, at Columbia University, pioneered one of the explorations into a series of designs of a single instruction stream, multiple data stream (SIMD) supercomputer he called Non-Von (for "non-Von-Neumann"). Using single-bit arithmetic logic units, this design has some similarity to the Connection Machine, although their interconnection designs were quite different. It has not escaped my attention that David Shaw is now the chief scientist of D.E. Shaw Research and is focused on computational biochemistry and bioinformatics. This topic also occupies his time at Columbia University, where he holds a senior research fellowship and adjunct professorship.

Returning to new computing and memory technologies, one has the impression that the limitations of the conventional use of silicon technology may be overcome with new materials and with new architectural designs, as is beginning to be apparent with the new IBM Neural chip. I have only taken time to offer a very incomplete and sketchy set of observations about unconventional computing in this column, but I think it is arguable that in this second decade of the 21st century, we are starting to see serious opportunities for rethinking how we may compute.

32 citations
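
The coordinate-change example in the column above can be written out explicitly. Writing a for the radius (the column uses Z² on the Cartesian side), the substitution x = r cos φ, y = r sin φ collapses the quadratic constraint to a constant radius:

\[
x^{2} + y^{2} = a^{2}
\quad\xrightarrow{\;x = r\cos\varphi,\ y = r\sin\varphi\;}\quad
r^{2}\left(\cos^{2}\varphi + \sin^{2}\varphi\right) = a^{2}
\quad\Longrightarrow\quad
r(\varphi) = a .
\]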

References
Proceedings ArticleDOI
03 Mar 1959
TL;DR: The object of this paper is to present circuit techniques for significantly improving the circuit operation and to present experimental and analytic results which are pertinent to an understanding of the coupling loop operation.
Abstract: This is the second in a series of papers concerned with a technique for performing combinatorial and sequential digital logic with magnetic elements and connecting wires only. These elements are termed MAD's (Multi-Aperture Devices). For clarity, in the first paper the basic techniques were described in terms of simple circuit structures which do not represent the best that can be achieved in the way of operational properties. The object of this paper is: 1) to present circuit techniques for significantly improving the circuit operation; and 2) to present experimental and analytic results which are pertinent to an understanding of the coupling loop operation.

15 citations

Journal ArticleDOI
01 Jun 1959
TL;DR: The transpolarizer, as introduced in this paper, is a new basic means for storing and gating electrical signals and, more generally, for controlling circuit impedance in any predetermined manner according to a stored setting.
Abstract: This new device operates by the controlled transfer of polarization through two or more ferroelectric dielectric sections in series and therefore is named "transpolarizer." It represents a new basic means for storing and gating electrical signals and, in general, means to control circuit impedance in any predetermined manner according to a stored setting. The operation of a two section transpolarizer is described. The unique storage, switching, and control properties of the transpolarizer open a large field of new applications and permit production of new devices and systems such as: recording and reproducing of intelligence in general and, more particularly, switching with a permanent setting, small and large scale storage devices with nondestructive readout, decoders, function generators, etc.

11 citations

Journal ArticleDOI
01 Dec 1954
TL;DR: It is possible to read information nondestructively from two- and three-dimensional magnetic-core digital computer memories in several microseconds by exciting selected cores with rf currents.
Abstract: It is possible to read information nondestructively from two- and three-dimensional magnetic-core digital computer memories in several microseconds by exciting selected cores with rf currents. If two co-ordinate lines of a core in a memory array plane are driven at slightly different frequencies, a beat-frequency signal is generated whose phase may take on one of two values which are separated by 180 electrical degrees. These two possible phases correspond to the 0 and 1 information states of the core. The beat-frequency signal, separated from the inevitable noises by tuned linear filters, may be phase detected to yield the desired information.

10 citations
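
The readout mechanism summarized above lends itself to a toy signal-processing illustration. The sketch below uses arbitrary frequencies and a deliberately idealized 'core' signal (it is not a model of the actual magnetics): the stored bit is encoded as a 0° or 180° phase of the difference-frequency component, and it is recovered by multiplying against a reference at the beat frequency and averaging, which is the phase-detection step the abstract mentions.

# Toy illustration of beat-frequency, phase-sensitive readout. The stored bit
# sets the sign (0 or 180 degree phase) of the difference-frequency component;
# a phase detector (multiply by a reference, then average) recovers it.
# Frequencies, amplitudes, and the 'core' signal model are arbitrary choices.
import numpy as np

f1, f2 = 100e3, 110e3                 # the two rf drive frequencies (Hz)
fs = 10e6                             # sample rate (Hz)
t = np.arange(0, 2e-3, 1 / fs)

def core_output(bit, noise=0.2):
    """Idealized sense signal: drive feedthrough plus a beat component whose
    sign (phase 0 or 180 degrees) carries the stored bit."""
    sign = 1.0 if bit else -1.0
    feedthrough = 0.5 * np.cos(2 * np.pi * f1 * t) + 0.5 * np.cos(2 * np.pi * f2 * t)
    beat = sign * np.cos(2 * np.pi * (f1 - f2) * t)
    return feedthrough + beat + noise * np.random.default_rng(0).normal(size=t.size)

reference = np.cos(2 * np.pi * (f1 - f2) * t)          # reference at the beat frequency
for bit in (0, 1):
    detected = np.mean(core_output(bit) * reference)   # phase detection by correlation
    print(bit, "->", 1 if detected > 0 else 0)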

Proceedings ArticleDOI
01 Jan 1962

2 citations