Author

Mark R. Walker

Bio: Mark R. Walker is an academic researcher from Arizona State University. The author has contributed to research in topics: Artificial neural network & Very-large-scale integration. The author has an h-index of 6 and has co-authored 9 publications receiving 86 citations.

Papers
Book Chapter•DOI•
01 Jan 1989
TL;DR: The objective of this work is to design highly layered, limited-interconnect synthetic neural architectures and to develop training algorithms for systems built from these chips, which are specifically designed to scale to tens of thousands of processing elements on current production-size dies.
Abstract: Recent encouraging results have occurred in the application of neuromorphic, i.e. neural-network-inspired, software simulations of speech synthesis, word recognition, and image processing. Hardware implementations of neuromorphic systems are required for real-time applications such as control and signal processing. Two disparate groups of workers are interested in VLSI hardware implementations of neural networks. The first is interested in electronic implementations of neural networks and uses standard or custom VLSI chips for the design. The second group wants to build fault-tolerant adaptive VLSI chips and is much less concerned with whether the design rigorously duplicates the neural models. In either case, the central issue in the construction of an electronic neural network is that the design constraints of VLSI differ from those of biology (Walker and Akers 1988). In particular, the high fan-ins/fan-outs of biology impose connectivity requirements such that the electronic implementation of a highly interconnected biological neural network of just a few thousand neurons would require a level of connectivity which exceeds the current or even projected interconnection density of ULSI systems. Fortunately, highly layered, limited-interconnect networks can be formed that are functionally equivalent to highly connected systems (Akers et al. 1988). Such architectures are especially well suited to VLSI implementation. The objective of our work is to design highly layered, limited-interconnect synthetic neural architectures and to develop training algorithms for systems made from these chips. These networks are specifically designed to scale to tens of thousands of processing elements on current production-size dies.
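As a minimal illustration of the functional-equivalence argument, here is a software sketch for the purely linear case (sizes, names, and the fan-in value are illustrative assumptions, not figures from the chapter): a node with fan-in 1024 is emulated exactly by a tree of fan-in-4 summing stages, at the cost of logarithmically many extra layers.

```python
import numpy as np

def layered_limited_fanin_sum(x, weights, fanin=4):
    """Emulate one high fan-in linear node as a tree of limited fan-in
    summing stages. A node with N inputs needs ceil(log_fanin(N)) layers,
    but no stage ever sees more than `fanin` inputs."""
    y = x * weights                        # apply the synaptic weights once
    while len(y) > 1:
        pad = (-len(y)) % fanin            # pad so the groups divide evenly
        y = np.concatenate([y, np.zeros(pad)])
        y = y.reshape(-1, fanin).sum(axis=1)
    return y[0]

x = np.random.randn(1024)
w = np.random.randn(1024)
assert np.isclose(layered_limited_fanin_sum(x, w), np.dot(w, x))
```

The linear case is exact; with nonlinear nodes the equivalence takes more care, which is the subject of the 1988 collection-units paper listed below.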

16 citations

Journal Article•DOI•
TL;DR: The authors present an analog complementary metal-oxide semiconductor (CMOS) version of a model for pattern association, along with discussions of design philosophy, electrical results, and a chip architecture for a 512-element, feed-forward IC.
Abstract: The authors present an analog complementary metal-oxide semiconductor (CMOS) version of a model for pattern association, along with discussions of design philosophy, electrical results, and a chip architecture for a 512-element, feed-forward IC. They discuss hardware implementations of neural networks and the effect of limited interconnections. They then examine network design, processor-element design, and system operation.
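As a rough software analogue of a feed-forward pattern associator (the Hebbian outer-product rule, the bipolar coding, and all dimensions are illustrative assumptions, not details taken from the chip):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-shot Hebbian pattern associator over bipolar (+/-1) vectors.
n_in, n_out, n_pairs = 512, 512, 10
X = rng.choice([-1, 1], size=(n_pairs, n_in))    # input patterns
Y = rng.choice([-1, 1], size=(n_pairs, n_out))   # associated outputs

W = Y.T @ X / n_in                   # outer-product (Hebbian) learning

recalled = np.sign(W @ X[0])         # feed-forward recall of pattern 0
print((recalled == Y[0]).mean())     # fraction of output bits correct
```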

15 citations

Journal Article•DOI•
TL;DR: It is shown how the ART1 paradigm can be functionally emulated by the limited resolution pipelined architecture, in the absence of full parallelism.
Abstract: The embedding of neural networks in real-time systems performing classification and clustering tasks requires that models be implemented in hardware. A flexible, pipelined associative memory capable of operating in real-time is proposed as a hardware substrate for the emulation of neural fixed-radius clustering and binary classification schemes. This paper points out several important considerations in the development of hardware implementations. As a specific example, it is shown how the ART1 paradigm can be functionally emulated by the limited resolution pipelined architecture, in the absence of full parallelism.
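A much-simplified, software-only sketch of the ART1-style clustering being emulated (the vigilance test and AND-based prototype update follow the usual ART1 description; the choice-function search is omitted and all parameter values are illustrative):

```python
import numpy as np

def art1_cluster(patterns, vigilance=0.7):
    """Cluster binary vectors ART1-style: accept the first stored
    prototype that passes the vigilance test and refine it by logical
    AND, or create a new prototype when none passes."""
    prototypes, labels = [], []
    for p in patterns:
        match = None
        for i, proto in enumerate(prototypes):
            overlap = np.logical_and(p, proto).sum()
            if p.sum() and overlap / p.sum() >= vigilance:
                match = i
                break
        if match is None:
            prototypes.append(p.copy())        # new category
            labels.append(len(prototypes) - 1)
        else:
            prototypes[match] = np.logical_and(prototypes[match], p)
            labels.append(match)
    return labels, prototypes

data = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
labels, protos = art1_cluster(data, vigilance=0.6)
print(labels)    # [0, 0, 1]: first two inputs share a category
```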

15 citations

Proceedings Article•
01 Jan 1988
TL;DR: Hardware implementation of neuromorphic algorithms is hampered by high degrees of connectivity and low-level nonlinearities; the judicious use of linear summations, or collection units, is proposed as a solution.
Abstract: Hardware implementation of neuromorphic algorithms is hampered by high degrees of connectivity. Functionally equivalent feedforward networks may be formed by using limited fan-in nodes and additional layers, but this complicates procedures for determining weight magnitudes. No direct mapping of weights exists between fully and limited-interconnect nets. Low-level nonlinearities prevent the formation of internal representations of widely separated spatial features, and the use of gradient descent methods to minimize output error is hampered by error-magnitude dissipation. The judicious use of linear summations, or collection units, is proposed as a solution.
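A toy sketch of the proposed remedy, assuming tanh nodes and a disjoint grouping of inputs (everything here is an illustrative assumption, not the paper's construction): a fan-in-16 sigmoidal node is rebuilt from fan-in-4 hardware, and the composite is exactly equivalent only when the first-stage collection units remain plain linear summations.

```python
import numpy as np

def high_fanin_node(x, w):
    return np.tanh(w @ x)                 # the node to be emulated

def limited_fanin_node(x, w, fanin=4, collect_linear=True):
    """First stage: 'collection units' that each sum only `fanin`
    weighted inputs. If they stay linear, the composite equals the
    high fan-in node and error gradients pass through undiminished;
    an extra squashing nonlinearity per layer breaks both properties."""
    partial = (w * x).reshape(-1, fanin).sum(axis=1)
    if not collect_linear:
        partial = np.tanh(partial)        # the problematic alternative
    return np.tanh(partial.sum())         # final fan-in-4 output node

rng = np.random.default_rng(0)
x, w = rng.standard_normal(16), rng.standard_normal(16)
assert np.isclose(high_fanin_node(x, w), limited_fanin_node(x, w))
```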

12 citations

Book Chapter•DOI•
01 Jan 1989
TL;DR: Hopfield and coworkers have suggested using a quadratic cost function, which is in truth just the potential-energy surface commonly used in Lyapunov stability analysis, to formulate an interconnection design for an array of neuron-like switching elements; this approach puts the entire foundation of the processing into the interconnections.
Abstract: If designers of integrated circuits are to make a quantum jump forward in the capabilities of microchips, the development of a coherent, parallel type of processing that provides robustness and is not sensitive to the failure of a few individual gates is needed. The problem of using arrays of devices, highly integrated within a chip and coupled to each other, is not one of making the arrays, but one of introducing the hierarchical control structure necessary to fully implement the various system or computer algorithms. In other words, how are the interactions between the devices orchestrated so as to map a desired architecture onto the array itself? We have suggested in the past that these arrays could be considered as local cellular automata [1], but this does not alleviate the problem of global control, which must change the local computational rules in order to implement a general algorithm. Huberman [2,3] has studied the nature of attractors on finite sets in the context of iterative arrays, and has shown in a simple example how several inputs can be mapped into the same output. The ability to change the function during processing has allowed him to demonstrate adaptive behavior in which dynamical associations are made between different inputs which initially produced sharply distinct outputs. However, these remain only initial small steps toward the design theory required to map algorithms onto network architectures. Hopfield and coworkers [4,5], in turn, have suggested using a quadratic cost function, which is in truth just the potential-energy surface commonly used in Lyapunov stability analysis, to formulate an interconnection design for an array of neuron-like switching elements. This approach puts the entire foundation of the processing into the interconnections.
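The quadratic cost function in question is the standard Hopfield energy; a minimal sketch of it and of the asynchronous dynamics it governs (the pattern, sizes, and outer-product storage rule below are illustrative):

```python
import numpy as np

def hopfield_energy(W, s):
    """E(s) = -1/2 s^T W s over bipolar states s. For symmetric W with
    zero diagonal this is a Lyapunov function of the update rule below:
    no asynchronous flip can ever increase E."""
    return -0.5 * s @ W @ s

def relax(W, s, steps=1000, seed=0):
    """Asynchronous threshold updates of randomly chosen elements."""
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

p = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(p, p) - np.eye(len(p))        # store p; zero the diagonal
noisy = p.copy()
noisy[:2] *= -1                            # corrupt two elements
print(np.array_equal(relax(W, noisy), p))  # True: relaxed to the minimum
```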

10 citations


Cited by
Posted Content•
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations

Journal Article•DOI•
TL;DR: A comprehensive review of emerging artificial neuromorphic devices and their applications is offered, showing that anion/cation-migration-based memristive devices, phase-change, and spintronic synapses are quite mature and possess excellent stability as memory devices, yet still suffer from challenges in weight-update linearity and symmetry.
Abstract: The rapid development of information technology has led to urgent requirements for high efficiency and ultralow power consumption. In the past few decades, neuromorphic computing has drawn extensive attention due to its promising capability of processing massive data with extremely low power consumption. Here, we offer a comprehensive review of emerging artificial neuromorphic devices and their applications. In light of the underlying physical processes, we classify the devices into nine major categories and discuss their respective strengths and weaknesses. We show that anion/cation-migration-based memristive devices, phase-change, and spintronic synapses are now quite mature and possess excellent stability as memory devices, yet they still suffer from challenges in weight-update linearity and symmetry. Meanwhile, the recently developed electrolyte-gated synaptic transistors have demonstrated outstanding energy efficiency, linearity, and symmetry, but their stability and scalability still need to be optimized. Other emerging synaptic structures, such as ferroelectric, metal–insulator-transition-based, photonic, and purely electronic devices, also have limitations in some respects, leading to the need for further development of high-performance synaptic devices. Additional effort is also needed to enhance the functionality of artificial neurons while maintaining a relatively low cost in area and power, and it will be important to explore the intrinsic neuronal stochasticity in computing and to optimize their driving capability. Finally, by looking into the correlations between the operation mechanisms, material systems, device structures, and performance, we provide clues for future material selection, device design, and integration for artificial synapses and neurons.
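As a toy model of the weight-update linearity and symmetry problem mentioned above (the exponential bound-approach law and every parameter value are illustrative assumptions, not measurements of any device):

```python
import numpy as np

def update_conductance(G, pulse, G_min=0.0, G_max=1.0, nl=3.0):
    """One programming pulse: the conductance change shrinks as G nears
    its bound, so potentiation (+1) and depression (-1) steps of 'equal
    strength' do not cancel -- a nonlinear, asymmetric weight update."""
    step = 1.0 - np.exp(-1.0 / nl)
    dG = (G_max - G) * step if pulse > 0 else -(G - G_min) * step
    return float(np.clip(G + dG, G_min, G_max))

G = 0.5
G = update_conductance(G, +1)     # potentiate once
G = update_conductance(G, -1)     # depress once
print(G)                          # ~0.46, not back to 0.5: asymmetry
```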

373 citations

Patent•
17 Jul 1991
TL;DR: An intelligent help system that processes information specific to a user and a system state is described; it incorporates a monitoring device to determine which events to store as data in an historical queue.
Abstract: An intelligent help system which processes information specific to a user and a system state is described. The system incorporates a monitoring device to determine which events to store as data in an historical queue. These data, as well as non-historical data (e.g., system state), are stored in a knowledge base. An inference engine tests rules against the knowledge base data, thereby providing a help tag. A display engine links the help tag with an appropriate solution tag to provide help text for display.
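A toy sketch of that pipeline (all event names, rules, and tags are hypothetical; only the monitor, historical queue, rule-testing inference engine, and tag-to-text display stages come from the abstract):

```python
from collections import deque

HISTORY = deque(maxlen=50)        # the historical queue of stored events

RULES = [
    # (predicate over (history, system state), help tag)
    (lambda h, s: h.count("undo") >= 3, "repeated_undo"),
    (lambda h, s: s.get("dialog") == "print" and "cancel" in h, "print_help"),
]

HELP_TEXT = {                     # solution tags -> displayable help
    "repeated_undo": "Tip: use Revert to discard all changes at once.",
    "print_help": "Help: configuring printers and page setup.",
}

def monitor(event, interesting=frozenset({"undo", "cancel", "save"})):
    """Decide which events are worth storing in the historical queue."""
    if event in interesting:
        HISTORY.append(event)

def infer(state):
    """Test rules against queue + non-historical state; first hit wins."""
    for predicate, tag in RULES:
        if predicate(list(HISTORY), state):
            return HELP_TEXT[tag]
    return None

for e in ["undo", "typing", "undo", "undo"]:
    monitor(e)
print(infer({"dialog": None}))    # -> the repeated-undo tip
```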

260 citations

Journal Article•DOI•
TL;DR: A novel circuit for memristor-based multilayer neural networks is presented, which can use a single memristor array to realize both the positive and negative weights of the neural synapses.
Abstract: Memristors are promising components for applications in nonvolatile memory, logic circuits, and neuromorphic computing. In this paper, a novel circuit for memristor-based multilayer neural networks is presented, which can use a single memristor array to realize both the positive and negative weights of the neural synapses. In addition, memristor-based switches are utilized during the learning process to update the weights of the memristor-based synapses. Moreover, an adaptive back-propagation algorithm suitable for the proposed memristor-based multilayer neural network is applied to train the networks and perform the XOR function and character recognition. Another highlight of this paper is the robustness of the proposed network, which exhibits higher recognition rates and requires fewer training cycles than other multilayer neural networks.
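The usual way to obtain signed weights from non-negative conductances is a differential encoding; a sketch of that general scheme follows (the paper's specific contribution is realizing both signs within a single array, which this toy does not capture):

```python
import numpy as np

def encode(W):
    """Split a signed weight matrix into two non-negative conductance
    arrays with w = g_pos - g_neg."""
    return np.maximum(W, 0.0), np.maximum(-W, 0.0)

def crossbar_matvec(g_pos, g_neg, x):
    """Two current sums whose difference is the signed dot product."""
    return g_pos @ x - g_neg @ x

W = np.array([[0.5, -0.3],
              [-0.2, 0.8]])
x = np.array([1.0, 2.0])
assert np.allclose(crossbar_matvec(*encode(W), x), W @ x)
```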

163 citations

Journal Article•DOI•
TL;DR: Results from working analog VLSI implementations of two different pulse stream neural network forms are reported, and a strategy for interchip communication of large numbers of neural states has been implemented in silicon.
Abstract: Results from working analog VLSI implementations of two different pulse stream neural network forms are reported. The circuits are rendered relatively invariant to processing variations, and the problem of cascadability of synapses to form large systems is addressed. A strategy for interchip communication of large numbers of neural states has been implemented in silicon and results are presented. The circuits demonstrated confront many of the issues that blight massively parallel analog systems, and offer solutions.
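A software caricature of the pulse-stream idea, assuming simple stochastic rate coding (the chips' actual circuits are analog and considerably more involved):

```python
import numpy as np

def pulse_stream(value, n_slots=10_000, seed=0):
    """Encode an activation in [0, 1] as a pulse train whose firing
    rate equals the value; decoding is just the mean pulse rate."""
    rng = np.random.default_rng(seed)
    return (rng.random(n_slots) < value).astype(int)

pulses = pulse_stream(0.35)
print(pulses.mean())    # ~0.35: the state recovered from the pulse rate
```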

119 citations