Journal ArticleDOI

Artificial neural networks, back propagation, and the Kelley-Bryson gradient procedure

TLDR
Back propagation (BP) (of errors) is the subject of this Note and played a major role in the neural-net resurgence of the 1980s.
Abstract
Introduction

Artificial neural networks (sometimes called connectionist, parallel distributed processing, or adaptive networks) are experiencing a dramatic renaissance this decade. The roots of this subject can be traced to research into perceptrons, led by Frank Rosenblatt, and into adaptive linear filters, spearheaded by Bernard Widrow, in the late 1950s. These early neural-network researchers and their enthusiastic followers equated intelligence with pattern discrimination and association abilities acquired through learning from experience of concrete cases. Then, suddenly, neural-net research became relatively inactive in about 1965 and remained so until the early 1980s. During this interval, research on intelligent systems focused on what has become conventional artificial intelligence, a discipline that defines intelligence as problem solving based on reasoning.

The concurrence of two events probably played a major role in the neural-net resurgence of the 1980s. First, by 1980 it had become increasingly apparent that conventional inference-based artificial intelligence was unable to deal successfully with most practical problems. It appeared that learned pattern discrimination and association abilities, not reasoning, underlay not only common-sense understanding but also most skills. Even the choice of which rules to apply when forced to resort to reasoning, and when to break these rules, seemed to require pattern recognition. Second, a formulational and computational procedure was advanced that seemed to surmount certain technical roadblocks that were recognized but not successfully dealt with by the researchers of the 1950s and 1960s. This procedure is called back propagation (BP) (of errors) and is the subject of this Note.

To fully appreciate BP, we must first briefly examine some groundbreaking work done during the late 1950s. At Cornell, Frank Rosenblatt designed various neurally inspired learning devices and simulated them on a digital computer. He called these designs "perceptrons" to emphasize their perceptive, rather than logical, abilities. The aim of many of his devices was to learn through example to distinguish whether an input was a member of one class of inputs, called class A, or of a different class, called class B, by being presented with examples of members of each class together with the correct classification. The class members, any finite number being allowed, were represented by (n − 1)-vectors, where x_i^c denotes the ith element of the cth such vector. Given a vector, the simplest perceptron would compute […]
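The excerpt breaks off before the formula, but the computation being described is the classic perceptron decision rule: a weighted sum of the input elements plus a bias, thresholded at zero. The sketch below illustrates that rule together with Rosenblatt-style error-correction training; the function names, learning rate, and ±1 label encoding are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def classify(w, b, x):
        """Return +1 (class A) or -1 (class B) for input vector x."""
        # Weighted sum of the input elements plus a bias, thresholded at zero.
        return 1 if np.dot(w, x) + b > 0 else -1

    def train(examples, epochs=100, lr=1.0):
        """Error-correction training on (x, label) pairs, labels in {+1, -1}."""
        dim = len(examples[0][0])
        w, b = np.zeros(dim), 0.0
        for _ in range(epochs):
            errors = 0
            for x, label in examples:
                if classify(w, b, x) != label:
                    # Nudge the separating hyperplane toward the
                    # misclassified example's correct side.
                    w += lr * label * np.asarray(x, dtype=float)
                    b += lr * label
                    errors += 1
            if errors == 0:  # every training example classified correctly
                break
        return w, b

    # Example: two linearly separable classes.
    examples = [(np.array([2.0, 1.0]), +1), (np.array([-1.0, -2.0]), -1)]
    w, b = train(examples)

If the two classes are linearly separable, this update rule is guaranteed to converge; the technical roadblock alluded to above was extending such learning beyond a single layer, which is what back propagation addresses.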


Citations
Journal ArticleDOI

Concepts of Artificial Intelligence for Computer-Assisted Drug Discovery

TL;DR: The current state of the art of AI-assisted pharmaceutical discovery is discussed, including applications in structure- and ligand-based virtual screening, de novo drug design, physicochemical and pharmacokinetic property prediction, drug repurposing, and related aspects.
Journal ArticleDOI

Evolution of flight vehicle system identification

TL;DR: A comprehensive account of modern system identification techniques is provided in this paper; several challenging examples show that these techniques have reached a high level of maturity, making them a sophisticated and powerful tool not only for research but also for supporting the needs of the aircraft industry.
Journal ArticleDOI

Identification of aerodynamic coefficients using computational neural networks

TL;DR: Accurate representations of aerodynamic coefficients can be generated for the complete flight envelope by combining computational neural network models with an Estimation-Before-Modeling paradigm for on-line training information.
Journal ArticleDOI

Learning and convergence analysis of neural-type structured networks

TL;DR: Bounds on the learning rate are developed under which exponential convergence of the weights to their correct values is proved for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving.
Journal ArticleDOI

3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context

TL;DR: In this paper, a CNN-based model is proposed that combines the advantages of short-range 3D context and long-range 2D context for tumor segmentation in multisequence MR images.
References
Book

Applied optimal control

Book

Mind over machine: The power of human intuition and expertise in the era of the computer

TL;DR: The authors, one a philosopher and the other a computer scientist, argue that even highly advanced systems only correspond to the very early stages of human learning and that there are many human skills that computers will never be able to emulate.