scispace - formally typeset
Neocognitron

About: Neocognitron is a research topic. Over its lifetime, 1,425 publications have been published within this topic, receiving 76,281 citations.


Papers

Journal ArticleDOI: 10.1109/5.726791
Yann LeCun, Léon Bottou, Yoshua Bengio +3 more — Institutions (5)
01 Jan 1998
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
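The abstract above rests on the idea of convolutional layers: every output unit reuses one shared kernel slid across the input. The following is a minimal NumPy sketch of that weight-sharing operation only, not the paper's actual system; the step-edge image and the 1×2 edge-detecting kernel are invented for the demonstration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the same kernel weights are
    applied at every position of the image (weight sharing)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A step image (left half 0, right half 1) convolved with a
# horizontal difference kernel responds only at the step edge.
img = np.zeros((5, 5))
img[:, 2:] = 1.0
k = np.array([[1.0, -1.0]])
resp = conv2d(img, k)  # nonzero only in the column straddling the edge
```

Because the kernel is shared, the same edge is detected wherever it occurs, which is the translation-handling property the abstract attributes to convolutional networks.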


Topics: Neocognitron (64%), Intelligent character recognition (64%), Artificial neural network (60%)

34,930 Citations


Open access Book
Brian D. Ripley, N. L. Hjort — Institutions (1)
01 Jan 1996
Abstract: From the Publisher: Pattern recognition has long been studied in relation to many different (and mainly unrelated) applications, such as remote sensing, computer vision, space research, and medical imaging. In this book Professor Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. Unifying principles are brought to the fore, and the author gives an overview of the state of the subject. Many examples are included to illustrate real problems in pattern recognition and how to overcome them. This is a self-contained account, ideal both as an introduction for non-specialist readers and as a handbook for the more expert reader.


5,508 Citations


Journal ArticleDOI: 10.1007/BF00344251
Abstract: A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by “learning without a teacher”, and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without being affected by their positions. This network is given the nickname “neocognitron”. After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of “S-cells”, which show characteristics similar to simple cells or lower-order hypercomplex cells, and the second layer consists of “C-cells” similar to complex cells or higher-order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: no “teacher” is needed during the process of self-organization; it is only necessary to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern comes to elicit an output from only one of the C-cells of the last layer, and conversely, this C-cell becomes selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape or in size of the stimulus pattern.
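The abstract describes modules of S-cells (feature detectors, like simple cells) followed by C-cells (shift-tolerant units, like complex cells). The sketch below is a loose NumPy illustration of that S/C scheme, not Fukushima's actual model (which uses specific analog cell equations and self-organizing synapses); the template, threshold, and pooling size here are invented for the demo.

```python
import numpy as np

def s_layer(x, templates, theta=0.5):
    """S-cells: one feature map per template; a cell fires only when
    its local input patch matches the template above threshold theta."""
    kh, kw = templates[0].shape
    H, W = x.shape
    maps = []
    for t in templates:
        m = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(m.shape[0]):
            for j in range(m.shape[1]):
                s = np.sum(x[i:i + kh, j:j + kw] * t)
                m[i, j] = s if s > theta else 0.0
        maps.append(m)
    return maps

def c_layer(maps, pool=2):
    """C-cells: maximum over a small neighbourhood of S-cells, so the
    response tolerates small shifts of the stimulus."""
    out = []
    for m in maps:
        H, W = m.shape
        p = np.zeros((H // pool, W // pool))
        for i in range(p.shape[0]):
            for j in range(p.shape[1]):
                p[i, j] = m[i * pool:(i + 1) * pool,
                            j * pool:(j + 1) * pool].max()
        out.append(p)
    return out

# A 2x2 blob and its one-pixel shift: the peak C-cell gives the same
# response for both, illustrating the shift tolerance described above.
blob = np.zeros((6, 6)); blob[2:4, 2:4] = 1.0
shifted = np.zeros((6, 6)); shifted[2:4, 3:5] = 1.0
templates = [np.ones((2, 2))]
c0 = c_layer(s_layer(blob, templates))[0]
c1 = c_layer(s_layer(shifted, templates))[0]
```

Cascading several such S/C modules, each pooling over a slightly larger neighbourhood, is what gives the full network its position-independent recognition at the final C-cell layer.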


Topics: Neocognitron (61%), Form perception (51%), Stimulus (physiology) (51%)

3,956 Citations


Open access
01 Jan 1995
Abstract: Title / Type:
- Pattern Recognition with Neural Networks in C++ (PDF)
- Pattern Recognition and Neural Networks (PDF)
- Neural Networks for Pattern Recognition — Advanced Texts in Econometrics (PDF)
- Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition (PDF)
- An Introduction to Biological and Artificial Neural Networks for Pattern Recognition — SPIE Tutorial Texts in Optical Engineering, Vol. TT04 (PDF)


Topics: Neocognitron (80%), Time delay neural network (74%), Artificial neural network (60%)

3,200 Citations


Open access Book
Yoh-Han Pao — Institutions (1)
01 Jan 1989
Abstract: No abstract is available for this book on adaptive pattern recognition and neural networks.


Topics: Time delay neural network (68%), Neocognitron (66%), Feature (machine learning) (64%)

2,153 Citations


Performance Metrics
No. of papers in the topic in previous years
Year  Papers
2021  3
2020  5
2019  12
2018  5
2017  93
2016  95

Top Attributes


Topic's top 5 most impactful authors

Kunihiko Fukushima

20 papers, 992 citations

Kunihiko Fukushima

17 papers, 244 citations

Hayaru Shouno

6 papers, 42 citations

Naoyuki Tsuruta

6 papers, 27 citations

Yann LeCun

6 papers, 38.3K citations

Network Information
Related Topics (5)
Artificial neural network

207K papers, 4.5M citations

82% related
Active learning (machine learning)

13.1K papers, 566.6K citations

80% related
Recurrent neural network

29.2K papers, 890K citations

80% related
Feature (machine learning)

33.9K papers, 798.7K citations

80% related
Time delay neural network

20.8K papers, 503.2K citations

80% related