scispace - formally typeset
Topic

Connectionism

About: Connectionism is a research topic. Over its lifetime, 2,825 publications have been published within this topic, receiving 123,897 citations. The topic is also known as: connexionism.


Papers
Journal ArticleDOI
TL;DR: Differences between Connectionist proposals for cognitive architecture and the sorts of models traditionally assumed in cognitive science are explored, and the possibility is considered that Connectionism may provide an account of the neural structures in which Classical cognitive architecture is implemented.

3,454 citations

Journal ArticleDOI
TL;DR: In this paper, the problem of grounding symbolic representations in nonsymbolic representations of two kinds, "iconic representations" and "categorical representations", is addressed.

3,330 citations

Book ChapterDOI
TL;DR: In this chapter, the authors discuss catastrophic interference in connectionist networks: when networks are trained sequentially, new learning may interfere catastrophically with old learning. Their analysis of the causes of interference implies that at least some interference will occur whenever new learning can alter weights involved in representing old learning.
Abstract: Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers around two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training, in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion. This chapter discusses catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization, in which a network trained on a set of items responds correctly to other untrained items within the same domain. New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks.

3,119 citations
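The interference mechanism the abstract above describes can be illustrated with a toy example (an assumption-laden sketch, not from the chapter): a single linear unit is trained by gradient descent on one association, then sequentially on a second association that shares a connection weight with the first, after which performance on the first association degrades.

```python
# Toy illustration of catastrophic interference in sequential training.
# All task definitions and hyperparameters here are assumptions for
# demonstration; they do not come from the chapter's simulations.
import numpy as np

# Task A: input [1, 0] -> 1.  Task B: input [1, 1] -> 0 (shares w[0] with A).
x_a, y_a = np.array([1.0, 0.0]), 1.0
x_b, y_b = np.array([1.0, 1.0]), 0.0

def train(w, x, y, lr=0.1, steps=200):
    """Adjust weights in small increments, as in the incremental
    learning algorithms the chapter describes."""
    for _ in range(steps):
        err = w @ x - y
        w = w - lr * err * x   # gradient of the squared error
    return w

w = train(np.zeros(2), x_a, y_a)
loss_a_before = (w @ x_a - y_a) ** 2   # near zero: task A is learned

w = train(w, x_b, y_b)                 # sequential training on task B
loss_a_after = (w @ x_a - y_a) ** 2    # task A performance degrades
```

Because training on task B alters a weight that task A relies on, the old association is partially overwritten even though task B itself is learned perfectly, which is the core of the interference argument.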

Posted Content
TL;DR: It is shown that it is possible to overcome this limitation of connectionist models and train networks that can maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
Abstract: The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially.

3,026 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that it is possible to train networks that can maintain expertise on tasks that they have not experienced for a long time by selectively slowing down learning on the weights important for those tasks.
Abstract: The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.

2,917 citations
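The "selectively slowing down learning on important weights" idea described in the two abstracts above (the method known as elastic weight consolidation) can be sketched as a quadratic penalty that anchors each weight to its old-task value in proportion to an importance estimate. In the actual method the importances come from Fisher-information estimates; here they are assumed as given numbers, and the task setup is a toy example, not the paper's experiments.

```python
# Hedged sketch of learning a new task while slowing changes to weights
# important for an old task. The importance values, tasks, and
# hyperparameters are assumptions for illustration only.
import numpy as np

# Old task used w[0] heavily; its learned solution was w_old.
w_old = np.array([1.0, 0.0])
importance = np.array([50.0, 0.0])   # assumed per-weight importance

# New task: input [1, 1] should map to 0.
x_new, y_new = np.array([1.0, 1.0]), 0.0

def train_with_penalty(w, x, y, w_old, importance, lam=1.0, lr=0.01, steps=2000):
    for _ in range(steps):
        grad_task = (w @ x - y) * x                    # squared-error gradient
        grad_anchor = lam * importance * (w - w_old)   # slow important weights
        w = w - lr * (grad_task + grad_anchor)
    return w

w = train_with_penalty(w_old.copy(), x_new, y_new, w_old, importance)

# The new task is solved (w @ x_new near 0) while the important weight
# w[0] stays near its old value, so the old task's output is preserved.
```

Without the anchor term, gradient descent on the new task would move both weights freely and overwrite the old solution; with it, the update to w[0] is heavily damped, so the network absorbs the new task using the unimportant weight instead.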


Network Information
Related Topics (5)
Unsupervised learning: 22.7K papers, 1M citations (84% related)
Recall: 23.6K papers, 989.7K citations (83% related)
Inference: 36.8K papers, 1.3M citations (82% related)
Reinforcement learning: 46K papers, 1M citations (82% related)
Cognition: 99.9K papers, 4.3M citations (81% related)
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  79
2022  161
2021  40
2020  47
2019  52
2018  50