Author

Andreas Knoblauch

Bio: Andreas Knoblauch is an academic researcher from Honda. The author has contributed to research in topics such as content-addressable memory and Hebbian theory. The author has an h-index of 21, has co-authored 60 publications, and has received 1,207 citations. Previous affiliations of Andreas Knoblauch include Bielefeld University and the University of Ulm.


Papers
Journal ArticleDOI
TL;DR: This work introduces fair information-theoretic capacity measures for associative memory that provide a theoretical benchmark, and analyzes operating regimes of the Willshaw model in which structural plasticity can compress the network structure and push performance to that benchmark.
Abstract: Neural associative networks with plastic synapses have been proposed as computational models of brain functions and also for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark. In neural networks, two types of manipulating synapses can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase the network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically, we analyze operating regimes in the Willshaw model in which structural plasticity can compress the network structure and push performance to the theoretical benchmark. The amount C of information stored in each synapse can scale with the logarithm of the network size rather than being constant, as in classical Willshaw and Hopfield nets (C ≤ ln 2 ≈ 0.69). Further, the review contains novel technical material: a capacity analysis of the Willshaw model that rigorously controls for the level of retrieval quality, an analysis for memories with a nonconstant number of active units (where C ≤ 1/(e ln 2) ≈ 0.53), and an analysis of the computational complexity of associative memories with and without network compression.
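As a concrete illustration of the Willshaw model discussed in this abstract, here is a minimal sketch in Python with NumPy. The function names and the tiny example patterns are illustrative assumptions, not the paper's implementation; only the binary clipped-Hebbian storage rule and the one-step threshold retrieval are from the classical Willshaw scheme.

```python
import numpy as np

def store(pairs, n_in, n_out):
    """Willshaw (binary Hebbian) storage: a synapse is switched on
    permanently once its pre- and postsynaptic units fire together."""
    W = np.zeros((n_in, n_out), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(x, y).astype(np.uint8)
    return W

def retrieve(W, x):
    """One-step retrieval: threshold the dendritic sums at the number
    of active input units (the Willshaw threshold)."""
    s = x.astype(int) @ W
    return (s >= x.sum()).astype(np.uint8)

# Two sparse pattern pairs over 4 input and 2 output units.
x1 = np.array([1, 1, 0, 0], dtype=np.uint8); y1 = np.array([1, 0], dtype=np.uint8)
x2 = np.array([0, 0, 1, 1], dtype=np.uint8); y2 = np.array([0, 1], dtype=np.uint8)

W = store([(x1, y1), (x2, y2)], 4, 2)
assert (retrieve(W, x1) == y1).all()
assert (retrieve(W, x2) == y2).all()
```

Sparse coding keeps the fraction of switched-on synapses low; in the regime the paper analyzes, it is the pruning of the remaining silent (zero) synapses that lets structural plasticity compress the network.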

181 citations

Journal ArticleDOI
TL;DR: This paper summarizes the present state of cell assembly theory, realized in a network of associative memories, and of the anatomical evidence for its location in the cerebral cortex.
Abstract: Donald Hebb's concept of cell assemblies is a physiology-based idea for a distributed neural representation of behaviorally relevant objects, concepts, or constellations. In the late 70s Valentino Braitenberg started the endeavor to spell out the hypothesis that the cerebral cortex is the structure where cell assemblies are formed, maintained and used, in terms of neuroanatomy (which was his main concern) and also neurophysiology. This endeavor has been carried on over the last 30 years corroborating most of his findings and interpretations. This paper summarizes the present state of cell assembly theory, realized in a network of associative memories, and of the anatomical evidence for its location in the cerebral cortex.

136 citations

Journal ArticleDOI
TL;DR: An alternative model is proposed that reproduces experimental findings of synchronized and desynchronized fast oscillations more closely; from the biological associative memory model a technical version is derived that accomplishes fast parallel pattern separation in O(log2 n) steps for n neurons and sparse coding.

69 citations

Journal ArticleDOI
25 Feb 2015-PLOS ONE
TL;DR: Combined, these experiments offer evidence that a functionally organized memory structure leads to a reaction time and a perceptual advantage in tactical decision-making in soccer.
Abstract: Two core elements for the coordination of different actions in sport are tactical information and knowledge about tactical situations. The current study describes two experiments to learn about the memory structure and the cognitive processing of tactical information. Experiment 1 investigated the storage and structuring of team-specific tactics in humans’ long-term memory with regard to different expertise levels. Experiment 2 investigated tactical decision-making skills and the corresponding gaze behavior, presenting participants with identical match situations in a reaction time task. The results showed that more experienced soccer players, in contrast to less experienced soccer players, possess a functionally organized cognitive representation of team-specific tactics in soccer. Moreover, the more experienced soccer players reacted faster in tactical decisions because they needed fewer fixations of similar duration compared to less experienced soccer players. Combined, these experiments offer evidence that a functionally organized memory structure leads to a reaction time and a perceptual advantage in tactical decision-making in soccer. The discussion emphasizes theoretical and applied implications of the current results of the study.

56 citations

Journal ArticleDOI
TL;DR: It is demonstrated that networks incorporating relevant features of neuroanatomical connectivity and neuronal function give rise to discrete neuronal circuits that store combinatorial information and exhibit a function similar to elementary rules of grammar.

53 citations


Cited by
01 Jan 1978
TL;DR: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.), and is a "must-have" reference for every serious programmer's digital library.
Abstract: This ebook is the first authorized digital version of Kernighan and Ritchie's 1988 classic, The C Programming Language (2nd Ed.). One of the best-selling programming books published in the last fifty years, "K&R" has been called everything from the "bible" to "a landmark in computer science" and it has influenced generations of programmers. Available now for all leading ebook platforms, this concise and beautifully written text is a "must-have" reference for every serious programmer's digital library. As modestly described by the authors in the Preface to the First Edition, this "is not an introductory programming manual; it assumes some familiarity with basic programming concepts like variables, assignment statements, loops, and functions. Nonetheless, a novice programmer should be able to read along and pick up the language, although access to a more knowledgeable colleague will help."

2,120 citations

Journal ArticleDOI
TL;DR: This review critically summarizes the main challenges linked to lifelong learning for artificial learning systems and compares existing neural network approaches that alleviate, to different extents, catastrophic forgetting.

2,095 citations

Journal ArticleDOI
04 Nov 2010-Neuron
TL;DR: It is hypothesized that cell assemblies are best understood in light of their output product, as detected by "reader-actuator" mechanisms, and it is suggested that the hierarchical organization of cell assemblies may be regarded as a neural syntax.

1,105 citations

Proceedings ArticleDOI
18 Jul 2010
TL;DR: This paper proposes the use of the neural networks with ensembles for pattern recognition problems that demands many thousands of classes to be recognized and gives a short description of this type of neural network and its storage capacity.
Abstract: Pattern recognition systems usually have a relatively small number of patterns to be recognized. As a rule, the number of handwritten symbols, phonemes, or human faces is of the order of some dozens. But sometimes the pattern recognition task demands many more classes. For example, a continuous speech recognition system can be created on the basis of syllables, and a handwriting recognition system will be more efficient if the recognized units are not individual letters but triplets of letters. In these cases it is necessary to handle several thousands of classes. In this paper we consider recognition problems that demand many thousands of classes to be recognized. For such problems we propose the use of neural networks with ensembles. We give a short description of this type of neural network and calculate its storage capacity.

877 citations