
Showing papers by "Andreas Knoblauch published in 2005"


Journal ArticleDOI
TL;DR: This work concisely reviews and unifies the analysis of different variants of neural associative networks consisting of binary neurons and synapses (Willshaw model) and suggests possible solutions employing spiking neurons, compression of the memory structures, and additional cell layers.
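The Willshaw model analysed in this paper stores binary pattern pairs by superimposing clipped Hebbian outer products and retrieves by thresholding the dendritic sums at the number of active input units. A minimal sketch of this storage/retrieval scheme (toy patterns, not the paper's code):

```python
import numpy as np

def store(patterns_x, patterns_y):
    """Willshaw (clipped Hebbian) storage: W[i,j] = 1 iff some stored
    pattern pair has x[j] = y[i] = 1."""
    W = np.zeros((patterns_y.shape[1], patterns_x.shape[1]), dtype=np.uint8)
    for x, y in zip(patterns_x, patterns_y):
        W |= np.outer(y, x).astype(np.uint8)
    return W

def retrieve(W, x):
    """Threshold the dendritic sums at the input activity."""
    return (W @ x >= x.sum()).astype(np.uint8)

# Two sparse pattern pairs in a 6-unit space.
X = np.array([[1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0]], dtype=np.uint8)
Y = np.array([[0, 0, 0, 0, 1, 1],
              [1, 0, 0, 0, 0, 1]], dtype=np.uint8)
W = store(X, Y)
print(retrieve(W, X[0]))  # recovers Y[0]
```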

40 citations


Journal ArticleDOI
TL;DR: This work examines the ISI statistics and discusses these views in a recently published model of interacting cortical areas and shows that temporally modulated inputs lead to ISI statistics which fit better to the neurophysiological data than alternative mechanisms.
Abstract: The response of a cortical neuron to a stimulus can show a very large variability when repeatedly stimulated by exactly the same stimulus. This has been quantified in terms of inter-spike-interval (ISI) statistics by several researchers (e.g., [Softky, W., Koch, C., 1993. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci. 13(1), 334-350.]). The common view is that this variability reflects noisy information processing based on redundant representation in large neuron populations. This view has been challenged by the idea that the apparent noise inherent in brain activity that is not strictly related or temporally coupled to the experiment could be functionally significant. In this work we examine the ISI statistics and discuss these views in a recently published model of interacting cortical areas [Knoblauch, A., Palm, G., 2002. Scene segmentation by spike synchronization in reciprocally connected visual areas. I. Local effects of cortical feedback. Biol. Cybernet. 87(3), 151-167.]. From the results of further single neuron simulations we can isolate temporally modulated synaptic input as a main contributor for high ISI variability in our model and possibly in real neurons. In contrast to alternative mechanisms, our model suggests a function of the temporal modulations for short-term binding and segmentation of figures from background. Moreover, we show that temporally modulated inputs lead to ISI statistics which fit better to the neurophysiological data than alternative mechanisms.
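The ISI statistic at issue is the coefficient of variation (CV = std/mean of the inter-spike intervals): roughly 1 for a homogeneous Poisson process, and above 1 when the input is temporally modulated. A toy illustration of this effect, with invented rate and burst parameters rather than the paper's simulation:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the inter-spike intervals."""
    isis = np.diff(np.sort(spike_times))
    return isis.std() / isis.mean()

rng = np.random.default_rng(0)

# Homogeneous Poisson spiking: exponential ISIs, CV close to 1.
homog = np.cumsum(rng.exponential(scale=0.02, size=5000))

# Temporally modulated firing: high-rate bursts separated by silent
# gaps, mimicking rhythmically modulated synaptic drive (toy values).
mod = []
t = 0.0
for _ in range(500):
    t += rng.exponential(0.1)                          # silent gap
    burst = t + np.cumsum(rng.exponential(0.005, size=10))
    mod.extend(burst)
    t = burst[-1]
mod = np.array(mod)

print(isi_cv(homog))  # close to 1
print(isi_cv(mod))    # clearly above 1
```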

35 citations


Book ChapterDOI
TL;DR: A neurobiologically plausible system on a robot that integrates visual attention, object recognition, language and action processing using a coherent cortex-like architecture based on neural associative memories is implemented.
Abstract: We have implemented a neurobiologically plausible system on a robot that integrates visual attention, object recognition, language and action processing using a coherent cortex-like architecture based on neural associative memories. This system enables the robot to respond to spoken commands like “bot show plum” or “bot put apple to yellow cup”. The scenario for this is a robot close to one or two tables carrying certain kinds of fruit and other simple objects. Tasks such as finding and pointing to certain fruits in a complex visual scene according to spoken or typed commands can be demonstrated. This involves parsing and understanding of simple sentences, relating the nouns to concrete objects sensed by the camera, and coordinating motor output with planning and sensory processing.


34 citations


Book ChapterDOI
TL;DR: A biologically more realistic model variant including a network of several SDs is used to demonstrate that associative Hebb-like synaptic plasticity leads to learning of word sequences, formation of neural representations of grammatical categories, and linking of sequence detectors into neuronal assemblies that may provide a biological basis of syntactic rule knowledge.
Abstract: A fundamental prerequisite for language is the ability to distinguish word sequences that are grammatically well-formed from ungrammatical word strings and to generalise rules of syntactic serial order to new strings of constituents. In this work, we extend a neural model of syntactic brain mechanisms that is based on syntactic sequence detectors (SDs). Elementary SDs are neural units that specifically respond to a sequence of constituent words AB, but not (or much less) to the reverse sequence BA. We discuss limitations of the original version of the SD model (Pulvermuller, Theory in Biosciences, 2003) and suggest optimal model variants taking advantage of optimised neuronal response functions, non-linear interaction between inputs, and leaky integration of neuronal input accumulating over time. A biologically more realistic model variant including a network of several SDs is used to demonstrate that associative Hebb-like synaptic plasticity leads to learning of word sequences, formation of neural representations of grammatical categories, and linking of sequence detectors into neuronal assemblies that may provide a biological basis of syntactic rule knowledge. We propose that these syntactic neuronal assemblies (SNAs) underlie generalisation of syntactic regularities from already encountered strings to new grammatical word sequences.
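The core sequence-detector mechanism can be sketched in a few lines: a leaky trace of input A multiplicatively gates input B, so the unit responds to the order AB but not to BA. The time constant and threshold below are illustrative, not taken from the model:

```python
import numpy as np

def sequence_detector(inputs_a, inputs_b, tau=3.0, theta=0.5):
    """Leaky integration of input A gates input B multiplicatively,
    so the unit fires for the sequence AB but not BA."""
    trace_a, responses = 0.0, []
    for a, b in zip(inputs_a, inputs_b):
        trace_a = trace_a * np.exp(-1.0 / tau) + a   # leaky memory of A
        responses.append(1 if b * trace_a >= theta else 0)
    return responses

# A at step 0, B at step 2 -> the detector fires at step 2.
print(sequence_detector([1, 0, 0, 0], [0, 0, 1, 0]))
# B at step 0, A at step 2 -> no response.
print(sequence_detector([0, 0, 1, 0], [1, 0, 0, 0]))
```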

17 citations


Book ChapterDOI
15 Jun 2005
TL;DR: This work describes the implementation of a cell assembly-based model of several visual, language, planning, and motor areas to enable a robot to understand and react to simple spoken commands.
Abstract: The brain representations of words and their referent actions and objects appear to be strongly coupled neuronal assemblies distributed over several cortical areas. In this work we describe the implementation of a cell assembly-based model of several visual, language, planning, and motor areas to enable a robot to understand and react to simple spoken commands. The essential idea is that different cortical areas represent different aspects of the same entity, and that the long-range cortico-cortical projections represent hetero-associative memories that translate between these aspects or representations.
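The translation idea (long-range projections acting as hetero-associative memories between area-specific codes) can be illustrated by chaining two clipped-Hebbian mappings, for example from a word area to an object area to a grasp area. The sparse codes below are invented for illustration:

```python
import numpy as np

def hetero_assoc(pairs, n_out, n_in):
    """Binary hetero-associative matrix (clipped Hebbian learning)."""
    W = np.zeros((n_out, n_in), dtype=np.uint8)
    for x, y in pairs:
        W |= np.outer(y, x).astype(np.uint8)
    return W

def recall(W, x):
    """Threshold the dendritic sums at the input activity."""
    return (W @ x >= x.sum()).astype(np.uint8)

# Toy sparse codes for "plum" in three hypothetical areas.
word_plum   = np.array([1, 1, 0, 0, 0, 0], dtype=np.uint8)
object_plum = np.array([0, 0, 1, 1, 0, 0], dtype=np.uint8)
grasp_plum  = np.array([0, 0, 0, 0, 1, 1], dtype=np.uint8)

W_word2obj  = hetero_assoc([(word_plum, object_plum)], 6, 6)
W_obj2grasp = hetero_assoc([(object_plum, grasp_plum)], 6, 6)

# Translating along the chain word -> object -> grasp:
obj = recall(W_word2obj, word_plum)
print(recall(W_obj2grasp, obj))  # the grasp pattern
```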

14 citations


Proceedings ArticleDOI
01 May 2005
TL;DR: A cell assembly-based model of several visual, language, planning, and motor areas to enable a robot to understand and react to simple spoken commands and to build a multimodal internal representation using several cortical areas or neuronal maps.
Abstract: When words referring to actions or visual scenes are presented to humans, distributed networks including areas of the motor and visual systems of the cortex become active [3]. The brain correlates of words and their referent actions and objects appear to be strongly coupled neuron ensembles in defined cortical areas. Being one of the most promising theoretical frameworks for modeling and understanding the brain, the theory of cell assemblies [1, 2] suggests that entities of the outside world (and also internal states) are coded in overlapping neuron assemblies rather than in single ("grandmother") cells, and that such cell assemblies are generated by Hebbian coincidence or correlation learning. One of our long-term goals is to build a multimodal internal representation using several cortical areas or neuronal maps, which will serve as a basis for the emergence of action semantics, and to compare simulations of these areas to physiological activation of real cortical areas. In this work we have developed a cell assembly-based model of several visual, language, planning, and motor areas to enable a robot to understand and react to simple spoken commands. The essential idea is that different cortical areas represent different aspects (and correspondingly different notions of similarity) of the same entity (e.g., visual, auditory language, semantical, syntactical, grasping-related aspects of an apple) and that the (mostly bidirectional) long-range cortico-cortical projections represent hetero-associative memories that translate between these aspects or representations. This system is used in a robotics context to enable a robot to respond to spoken commands like "bot show plum" or "bot put apple to yellow cup". The scenario for this is a robot close to one or two tables carrying certain kinds of fruit and/or other simple objects.
We can demonstrate part of this scenario where the task is to find certain fruits in a complex visual scene according to spoken or typed commands. This involves parsing and understanding of simple sentences, relating the nouns to concrete objects sensed by the camera, and coordinating motor output with planning and sensory processing.

13 citations


Book ChapterDOI
TL;DR: Using associative memories and sparse distributed representations the authors have developed a system that can learn to associate words with objects, properties like colors, and actions that is used in a robotics context to enable a robot to respond to spoken commands.
Abstract: Using associative memories and sparse distributed representations we have developed a system that can learn to associate words with objects, properties like colors, and actions. This system is used in a robotics context to enable a robot to respond to spoken commands like “bot show plum” or “bot put apple to yellow cup”. This involves parsing and understanding of simple sentences and “symbol grounding”, for example, relating the nouns to concrete objects sensed by the camera and recognized by a neural network from the visual input.

8 citations


Book ChapterDOI
TL;DR: This work presents simulation results from a model of two reciprocally coupled visual cortical areas, which relates to neurophysiological findings concerning attention and biased competition and demonstrates how these findings can be explained very naturally by assuming different kinds of bindings between neuron groups in different areas.
Abstract: Even when creating a biologically realistic model for an apparently very simple cognitive task such as seeking a certain object in the visual field, we are confronted with severe problems concerning the binding of distributed representations. In this work, we present simulation results from a model of two reciprocally coupled visual cortical areas. One area is a peripheral visual area where local object features are represented; the other is a more central visual area where whole objects are recognized. In our model, correct binding is achieved by the simultaneous switching of the activation state of corresponding neuron groups. We relate our simulations to neurophysiological findings concerning attention and biased competition and demonstrate how these findings can be explained very naturally by assuming different kinds of bindings between neuron groups in different areas as produced by our model. Although the binding is fluctuating in the absence of attention, it becomes static by the attentional bias.
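The biased-competition aspect can be caricatured in a few lines: two object representations compete in a noisy winner-take-all, so without attention the winning (bound) representation fluctuates, while an attentional bias makes it essentially static. The competition rule and parameters below are a toy abstraction, not the paper's spiking model:

```python
import numpy as np

def simulate(bias=0.0, steps=500, seed=3):
    """Two neuron groups compete; per step the one with the larger
    noisy evidence wins. `bias` is an attentional input for object 0."""
    rng = np.random.default_rng(seed)
    winners = []
    for _ in range(steps):
        evidence = rng.normal(size=2)
        evidence[0] += bias
        winners.append(int(np.argmax(evidence)))
    return np.array(winners)

no_attention = simulate(bias=0.0)
attention    = simulate(bias=3.0)

# Without attention the binding fluctuates between the two objects;
# with an attentional bias it stays almost always on object 0.
print((no_attention == 0).mean())   # around 0.5
print((attention == 0).mean())      # near 1.0
```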

4 citations


Journal ArticleDOI
TL;DR: Binary associative networks of the Willshaw type are analyzed and it is shown that the variance in the postsynaptic potentials grows with the square of the stimulation strength if the synapses have been generated by Hebbian learning of many overlapping patterns, but only linearly for independent random synapses.
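The effect analysed here can be reproduced numerically: build a Willshaw matrix from many overlapping sparse patterns, build an independent random matrix of the same connection density, stimulate both with the units of one stored pattern, and compare the spread of the postsynaptic potentials. The sizes below are toy values, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, M = 1000, 50, 150   # neurons, active units per pattern, #patterns

# Willshaw auto-associative matrix learned from M overlapping patterns.
patterns = [rng.choice(n, size=a, replace=False) for _ in range(M)]
W_hebb = np.zeros((n, n), dtype=np.uint8)
for p in patterns:
    W_hebb[np.ix_(p, p)] = 1   # clipped Hebbian: all pairs within a pattern

# Independent random matrix with matched connection density.
density = W_hebb.mean()
W_rand = (rng.random((n, n)) < density).astype(np.uint8)

# Stimulate with one stored pattern and compare the spread of the
# postsynaptic potentials (dendritic sums) across all neurons.
x = np.zeros(n, dtype=np.uint8)
x[patterns[0]] = 1
pot_hebb = W_hebb @ x
pot_rand = W_rand @ x

print(pot_hebb.var(), pot_rand.var())  # learned synapses: much larger spread
```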

1 citation