Neural Blackboard Architectures
of Combinatorial Structures in Cognition
Frank van der Velde
Unit of Cognitive Psychology
Leiden University
Wassenaarseweg 52, 2333 AK Leiden
The Netherlands
vdvelde@fsw.leidenuniv.nl

Abstract
Human cognition is unique in the way in which it relies on combinatorial (or
compositional) structures. Language provides ample evidence for the existence of
combinatorial structures, but they can also be found in visual cognition. To understand
the neural basis of human cognition, it is therefore essential to understand how
combinatorial structures can be instantiated in neural terms. In his recent book on the
foundations of language, Jackendoff formulated four fundamental problems for a neural
instantiation of combinatorial structures: the massiveness of the binding problem, the
problem of 2, the problem of variables and the transformation of combinatorial structures
from working memory to long-term memory. This paper aims to show that these
problems can be solved by means of neural ‘blackboard’ architectures. For this purpose, a
neural blackboard architecture for sentence structure is presented. In this architecture,
neural structures that encode for words are temporarily bound in a manner that preserves
the structure of the sentence. It is shown that the architecture solves the four problems
presented by Jackendoff. The ability of the architecture to instantiate sentence structures
is illustrated with examples of sentence complexity observed in human language
performance. Similarities exist between the architecture for sentence structure and
blackboard architectures for combinatorial structures in visual cognition, derived from
the structure of the visual cortex. These architectures are briefly discussed, together with
an example of a combinatorial structure in which the blackboard architectures for
language and vision are combined. In this way, the architecture for language is grounded
in perception.

Contents
1. Introduction
2. Four challenges for cognitive neuroscience
2.1. The massiveness of the binding problem
2.2. The problem of 2
2.2.1. The problem of 2 and the symbol grounding problem
2.3. The problem of variables
2.4. Binding in working memory versus long-term memory
2.5. Overview
3. Combinatorial structures with synchrony of activation
3.1. Nested structures with synchrony of activation
3.2. Productivity with synchrony of activation
4. Processing linguistic structures with recurrent neural networks
4.1. Combinatorial productivity with RNNs
4.2. RNNs and the massiveness of the binding problem
5. Blackboard architectures of combinatorial structures
6. A neural blackboard architecture of sentence structure
6.1. Gating and memory circuits
6.2. Overview of the architecture
6.2.1. Connection structure for binding in the architecture
6.3. Multiple instantiation and binding in the architecture
6.3.1. Answering binding questions
6.4. Extending the blackboard architecture
6.4.1. The modular nature of the blackboard architecture
6.5. Constituent binding in long-term memory
6.5.1. One-trial learning
6.5.2. Explicit encoding of sentence structure with synaptic modification
6.6. Variable binding
6.6.1. Neural structure versus spreading of activation
6.7. Structural dependencies in the blackboard architecture
6.7.1. Embedded clauses in the blackboard architecture
6.7.2. Multiple embedded clauses
6.7.3. Dynamics of binding in the blackboard architecture
6.7.4. Dynamics of binding and complexity
6.8. Further development of the architecture
7. Neural blackboard architectures of combinatorial structures in vision
7.1. Feature binding
7.2. A neural blackboard architecture of visual working memory
7.2.1. Feature binding in visual working memory
7.3. Feature binding in long-term memory
7.4. Integrating combinatorial structures in language and vision
8. Conclusion
Notes
References

1. Introduction
Human cognition is unique in the manner in which it processes and produces complex
combinatorial (or compositional) structures (e.g., Anderson 1983; Newell 1990; Pinker
1998). Therefore, to understand the neural basis of human cognition, it is essential to
understand how combinatorial structures can be instantiated in neural terms. However,
combinatorial structures present particular challenges to theories of neurocognition,
which have not been widely recognized in the cognitive neuroscience community
(Jackendoff 2002).
A prominent example of these challenges is given by the neural instantiation (in
theoretical terms) of linguistic structures. In his recent book on the foundations of
language, Jackendoff (2002; see also Jackendoff in press) analyzed the most important
theoretical problems that the combinatorial and rule-based nature of language presents to
theories of neurocognition. He summarized these problems under the heading of ‘four
challenges for cognitive neuroscience’ (pp. 58-67). As recognized by Jackendoff, these
problems arise not only with linguistic structures, but with combinatorial cognitive
structures in general.
This paper aims to show that neural ‘blackboard’ architectures can provide an
adequate theoretical basis for a neural instantiation of combinatorial cognitive structures.
In particular, I will discuss how the problems presented by Jackendoff (2002) can be
solved in terms of a neural blackboard architecture of sentence structure. I will also
discuss the similarities between the neural blackboard architecture of sentence structure
and neural blackboard architectures of combinatorial structures in visual cognition and
visual working memory (Van der Velde 1997; Van der Velde & de Kamps 2001; 2003a).
To begin with, I will first outline the problems described by Jackendoff (2002) in
more detail. This presentation is followed by a discussion of the most important solutions
that have been offered thus far to meet some of these challenges. These solutions are
based either on synchrony of activation or on recurrent neural networks.¹
2. Four challenges for cognitive neuroscience
The four challenges for cognitive neuroscience presented by Jackendoff (2002) consist
of: the massiveness of the binding problem that occurs in language, the problem of
multiple instances (or the ‘problem of 2’), the problem of variables, and the relation
between binding in working memory and binding in long-term memory. I will discuss
these problems in turn.
2.1. The massiveness of the binding problem
In neuroscience, the binding problem concerns the way in which neural instantiations of
elements (constituents) can be related (bound) temporarily in a manner that preserves the
structural relations between the constituents. Examples of this problem can be found in
visual perception. Colors and shapes of objects are partly processed in different brain
areas, but we perceive objects as a unity of color and shape. Thus, in a visual scene with a
green apple and a red orange, the neurons that code for green have to be related
(temporarily) with the neurons that code for apple, so that the confusion with a red apple
(and a green orange) can be avoided.
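The failure mode described here can be made concrete with a small illustrative sketch (mine, not the paper's): if a visual scene were encoded only as the unordered set of active feature detectors, the pairing of color and shape would be lost entirely.

```python
# Illustrative sketch (not a mechanism from the paper): encoding a
# scene purely as the set of active feature detectors discards the
# binding between color and shape.

def active_features(objects):
    """Encode a scene as the unordered set of active feature detectors."""
    return {feature for obj in objects for feature in obj}

scene_a = active_features([("green", "apple"), ("red", "orange")])
scene_b = active_features([("red", "apple"), ("green", "orange")])

# Both scenes activate exactly the same detectors, so the two pairings
# cannot be distinguished -- the 'red apple' confusion in the text.
assert scene_a == scene_b == {"green", "red", "apple", "orange"}
```

The two scenes are indistinguishable under this encoding, which is why some form of temporary binding between the color and shape detectors is required.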
In the case of language, the problem is illustrated in figure 1. Assume that words like
cat, chases and mouse each activate specific neural structures, such as the ‘word

assemblies’ discussed by Pulvermüller (1999). The problem is how the neural structures
or word assemblies for cat and mouse can be bound to the neural structure or word
assembly of the verb chases, in line with the thematic roles (or argument structure) of the
verb. That is, how cat and mouse can be bound to the role of agent and theme of chases
in the sentence The cat chases the mouse, and to the role of theme and agent of chases in
the sentence The mouse chases the cat.
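The binding demanded here can be stated abstractly as role-filler pairs, sketched below in a toy form (my illustration, not the paper's neural mechanism): the same word assemblies are active in both sentences, and only the temporary bindings to the verb's thematic roles differ.

```python
# Toy illustration (not the paper's mechanism): the binding problem for
# language, stated as role-filler pairs for a 'noun verb noun' string.

def bind_arguments(sentence):
    """Assign the words of a 'noun verb noun' string to thematic roles."""
    agent, verb, theme = sentence.split()
    return {"verb": verb, "agent": agent, "theme": theme}

s1 = bind_arguments("cat chases mouse")
s2 = bind_arguments("mouse chases cat")

# The same three word assemblies are active in both sentences; only the
# (temporary) role bindings differ, and it is exactly these bindings
# that a neural encoding of sentence structure must preserve.
assert set(s1.values()) == set(s2.values())
assert s1["agent"] == "cat" and s2["agent"] == "mouse"
```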
Figure 1. (a). Illustration of the neural structures (‘neural word assemblies’) activated by the
words cat, chases and mouse. Bottom: An attempt to encode sentence structures with specialized
‘sentence’ neurons. In (b), a ‘sentence’ neuron has the assemblies for the words cat, chases and
mouse in its ‘receptive field’ (as indicated with the cone). The neuron is activated by a specialized
neural circuit when the assemblies in its receptive field are active in the order cat chases mouse.
In (c), a similar ‘sentence’ neuron for the sentence mouse chases cat.
A potential solution for this problem is illustrated in figure 1. It consists of
specialized neurons (or populations of neurons) that are activated when the strings cat
chases mouse (figure 1b) or mouse chases cat (figure 1c) are heard or seen. Each neuron
has the word assemblies for cat, mouse and chases in its ‘receptive field’ (illustrated with
the cones in figures 1b and 1c). Specialized neural circuits could activate one neuron in
the case of cat chases mouse and another neuron in the case of mouse chases cat, by
using the difference in temporal word order in both strings. Circuits of this kind can be
found in the case of motion detection in visual perception (e.g., Hubel 1995). For
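The 'sentence neuron' scheme of figures 1b and 1c can be sketched as follows (a simplification of mine, not the paper's model): each detector responds only when its word assemblies become active in one fixed temporal order, so a separate detector is needed for each distinct sentence.

```python
# Simplified sketch (mine) of the 'sentence neuron' scheme in figures
# 1b and 1c: a dedicated detector per sentence, triggered by the
# temporal order in which the word assemblies become active.

class SentenceNeuron:
    """Fires only when its word assemblies activate in a fixed order."""

    def __init__(self, word_order):
        self.word_order = tuple(word_order)

    def respond(self, activations):
        # 'activations' is the temporal sequence of active assemblies.
        return tuple(activations) == self.word_order

# One dedicated detector per sentence, as in figures 1b and 1c.
n_cat_chases_mouse = SentenceNeuron(["cat", "chases", "mouse"])
n_mouse_chases_cat = SentenceNeuron(["mouse", "chases", "cat"])

stream = ["cat", "chases", "mouse"]
assert n_cat_chases_mouse.respond(stream)
assert not n_mouse_chases_cat.respond(stream)
```

Note that the selectivity here comes entirely from the temporal word order, mirroring the order-sensitive circuits mentioned in the text, and that each sentence requires its own dedicated detector.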
