Open Access Book

How to Build a Brain: A Neural Architecture for Biological Cognition

TLDR
This book presents the Semantic Pointer Architecture (SPA), a framework for building large-scale brain models using the Neural Engineering Framework and the Nengo simulation environment. It works through biological accounts of semantics, syntax, control, memory, and learning, and then evaluates the resulting architecture against competing theories of cognition.
Abstract
Contents

1 The science of cognition
  1.1 The last 50 years
  1.2 How we got here
  1.3 Where we are
  1.4 Questions and answers
  1.5 Nengo: An introduction

Part I: How to build a brain

2 An introduction to brain building
  2.1 Brain parts
  2.2 A framework for building a brain
    2.2.1 Representation
    2.2.2 Transformation
    2.2.3 Dynamics
    2.2.4 The three principles
  2.3 Levels
  2.4 Nengo: Neural representation

3 Biological cognition - Semantics
  3.1 The semantic pointer hypothesis
  3.2 What is a semantic pointer?
  3.3 Semantics: An overview
  3.4 Shallow semantics
  3.5 Deep semantics for perception
  3.6 Deep semantics for action
  3.7 The semantics of perception and action
  3.8 Nengo: Neural computations

4 Biological cognition - Syntax
  4.1 Structured representations
  4.2 Binding without neurons
  4.3 Binding with neurons
  4.4 Manipulating structured representations
  4.5 Learning structural manipulations
  4.6 Clean-up memory and scaling
  4.7 Example: Fluid intelligence
  4.8 Deep semantics for cognition
  4.9 Nengo: Structured representations in neurons

5 Biological cognition - Control
  5.1 The flow of information
  5.2 The basal ganglia
  5.3 Basal ganglia, cortex, and thalamus
  5.4 Example: Fixed sequences of actions
  5.5 Attention and the routing of information
  5.6 Example: Flexible sequences of actions
  5.7 Timing and control
  5.8 Example: The Tower of Hanoi
  5.9 Nengo: Question answering

6 Biological cognition - Memory and learning
  6.1 Extending cognition through time
  6.2 Working memory
  6.3 Example: Serial list memory
  6.4 Biological learning
  6.5 Example: Learning new actions
  6.6 Example: Learning new syntactic manipulations
  6.7 Nengo: Learning

7 The Semantic Pointer Architecture (SPA)
  7.1 A summary of the SPA
  7.2 A SPA unified network
  7.3 Tasks
    7.3.1 Recognition
    7.3.2 Copy drawing
    7.3.3 Reinforcement learning
    7.3.4 Serial working memory
    7.3.5 Counting
    7.3.6 Question answering
    7.3.7 Rapid variable creation
    7.3.8 Fluid reasoning
    7.3.9 Discussion
  7.4 A unified view: Symbols and probabilities
  7.5 Nengo: Advanced modeling methods

Part II: Is that how you build a brain?

8 Evaluating cognitive theories
  8.1 Introduction
  8.2 Core cognitive criteria (CCC)
    8.2.1 Representational structure
      8.2.1.1 Systematicity
      8.2.1.2 Compositionality
      8.2.1.3 Productivity
      8.2.1.4 The massive binding problem
    8.2.2 Performance concerns
      8.2.2.1 Syntactic generalization
      8.2.2.2 Robustness
      8.2.2.3 Adaptability
      8.2.2.4 Memory
      8.2.2.5 Scalability
    8.2.3 Scientific merit
      8.2.3.1 Triangulation (contact with more sources of data)
      8.2.3.2 Compactness
  8.3 Conclusion
  8.4 Nengo Bonus: How to build a brain - a practical guide

9 Theories of cognition
  9.1 The state of the art
    9.1.1 ACT-R
    9.1.2 Synchrony-based approaches
    9.1.3 Neural blackboard architecture (NBA)
    9.1.4 The integrated connectionist/symbolic architecture (ICS)
    9.1.5 Leabra
    9.1.6 Dynamic field theory (DFT)
  9.2 An evaluation
    9.2.1 Representational structure
    9.2.2 Performance concerns
    9.2.3 Scientific merit
    9.2.4 Summary
  9.3 The same...
  9.4 ...but different
  9.5 The SPA versus the SOA

10 Consequences and challenges
  10.1 Representation
  10.2 Concepts
  10.3 Inference
  10.4 Dynamics
  10.5 Challenges
  10.6 Conclusion

A Mathematical notation and overview
  A.1 Vectors
  A.2 Vector spaces
  A.3 The dot product
  A.4 Basis of a vector space
  A.5 Linear transformations on vectors
  A.6 Time derivatives for dynamics

B Mathematical derivations for the NEF
  B.1 Representation
    B.1.1 Encoding
    B.1.2 Decoding
  B.2 Transformation
  B.3 Dynamics

C Further details on deep semantic models
  C.1 The perceptual model
  C.2 The motor model

D Mathematical derivations for the SPA
  D.1 Binding and unbinding HRRs
  D.2 Learning high-level transformations
  D.3 Ordinal serial encoding model
  D.4 Spike-timing dependent plasticity
  D.5 Number of neurons for representing structure

E SPA model details
  E.1 Tower of Hanoi

Bibliography
Index
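One of the listed topics is concrete enough to preview here. Section 4.2 (Binding without neurons) and Appendix D.1 (Binding and unbinding HRRs) describe binding semantic pointers with circular convolution, following Plate's holographic reduced representations. Below is a minimal NumPy sketch of that operation, not the book's Nengo implementation; the helper names cconv and involution and the choice d = 512 are illustrative only.

```python
import numpy as np

def cconv(a, b):
    # Circular convolution, computed via FFT: the HRR binding operator.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    # Approximate inverse under circular convolution:
    # keep the first element, reverse the rest.
    return np.concatenate(([a[0]], a[1:][::-1]))

rng = np.random.default_rng(0)
d = 512                                       # pointer dimensionality
role = rng.standard_normal(d) / np.sqrt(d)    # stand-ins for semantic pointers
filler = rng.standard_normal(d) / np.sqrt(d)

bound = cconv(role, filler)                   # bind role to filler
recovered = cconv(bound, involution(role))    # unbind: a noisy filler estimate

similarity = recovered @ filler / (
    np.linalg.norm(recovered) * np.linalg.norm(filler)
)
print(f"similarity to original filler: {similarity:.2f}")
```

Unbinding recovers only an approximation of the filler, which is why the architecture pairs this operator with a clean-up memory (section 4.6) that maps noisy results back to known vectors.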


Citations
Journal ArticleDOI

Deep learning in neural networks: An overview

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.
Journal ArticleDOI

Whatever next? Predictive brains, situated agents, and the future of cognitive science

TL;DR: This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action.

Neural Turing Machines

TL;DR: The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.
Posted Content

Neural Turing Machines

TL;DR: Neural Turing Machines extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with through attentional processes; the combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end.
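The TL;DR above compresses the central mechanism: the controller reads from and writes to memory through differentiable attention. As a minimal sketch of the content-based addressing step, assuming plain NumPy and illustrative names (content_address, beta) rather than the paper's exact parameterization:

```python
import numpy as np

def content_address(memory, key, beta):
    # Cosine similarity between the query key and each memory row.
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    # Sharpened softmax turns similarities into attention weights;
    # larger beta focuses the weighting on the best match.
    w = np.exp(beta * sims)
    return w / w.sum()

rng = np.random.default_rng(1)
memory = rng.standard_normal((128, 20))            # 128 slots, 20-dim contents
key = memory[42] + 0.1 * rng.standard_normal(20)   # noisy query for slot 42

w = content_address(memory, key, beta=10.0)
read_vector = w @ memory                           # blended, weighted read
print(int(w.argmax()))                             # -> 42
```

Because the read is a softmax-weighted blend rather than a hard lookup, gradients flow through it, which is what lets the whole system be trained end-to-end with gradient descent.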
References
Journal ArticleDOI

Visual synchrony affects binding and segmentation in perception

TL;DR: It is shown that visual grouping is indeed facilitated when elements of one percept are presented at the same time as each other and are temporally separated from elements of another percept or from background elements.