
A Cognitive Computation Fallacy? Cognition, Computations
and Panpsychism
John Mark Bishop
Published online: 30 May 2009
© Springer Science+Business Media, LLC 2009
Abstract The journal of Cognitive Computation is defined in part by the notion that biologically inspired computational accounts are at the heart of cognitive processes in both natural and artificial systems. Many studies of various important aspects of cognition (memory, observational learning, decision making, reward prediction learning, attention control, etc.) have been made by modelling the various experimental results using ever-more sophisticated computer programs. In this manner progressive inroads have been made into gaining a better understanding of the many components of cognition. Concomitantly in both science and science fiction the hope is periodically re-ignited that a man-made system can be engineered to be fully cognitive and conscious purely in virtue of its execution of an appropriate computer program. However, whilst the usefulness of the computational metaphor in many areas of psychology and neuroscience is clear, it has not gone unchallenged and in this article I will review a group of philosophical arguments that suggest either such unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true. I conclude by highlighting an alternative metaphor for cognitive processes based on communication and interaction.
Keywords Computationalism · Machine consciousness · Panpsychism
Introduction
Over the hundred years since the publication of James’ Psychology [1], neuroscientists have attempted to define the fundamental features of the brain and its information-processing capabilities in terms of (i) mean firing rates at points in the brain cortex (neurons) and (ii) computations; today the prevailing view in neuroscience is that neurons can be considered fundamentally computational devices. In operation, such computationally defined neurons effectively sum their inputs and compute a complex non-linear function of this value, with output information encoded in the mean firing rate of neurons, which in turn exhibit narrow functional specialisation. After Hubel and Wiesel [2] this view of the neuron as a specialised feature detector has come to be treated as established doctrine. Furthermore, it has been shown that richly interconnected networks of such neurons can ‘learn’ by suitably adjusting the inter-neuron connection weights according to complex computationally defined processes. The literature contains numerous examples of such learning rules and architectures, inspired to varying degrees by biological plausibility; early models include [3–6]. From this followed the functional specialisation paradigm, mapping different areas of the brain to specific cognitive functions.
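To make this computational reading of the neuron concrete, the following is a minimal illustrative sketch (the weights, inputs, logistic nonlinearity and Hebbian-style update are my own choices for illustration, not details taken from this article or from the cited models):

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs passed through a logistic nonlinearity.
    The result (between 0 and 1) is read as a normalised mean firing rate."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

def hebbian_update(weights, inputs, output, rate=0.1):
    """One step of a simple Hebbian-style learning rule: each connection is
    strengthened in proportion to the product of pre- and post-synaptic activity."""
    return [w + rate * x * output for w, x in zip(weights, inputs)]

# A three-input unit acting as a crude 'feature detector'.
w = [0.5, -0.3, 0.8]
x = [1.0, 0.2, 0.7]
y = neuron_output(x, w)
w = hebbian_update(w, x, y)
print(f"firing rate ~ {y:.3f}; updated weights: {w}")
```

Richly interconnected networks of units of this general kind, trained by rules of this general kind, are what the learning architectures mentioned above build upon.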
In this article I suggest that this attraction to viewing the neuron merely as a computational device fundamentally stems from (i) the implicit adoption of a computational theory of mind (CTM) [7]; (ii) a concomitant functionalism with respect to the instantiation of cognitive processes [8, 9] and (iii) an implicit non-reductive functionalism with respect to consciousness [10]. Conversely, I suggest that a computational description of brain operations has difficulty in providing a physicalist account of several key features of human cognitive systems (in particular phenomenal consciousness and ‘understanding’) and hence that computations are neither necessary nor sufficient for cognition; that any computational description of brain processes is thus best understood merely in a metaphorical sense. I conclude by answering the question ‘What is cognition if not computation?’ by tentatively highlighting an alternative metaphor, defined by physically grounded processes of communication and interaction, which is less vulnerable to the three classical criticisms of computationalism described herein.¹

J. M. Bishop (✉)
Department of Computing, Goldsmiths, University of London, London, UK
e-mail: m.bishop@gold.ac.uk

Cogn Comput (2009) 1:221–233
DOI 10.1007/s12559-009-9019-6
The CTM
The CTM occupies one part of the spectrum of representational theories of mind (RTM). Although currently undergoing challenges from dynamic systems, embodied, enactivist and constructivist accounts of cognition (e.g. [13–19]), the RTM remains ubiquitous in contemporary cognitive science and experimental psychology. Contrary to naive or direct realism, indirect realism (or representationalism) postulates the actual existence of mental intermediaries—representations—between the observing subject and the world. The earliest forms of RTM can be traced to Descartes [20], who held that all thought was representational² and that it is the very nature of mind (res cogitans) to represent the world (res extensa).
Harnish [7] observes that the RTM entails:

– Cognitive states are relations to mental representations which have content. A cognitive state is a state [of mind] denoting knowledge; understanding; beliefs, etc.; this definition subsequently broadened to include knowledge of raw sensations, colours, pains, etc.
– Cognitive processes—changes in cognitive states—are mental operations on these representations.
The Emergence of Functionalism
The CTM came to the fore after the development of the stored-program digital computer in the mid-20th century when, through machine-state functionalism, Putnam [8, 9] first embedded the RTM in a computational framework. At the time Putnam famously held that:

– Turing machines (TMs) are multiply realisable on different hardware.
– Psychological states are multiply realisable in different organisms.
– Psychological states are functionally specified.
Putnam’s 1967 conclusion is that the best explanation of the joint multiple realisability of TMs and psychological states³ is that TMs specify the relevant functional states and so specify the psychological states of the organism; hence by this observation Putnam makes the move from ‘the intelligence of computation’ to ‘the computational theory of intelligence’ [7]. Today variations on CTM structure the most commonly held philosophical scaffolds for cognitive science and psychology (e.g. providing the implicit foundations of evolutionary approaches to psychology and linguistics). Formally stated, the CTM entails:

– Cognitive states are computational relations to computational representations which have content. A cognitive state is a state [of mind] denoting knowledge; understanding; beliefs, etc.
– Cognitive processes—changes in cognitive states—are computational operations on these computational representations.
The Problem of Consciousness
The term ‘consciousness’ can imply many things to many
different people. In the context of this article I refer spe-
cifically to that aspect of consciousness Ned Block terms
‘phenomenal consciousness’ [21], by which I refer to the
first person, subjective phenomenal states—sensory tickles,
pains, visual experiences and so on.
Cartesian theories of cognition can be broken down into what Chalmers [10] calls the ‘easy’ problem of perception—the classification and identification of sense stimuli—and a corresponding ‘hard’ problem, which is the realization of the associated phenomenal state. The difference between the easy and the hard problems, and the apparent lack of any link between theories of the former and an account of the latter, has been termed the ‘explanatory gap’.

¹ In two earlier articles (with Nasuto et al. [11, 12]) the author explored theoretical limitations of the computational metaphor from positions grounded in psychology and neuroscience; this article—outlining a third perspective—reviews three philosophical critiques of the computational metaphor with respect to ‘hard’ questions of cognition related to consciousness and understanding. Its negative conclusion is that computation is neither necessary nor sufficient for cognition; its positive conclusion suggests that the adoption of a new metaphor may be helpful in addressing hard conceptual questions related to consciousness and understanding. Drawing on the conclusions of the two earlier articles, the suggested new metaphor is one grounding cognition in processes of communication and interaction rather than computation. An analogy is with the application of Newtonian physics and Quantum physics—both useful descriptions of the world, but descriptions that are most appropriate in addressing different types of questions.

² Controversy remains surrounding Descartes’ account of the representational content of non-intellectual thought such as pain.

³ Although Putnam talks about pain not cognition, it is clear that his argument is intended to be general.
The idea that the appropriately programmed computer really is a mind was eloquently suggested by Chalmers (ibid). Central to Chalmers’ non-reductive functionalist theory of mind is the Principle of Organizational Invariance (POI). This asserts that, ‘given any system that has conscious experiences, then any system that has the same fine-grained functional organization will have qualitatively identical experiences’.
To illustrate the point Chalmers imagines a fine-grained
simulation of the operation of the human brain—a mas-
sively complex and detailed artificial neural network. If, at
a very fine-grained level, each group of simulated neurons
was functionally identical to its counterpart in the real
brain then, via Dancing Qualia and Fading Qualia argu-
ments, Chalmers (ibid) argues that the computational
neural network must have precisely the same qualitative
conscious experiences as the real human brain.
Current research into perception and neuro-physiology
certainly suggests that physically identical brains will
instantiate identical phenomenal states and, although as
Maudlin [22] observes this thesis is not analytic, something
like it underpins computational theories of mind. For if
computational functional structure supervenes on physical structure then physically identical brains must be computationally and functionally identical. Thus Maudlin formulates the Supervenience Thesis (ibid): ‘... two physical systems engaged in precisely the same physical activity through a time will support precisely the same modes of consciousness (if any) through that time’.
The Problem of Computation
It is a commonly held view that ‘there is a crucial barrier
between computer models of minds and real minds: the
barrier of consciousness’ and thus that ‘information-pro-
cessing’ and ‘phenomenal (conscious) experiences’ are
conceptually distinct [23]. But is consciousness a pre-
requisite for genuine cognition and the realisation of
mental states? Certainly Searle believes so, ‘... the study of
the mind is the study of consciousness, in much the same
sense that biology is the study of life’ [24] and this
observation leads him to postulate the Connection Princi-
ple whereby ‘... any mental state must be, at least in
principle, capable of being brought to conscious aware-
ness’ (ibid). Hence, if computational machines are not
capable of enjoying consciousness, they are incapable of
carrying genuine mental states and computation fails as an
adequate metaphor for cognition.
In the following sections I briefly review two well-
known arguments targeting computational accounts of
cognition from Penrose and Searle, which together suggest
computations are neither necessary nor sufficient for mind.
I subsequently outline a simple reductio ad absurdum
argument that suggests there may be equally serious
problems in granting phenomenal (conscious) experience
to systems purely in virtue of their execution of particular
programs; if correct, this argument suggests either strong
computational accounts of consciousness must fail or that
panpsychism is true.
Computations and Understanding: Gödelian Arguments Against Computationalism
Gödel’s first incompleteness theorem states that ‘... any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory F that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.’ The resulting true but unprovable statement G(g) is often referred to as ‘the Gödel sentence for the theory’ (albeit there are infinitely many other statements in the theory that share with the Gödel sentence the property of being true but not provable from the theory).
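Stated compactly (the notation below is my own gloss, not the article’s): for any consistent, effectively generated formal theory F that proves certain basic arithmetic truths, there is a sentence G(g) that is true in the standard model of arithmetic yet unprovable in F:

```latex
% G(g): the Gödel sentence for the theory F
% (\models, \nvdash and \mathbb require the amssymb package)
\[
  \mathbb{N} \models G(g)
  \quad\text{and}\quad
  F \nvdash G(g)
\]
```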
Arguments based on Gödel’s first incompleteness theorem—initially from Lucas [25, 26], first criticised by Benacerraf [27] and subsequently extended, developed and widely popularised by Penrose [28–31]—typically endeavour to show that for any such formal system F, humans can find the Gödel sentence G(g) whilst the computation/machine (being itself bound by F) cannot. In [29] Penrose develops a subtle reformulation of the vanilla argument that purports to show that ‘the human mathematician can “see” that the Gödel Sentence is true for consistent F even though the consistent F cannot prove G(g)’.
A detailed discussion of Penrose’s formulation of the Gödelian argument is outside the scope of this article (for a critical introduction see [32, 33] and for Penrose’s response see [31]); here it is simply important to note that although Gödelian-style arguments purporting to show ‘computations are not necessary for cognition’ have been extensively⁴ and vociferously critiqued in the literature (see [34] for a review), interest in them—both positive and negative—still regularly continues to surface (e.g. [35, 36]).

⁴ For example, Lucas maintains a web page at http://users.ox.ac.uk/~jrlucas/Godel/referenc.html listing more than 50 such criticisms.

The Chinese Room Argument
One of the most widely known critics of computational
theories of mind is John Searle. His best-known work on
machine understanding, first presented in the 1980 paper
‘Minds, Brains & Programs’ [37], has become known as
the Chinese Room Argument (CRA). The central claim of
the CRA is that computations alone are not sufficient to
give rise to cognitive states, and hence that computational
theories of mind cannot fully explain human cognition.
More formally Searle stated that the CRA was an attempt
to prove the truth of the premise:
Syntax is not sufficient for semantics.
Which, together with the following two axioms:
(i) Programs are formal (syntactical).
(ii) Minds have semantics (mental content).
... led Searle to conclude that:
Programs are not minds.
... and hence that computationalism—the idea that the
essence of thinking lies in computational processes and that
such processes thereby underlie and explain conscious
thinking—is false [38].
In the CRA Searle emphasises the distinction between
syntax and semantics to argue that while computers can act
in accordance with formal rules, they cannot be said to know
the meaning of the symbols they are manipulating, and
hence cannot be credited with genuinely understanding the
results of the execution of programs those symbols com-
pose. In short, Searle claims that while cognitive compu-
tations may simulate aspects of cognition, they can never
instantiate it.
The CRA describes a situation where a monoglot Searle
is locked in a room and presented with a large batch of
papers covered with Chinese writing that he does not
understand. Indeed, Searle does not even recognise the
writing as Chinese ideograms, as distinct from say Japa-
nese or simply meaningless patterns. A little later Searle is
given a second batch of Chinese symbols together with a
set of rules (in English) that describe an effective method
(algorithm) for correlating the second batch with the first
purely by their form or shape. Finally Searle is given a
third batch of Chinese symbols together with another set of
rules (in English) to enable him to correlate the third batch
with the first two, and these rules instruct him how to return
certain sets of shapes (Chinese symbols) in response to
certain symbols given in the third batch.
Unknown to Searle, the people outside the room call the
first batch of Chinese symbols the script; the second batch
the story; the third questions about the story and the
symbols he returns they call answers to the questions about
the story. The set of rules he is obeying they call the
program. To complicate matters further, the people
outside also give him stories in English and ask him
questions about them in English, to which he can reply in
English. After a while Searle gets so good at following the
instructions and the outsiders get so good at supplying the
rules which he has to follow, that the answers he gives to
the questions in Chinese symbols become indistinguishable from those a native Chinese speaker might give.
From the external point of view the answers to the two
sets of questions—one in English the other in Chinese—are
equally good; Searle-in-the-Chinese-room has passed the
Turing test. Yet in the Chinese case Searle behaves like a
computer and does not understand either the questions he is
given or the answers he returns, whereas in the English
case he does. To highlight the difference consider Searle is
passed a joke first in Chinese and then English. In the
former case Searle-in-the-room might correctly output
appropriate Chinese ideograms signifying ‘ha ha’ whilst
remaining phenomenologically unmoved, whilst in the
latter, if the joke is funny, he may laugh out loud and feel
the joke within.
The decades since its inception have witnessed many
reactions to the CRA from the computational, cognitive
science, philosophical and psychological communities,
with perhaps the most widely held being based on what has
become known as the ‘Systems Reply’. This concedes that,
although the person in the room does not understand Chi-
nese, the entire system (of the person, the room and its
contents) does.
Searle finds this response entirely unsatisfactory and
responds by allowing the person in the room to memorise
everything (the rules, the batches of paper, etc.) so that there
is nothing in the system not internalised within Searle. Now
in response to the questions in Chinese and English there are
two subsystems—the native English speaking Searle and the
internalised Searle-in-the-Chinese-room—but all the same
he [Searle] continues to understand nothing of Chinese, and
a fortiori neither does the system, because there is nothing in
the system that is not just a part of him.
But others are left equally unmoved by Searle’s
response; for example, in [39] Haugeland asks why we should unquestioningly accept Searle’s conclusion that ‘the internalised Chinese room system does not understand Chinese’, given that Searle’s responses to the questions in Chinese are all correct. Yet, despite this and other trenchant criticism, almost 30 years after its first publication there continues to be lively interest in the CRA (e.g. [40–47]). In a 2002 volume of analysis [48] comment
ranged from Selmer Bringsjord who observed the CRA to
be ‘arguably the 20th century’s greatest philosophical
polariser’ [49], to Rey who claims that in his definition
of Strong AI Searle ‘burdens the [Computational

Representational Theory of Thought (Strong AI)] project
with extraneous claims which any serious defender of it
should reject’ [50]. Nevertheless, although opinion on the
argument remains divided, most commentators now agree
that the CRA helped shift research in artificial intelligence
away from classical computationalism (which, pace Newell
and Simon [51], viewed intelligence fundamentally in
terms of symbol manipulation) first to a sub-symbolic
neural-connectionism and more recently, moving even
further away from symbols and representations, towards
embodied and enactive approaches to cognition. Clearly,
whatever the verdict on the soundness of Searle’s Chinese
room argument, the subsequent historical response offers
eloquent testament to his conclusion that ‘programs are not minds’.
Dancing with Pixies
The core argument I wish to present in this article targeting computational accounts of cognition—the Dancing with Pixies (DwP) reductio—derives from ideas originally outlined by Putnam [52], Maudlin [22], Searle [53] and subsequently criticised by Chalmers [10], Klein [54] and Chrisley [55, 56] amongst others⁵. In what follows, instead of seeking to justify Putnam’s claim that ‘every open system implements every finite state automaton’ (FSA) and hence that ‘psychological states of the brain cannot be functional states of a computer’, I will seek to establish the weaker result that, over a finite time window, every open physical system implements the trace of a FSA Q on fixed, specified input (I). That this result leads to panpsychism is clear as, equating FSA Q(I) to a specific computational system that is claimed to instantiate phenomenal states as it executes, and following Putnam’s procedure, identical computational (and ex hypothesi phenomenal) states can be found in every open physical system.
Formally DwP is a simple reductio ad absurdum argu-
ment that endeavours to demonstrate that:
IF the assumed claim is true: that an appropriately
programmed computer really does instantiate genuine
phenomenal states
THEN panpsychism holds
However, against the backdrop of our immense
scientific knowledge of the closed physical world,
and the corresponding widespread desire to explain
everything ultimately in physical terms, panpsy-
chism has come to seem an implausible view...
HENCE we should reject the assumed claim.
The route-map for this endeavour is as follows: in the
next section I introduce discrete state machines (DSMs)
and FSAs and show how, with input to them defined, their
behaviour can be described by a simple un-branching
sequence of state transitions. I subsequently review Put-
nam’s 1988 argument [52] that purports to show how every
open physical system implements every input-less FSA.
Then I apply Putnam’s construction to one execution trace
of any FSA with known input, such that if the FSA in-
stantiates genuine phenomenal states as it executes, then so
must any open physical system. Finally I apply the pro-
cedure to a robotic system that is claimed to instantiate
machine consciousness purely in virtue of its execution of
an appropriate program. The article is completed by a brief
discussion of some objections to the DwP reductio and
concludes by suggesting, at least with respect to ‘hard’
problems, that it may be necessary to develop an alterna-
tive metaphor for cognition to that of computation.
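The pivotal technical step named in this route-map, namely that once its input is fixed a FSA’s behaviour over a finite window collapses to a simple un-branching sequence of state transitions, can be shown in a small sketch (my own illustration; the transition table and input string below are invented for the example):

```python
# A FSA with input: the transition table maps (state, input symbol) -> next state.
transitions = {
    ("Q1", 0): "Q1", ("Q1", 1): "Q2",
    ("Q2", 0): "Q3", ("Q2", 1): "Q1",
    ("Q3", 0): "Q2", ("Q3", 1): "Q3",
}

def run(fsa, start, inputs):
    """Return the execution trace of the FSA on a fixed input sequence."""
    state, trace = start, [start]
    for symbol in inputs:
        state = fsa[(state, symbol)]
        trace.append(state)
    return trace

# With the input I fixed in advance, branching plays no role: the run is
# just one linear sequence of states, exactly like an input-less machine.
print(run(transitions, "Q1", [1, 0, 0, 1, 1]))
# ['Q1', 'Q2', 'Q3', 'Q2', 'Q1', 'Q2']
```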
Discrete State Machines
In his 1950 paper ‘Computing Machinery and Intelligence’ [57] Turing defined DSMs as ‘machines that move in sudden jumps or clicks from one quite definite state to another’ and explained that modern digital computers fall within this class. An example DSM from Turing is a simple machine that cycles through three computational states Q₁, Q₂, Q₃ at discrete clock clicks. Turing demonstrated that such a device, which continually jumps through a linear series of state transitions like clockwork, may be implemented by a simple discrete-position wheel that revolves through 120° intervals at each clock tick. Basic input can be added to such a machine by the addition of a simple brake mechanism, and basic output by the addition of a light that comes on when the machine is in, say, computational state Q₃ (see Fig. 1).
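A minimal sketch of Turing’s example machine as just described (my own encoding; only the three states, the brake input and the Q₃ lamp come from the text, the rest is illustrative):

```python
def dsm_step(state, brake=False):
    """One clock tick of Turing's three-state discrete state machine.

    state: one of "Q1", "Q2", "Q3" (the wheel's three discrete positions).
    brake: if True the wheel is held in place, otherwise it advances one position.
    Returns (next_state, light_on)."""
    successor = {"Q1": "Q2", "Q2": "Q3", "Q3": "Q1"}
    next_state = state if brake else successor[state]
    return next_state, next_state == "Q3"   # the lamp marks state Q3

# Run the machine for a few ticks, applying the brake on the third tick.
state = "Q1"
for tick, brake in enumerate([False, False, True, False, False]):
    state, light = dsm_step(state, brake)
    print(f"tick {tick}: state={state}, light={'on' if light else 'off'}")
```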
An input-less FSA is specified by a set of states Q and a set of state-transitions Q → Q′ for each current state Q specifying the next state Q′. Such a device is trivially implemented by Turing’s discrete-position-wheel machine and a function that maps each physical wheel position Wₙ to a logical computational state Qₙ as required. For example, considering the simple 3-state input-less FSA described in Table 1, by labelling the three discrete positions of the wheel W₁, W₂, W₃ we can map computational states of the FSA, Q₁, Q₂, Q₃, to the physical discrete positions of the wheel, W₁, W₂, W₃, such that, for example, (W₁ → Q₁, W₂ → Q₂, W₃ → Q₃).
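The mapping just described can be made concrete with a short sketch (my own illustration; since Table 1 is not reproduced here, the FSA is assumed to be the cyclic machine Q₁ → Q₂ → Q₃ → Q₁). Relabelling the wheel positions, as the following paragraph notes, yields an equally valid implementation:

```python
# Physical behaviour: the wheel simply advances W1 -> W2 -> W3 -> W1 each tick.
def wheel_positions(start="W1", ticks=6):
    order = ["W1", "W2", "W3"]
    i = order.index(start)
    for _ in range(ticks):
        yield order[i]
        i = (i + 1) % 3

# One observer-relative labelling of wheel positions as computational states.
mapping_a = {"W1": "Q1", "W2": "Q2", "W3": "Q3"}
# An equally serviceable alternative labelling.
mapping_b = {"W1": "Q2", "W2": "Q3", "W3": "Q1"}

trace_a = [mapping_a[w] for w in wheel_positions()]
trace_b = [mapping_b[w] for w in wheel_positions()]
print(trace_a)  # ['Q1', 'Q2', 'Q3', 'Q1', 'Q2', 'Q3']
print(trace_b)  # ['Q2', 'Q3', 'Q1', 'Q2', 'Q3', 'Q1']
# Both traces follow the same cyclic transition table; which wheel position
# 'counts as' which computational state is fixed only by the chosen mapping.
```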
This mapping is observer relative; the physical position W₁ of the wheel could equally map to computational states Q₂ or Q₃ and, with other states appropriately assigned, the machine’s state transition sequence (and hence its
⁵ For early discussion of these themes see the special issue ‘What is Computation?’, Minds and Machines 4(4), November 1994.