
Journal ArticleDOI

A Cognitive Computation Fallacy? Cognition, Computations and Panpsychism

30 May 2009 · Cognitive Computation (Springer-Verlag) · Vol. 1, Iss. 3, pp. 221–233

TL;DR: This article reviews a group of philosophical arguments suggesting either that unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or that panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true.

Abstract: The journal of Cognitive Computation is defined in part by the notion that biologically inspired computational accounts are at the heart of cognitive processes in both natural and artificial systems. Many studies of various important aspects of cognition (memory, observational learning, decision making, reward prediction learning, attention control, etc.) have been made by modelling the various experimental results using ever-more sophisticated computer programs. In this manner progressive inroads have been made into gaining a better understanding of the many components of cognition. Concomitantly in both science and science fiction the hope is periodically re-ignited that a man-made system can be engineered to be fully cognitive and conscious purely in virtue of its execution of an appropriate computer program. However, whilst the usefulness of the computational metaphor in many areas of psychology and neuroscience is clear, it has not gone unchallenged and in this article I will review a group of philosophical arguments that suggest either such unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true. I conclude by highlighting an alternative metaphor for cognitive processes based on communication and interaction.

Summary (3 min read)

Introduction

  • In operation, such computationally defined neurons effectively sum up their input and compute a complex nonlinear function on this value; output information being encoded in the mean firing rate of neurons, which in turn exhibit narrow functional specialisation.
  • In the literature there exist numerous examples of such learning rules and architectures, more or less inspired by varying degrees of biological plausibility; early models include [3–6].
  • From this followed the functional specialization paradigm, mapping different areas of the brain to specific cognitive functions.
  • In this article I suggest that this attraction to viewing the neuron merely as a computational device stems from (i) the implicit adoption of a computational theory of mind (CTM) [7]; (ii) a concomitant functionalism with respect to the instantiation of cognitive processes [8, 9] and (iii) an implicit non-reductive functionalism with respect to consciousness [10].

The CTM

  • The CTM occupies one part of the spectrum of representational theories of mind (RTM).
  • Harnish [7] observes that the RTM entails: – Cognitive states are relations to mental representations which have content.
  • – Psychological states are multiply realisable in different organisms.
  • Putnam’s 1967 conclusion is that the best explanation of the joint multiple realisability of TMs and psychological states3 is that TMs specify the relevant functional states and so specify the psychological states of the organism; hence by this observation Putnam makes the move from ‘the intelligence of computation to the computational theory of intelligence’ [7].
  • – A cognitive state is a state [of mind] denoting knowledge; understanding; beliefs, etc.
  • – Cognitive processes—changes in cognitive states—are computational operations on these computational representations.

The Problem of Consciousness

  • The term ‘consciousness’ can imply many things to many different people.
  • Its negative conclusion is that computation is neither necessary nor sufficient for cognition; its positive conclusion suggests that the adoption of a new metaphor may be helpful in addressing hard conceptual questions related to consciousness and understanding.
  • An analogy is with the application of Newtonian physics and Quantum physics—both useful descriptions of the world, but descriptions that are most appropriate in addressing different types of questions.
  • The idea that the appropriately programmed computer really is a mind was eloquently suggested by Chalmers (ibid).
  • This asserts that, ‘‘given any system that has conscious experiences, then any system that has the same fine-grained functional organization will have qualitatively identical experiences’’.

The Problem of Computation

  • It is a commonly held view that ‘there is a crucial barrier between computer models of minds and real minds: the barrier of consciousness’ and thus that ‘information-processing’ and ‘phenomenal experiences’ are conceptually distinct [23].
  • In the following sections I briefly review two well-known arguments targeting computational accounts of cognition from Penrose and Searle, which together suggest computations are neither necessary nor sufficient for mind.
  • Arguments based on Gödel’s first incompleteness theorem—initially from Lucas [25, 26], first criticised by Benacerraf [27] and subsequently extended, developed and widely popularised by Penrose [28–31]—typically endeavour to show that for any such formal system F, humans can find the Gödel sentence G(g) whilst the computation/machine (being itself bound by F) cannot.
  • Now in response to the questions in Chinese and English there are two subsystems—the native English speaking Searle and the internalised Searle-in-the-Chinese-room—but all the same he [Searle] continues to understand nothing of Chinese, and a fortiori neither does the system, because there is nothing in the system that is not just a part of him.

Discrete State Machines

  • Turing defined DSMs as ‘‘machines that move in sudden jumps or clicks from one quite definite state to another’’ and explained that modern digital computers fall within this class.
  • Turing demonstrated that such a device, which continually jumps through a linear series of state transitions like clockwork, may be implemented by a simple discrete-position wheel that revolves through 120° intervals at each clock tick.
  • Note, after Chalmers, that the discrete position wheel machine described above will only implement a particular execution trace of the FSA and Chalmers remains unfazed at this result because he states that input-less machines are simply an ‘‘inappropriate formalism’’ for a computationalist theory of mind [32].
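Turing’s wheel example can be sketched in a few lines: an input-less DSM is nothing more than a fixed cycle of state transitions, so its whole behaviour is a single linear, un-branching trace. The function and names below are illustrative assumptions, not from the paper.

```python
# An input-less discrete state machine: a wheel that clicks through
# n_positions states (120 degrees apart when n_positions == 3),
# advancing exactly one position per clock tick.
def wheel_trace(n_ticks, n_positions=3, start=0):
    """Return the linear execution trace of the wheel machine."""
    trace = [start]
    for _ in range(n_ticks):
        trace.append((trace[-1] + 1) % n_positions)
    return trace

# Six ticks of the three-position wheel: the machine simply cycles,
# 'like clockwork', through a fixed un-branching state sequence.
six_ticks = wheel_trace(6)
```

Because there is no input, the transition table is never consulted for alternatives: the trace is the machine, which is what makes the Chalmers point about input-less machines being an ‘‘inappropriate formalism’’ relevant.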

Objections

  • As the experimental setup is precisely the same for experiment (2) as for experiment (1) the computationalist must continue to claim that the robot continues to instantiate appropriate phenomenological states over this period and it is clear that a posteriori knowledge of the system input does not impact this claim.
  • It is apparent that under mapping A (see Table 4), the gate X computes the logical AND function.
  • The Objection From Randomness: superficially the DwP reductio only targets DSMs; it has nothing to say about the conscious state of suitably engineered Stochastic Automata [60].
  • At the ‘Toward a Science of Consciousness’ conference in Tucson 2006 Ron Chrisley argued that as the authors morph between R1 and R2 with the deletion of each conditional non-entered state sequence, substantive physical differences between R1 and R2 will emerge.
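The observer-relativity point in the bullet about gate X computing AND under mapping A can be made concrete with a toy sketch. The gate, labels and mappings below are illustrative assumptions, not taken from the paper’s Table 4: one and the same physical device computes AND under one labelling of its voltage levels and OR under the complementary labelling.

```python
# A fixed physical device: output voltage is 'hi' only when both
# inputs are 'hi' (a hypothetical two-level gate X).
def gate_x(a_volts, b_volts):
    return 'hi' if (a_volts == 'hi' and b_volts == 'hi') else 'lo'

# Observer mapping A: hi -> 1, lo -> 0.  Under A, X computes AND.
A, inv_A = {'hi': 1, 'lo': 0}, {1: 'hi', 0: 'lo'}

# Observer mapping B: hi -> 0, lo -> 1.  Same device, same physics.
B, inv_B = {'hi': 0, 'lo': 1}, {0: 'hi', 1: 'lo'}

def observed(mapping, inverse, x, y):
    """The logical function an observer sees, given their labelling."""
    return mapping[gate_x(inverse[x], inverse[y])]

truth_A = {(x, y): observed(A, inv_A, x, y) for x in (0, 1) for y in (0, 1)}
truth_B = {(x, y): observed(B, inv_B, x, y) for x in (0, 1) for y in (0, 1)}
# Under A the truth table is AND; under B it is OR (the De Morgan dual).
```

Nothing about the device changed between the two tables; only the observer’s interpretation of its states did, which is the sense in which which-computation-is-performed is observer-relative.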

Is Counterfactual Sensitivity Essential to a Computational Account of Cognition?

  • The supervenience thesis tells us that, if the authors introduce into the vicinity of the system an entirely inert object that has absolutely no causal or physical interaction with the system, then the same activity will support the same mode of consciousness.
  • So despite Bishop’s claim, if R1 and R2 differ in their counterfactual formal properties, they must differ in their physical properties.
  • In each of the following experiments the robots are instructed to report the colour of a large red square fixed in the centre of their visual field.
  • Next the virtual robot software is re-compiled using two slightly different partial evaluation compilers [65].
  • However the reductio targets computationalism—the formal abstraction and instantiation of consciousness through appropriate DSMs (and/or their stochastic variants); the DwP reductio does not target continuous [dynamic] systems or identity theories (where conscious properties of the system are defined to be irreducible from the underlying physical agent–environment system).

Are These A Priori Critiques of the Computational Metaphor too Strong?

  • Interestingly, as this form of compile-time partial evaluation process cannot be undertaken for the real robot, the DwP reductio strictly no longer holds against it; however, this does not help the computationalist as any putative phenomenal states of the real robot have now become tightly bound to properties of the real-world agent/environment interactions and not the mere computations.
  • There are two responses to this question, one weak and one strong.
  • The first—the weak response—emerges from the Chinese room and DwP reductio.
  • Hence Searle’s famous observation that ‘‘… the idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer is not confined to simulating mental operations, by any means’’.
  • Both of the above responses accommodate results from computational neuroscience, but clearly both also highlight fundamental limitations to the computational metaphor.

So what Is Cognition, If Not Computation?

  • In contrast to computation, communication is not merely an observer-relative anthropomorphic projection on reality, as even simple organisms (e.g. bacteria) communicate with each other or interact with their environment.
  • Thus the new metaphor— cognition as communication—is sympathetic to modern post-symbolic, anti-representationalist, embodied, enactive accounts of cognition such as those from Brooks [70], Varela [19], O’Regan [15], Thompson [71] and Bishop and Nasuto [13].

Conclusion

  • All matter, from the simplest particles to the most complex living organisms, undergoes physical processes which in most sciences are not given any special interpretation.
  • In neuroscience, and in connectionism, it is assumed that neurons and their systems possess special computational capabilities; this is equivalent to claiming that a spring, when extended by a moderate force, computes its deformation according to Hooke’s law.
  • But at heart he follows an extremely simple line of reasoning: consider an idealised analogue computer that can add two reals (a, b) and output one if they are the same, zero otherwise.
  • I would like to thank the reviewers for the many helpful comments I received during the preparation of this article.
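The idealised analogue adder in the bullet above must decide whether two reals are exactly the same; even for digital machines this equality test is not innocent, as a minimal Python illustration (my example, not the author’s) shows:

```python
# 0.1 + 0.2 is not representable exactly in IEEE-754 binary floating
# point, so a naive "same?" test on the sum fails: only approximate
# comparison is available to a finite digital machine.
a = 0.1 + 0.2
b = 0.3
exactly_equal = (a == b)           # False on IEEE-754 doubles
close_enough = abs(a - b) < 1e-9   # approximate equality succeeds
```

Exact comparison of arbitrary reals would require infinite precision, which is precisely why the idealised analogue machine is an idealisation.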


A Cognitive Computation Fallacy? Cognition, Computations
and Panpsychism
John Mark Bishop
Published online: 30 May 2009
Springer Science+Business Media, LLC 2009
Abstract  The journal of Cognitive Computation is defined in part
by the notion that biologically inspired computational accounts
are at the heart of cognitive processes in both natural and
artificial systems. Many studies of various important aspects of
cognition (memory, observational learning, decision making, reward
prediction learning, attention control, etc.) have been made by
modelling the various experimental results using ever-more
sophisticated computer programs. In this manner progressive
inroads have been made into gaining a better understanding of the
many components of cognition. Concomitantly in both science and
science fiction the hope is periodically re-ignited that a
man-made system can be engineered to be fully cognitive and
conscious purely in virtue of its execution of an appropriate
computer program. However, whilst the usefulness of the
computational metaphor in many areas of psychology and
neuroscience is clear, it has not gone unchallenged and in this
article I will review a group of philosophical arguments that
suggest either such unequivocal optimism in computationalism is
misplaced—computation is neither necessary nor sufficient for
cognition—or panpsychism (the belief that the physical universe is
fundamentally composed of elements each of which is conscious) is
true. I conclude by highlighting an alternative metaphor for
cognitive processes based on communication and interaction.

Keywords  Computationalism · Machine consciousness · Panpsychism
Introduction
Over the hundred years since the publication of James’ psychology
[1] neuroscientists have attempted to define the fundamental
features of the brain and its information-processing capabilities
in terms of (i) mean firing rates at points in the brain cortex
(neurons) and (ii) computations; today the prevailing view in
neuroscience is that neurons can be considered fundamentally
computational devices. In operation, such computationally defined
neurons effectively sum up their input and compute a complex
non-linear function on this value; output information being
encoded in the mean firing rate of neurons, which in turn exhibit
narrow functional specialisation. After Hubel and Wiesel [2] this
view of the neuron as a specialised feature detector has become
treated as established doctrine. Furthermore, it has been shown
that richly interconnected networks of such neurons can ‘learn’ by
suitably adjusting the inter-neuron connection weights according
to complex computationally defined processes. In the literature
there exist numerous examples of such learning rules and
architectures, more or less inspired by varying degrees of
biological plausibility; early models include [3–6]. From this
followed the functional specialization paradigm, mapping different
areas of the brain to specific cognitive functions.
In this article I suggest that this attraction to viewing the
neuron merely as a computational device fundamentally stems from
(i) the implicit adoption of a computational theory of mind (CTM)
[7]; (ii) a concomitant functionalism with respect to the
instantiation of cognitive processes [8, 9] and (iii) an implicit
non-reductive functionalism with respect to consciousness [10].
Conversely, I suggest that a computational description of brain
operations has difficulty
in providing a physicalist account of several key features of
human cognitive systems (in particular phenomenal consciousness
and ‘understanding’) and hence that computations are neither
necessary nor sufficient for cognition; that any computational
description of brain processes is thus best understood merely in a
metaphorical sense. I conclude by answering the question What is
cognition if not computation? by tentatively highlighting an
alternative metaphor, defined by physically grounded processes of
communication and interaction, which is less vulnerable to the
three classical criticisms of computationalism described herein.¹

J. M. Bishop, Department of Computing, Goldsmiths, University of
London, London, UK. e-mail: m.bishop@gold.ac.uk

Cogn Comput (2009) 1:221–233
DOI 10.1007/s12559-009-9019-6
The CTM
The CTM occupies one part of the spectrum of representational
theories of mind (RTM). Although currently undergoing challenges
from dynamic systems, embodied, enactivist and constructivist
accounts of cognition (e.g. [13–19]), the RTM remains ubiquitous
in contemporary cognitive science and experimental psychology.
Contrary to naive or direct realism, indirect realism (or
representationalism) postulates the actual existence of mental
intermediaries—representations—between the observing subject and
the world. The earliest forms of RTM can be traced to Descartes
[20] who held that all thought was representational² and that it
is the very nature of mind (res cogitans) to represent the world
(res extensa).
Harnish [7] observes that the RTM entails:
– Cognitive states are relations to mental representations which
have content. A cognitive state is a state [of mind] denoting
knowledge; understanding; beliefs, etc. This definition
subsequently broadened to include knowledge of raw sensations,
colours, pains, etc.
– Cognitive processes—changes in cognitive states—are mental
operations on these representations.
The Emergence of Functionalism
The CTM came to the fore after the development of the stored
program digital computer in the mid-20th century when, through
machine-state functionalism, Putnam [8, 9] first embedded the RTM
in a computational framework. At the time Putnam famously held
that:
– Turing machines (TMs) are multiply realisable on different
hardware.
– Psychological states are multiply realisable in different
organisms.
– Psychological states are functionally specified.
Putnam’s 1967 conclusion is that the best explanation of the joint
multiple realisability of TMs and psychological states³ is that
TMs specify the relevant functional states and so specify the
psychological states of the organism; hence by this observation
Putnam makes the move from ‘the intelligence of computation to the
computational theory of intelligence’ [7]. Today variations on CTM
structure the most commonly held philosophical scaffolds for
cognitive science and psychology (e.g. providing the implicit
foundations of evolutionary approaches to psychology and
linguistics). Formally stated the CTM entails:
– Cognitive states are computational relations to computational
representations which have content. A cognitive state is a state
[of mind] denoting knowledge; understanding; beliefs, etc.
– Cognitive processes—changes in cognitive states—are
computational operations on these computational representations.
The Problem of Consciousness
The term ‘consciousness’ can imply many things to many different
people. In the context of this article I refer specifically to
that aspect of consciousness Ned Block terms ‘phenomenal
consciousness’ [21], by which I refer to the first person,
subjective phenomenal states—sensory tickles, pains, visual
experiences and so on.
Cartesian theories of cognition can be broken down into what
Chalmers [10] calls the ‘easy’ problem of perception—the
classification and identification of sense stimuli—and a
corresponding ‘hard’ problem, which is the realization of the
associated phenomenal state. The difference between the easy and
the hard problems and an apparent lack of the link between
theories of the former and an account of the latter has been
termed the ‘explanatory gap’.

¹ In two earlier articles (with Nasuto et al. [11, 12]) the author
explored theoretical limitations of the computational metaphor
from positions grounded in psychology and neuroscience; this
article—outlining a third perspective—reviews three philosophical
critiques of the computational metaphor with respect to ‘hard’
questions of cognition related to consciousness and understanding.
Its negative conclusion is that computation is neither necessary
nor sufficient for cognition; its positive conclusion suggests
that the adoption of a new metaphor may be helpful in addressing
hard conceptual questions related to consciousness and
understanding. Drawing on the conclusions of the two earlier
articles, the suggested new metaphor is one grounding cognition in
processes of communication and interaction rather than
computation. An analogy is with the application of Newtonian
physics and Quantum physics—both useful descriptions of the world,
but descriptions that are most appropriate in addressing different
types of questions.
² Controversy remains surrounding Descartes’ account of the
representational content of non-intellectual thought such as pain.
³ Although Putnam talks about pain not cognition, it is clear that
his argument is intended to be general.
The idea that the appropriately programmed computer really is a
mind was eloquently suggested by Chalmers (ibid). Central to
Chalmers’ non-reductive functionalist theory of mind is the
Principle of Organizational Invariance (POI). This asserts that,
‘‘given any system that has conscious experiences, then any system
that has the same fine-grained functional organization will have
qualitatively identical experiences’’.
To illustrate the point Chalmers imagines a fine-grained
simulation of the operation of the human brain—a massively complex
and detailed artificial neural network. If, at a very fine-grained
level, each group of simulated neurons was functionally identical
to its counterpart in the real brain then, via Dancing Qualia and
Fading Qualia arguments, Chalmers (ibid) argues that the
computational neural network must have precisely the same
qualitative conscious experiences as the real human brain.
Current research into perception and neuro-physiology certainly
suggests that physically identical brains will instantiate
identical phenomenal states and, although as Maudlin [22] observes
this thesis is not analytic, something like it underpins
computational theories of mind. For if computational functional
structure supervenes on physical structure then physically
identical brains must be computationally and functionally
identical. Thus Maudlin formulates the Supervenience Thesis
(ibid): ‘‘... two physical systems engaged in precisely the same
physical activity through a time will support precisely the same
modes of consciousness (if any) through that time’’.
The Problem of Computation
It is a commonly held view that ‘there is a crucial barrier
between computer models of minds and real minds: the barrier of
consciousness’ and thus that ‘information-processing’ and
‘phenomenal (conscious) experiences’ are conceptually distinct
[23]. But is consciousness a prerequisite for genuine cognition
and the realisation of mental states? Certainly Searle believes
so, ‘... the study of the mind is the study of consciousness, in
much the same sense that biology is the study of life’ [24] and
this observation leads him to postulate the Connection Principle
whereby ‘... any mental state must be, at least in principle,
capable of being brought to conscious awareness’ (ibid). Hence, if
computational machines are not capable of enjoying consciousness,
they are incapable of carrying genuine mental states and
computation fails as an adequate metaphor for cognition.
In the following sections I briefly review two well-known
arguments targeting computational accounts of cognition from
Penrose and Searle, which together suggest computations are
neither necessary nor sufficient for mind. I subsequently outline
a simple reductio ad absurdum argument that suggests there may be
equally serious problems in granting phenomenal (conscious)
experience to systems purely in virtue of their execution of
particular programs; if correct, this argument suggests either
strong computational accounts of consciousness must fail or that
panpsychism is true.
Computations and Understanding: Gödelian Arguments Against
Computationalism
Gödel’s first incompleteness theorem states that ‘... any
effectively generated theory capable of expressing elementary
arithmetic cannot be both consistent and complete. In particular,
for any consistent, effectively generated formal theory F that
proves certain basic arithmetic truths, there is an arithmetical
statement that is true, but not provable in the theory.’ The
resulting true but unprovable statement G(g) is often referred to
as ‘the Gödel sentence’ for the theory (albeit there are
infinitely many other statements in the theory that share with the
Gödel sentence the property of being true but not provable from
the theory).
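In symbols, the situation the Lucas–Penrose arguments exploit can be sketched as follows; this is a schematic rendering under the stated assumptions (F consistent, effectively generated, proving basic arithmetic), with G_F denoting F's Gödel sentence:

```latex
% F consistent, effectively generated, extending basic arithmetic;
% G_F is the Goedel sentence of F.
F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F ,
\qquad\text{yet}\qquad \mathbb{N} \models G_F .
```

That is, G_F is true in the standard model of arithmetic but undecidable in F; the contested step is the inference that a human mathematician, unlike the machine bound by F, can ‘see’ that G_F is true.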
Arguments based on Gödel’s first incompleteness theorem—initially
from Lucas [25, 26], first criticised by Benacerraf [27] and
subsequently extended, developed and widely popularised by Penrose
[28–31]—typically endeavour to show that for any such formal
system F, humans can find the Gödel sentence G(g) whilst the
computation/machine (being itself bound by F) cannot. In [29]
Penrose develops a subtle reformulation of the vanilla argument
that purports to show that ‘‘the human mathematician can ‘see’
that the Gödel Sentence is true for consistent F even though the
consistent F cannot prove G(g)’’.
A detailed discussion of Penrose’s formulation of the Gödelian
argument is outside the scope of this article (for a critical
introduction see [32, 33] and for Penrose’s response see [31]);
here it is simply important to note that although Gödelian-style
arguments purporting to show ‘computations are not necessary for
cognition’ have been extensively⁴ and vociferously critiqued in
the literature (see [34] for a review), interest in them—both
positive and negative—still regularly continues to surface (e.g.
[35, 36]).

⁴ For example, Lucas maintains a web page http://users.ox.ac.uk/
~jrlucas/Godel/referenc.html listing more than 50 such criticisms.

The Chinese Room Argument
One of the most widely known critics of computational
theories of mind is John Searle. His best-known work on
machine understanding, first presented in the 1980 paper
‘Minds, Brains & Programs’ [37], has become known as
the Chinese Room Argument (CRA). The central claim of
the CRA is that computations alone are not sufficient to
give rise to cognitive states, and hence that computational
theories of mind cannot fully explain human cognition.
More formally Searle stated that the CRA was an attempt
to prove the truth of the premise:
Syntax is not sufficient for semantics.
Which, together with the following two axioms:
(i) Programs are formal (syntactical).
(ii) Minds have semantics (mental content).
... led Searle to conclude that:
Programs are not minds.
... and hence that computationalism—the idea that the
essence of thinking lies in computational processes and that
such processes thereby underlie and explain conscious
thinking—is false [38].
In the CRA Searle emphasises the distinction between syntax and
semantics to argue that while computers can act in accordance with
formal rules, they cannot be said to know the meaning of the
symbols they are manipulating, and hence cannot be credited with
genuinely understanding the results of the execution of programs
those symbols compose. In short, Searle claims that while
cognitive computations may simulate aspects of cognition, they can
never instantiate it.
The CRA describes a situation where a monoglot Searle
is locked in a room and presented with a large batch of
papers covered with Chinese writing that he does not
understand. Indeed, Searle does not even recognise the writing as
Chinese ideograms, as distinct from say Japanese or simply
meaningless patterns. A little later Searle is
given a second batch of Chinese symbols together with a
set of rules (in English) that describe an effective method
(algorithm) for correlating the second batch with the first
purely by their form or shape. Finally Searle is given a
third batch of Chinese symbols together with another set of
rules (in English) to enable him to correlate the third batch
with the first two, and these rules instruct him how to return
certain sets of shapes (Chinese symbols) in response to
certain symbols given in the third batch.
Unknown to Searle, the people outside the room call the
first batch of Chinese symbols the script; the second batch
the story; the third questions about the story and the
symbols he returns they call answers to the questions about
the story. The set of rules he is obeying they call the
program. To complicate the matters further, the people
outside also give him stories in English and ask him
questions about them in English, to which he can reply in
English. After a while Searle gets so good at following the
instructions and the outsiders get so good at supplying the
rules which he has to follow, that the answers he gives to
the questions in Chinese symbols become indistinguishable
from those a true Chinese man might give.
From the external point of view the answers to the two
sets of questions—one in English the other in Chinese—are
equally good; Searle-in-the-Chinese-room has passed the
Turing test. Yet in the Chinese case Searle behaves like a
computer and does not understand either the questions he is
given or the answers he returns, whereas in the English
case he does. To highlight the difference consider Searle is
passed a joke first in Chinese and then English. In the
former case Searle-in-the-room might correctly output
appropriate Chinese ideograms signifying ‘ha ha’ whilst
remaining phenomenologically unmoved, whilst in the
latter, if the joke is funny, he may laugh out loud and feel
the joke within.
The decades since its inception have witnessed many
reactions to the CRA from the computational, cognitive
science, philosophical and psychological communities,
with perhaps the most widely held being based on what has
become known as the ‘Systems Reply’. This concedes that,
although the person in the room does not understand Chinese, the
entire system (of the person, the room and its contents) does.
Searle finds this response entirely unsatisfactory and
responds by allowing the person in the room to memorise
everything (the rules, the batches of paper, etc.) so that there
is nothing in the system not internalised within Searle. Now
in response to the questions in Chinese and English there are
two subsystems—the native English speaking Searle and the
internalised Searle-in-the-Chinese-room—but all the same
he [Searle] continues to understand nothing of Chinese, and
a fortiori neither does the system, because there is nothing in
the system that is not just a part of him.
But others are left equally unmoved by Searle’s response; for
example in [39] Haugeland asks why should we unquestioningly
accept Searle’s conclusion that ‘the internalised Chinese room
system does not understand Chinese’, given that Searle’s responses
to the questions in Chinese are all correct? Yet, despite this and
other trenchant criticism, almost 30 years after its first
publication there continues to be lively interest in the CRA (e.g.
[40–47]). In a 2002 volume of Analysis [48] comment ranged from
Selmer Bringsjord, who observed the CRA to be ‘arguably the 20th
century’s greatest philosophical polariser’ [49], to Rey, who
claims that in his definition of Strong AI Searle ‘burdens the
[Computational Representational Theory of Thought (Strong AI)]
project with extraneous claims which any serious defender of it
should reject’ [50]. Nevertheless, although opinion on the
argument remains divided, most commentators now agree that the CRA
helped shift research in artificial intelligence away from
classical computationalism (which, pace Newell and Simon [51],
viewed intelligence fundamentally in terms of symbol manipulation)
first to a sub-symbolic neural-connectionism and more recently,
moving even further away from symbols and representations, towards
embodied and enactive approaches to cognition. Clearly, whatever
the verdict on the soundness of Searle’s Chinese room argument,
the subsequent historical response offers eloquent testament to
his conclusion that ‘programs are not minds’.
Dancing with Pixies
The core argument I wish to present in this article targeting
computational accounts of cognition—the Dancing with
Pixies (DwP) reductio—derives from ideas originally
outlined by Putnam [52], Maudlin [22], Searle [53] and
subsequently criticised by Chalmers [10], Klein [54] and
Chrisley [55, 56] amongst others
5
. In what follows, instead
of seeking to justify Putnam’s claim that ‘every open
system implements every finite state automaton’ (FSA)
and hence that ‘psychological states of the brain cannot be
functional states of a computer’’, I will seek to establish the
weaker result that, over a finite time window, every open
physical system implements the trace of a FSA Q on fixed,
specified input (I). That this result leads to panpsychism is
clear as, equating FSA Q(I) to a specific computational
system that is claimed to instantiate phenomenal states as it
executes, and following Putnam’s procedure, identical
computational (and ex hypothesi phenomenal) states can be
found in every open physical system.
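The point that, once its input is fixed in advance, an FSA's behaviour collapses to a single un-branching sequence of states can be sketched in a few lines. This is a hypothetical illustration of the general idea only; the function, states and input string below are mine, not the article's:

```python
# With its input fixed, an FSA's execution is just a linear "trace" of
# states -- the object the DwP argument asks an arbitrary open physical
# system to implement over a finite time window.

def fsa_trace(delta, q0, fixed_input):
    """Run transition function delta from state q0 on a fixed input sequence."""
    trace = [q0]
    q = q0
    for symbol in fixed_input:
        q = delta[(q, symbol)]    # deterministic: exactly one next state
        trace.append(q)
    return trace

# A toy 2-state FSA over the input alphabet {0, 1}
delta = {
    ("A", 0): "A", ("A", 1): "B",
    ("B", 0): "B", ("B", 1): "A",
}

print(fsa_trace(delta, "A", [1, 0, 1, 1]))  # ['A', 'B', 'B', 'A', 'B']
```

With the input sequence specified, no branching ever occurs: the machine's entire behaviour is exhausted by this one sequence of state transitions.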
Formally DwP is a simple reductio ad absurdum argument that endeavours to demonstrate that:
IF the assumed claim is true: that an appropriately
programmed computer really does instantiate genuine
phenomenal states
THEN panpsychism holds
However, against the backdrop of our immense
scientific knowledge of the closed physical world,
and the corresponding widespread desire to explain
everything ultimately in physical terms, panpsy-
chism has come to seem an implausible view...
HENCE we should reject the assumed claim.
The route-map for this endeavour is as follows: in the
next section I introduce discrete state machines (DSMs)
and FSAs and show how, with input to them defined, their
behaviour can be described by a simple un-branching
sequence of state transitions. I subsequently review Putnam’s 1988 argument [52] that purports to show how every
open physical system implements every input-less FSA.
Then I apply Putnam’s construction to one execution trace
of any FSA with known input, such that if the FSA instantiates genuine phenomenal states as it executes, then so
must any open physical system. Finally I apply the procedure to a robotic system that is claimed to instantiate
machine consciousness purely in virtue of its execution of
an appropriate program. The article is completed by a brief
discussion of some objections to the DwP reductio and
concludes by suggesting, at least with respect to ‘hard’
problems, that it may be necessary to develop an alternative metaphor for cognition to that of computation.
Discrete State Machines
In his 1950 paper ‘Computing Machinery and Intelligence’
[57] Turing defined DSMs as ‘machines that move in
sudden jumps or clicks from one quite definite state to
another’ and explained that modern digital computers fall
within the class of them. An example DSM from Turing is
a simple machine that cycles through three computational
states Q1, Q2, Q3 at discrete clock clicks. Turing demonstrated that such a device, which continually jumps through
a linear series of state transitions like clockwork, may be
implemented by a simple discrete-position-wheel that
revolves through 120° intervals at each clock tick. Basic
input can be added to such a machine by the addition of a
simple brake mechanism and basic output by the addition
of a light that comes on when the machine is in, say,
computational state Q3 (see Fig. 1).
An input-less FSA is specified by a set of states Q and a
set of state-transitions Q → Q′ for each current state Q
specifying the next state Q′. Such a device is trivially
implemented by Turing’s discrete-position-wheel machine
and a function that maps each physical wheel position Wn
to a logical computational state Qn as required. For
example, considering the simple 3-state input-less FSA
described in Table 1, by labelling the three discrete positions of the wheel W1, W2, W3 we can map computational
states of the FSA, Q1, Q2, Q3, to the physical discrete
positions of the wheel, W1, W2, W3, such that, for example,
(W1 → Q1, W2 → Q2, W3 → Q3).
This mapping is observer relative; the physical position
W1 of the wheel could equally map to computational states
Q2 or Q3 and, with other states appropriately assigned,
the machine’s state transition sequence (and hence its
⁵ For early discussion of these themes see ‘Minds and Machines’, 4: 4, ‘What is Computation?’, November 1994.


References

  • Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544.

  • Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J Physiol 160:106–154.

  • Rosenblatt F (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 65:386–408.

  • Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460.