
A Cognitive Computation Fallacy? Cognition, Computations and Panpsychism

30 May 2009 · Cognitive Computation (Springer-Verlag) · Vol. 1, Iss. 3, pp. 221–233
TL;DR: Reviews a group of philosophical arguments suggesting that either unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true.

Summary (3 min read)

Introduction

  • In operation, such computationally defined neurons effectively sum up their input and compute a complex nonlinear function on this value; output information being encoded in the mean firing rate of neurons, which in turn exhibit narrow functional specialisation.
  • In the literature there exist numerous examples of such learning rules and architectures, more or less inspired by varying degrees of biological plausibility; early models include [3–6].
  • From this followed the functional specialization paradigm, mapping different areas of the brain to specific cognitive functions.
  • This attraction stems from (i) the implicit adoption of a computational theory of mind (CTM) [7]; (ii) a concomitant functionalism with respect to the instantiation of cognitive processes [8, 9] and (iii) an implicit non-reductive functionalism with respect to consciousness [10].

The CTM

  • The CTM occupies one part of the spectrum of representational theories of mind (RTM).
  • Harnish [7] observes that the RTM entails: – Cognitive states are relations to mental representations which have content.
  • – Psychological states are multiply realisable in different organisms.
  • Putnam’s 1967 conclusion is that the best explanation of the joint multiple realisability of TMs and psychological states is that TMs specify the relevant functional states and so specify the psychological states of the organism; hence by this observation Putnam makes the move from ‘the intelligence of computation to the computational theory of intelligence’ [7].
  • – A cognitive state is a state [of mind] denoting knowledge; understanding; beliefs, etc.
  • – Cognitive processes—changes in cognitive states—are computational operations on these computational representations.

The Problem of Consciousness

  • The term ‘consciousness’ can imply many things to many different people.
  • Its negative conclusion is that computation is neither necessary nor sufficient for cognition; its positive conclusion suggests that the adoption of a new metaphor may be helpful in addressing hard conceptual questions related to consciousness and understanding.
  • An analogy is with the application of Newtonian physics and Quantum physics—both useful descriptions of the world, but descriptions that are most appropriate in addressing different types of questions.
  • The idea that the appropriately programmed computer really is a mind was eloquently suggested by Chalmers (ibid).
  • This asserts that, ‘given any system that has conscious experiences, then any system that has the same fine-grained functional organization will have qualitatively identical experiences’.

The Problem of Computation

  • It is a commonly held view that ‘there is a crucial barrier between computer models of minds and real minds: the barrier of consciousness’ and thus that ‘information-processing’ and ‘phenomenal experiences’ are conceptually distinct [23].
  • In the following sections I briefly review two well-known arguments targeting computational accounts of cognition from Penrose and Searle, which together suggest computations are neither necessary nor sufficient for mind.
  • Arguments based on Gödel’s first incompleteness theorem—initially from Lucas [25, 26], first criticised by Benacerraf [27] and subsequently extended, developed and widely popularised by Penrose [28–31]—typically endeavour to show that for any such formal system F, humans can find the Gödel sentence G(g) whilst the computation/machine (being itself bound by F) cannot.
  • Now in response to the questions in Chinese and English there are two subsystems—the native English speaking Searle and the internalised Searle-in-the-Chinese-room—but all the same he [Searle] continues to understand nothing of Chinese, and a fortiori neither does the system, because there is nothing in the system that is not just a part of him.

Discrete State Machines

  • Turing defined DSMs as ‘machines that move in sudden jumps or clicks from one quite definite state to another’ and explained that modern digital computers fall within the class of them.
  • Turing demonstrated that such a device, which continually jumps through a linear series of state transitions like clockwork, may be implemented by a simple discrete-position-wheel that revolves through 120° intervals at each clock tick.
  • Note, after Chalmers, that the discrete position wheel machine described above will only implement a particular execution trace of the FSA; Chalmers remains unfazed at this result because he states that input-less machines are simply an ‘inappropriate formalism’ for a computationalist theory of mind [32].

Objections

  • As the experimental setup is precisely the same for experiment (2) as for experiment (1), the computationalist must continue to claim that the robot continues to instantiate appropriate phenomenological states over this period, and it is clear that a posteriori knowledge of the system input does not impact this claim.
  • It is apparent that under mapping A (see Table 4), the gate X computes the logical AND function (the mapping-relativity of such ascriptions is sketched in code after this list).
  • The Objection from Randomness: superficially the DwP reductio only targets DSMs; it has nothing to say about the conscious state of suitably engineered Stochastic Automata [60].
  • At the ‘Toward a Science of Consciousness’ conference in Tucson 2006 Ron argued that as we morph between R1 and R2 with the deletion of each conditional non-entered state sequence, substantive physical differences between R1 and R2 will emerge.
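To make gate-level mapping-relativity concrete, here is a minimal sketch (illustrative only; the voltage levels and labellings are assumptions, not the paper’s Table 4): one fixed physical gate counts as computing AND under one observer-imposed labelling of its voltage levels, and as computing OR under the complementary labelling.

```python
# A fixed physical gate: outputs HIGH only when both inputs are HIGH.
def gate(a_volts: float, b_volts: float) -> float:
    HIGH, LOW = 5.0, 0.0
    return HIGH if (a_volts == HIGH and b_volts == HIGH) else LOW

# Observer mapping A: HIGH -> 1, LOW -> 0.  The gate computes AND.
to_bit_A = lambda v: 1 if v == 5.0 else 0
# Observer mapping B: HIGH -> 0, LOW -> 1.  The *same* gate computes OR.
to_bit_B = lambda v: 0 if v == 5.0 else 1

for a in (0.0, 5.0):
    for b in (0.0, 5.0):
        out = gate(a, b)
        print(f"A: {to_bit_A(a)} AND {to_bit_A(b)} = {to_bit_A(out)} | "
              f"B: {to_bit_B(a)} OR {to_bit_B(b)} = {to_bit_B(out)}")
```

Nothing about the physics changes between the two readings; only the observer-relative labelling of the voltage levels does, yet the function ‘computed’ flips from conjunction to disjunction.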

Is Counterfactual Sensitivity Essential to a Computational Account of Cognition?

  • The supervenience thesis tells us that, if we introduce into the vicinity of the system an entirely inert object that has absolutely no causal or physical interaction with the system, then the same activity will support the same mode of consciousness.
  • So despite Bishop’s claim, if R1 and R2 differ in their counterfactual formal properties, they must differ in their physical properties.
  • In each of the following experiments the robots are instructed to report the colour of a large red square fixed in the centre of their visual field.
  • Next the virtual robot software is re-compiled using two slightly different partial evaluation compilers [65] (see the sketch after this list).
  • However the reductio targets computationalism—the formal abstraction and instantiation of consciousness through appropriate DSMs (and/or their stochastic variants); the DwP reductio does not target continuous [dynamic] systems or identity theories (where conscious properties of the system are defined to be irreducible from the underlying physical agent–environment system).
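Partial evaluation, on which this experiment turns, can be illustrated with a minimal sketch (illustrative; the colour-reporting routine is a stand-in, not the paper’s robot code): specialising a program on input that is fixed and known at compile time deletes the conditional branches the run never enters, while leaving the executed state sequence untouched.

```python
# Original robot routine R1: counterfactually sensitive -- it contains
# branches for inputs that, on this particular run, never occur.
def report_colour_R1(colour: str) -> str:
    if colour == "red":
        return "I see red"
    elif colour == "green":      # never entered when input is fixed to "red"
        return "I see green"
    else:
        return "I see something else"

# Residual program R2: what a partial evaluator produces when the input
# is known at compile time to be "red".  The executed state sequence is
# identical to R1's, but the non-entered branches have been deleted.
def report_colour_R2() -> str:
    return "I see red"

assert report_colour_R1("red") == report_colour_R2()
```

R1 and R2 differ only in their counterfactual (never-exercised) structure, which is precisely the difference the reductio asks the computationalist to treat as phenomenally significant or not.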

Are These A Priori Critiques of the Computational Metaphor too Strong?

  • Interestingly, as this form of compile-time partial evaluation process cannot be undertaken for the real robot, the DwP reductio strictly no longer holds against it; however, this does not help the computationalist as any putative phenomenal states of the real robot have now become tightly bound to properties of the real-world agent/environment interactions and not the mere computations.
  • There are two responses to this question, one weak and one strong.
  • The first—the weak response—emerges from the Chinese room and DwP reductio.
  • Hence Searle’s famous observation that ‘… the idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer is not confined to simulating mental operations, by any means’.
  • Both of the above responses accommodate results from computational neuroscience, but clearly both also highlight fundamental limitations to the computational metaphor.

So what Is Cognition, If Not Computation?

  • In contrast to computation, communication is not merely an observer-relative anthropomorphic projection on reality, as even simple organisms (e.g. bacteria) communicate with each other or interact with their environment.
  • Thus the new metaphor—cognition as communication—is sympathetic to modern post-symbolic, anti-representationalist, embodied, enactive accounts of cognition such as those from Brooks [70], Varela [19], O’Regan [15], Thompson [71] and Bishop and Nasuto [13].

Conclusion

  • All matter, from the simplest particles to the most complex living organisms, undergoes physical processes which in most sciences are not given any special interpretation.
  • In neuroscience, and in connectionism, it is assumed that neurons and their systems possess special computational capabilities; this is equivalent to claiming that a spring, when extended by a moderate force, computes its deformation according to Hooke’s law.
  • But at heart he follows an extremely simple line of reasoning: consider an idealised analogue computer that can add two reals (a, b) and output one if they are the same, zero otherwise (see the sketch after this list).
  • I would like to thank the reviewers for the many helpful comments I received during the preparation of this article.
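The idealised comparator of reals mentioned two bullets above is exactly what finite-precision physical machines cannot supply; here is a minimal sketch (illustrative, not from the paper) of how machine arithmetic already fails exact equality over the reals.

```python
# The idealised comparator: output 1 iff the two reals are identical.
def ideal_compare(a: float, b: float) -> int:
    return 1 if a == b else 0

# With exact reals, 0.1 + 0.2 equals 0.3; with any finite-precision
# representation it need not:
print(ideal_compare(0.1 + 0.2, 0.3))   # prints 0, not 1
print(abs((0.1 + 0.2) - 0.3))          # ~5.55e-17 residual error

# A physically realisable machine can only compare to within a tolerance:
def realisable_compare(a: float, b: float, eps: float = 1e-12) -> int:
    return 1 if abs(a - b) < eps else 0

print(realisable_compare(0.1 + 0.2, 0.3))  # prints 1
```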


A Cognitive Computation Fallacy? Cognition, Computations and Panpsychism

John Mark Bishop
Department of Computing, Goldsmiths, University of London, London, UK
e-mail: m.bishop@gold.ac.uk

Published online: 30 May 2009
© Springer Science+Business Media, LLC 2009
Cogn Comput (2009) 1:221–233; DOI 10.1007/s12559-009-9019-6
Abstract: The journal of Cognitive Computation is defined in part by the notion that biologically inspired computational accounts are at the heart of cognitive processes in both natural and artificial systems. Many studies of various important aspects of cognition (memory, observational learning, decision making, reward prediction learning, attention control, etc.) have been made by modelling the various experimental results using ever-more sophisticated computer programs. In this manner progressive inroads have been made into gaining a better understanding of the many components of cognition. Concomitantly in both science and science fiction the hope is periodically re-ignited that a man-made system can be engineered to be fully cognitive and conscious purely in virtue of its execution of an appropriate computer program. However, whilst the usefulness of the computational metaphor in many areas of psychology and neuroscience is clear, it has not gone unchallenged and in this article I will review a group of philosophical arguments that suggest either such unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true. I conclude by highlighting an alternative metaphor for cognitive processes based on communication and interaction.

Keywords: Computationalism · Machine consciousness · Panpsychism
Introduction

Over the hundred years since the publication of James’ psychology [1] neuroscientists have attempted to define the fundamental features of the brain and its information-processing capabilities in terms of (i) mean firing rates at points in the brain cortex (neurons) and (ii) computations; today the prevailing view in neuroscience is that neurons can be considered fundamentally computational devices. In operation, such computationally defined neurons effectively sum up their input and compute a complex non-linear function on this value; output information being encoded in the mean firing rate of neurons, which in turn exhibit narrow functional specialisation. After Hubel and Wiesel [2] this view of the neuron as a specialised feature detector has become treated as established doctrine. Furthermore, it has been shown that richly interconnected networks of such neurons can ‘learn’ by suitably adjusting the inter-neuron connection weights according to complex computationally defined processes. In the literature there exist numerous examples of such learning rules and architectures, more or less inspired by varying degrees of biological plausibility; early models include [3–6]. From this followed the functional specialization paradigm, mapping different areas of the brain to specific cognitive functions.
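As a concrete rendering of the computational reading of the neuron sketched above, consider the following (illustrative only: the logistic rate function, weights and Hebbian-style update are textbook stand-ins, not drawn from the paper): a unit sums its weighted inputs, squashes the sum through a nonlinearity read off as a mean firing rate, and ‘learns’ by adjusting its connection weights.

```python
import math

def firing_rate(inputs, weights):
    """Weighted sum of inputs squashed through a nonlinearity,
    read off as the unit's mean firing rate."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-activation))   # logistic nonlinearity

def hebbian_update(inputs, weights, rate, lr=0.1):
    """'Learning' as weight adjustment: strengthen weights on
    co-active input lines (a simple Hebbian-style rule)."""
    return [w + lr * rate * x for x, w in zip(inputs, weights)]

weights = [0.2, -0.4, 0.7]
inputs = [1.0, 0.5, 0.8]
rate = firing_rate(inputs, weights)
weights = hebbian_update(inputs, weights, rate)
print(rate, weights)
```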
In this article I suggest that this attraction to viewing the neuron merely as a computational device fundamentally stems from (i) the implicit adoption of a computational theory of mind (CTM) [7]; (ii) a concomitant functionalism with respect to the instantiation of cognitive processes [8, 9] and (iii) an implicit non-reductive functionalism with respect to consciousness [10]. Conversely, I suggest that a computational description of brain operations has difficulty in providing a physicalist account of several key features of human cognitive systems (in particular phenomenal consciousness and ‘understanding’) and hence that computations are neither necessary nor sufficient for cognition; that any computational description of brain processes is thus best understood merely in a metaphorical sense. I conclude by answering the question What is cognition if not computation? by tentatively highlighting an alternative metaphor, defined by physically grounded processes of communication and interaction, which is less vulnerable to the three classical criticisms of computationalism described herein.¹
The CTM

The CTM occupies one part of the spectrum of representational theories of mind (RTM). Although currently undergoing challenges from dynamic systems, embodied, enactivist and constructivist accounts of cognition (e.g. [13–19]), the RTM remains ubiquitous in contemporary cognitive science and experimental psychology. Contrary to naive or direct realism, indirect realism (or representationalism) postulates the actual existence of mental intermediaries—representations—between the observing subject and the world. The earliest forms of RTM can be traced to Descartes [20] who held that all thought was representational² and that it is the very nature of mind (res cogitans) to represent the world (res extensa).

Harnish [7] observes that the RTM entails:

– Cognitive states are relations to mental representations which have content.
– A cognitive state is a state [of mind] denoting knowledge; understanding; beliefs, etc. (This definition subsequently broadened to include knowledge of raw sensations, colours, pains, etc.)
– Cognitive processes—changes in cognitive states—are mental operations on these representations.
The Emergence of Functionalism

The CTM came to the fore after the development of the stored program digital computer in the mid-20th century when, through machine-state functionalism, Putnam [8, 9] first embedded the RTM in a computational framework. At the time Putnam famously held that:

– Turing machines (TMs) are multiply realisable on different hardware.
– Psychological states are multiply realisable in different organisms.
– Psychological states are functionally specified.

Putnam’s 1967 conclusion is that the best explanation of the joint multiple realisability of TMs and psychological states³ is that TMs specify the relevant functional states and so specify the psychological states of the organism; hence by this observation Putnam makes the move from ‘the intelligence of computation to the computational theory of intelligence’ [7]. Today variations on CTM structure the most commonly held philosophical scaffolds for cognitive science and psychology (e.g. providing the implicit foundations of evolutionary approaches to psychology and linguistics). Formally stated the CTM entails:

– Cognitive states are computational relations to computational representations which have content.
– A cognitive state is a state [of mind] denoting knowledge; understanding; beliefs, etc.
– Cognitive processes—changes in cognitive states—are computational operations on these computational representations.
The Problem of Consciousness

The term ‘consciousness’ can imply many things to many different people. In the context of this article I refer specifically to that aspect of consciousness Ned Block terms ‘phenomenal consciousness’ [21], by which I refer to the first person, subjective phenomenal states—sensory tickles, pains, visual experiences and so on.

Cartesian theories of cognition can be broken down into what Chalmers [10] calls the ‘easy’ problem of perception—the classification and identification of sense stimuli—and a corresponding ‘hard’ problem, which is the realization of the associated phenomenal state. The difference between the easy and the hard problems and an apparent lack of the link between theories of the former and an account of the latter has been termed the ‘explanatory-gap’.

¹ In two earlier articles (with Nasuto et al. [11, 12]) the author explored theoretical limitations of the computational metaphor from positions grounded in psychology and neuroscience; this article—outlining a third perspective—reviews three philosophical critiques of the computational metaphor with respect to ‘hard’ questions of cognition related to consciousness and understanding. Its negative conclusion is that computation is neither necessary nor sufficient for cognition; its positive conclusion suggests that the adoption of a new metaphor may be helpful in addressing hard conceptual questions related to consciousness and understanding. Drawing on the conclusions of the two earlier articles, the suggested new metaphor is one grounding cognition in processes of communication and interaction rather than computation. An analogy is with the application of Newtonian physics and Quantum physics—both useful descriptions of the world, but descriptions that are most appropriate in addressing different types of questions.
² Controversy remains surrounding Descartes’ account of the representational content of non-intellectual thought such as pain.
³ Although Putnam talks about pain not cognition, it is clear that his argument is intended to be general.
The idea that the appropriately programmed computer really is a mind was eloquently suggested by Chalmers (ibid). Central to Chalmers’ non-reductive functionalist theory of mind is the Principle of Organizational Invariance (POI). This asserts that, ‘given any system that has conscious experiences, then any system that has the same fine-grained functional organization will have qualitatively identical experiences’.

To illustrate the point Chalmers imagines a fine-grained simulation of the operation of the human brain—a massively complex and detailed artificial neural network. If, at a very fine-grained level, each group of simulated neurons was functionally identical to its counterpart in the real brain then, via Dancing Qualia and Fading Qualia arguments, Chalmers (ibid) argues that the computational neural network must have precisely the same qualitative conscious experiences as the real human brain.

Current research into perception and neuro-physiology certainly suggests that physically identical brains will instantiate identical phenomenal states and, although as Maudlin [22] observes this thesis is not analytic, something like it underpins computational theories of mind. For if computational functional structure supervenes on physical structure then physically identical brains must be computationally and functionally identical. Thus Maudlin formulates the Supervenience Thesis (ibid): ‘… two physical systems engaged in precisely the same physical activity through a time will support precisely the same modes of consciousness (if any) through that time’.
The Problem of Computation

It is a commonly held view that ‘there is a crucial barrier between computer models of minds and real minds: the barrier of consciousness’ and thus that ‘information-processing’ and ‘phenomenal (conscious) experiences’ are conceptually distinct [23]. But is consciousness a prerequisite for genuine cognition and the realisation of mental states? Certainly Searle believes so, ‘… the study of the mind is the study of consciousness, in much the same sense that biology is the study of life’ [24] and this observation leads him to postulate the Connection Principle whereby ‘… any mental state must be, at least in principle, capable of being brought to conscious awareness’ (ibid). Hence, if computational machines are not capable of enjoying consciousness, they are incapable of carrying genuine mental states and computation fails as an adequate metaphor for cognition.

In the following sections I briefly review two well-known arguments targeting computational accounts of cognition from Penrose and Searle, which together suggest computations are neither necessary nor sufficient for mind. I subsequently outline a simple reductio ad absurdum argument that suggests there may be equally serious problems in granting phenomenal (conscious) experience to systems purely in virtue of their execution of particular programs; if correct, this argument suggests either strong computational accounts of consciousness must fail or that panpsychism is true.
Computations and Understanding: Gödelian Arguments Against Computationalism

Gödel’s first incompleteness theorem states that ‘… any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory F that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.’ The resulting true but unprovable statement G(g) is often referred to as ‘the Gödel sentence’ for the theory (albeit there are infinitely many other statements in the theory that share with the Gödel sentence the property of being true but not provable from the theory).
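Stated compactly (a standard formulation added here for reference, writing F ⊢ φ for ‘F proves φ’ and keeping the text’s G(g) for the Gödel sentence):

```latex
% Gödel I, compact form: for any consistent, effectively axiomatized
% theory F proving basic arithmetic, there is a true-but-unprovable G(g).
\[
  F \text{ consistent and effectively axiomatized}
  \;\Longrightarrow\;
  \exists\, G(g) \,.\, \bigl( \mathbb{N} \models G(g) \bigr)
  \;\wedge\; \bigl( F \nvdash G(g) \bigr)
\]
```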
Arguments based on Gödel’s first incompleteness theorem—initially from Lucas [25, 26], first criticised by Benacerraf [27] and subsequently extended, developed and widely popularised by Penrose [28–31]—typically endeavour to show that for any such formal system F, humans can find the Gödel sentence G(g) whilst the computation/machine (being itself bound by F) cannot. In [29] Penrose develops a subtle reformulation of the vanilla argument that purports to show that ‘the human mathematician can “see” that the Gödel Sentence is true for consistent F even though the consistent F cannot prove G(g)’.

A detailed discussion of Penrose’s formulation of the Gödelian argument is outside the scope of this article (for a critical introduction see [32, 33] and for Penrose’s response see [31]); here it is simply important to note that although Gödelian-style arguments purporting to show ‘computations are not necessary for cognition’ have been extensively⁴ and vociferously critiqued in the literature (see [34] for a review), interest in them—both positive and negative—still regularly continues to surface (e.g. [35, 36]).

⁴ For example, Lucas maintains a web page http://users.ox.ac.uk/~jrlucas/Godel/referenc.html listing more than 50 such criticisms.

The Chinese Room Argument

One of the most widely known critics of computational theories of mind is John Searle. His best-known work on machine understanding, first presented in the 1980 paper ‘Minds, Brains & Programs’ [37], has become known as the Chinese Room Argument (CRA). The central claim of the CRA is that computations alone are not sufficient to give rise to cognitive states, and hence that computational theories of mind cannot fully explain human cognition. More formally Searle stated that the CRA was an attempt to prove the truth of the premise:

Syntax is not sufficient for semantics.

Which, together with the following two axioms:

(i) Programs are formal (syntactical).
(ii) Minds have semantics (mental content).

… led Searle to conclude that:

Programs are not minds.

… and hence that computationalism—the idea that the essence of thinking lies in computational processes and that such processes thereby underlie and explain conscious thinking—is false [38].

In the CRA Searle emphasises the distinction between syntax and semantics to argue that while computers can act in accordance with formal rules, they cannot be said to know the meaning of the symbols they are manipulating, and hence cannot be credited with genuinely understanding the results of the execution of programs those symbols compose. In short, Searle claims that while cognitive computations may simulate aspects of cognition, they can never instantiate it.
The CRA describes a situation where a monoglot Searle is locked in a room and presented with a large batch of papers covered with Chinese writing that he does not understand. Indeed, Searle does not even recognise the writing as Chinese ideograms, as distinct from say Japanese or simply meaningless patterns. A little later Searle is given a second batch of Chinese symbols together with a set of rules (in English) that describe an effective method (algorithm) for correlating the second batch with the first purely by their form or shape. Finally Searle is given a third batch of Chinese symbols together with another set of rules (in English) to enable him to correlate the third batch with the first two, and these rules instruct him how to return certain sets of shapes (Chinese symbols) in response to certain symbols given in the third batch.

Unknown to Searle, the people outside the room call the first batch of Chinese symbols the script; the second batch the story; the third, questions about the story; and the symbols he returns they call answers to the questions about the story. The set of rules he is obeying they call the program. To complicate the matters further, the people outside also give him stories in English and ask him questions about them in English, to which he can reply in English. After a while Searle gets so good at following the instructions and the outsiders get so good at supplying the rules which he has to follow, that the answers he gives to the questions in Chinese symbols become indistinguishable from those a true Chinese man might give.
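The purely formal character of the room’s rule-following can be caricatured in a few lines of code (an illustrative toy, in no way from the paper: the ‘rule book’ is a hypothetical lookup table and the symbols are stand-ins); the program returns well-formed answers whilst nothing in it has, or needs, access to what the symbols mean.

```python
# A caricature of the room: answers are produced purely by matching the
# *shape* of the input symbols against a rule book; no meanings anywhere.
RULE_BOOK = {
    "你好吗？": "我很好。",              # "How are you?" -> "I am very well."
    "天空是什么颜色？": "天空是蓝色的。",  # "What colour is the sky?" -> "The sky is blue."
}

def searle_in_the_room(symbols: str) -> str:
    # The operator correlates batches of symbols "purely by their form
    # or shape"; an unmatched shape just gets a default shape back.
    return RULE_BOOK.get(symbols, "对不起。")   # default: "Sorry."

print(searle_in_the_room("你好吗？"))   # fluent output, zero understanding
```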
From the external point of view the answers to the two sets of questions—one in English the other in Chinese—are equally good; Searle-in-the-Chinese-room has passed the Turing test. Yet in the Chinese case Searle behaves like a computer and does not understand either the questions he is given or the answers he returns, whereas in the English case he does. To highlight the difference consider Searle is passed a joke first in Chinese and then English. In the former case Searle-in-the-room might correctly output appropriate Chinese ideograms signifying ‘ha ha’ whilst remaining phenomenologically unmoved, whilst in the latter, if the joke is funny, he may laugh out loud and feel the joke within.

The decades since its inception have witnessed many reactions to the CRA from the computational, cognitive science, philosophical and psychological communities, with perhaps the most widely held being based on what has become known as the ‘Systems Reply’. This concedes that, although the person in the room does not understand Chinese, the entire system (of the person, the room and its contents) does.

Searle finds this response entirely unsatisfactory and responds by allowing the person in the room to memorise everything (the rules, the batches of paper, etc.) so that there is nothing in the system not internalised within Searle. Now in response to the questions in Chinese and English there are two subsystems—the native English speaking Searle and the internalised Searle-in-the-Chinese-room—but all the same he [Searle] continues to understand nothing of Chinese, and a fortiori neither does the system, because there is nothing in the system that is not just a part of him.

But others are left equally unmoved by Searle’s response; for example in [39] Haugeland asks why should we unquestioningly accept Searle’s conclusion that ‘the internalised Chinese room system does not understand Chinese’, given that Searle’s responses to the questions in Chinese are all correct? Yet, despite this and other trenchant criticism, almost 30 years after its first publication there continues to be lively interest in the CRA (e.g. [40–47]). In a 2002 volume of analysis [48] comment ranged from Selmer Bringsjord who observed the CRA to be ‘arguably the 20th century’s greatest philosophical polariser’ [49], to Rey who claims that in his definition of Strong AI Searle ‘burdens the [Computational

Representational Theory of Thought (Strong AI)] project with extraneous claims which any serious defender of it should reject’ [50]. Nevertheless, although opinion on the argument remains divided, most commentators now agree that the CRA helped shift research in artificial intelligence away from classical computationalism (which, pace Newell and Simon [51], viewed intelligence fundamentally in terms of symbol manipulation) first to a sub-symbolic neural-connectionism and more recently, moving even further away from symbols and representations, towards embodied and enactive approaches to cognition. Clearly, whatever the verdict on the soundness of Searle’s Chinese room argument, the subsequent historical response offers eloquent testament to his conclusion that ‘programs are not minds’.
Dancing with Pixies

The core argument I wish to present in this article targeting computational accounts of cognition—the Dancing with Pixies (DwP) reductio—derives from ideas originally outlined by Putnam [52], Maudlin [22], Searle [53] and subsequently criticised by Chalmers [10], Klein [54] and Chrisley [55, 56] amongst others⁵. In what follows, instead of seeking to justify Putnam’s claim that ‘every open system implements every finite state automaton’ (FSA) and hence that ‘psychological states of the brain cannot be functional states of a computer’, I will seek to establish the weaker result that, over a finite time window, every open physical system implements the trace of a FSA Q on fixed, specified input (I). That this result leads to panpsychism is clear as, equating FSA Q(I) to a specific computational system that is claimed to instantiate phenomenal states as it executes, and following Putnam’s procedure, identical computational (and ex hypothesi phenomenal) states can be found in every open physical system.

Formally DwP is a simple reductio ad absurdum argument that endeavours to demonstrate that:

IF the assumed claim is true: that an appropriately programmed computer really does instantiate genuine phenomenal states
THEN panpsychism holds.
However, against the backdrop of our immense scientific knowledge of the closed physical world, and the corresponding widespread desire to explain everything ultimately in physical terms, panpsychism has come to seem an implausible view…
HENCE we should reject the assumed claim.
The route-map for this endeavour is as follows: in the next section I introduce discrete state machines (DSMs) and FSAs and show how, with input to them defined, their behaviour can be described by a simple un-branching sequence of state transitions. I subsequently review Putnam’s 1988 argument [52] that purports to show how every open physical system implements every input-less FSA. Then I apply Putnam’s construction to one execution trace of any FSA with known input, such that if the FSA instantiates genuine phenomenal states as it executes, then so must any open physical system. Finally I apply the procedure to a robotic system that is claimed to instantiate machine consciousness purely in virtue of its execution of an appropriate program. The article is completed by a brief discussion of some objections to the DwP reductio and concludes by suggesting, at least with respect to ‘hard’ problems, that it may be necessary to develop an alternative metaphor for cognition to that of computation.
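The flavour of the Putnam-style construction invoked in this route-map can be given in a few lines (an illustrative sketch under simplifying assumptions, not the paper’s formal construction): take any open physical system passing through distinct states over the finite time window, and define the implementation mapping so that its i-th physical state just is the i-th state of the target FSA trace.

```python
# Sketch: any sequence of distinct physical states can be mapped onto the
# un-branching execution trace of an FSA on fixed input, just by definition.

# The target trace: states entered by some FSA Q on fixed input I.
fsa_trace = ["Q1", "Q2", "Q3", "Q2", "Q1"]

# Distinct states of an arbitrary open physical system over the same window
# (here just labelled snapshots; a rock's thermal micro-states would do).
physical_states = ["p0", "p1", "p2", "p3", "p4"]

# The observer-supplied mapping: i-th physical state -> i-th trace state.
implementation = dict(zip(physical_states, fsa_trace))

# Under this mapping the physical system "implements" the trace of Q on I:
replayed = [implementation[p] for p in physical_states]
assert replayed == fsa_trace
print(replayed)
```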
Discrete State Machines
In his 1950 paper ‘Computing Machinery and Intelligence’
[57] Turing defined DSMs as ‘machines that move in
sudden jumps or clicks from one quite definite state to
another’ and explained that modern digital computers fall
within the class of them. An example DSM from Turing is
a simple machine that cycles through three computational
states Q
1
, Q
2
, Q
3
at discrete clock clicks. Turing demon-
strated that such a device, which continually jumps through
a linear series of state transitions like clockwork may be
implemented by a simple discrete-position-wheel that
revolves through 120 intervals at each clock tick. Basic
input can be added to such a machine by the addition of a
simple brake mechanism and basic output by the addition
of a light that comes on when the machine is in, say,
computational state Q
3
(see Fig. 1).
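A minimal executable rendering of Turing’s three-state wheel machine may help (illustrative assumptions: the brake, when applied, holds the wheel in place for that tick, and the lamp encodes the output; the state names follow the text).

```python
# Turing's three-state DSM as a wheel: positions advance cyclically each
# clock tick unless the brake (the machine's only input) is applied.
STATES = ["Q1", "Q2", "Q3"]

def tick(state: str, brake: bool) -> str:
    if brake:                       # brake on: the wheel stays put
        return state
    i = STATES.index(state)
    return STATES[(i + 1) % 3]      # otherwise revolve 120 degrees

def light(state: str) -> bool:
    return state == "Q3"            # output lamp: on only in Q3

state = "Q1"
for brake in [False, False, True, False]:   # a fixed input stream
    state = tick(state, brake)
    print(state, "lamp on" if light(state) else "lamp off")
```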
An input-less FSA is specified by a set of states Q and a set of state-transitions Q → Q′ for each current state Q specifying the next state Q′. Such a device is trivially implemented by Turing’s discrete-position-wheel machine and a function that maps each physical wheel position Wn to a logical computational state Qn as required. For example, considering the simple 3-state input-less FSA described in Table 1, by labelling the three discrete positions of the wheel W1, W2, W3 we can map computational states of the FSA, Q1, Q2, Q3, to the physical discrete positions of the wheel, W1, W2, W3, such that, for example, (W1 → Q1, W2 → Q2, W3 → Q3).
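The observer-relativity of this position-to-state mapping, discussed in the paragraph that follows, can be shown directly in code (an illustrative sketch; the wheel history and the alternative labelling are assumptions): relabelling exactly the same physical positions makes one and the same physical history count as a different computational state sequence.

```python
# One and the same physical history of the wheel ...
wheel_history = ["W1", "W2", "W3", "W1", "W2"]

# ... under two different observer-relative labellings of its positions:
mapping_A = {"W1": "Q1", "W2": "Q2", "W3": "Q3"}
mapping_B = {"W1": "Q2", "W2": "Q3", "W3": "Q1"}   # rotated assignment

trace_A = [mapping_A[w] for w in wheel_history]
trace_B = [mapping_B[w] for w in wheel_history]

print(trace_A)   # ['Q1', 'Q2', 'Q3', 'Q1', 'Q2']
print(trace_B)   # ['Q2', 'Q3', 'Q1', 'Q2', 'Q3'] -- same physics, new computation
```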
This mapping is observer relative; the physical position W1 of the wheel could equally map to computational states Q2 or Q3 and, with other states appropriately assigned, the machine’s state transition sequence (and hence its

⁵ For early discussion of these themes see ‘Minds and Machines’, 4: 4, ‘What is Computation?’, November 1994.


References

Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117:500–544.
Hubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J Physiol 160:106–154.
Rosenblatt F (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev 65:386–408.
Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460.