
Showing papers in "Minds and Machines in 2000"


Journal ArticleDOI
TL;DR: It is concluded that the Turing Test has been, and will continue to be, an influential and controversial topic.
Abstract: The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the `other minds' problem, and similar topics in philosophy of mind are discussed. We also cover the sociological and psychological aspects of the Turing Test. Finally, we look at the current situation and analyze programs that have been developed with the aim of passing the Turing Test. We conclude that the Turing Test has been, and will continue to be, an influential and controversial topic.

345 citations


Journal ArticleDOI
TL;DR: Recently unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test.
Abstract: Turing's test has been much misunderstood. Recently unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal.

166 citations


Journal ArticleDOI
TL;DR: The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence.
Abstract: On a literal reading of `Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question `Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as `the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) `Turing Test' has been dismissed.

45 citations


Journal ArticleDOI
TL;DR: It is argued that both simple heuristics and complex decision machines are required for effective decision making in real time for complex problems.
Abstract: Information is a force multiplier. Knowledge of the enemy's capability and intentions may be of far more value to a military force than additional troops or firepower. Situation assessment is the ongoing process of inferring relevant information about the forces of concern in a military situation. Relevant information can include force types, firepower, location, and past, present and future course of action. Situation assessment involves the incorporation of uncertain evidence from diverse sources. These include photographs, radar scans, and other forms of image intelligence, or IMINT; electronics intelligence, or ELINT, derived from characteristics (e.g., wavelength) of emissions generated by enemy equipment; communications intelligence, or COMINT, derived from the characteristics of messages sent by the enemy; and reports from human informants (HUMINT). These sources must be combined to form a model of the situation. The sheer volume of data, the ubiquity of uncertainty, the number and complexity of hypotheses to consider, the high-stakes environment, the compressed time frame, and deception and damage from hostile forces, combine to present a staggeringly complex problem. Even if one could formulate a decision problem in reasonable time, explicit determination of an optimal decision policy exceeds any reasonable computational resources. While it is tempting to drop any attempt at rational analysis and rely purely on simple heuristics, we argue that this can lead to catastrophic outcomes. We present an architecture for a ``complex decision machine'' that performs rational deliberation to make decisions in real time. We argue that resource limits require such an architecture to be grounded in simple heuristic reactive processes. We thus argue that both simple heuristics and complex decision machines are required for effective decision making in real time for complex problems. 
We describe an implementation of our architecture applied to the problem of military situation assessment.

31 citations
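The architecture described above grounds rational deliberation in simple heuristic fallbacks under real-time pressure. As a rough illustration of that idea only (this is not the authors' implementation; the threat data, scoring rule, and time budget below are invented for the example), a bounded decision loop might look like:

```python
import time

def heuristic_policy(state):
    # Fast reactive rule: engage the nearest threat (a stand-in heuristic).
    return min(state["threats"], key=lambda t: t["distance"])

def deliberate(state, deadline):
    # Rational deliberation: score every threat, but abandon the
    # computation and signal failure if the deadline passes.
    best, best_score = None, float("-inf")
    for t in state["threats"]:
        if time.monotonic() > deadline:
            return None  # out of time
        score = t["firepower"] / (1.0 + t["distance"])  # invented scoring rule
        if score > best_score:
            best, best_score = t, score
    return best

def decide(state, budget_s=0.01):
    deadline = time.monotonic() + budget_s
    choice = deliberate(state, deadline)
    # Fall back to the simple heuristic when deliberation
    # cannot finish within the real-time budget.
    return choice if choice is not None else heuristic_policy(state)

state = {"threats": [{"distance": 5.0, "firepower": 2.0},
                     {"distance": 1.0, "firepower": 0.5}]}
print(decide(state))
```

With a generous budget the deliberative scorer picks the high-firepower distant threat; with an exhausted budget the loop degrades gracefully to the nearest-threat heuristic rather than stalling, which is the complementarity the paper argues for.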


Journal ArticleDOI
TL;DR: A study of Turing's rules for the test in the context of his advocated purpose and his other texts finds that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading in Turing's work faces severe interpretative difficulties.
Abstract: In the 1950s, Alan Turing proposed his influential test for machine intelligence, which involved a teletyped dialogue between a human player, a machine, and an interrogator. Two readings of Turing's rules for the test have been given. According to the standard reading of Turing's words, the goal of the interrogator was to discover which was the human being and which was the machine, while the goal of the machine was to be indistinguishable from a human being. According to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the interrogator – unaware of the real purpose of the test – was attempting to determine which of the two contestants was the woman and which was the man. The present work offers a study of Turing's rules for the test in the context of his advocated purpose and his other texts. The conclusion is that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading in Turing's work faces severe interpretative difficulties. So, the controversy over Turing's rules should be settled in favor of the standard reading.

27 citations


Journal ArticleDOI
Saul Traiger
TL;DR: The test Turing proposed for machine intelligence is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence."
Abstract: The test Turing proposed for machine intelligence is usually understood to be a test of whether a computer can fool a human into thinking that the computer is a human. This standard interpretation is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence."

22 citations


Journal ArticleDOI
TL;DR: This paper examines whether a classical model could be translated into a PDP network using a standard connectionist training technique called extra output learning, representing a precise translation of the classical theory to the connectionist model.
Abstract: This paper examines whether a classical model could be translated into a PDP network using a standard connectionist training technique called extra output learning. In Study 1, standard machine learning techniques were used to create a decision tree that could be used to classify 8124 different mushrooms as being edible or poisonous on the basis of 21 different features (Schlimmer, 1987). In Study 2, extra output learning was used to insert this decision tree into a PDP network being trained on the identical problem. An interpretation of the trained network revealed a perfect mapping from its internal structure to the decision tree, representing a precise translation of the classical theory to the connectionist model. In Study 3, a second network was trained on the mushroom problem without using extra output learning. An interpretation of this second network revealed a different algorithm for solving the mushroom problem, demonstrating that the Study 2 network was indeed a proper theory translation.

20 citations
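The decision-tree stage of Study 1 can be made concrete with a toy sketch. This is not the authors' setup: the real study used 8124 mushrooms and 21 features, while the miniature ID3-style learner below uses a handful of invented examples and two features, purely to show how an information-gain tree separates edible from poisonous cases.

```python
from collections import Counter
import math

# Invented stand-in for the mushroom dataset (Schlimmer, 1987):
# each row is (feature dict, label).
DATA = [
    ({"odor": "almond",  "cap": "bell"}, "edible"),
    ({"odor": "none",    "cap": "flat"}, "edible"),
    ({"odor": "foul",    "cap": "bell"}, "poisonous"),
    ({"odor": "foul",    "cap": "flat"}, "poisonous"),
    ({"odor": "none",    "cap": "bell"}, "edible"),
    ({"odor": "pungent", "cap": "flat"}, "poisonous"),
]

def entropy(rows):
    counts = Counter(label for _, label in rows)
    total = len(rows)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def best_feature(rows, features):
    # Choose the feature whose split yields the largest information gain.
    def gain(f):
        values = {x[f] for x, _ in rows}
        remainder = sum(
            entropy([r for r in rows if r[0][f] == v]) *
            sum(1 for r in rows if r[0][f] == v) / len(rows)
            for v in values)
        return entropy(rows) - remainder
    return max(features, key=gain)

def build_tree(rows, features):
    labels = {label for _, label in rows}
    if len(labels) == 1 or not features:
        return Counter(l for _, l in rows).most_common(1)[0][0]  # leaf
    f = best_feature(rows, features)
    rest = [g for g in features if g != f]
    return (f, {v: build_tree([r for r in rows if r[0][f] == v], rest)
                for v in {x[f] for x, _ in rows}})

def classify(tree, x):
    while isinstance(tree, tuple):
        f, branches = tree
        tree = branches[x[f]]
    return tree

tree = build_tree(DATA, ["odor", "cap"])
print(classify(tree, {"odor": "foul", "cap": "bell"}))  # poisonous
```

On this toy data the tree splits on odor alone (it is perfectly predictive here, giving maximal information gain), which mirrors the kind of compact symbolic structure that Study 2 then inserted into a PDP network via extra output learning.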


Journal ArticleDOI
TL;DR: It is shown that the nonmonotonicity of common sense reasoning is a function of the way the authors use logic, not of the logic they use, and four formal proofs are given that there can be no nonmonotonic consequence relation that is characterized by universal constraints on rational belief structures.
Abstract: Conclusions reached using common sense reasoning from a set of premises are often subsequently revised when additional premises are added. Because we do not always accept previous conclusions in light of subsequent information, common sense reasoning is said to be nonmonotonic. But in the standard formal systems usually studied by logicians, if a conclusion follows from a set of premises, that same conclusion still follows no matter how the premise set is augmented; that is, the consequence relations of standard logics are monotonic. Much recent research in AI has been devoted to the attempt to develop nonmonotonic logics. After some motivational material, we give four formal proofs that there can be no nonmonotonic consequence relation that is characterized by universal constraints on rational belief structures. In other words, a nonmonotonic consequence relation that corresponds to universal principles of rational belief is impossible. We show that the nonmonotonicity of common sense reasoning is a function of the way we use logic, not a function of the logic we use. We give several examples of how nonmonotonic reasoning systems may be based on monotonic logics.

18 citations
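The paper's closing point, that nonmonotonic reasoning systems may be based on monotonic logics, can be illustrated with a minimal sketch. The Tweety example is a standard textbook case, not drawn from this paper; the point is that the fixed rules below never change, yet the *procedure* withdraws a default conclusion when the premise set grows.

```python
def consequences(premises):
    # A fixed, monotonic rule set used nonmonotonically: the procedure
    # retracts a default conclusion when new premises defeat it, even
    # though the underlying inference rules themselves never change.
    concl = set(premises)
    if "bird(tweety)" in concl and "penguin(tweety)" not in concl:
        concl.add("flies(tweety)")   # default: birds normally fly
    if "penguin(tweety)" in concl:
        concl.add("~flies(tweety)")  # penguins are flightless birds
    return concl

before = consequences({"bird(tweety)"})
after  = consequences({"bird(tweety)", "penguin(tweety)"})
print("flies(tweety)" in before)  # True
print("flies(tweety)" in after)   # False: the conclusion is withdrawn
```

Adding the premise `penguin(tweety)` removes a previously drawn conclusion, so the consequence behavior is nonmonotonic; the nonmonotonicity lives in how the logic is used, not in the logic itself.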


Journal ArticleDOI
TL;DR: A network analysis procedure and the results obtained using it provide the basis for an insight into the nature of subsymbols, which is surprising.
Abstract: In 1988, Smolensky proposed that connectionist processing systems should be understood as operating at what he termed the `subsymbolic' level. Subsymbolic systems should be understood by comparing them to symbolic systems, in Smolensky's view. Up until recently, there have been real problems with analyzing and interpreting the operation of connectionist systems which have undergone training. However, recently published work on a network trained on a set of logic problems originally studied by Bechtel and Abrahamsen (1991) seems to offer the potential to provide a detailed, empirically based answer to questions about the nature of subsymbols. In this paper, a network analysis procedure and the results obtained using it are discussed. This provides the basis for an insight into the nature of subsymbols, which is surprising.

18 citations


Journal ArticleDOI
TL;DR: Feminist epistemology can be used to approach this from new directions, in particular, to show how women's knowledge may be left out of consideration by AI's focus on masculine subjects.
Abstract: This paper argues that AI follows classical versions of epistemology in assuming that the identity of the knowing subject is not important. In other words this serves to `delete the subject'. This disguises an implicit hierarchy of knowers involved in the representation of knowledge in AI which privileges the perspective of those who design and build the systems over alternative perspectives. The privileged position reflects Western, professional masculinity. Alternative perspectives, denied a voice, belong to less powerful groups including women. Feminist epistemology can be used to approach this from new directions, in particular, to show how women's knowledge may be left out of consideration by AI's focus on masculine subjects. The paper uncovers the tacitly assumed Western professional male subjects in two flagship AI systems, Cyc and Soar.

11 citations


Journal ArticleDOI
Jim Swan
TL;DR: Keith Devlin – a mathematical logician – sets up Descartes as the bad boy of formal logic and seeks to usher in a new logic, adequate to the full range of natural language, via situation theory and a yet-to-be-worked-out “soft” mathematics.
Abstract: The end of logic? A new cosmology of mind? A lot of science writing these days wants to conjure up a new scientific paradigm by declaring the current one dead. One sure way to make the point is to announce the irrelevance of Descartes, taken as the figurehead of modern scientific rationalism. Antonio Damasio, in Descartes’ Error (1994), sets out to describe the neurological basis for his claim that emotion and reason are fundamentally linked, that we cannot be rational without emotion. Daniel Dennett, arguing for a materialist monism in Consciousness Explained (1991), bulldozes the Cartesian Theater, the improbable model of an infinite regress of minds within minds that is the consequence of Cartesian dualism. Now, Keith Devlin – a mathematical logician – sets up Descartes as the bad boy of formal logic and seeks to usher in a new logic, adequate to the full range of natural language, via situation theory and a yet-to-be-worked-out “soft” mathematics. In 1985, in The Mind’s New Science, Howard Gardner remarks that “only very recently have cognitive scientists begun to wonder whether they can, in fact, afford to treat all information equivalently and to ignore issues of content” (Gardner 1985, p. 22). For Gardner, treating information “equivalently” means treating sentences as all either “true” or “false”, no matter what they might actually say. It means analyzing sentences in the manner of mathematical logic, as a description of formal relations among symbols that are themselves without meaning (Gardner’s “content”). Two years earlier, in 1983, Jon Barwise recounts how it was a reading of Walker Percy’s essays on language (Percy 1975) that brought him to question why linguists as well as logicians like himself had neglected meaning (Barwise and Perry 1983, pp. xiv–xv).
Even earlier, in 1981, in “Scenes and Other Situations”, Barwise describes his transition from a theory that focuses on analyzing the truth conditions of a sentence to a theory “in which sentences describe types of situations in the world, with truth an important but derivative notion” (Barwise 1981 [1988a, p. 8]). For such a theory, the inference that Socrates is mortal (according to the old chestnut):

Journal ArticleDOI
TL;DR: Through the course of the book, one reads that a universal or mental grammar is not limited to linguistics and is justified by, because it explains, other versions of mental grammar, such as those for American Sign Language, music, vision, and concepts at large.
Abstract: “Why are we the way we are?” That huge question, simply expressed, opens this book. To begin to explain the way we are, in the first chapter (“Finding Our Way into the Problem: The Nature/Nurture Issue”), Jackendoff advances and defends Chomsky’s two “Fundamental Arguments” as “pathbreaking” “parameters”: “Mental Grammar” and “Innate Knowledge” (p. 6). The first demands a set of unconscious principles, while the second demands a “genetically determined specialization for language” (p. 6). In addition, Jackendoff proffers his own “Fundamental Argument”: “The Construction of Experience” (p. 7). We experience the world, he says, by means of these “unconscious principles that operate in the brain” (p. 7). And yet that summary does not reveal the author’s most ambitious argument. Through the course of the book, one reads that a universal or mental grammar is not limited to linguistics. A universal innate grammar is justified by, because it explains, other versions of mental grammar, such as those for American Sign Language, music, vision, and concepts at large. What kind of universal grammar is it that governs these patterns in the mind? In extrapolating from Chomsky’s argument to other kinds of mental patterns, could Jackendoff have diluted the logic of Chomsky’s thesis about language acquisition? What is the logic for universal grammar if it serves to unify grammars for hand languages, music, vision, and concepts? In Chapter 2 (“The Argument for Mental Grammar”), Jackendoff writes for his general audience without qualification: “The notion of mental grammar stored in the brain of a language user is the central concept of modern linguistics” (p. 15). What is it that is stored? The clear answer is pattern. He says that linguists not only think of the rules for patterning words “but also the patterns of sentences possible in our language. These patterns, in turn, describe not just patterns of words but also patterns of patterns” (p. 14), which are the rules of language that make up “mental grammar”. He opens his discussion of communication with light and sound patterns, diagramming the profiles of two talking heads. In the diagram, “a pattern of light reflected off a tree strikes the eyes” of Harry. Harry might wish to say something about the tree to Sam: “Then Harry’s nervous system causes his


Journal ArticleDOI
TL;DR: Species of Mind marks their first book project together, and it is, to my knowledge, the first and only book-length effort to expound the fundamental tenets of cognitive ethology.
Abstract: The new field of cognitive ethology owes much to the collaborative efforts of Colin Allen (a philosopher) and Marc Bekoff (a cognitive ethologist). In the past several years, this dynamic duo has published a series of articles defending, clarifying, and extending the cognitive ethological program (see, e.g., Bekoff and Allen, 1992; Allen and Bekoff, 1994, 1995). Species of Mind marks their first book project together, and it is, to my knowledge, the first and only book-length effort to expound the fundamental tenets of cognitive ethology. For this reason alone, it is a book that belongs on the shelves of anyone interested in questions about nonhuman animal minds, but, happily, there are other reasons to recommend it. The book consists of a preface and nine chapters. In these chapters, Allen and Bekoff describe the basic goals of cognitive ethology, trace its roots from Darwin through Konrad Lorenz, Nikolaas Tinbergen, and Donald Griffin, defend it against charges that it is a hopelessly imprecise and “soft” science, respond to specific philosophical worries about representational content, and consider how and whether cognitive ethologists should study nonhuman animal (henceforth, ‘animal’) consciousness. The most challenging and interesting chapters are Chapter 5, “From Folk Psychology to Cognitive Ethology”; Chapter 6, “Intentionality, Social Play, and Communication”; and Chapter 7, “Antipredatory Behavior: From Vigilance to Recognition to Action”. Chapter 9, “Toward an Interdisciplinary Science of Cognitive Ethology: Synthesizing Field, Laboratory, and Armchair Approaches”, is also notable, but its detailed response to Cecilia Heyes and Anthony Dickinson’s (1990) skepticism concerning intentional explanation in the context of several studies of food-approaching behavior in chicks and rats seems out of step with the measured generality of the rest of the book.
Though every chapter in the book stands fairly well on its own, one might fail to detect a couple of very sensible themes unless one reads several chapters. One of these themes concerns pluralism. The authors announce in the Preface that they “favor pluralism in all areas” (p. xii), and they’re not kidding. They are pluralists about how to describe behavior (pp. xvi, 48), about whether investigations of animal minds should take place in the laboratory or the field (pp. 12,13), about what it means to be a naturalist about the mind (p. 10), and about how to understand intentionality (pp. 14, 168). While some might see all this pluralism as waffling, it seems appropriate given cognitive ethology’s fledgling status. It’s too early in the game to overlook any potential route to a clearer understanding of animal minds.

Journal ArticleDOI
TL;DR: It is claimed that the proposed `three-world ontology' offers the most appropriate conceptual framework in which the basic problems concerned with cognition and computation can be suitably expressed and discussed, although the solutions of some of these problems seem to lie beyond the horizon of the authors' current understanding.
Abstract: Discussions about the achievements and limitations of the various approaches to the development of intelligent systems can have an essential impact on empirically based research, and with that also on the future development of computer technologies. However, such discussions are often based on vague concepts and assumptions. In this context, we claim that the proposed `three-world ontology' offers the most appropriate conceptual framework in which the basic problems concerned with cognition and computation can be suitably expressed and discussed, although the solutions of some of these problems seem to lie beyond the horizon of our current understanding. We stress the necessity to differentiate between authentic and functional cognitive abilities; although computation is not a plausible way towards authentic intelligence, we claim that computational systems do offer virtually unlimited possibilities to replicate and surpass human cognitive abilities on the functional level.

Journal ArticleDOI
TL;DR: A “proto-account” of causation for networks is developed, based on an account of Andy Clark's, that shows even superpositionality does not undermine information-based explanation.
Abstract: In this paper I defend the propriety of explaining the behavior of distributed connectionist networks by appeal to selected data stored therein. In particular, I argue that if there is a problem with such explanations, it is a consequence of the fact that information storage in networks is superpositional, and not because it is distributed. I then develop a ``proto-account'' of causation for networks, based on an account of Andy Clark's, that shows even superpositionality does not undermine information-based explanation. Finally, I argue that the resulting explanations are genuinely informative and not vacuous.

Journal ArticleDOI
TL;DR: A response to James H. Bunn’s commentary on my book Patterns in the Mind, which addresses common misunderstandings of the very goals of generative linguistics that the book was intended to explicate.
Abstract: I am grateful to the review editor of Minds and Machines for this opportunity to respond to James H. Bunn’s commentary (Bunn, this issue) on my book Patterns in the Mind (PiM, Jackendoff, 1994). In the space available here, I will attempt to rectify only some of the most egregious misunderstandings. These, however, are common misunderstandings of the very goals of generative linguistics that the book was intended to explicate. Judging from the comments of Bunn and others on my book and on Pinker (1994), evidently neither Pinker nor I has yet done an adequate job. So it is not obvious that this further commentary will be of much help. Nevertheless:

Journal ArticleDOI
TL;DR: A cognitive theory of graphical and linguistic reasoning: logic and implementation and a case study of analogy at work, in R. Cummins and J. Pollock (eds.) Philosophy and AI: Essays at the interface.
Abstract: Brooks, R. (1987), ‘Planning is just a way of avoiding figuring out what to do next’, MIT Artificial Intelligence Laboratory Working Paper 103. Burge, T. (1986), ‘Individualism and psychology’, Philosophical Review 95, pp. 3–45. Davies, M. (1992), ‘Perceptual content and local supervenience’, Proceedings of the Aristotelian Society 92, pp. 21–45. Dinsmore, J. (1991), Partitioned representations, Dordrecht: Kluwer Academic Publishers. Fauconnier, G. (1985), Mental spaces, Cambridge, MA: MIT Press. Lakoff, G. (1987), Women, fire and dangerous things, Chicago: Univ. of Chicago Press. Gärdenfors, P. (1996), ‘Mental representations, conceptual spaces and metaphors’, Synthese 106, pp. 21–47. Martins, J.P. and Shapiro, S.C. (1988), ‘A model for belief revision’, Artificial Intelligence 35, pp. 25–79. McGinn, C. (1989), Mental Content, Oxford: Basil Blackwell. Mozer, M. and Smolensky, P. (1989), ‘Using relevance to reduce network size automatically’, Connection Science 1, pp. 3–17. Olson, K. (1987), An essay on facts, Stanford: CSLI/Univ. of Chicago Press. Putnam, H. (1975), Mind, language and reality: Philosophical papers, Vol. 2, Cambridge: Cambridge Univ. Press. Putnam, H. (1981), Reason, truth and history, Cambridge: Cambridge University Press. Pylyshyn, Z.W. (1973), ‘What the mind’s eye tells the mind’s brain’, Psychological Bulletin 80, pp. 1–24. Shapiro, S.C. and Rapaport, W.J. (1987), ‘SnePS considered as a fully intensional propositional semantic network’, in N. Cercone and G. McCalla (eds.), The knowledge frontier: Essays in the representation of knowledge, New York: Springer-Verlag, pp. 262–315. Shapiro, S.C. and Rapaport, W.J. (1991), ‘Models and minds: Knowledge representation for natural-language competence’, in R. Cummins and J. Pollock (eds.), Philosophy and AI: Essays at the interface, Cambridge: MIT Press, pp. 215–259. Stenning, K. and Oberlander, J. (1994), ‘Spatial inclusion as an analogy for set membership: a case study of analogy at work’, in K. Holyoak and J. Barnden (eds.), Analogical Connections, Hillsdale, NJ: Erlbaum, pp. 446–486. Stenning, K. and Oberlander, J. (1995), ‘A cognitive theory of graphical and linguistic reasoning: logic and implementation’, Cognitive Science 19, pp. 97–140.


Journal ArticleDOI
TL;DR: The study addresses the cyclically temporal aspect of sequence recognition, storage and recall using the Recurrent Oscillatory Self-Organizing Map (ROSOM), first introduced by Kaipainen, Papadopoulos and Karhu (1997).
Abstract: The study addresses the cyclically temporal aspect of sequence recognition, storage and recall using the Recurrent Oscillatory Self-Organizing Map (ROSOM), first introduced by Kaipainen, Papadopoulos and Karhu (1997). The network's distinctive feature is that oscillatory states are assigned to network units, corresponding to their `readiness-to-fire'. The ROSOM is a categorizer, a temporal sequence storage system and a periodicity detector designed for use in an ambiguous cyclically repetitive environment. As its external input, the model accepts a multidimensional stream of environment-describing feature configurations with implicit periodicities. The output of the model is one or a few closed cycles abstracted from such a stream, mapped as trajectories on a two-dimensional sheet with an organization reminiscent of multi-dimensional scaling. The model's capabilities are explored with a variety of workbench data.
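For readers unfamiliar with the base architecture, a plain Kohonen self-organizing map can be sketched as follows. This is not the ROSOM itself: the recurrence and the oscillatory `readiness-to-fire' states that make the ROSOM distinctive are omitted, and the grid size, learning rate, neighborhood radius, and toy data are all invented for the example.

```python
import math, random

random.seed(0)

GRID, DIM = 4, 3  # invented: 4x4 map of units, 3-dimensional inputs
weights = [[[random.random() for _ in range(DIM)]
            for _ in range(GRID)] for _ in range(GRID)]

def bmu(x):
    # Best-matching unit: the grid cell whose weight vector is nearest x.
    return min(((i, j) for i in range(GRID) for j in range(GRID)),
               key=lambda ij: sum((weights[ij[0]][ij[1]][d] - x[d]) ** 2
                                  for d in range(DIM)))

def train(data, epochs=20, lr=0.3, radius=1.5):
    # Classic SOM update: pull the winner and its grid neighbors
    # toward each input, weighted by a Gaussian neighborhood.
    for _ in range(epochs):
        for x in data:
            bi, bj = bmu(x)
            for i in range(GRID):
                for j in range(GRID):
                    dist2 = (i - bi) ** 2 + (j - bj) ** 2
                    h = math.exp(-dist2 / (2 * radius ** 2))
                    for d in range(DIM):
                        weights[i][j][d] += lr * h * (x[d] - weights[i][j][d])

data = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
train(data)
print([bmu(x) for x in data])  # map coordinates of each input's winner
```

Inputs are mapped to coordinates on the two-dimensional sheet, which is the substrate on which the ROSOM then traces the closed cyclic trajectories described in the abstract.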

Journal ArticleDOI
TL;DR: REPSCAI cannot succeed because inner content is not sufficient for cognition, even when the representations that carry the content play a role in generating appropriate behaviour.
Abstract: The confusion between cognitive states and the content of cognitive states that gives rise to psychologism also gives rise to reverse psychologism. Weak reverse psychologism says that we can study cognitive states by studying content – for instance, that we can study the mind by studying linguistics or logic. This attitude is endemic in cognitive science and linguistic theory. Strong reverse psychologism says that we can generate cognitive states by giving computers representations that express the content of cognitive states and that play a role in causing appropriate behaviour. This gives us strong representational, classical AI (REPSCAI), and I argue that it cannot succeed. This is not, as Searle claims in his Chinese Room Argument, because syntactic manipulation cannot generate content. Syntactic manipulation can generate content, and this is abundantly clear in the Chinese Room scenario. REPSCAI cannot succeed because inner content is not sufficient for cognition, even when the representations that carry the content play a role in generating appropriate behaviour.

Journal ArticleDOI
TL;DR: The correlation supplied by this neurological program presupposes the dualism that Trefil previously claims has been refuted by current neurological research.
Abstract: The correlation supplied by this neurological program is presupposed by the dualism that he previously claims has been refuted by current neurological research. Finally, in Chapter 13, “Consciousness and Complexity”, we are treated to the “ultimate solution” to the problem of consciousness. And, as the title suggests, the proffered answer is in terms of complex systems. The remainder of the book is then a protracted bout of handwaving over complex systems and emergent properties. We are told that consciousness is just an emergent property of the brain. Aside from the fact that we’re given no details about how this is supposed to work, Trefil fails to consider the obvious rejoinder that the problem of consciousness can simply be restated in terms of emergent properties rather than base-level neurophysiology. We now have a (supposed) correlation between emergent properties of the brain and conscious experiences. But emergent properties of physical systems are still just objective, nonqualitative properties of physical systems, and, as such, there is the same kind of gap separating them from the associated subjective, qualitative phenomena. The essential problem simply recurs at a higher level of physical organization (see Schweizer, 1994 for related discussion). And what does all this have to do with human uniqueness? If consciousness is what makes us unique (rather than behavioral outputs, it would seem) and consciousness is just an emergent property of a complex system, then can’t this property be computationally simulated or artificially reproduced? Trefil’s rather inconclusive answer is probably not. And why not? – because it’s too complex.


Journal Article
TL;DR: Peterson, Philip L. (1996), Review of Richard 1990, Minds and Machines 6, pp. 249–253; Quine, Willard van Orman (1960), Word and Object, Cambridge, MA: MIT Press.
Abstract: Peterson, Philip L. (1996), Review of Richard 1990, Minds and Machines 6, pp. 249–253. Peterson, Philip L. (1997), Fact Proposition Event, Dordrecht, The Netherlands: Kluwer Academic Publishers. Quine, Willard van Orman (1960), Word and Object, Cambridge, MA: MIT Press. Richard, Mark (1990), Propositional Attitudes: An Essay on Thoughts and How We Ascribe Them, Cambridge, U.K.: Cambridge University Press.


Journal ArticleDOI
TL;DR: The framework Mulhauser adopts and defends features Gregory Chaitin’s (1987) version of Algorithmic Information Theory, which Mulhauser uses to ground a notion of representation and to motivate a way of measuring functional complexity.
Abstract: How do the purely physical goings-on of the human brain generate or underlie consciousness? This question poses what philosopher William Seager (1999) has called ‘the generation problem’ about consciousness. It may seem that no story of a purely physical sort will make the occurrence of consciousness – let alone its particular phenomenal qualitative features – appear inevitable to us, the way macro features of physical objects are taken by us to be determined by aspects of their underlying micro structure. In the jargon of philosopher Joseph Levine (1983), between the physics of a normally functioning brain situated in its environment and the consciousness that it produces or occasions there seems to be an unbridgeable ‘explanatory gap.’ It is, in effect, just this (ontological) generation problem and closely related (epistemological) explanatory gap that are the central concerns of Gregory Mulhauser’s book, Mind Out of Matter. Mulhauser aims to embrace ‘. . . the rich “ineffable feel” of the phenomenal world while still operating within the constraints set by the laws of physics’ (p. 2). The method he advocates ‘. . . takes the laws of physics as given and then examines what sort of picture of consciousness and of cognition might be built within some framework consistent with these laws’ (p. 2). But he is prepared for the eventuality that, notwithstanding the ontological determination of everything by low-level physics, we may nevertheless have to settle for explanatory anti-reductionism: ‘. . . in giving intelligible explanations of processes, we may well have to rely on entities constructed at a higher level of description commensurate with that at which we describe the processes themselves’ (p. 11). The framework Mulhauser adopts and defends features Gregory Chaitin’s (1987) version of Algorithmic Information Theory, which Mulhauser uses to ground a notion of representation and to motivate a way of measuring functional complexity.
Two alternative frameworks are also considered but set aside. Mulhauser’s discussion of these is thoughtful, well-informed, and judicious, though at times rather technical. Quantum Mechanics is considered, but rejected outright. For instance, recent speculations that consciousness arises from quantum effects in neural microtubules are challenged on grounds proffered as internal to Quantum Mechanics.

Journal ArticleDOI
TL;DR: This chapter discusses the evolution of social information transfer in monkeys, apes, and hominids across generations of primates, as well as its role in human evolution.
Abstract: Gill, J. H. (1997), If a Chimpanzee Could Talk and Other Reflections on Language Acquisition, Tucson: University of Arizona Press. Hauser, M. D. (1996), The Evolution of Communication, Cambridge, MA: MIT Press. Hodges, A. (1983), Alan Turing: The Enigma, New York: Simon and Schuster. King, B. J. (1994), The Information Continuum: Evolution of Social Information Transfer in Monkeys, Apes, and Hominids, Santa Fe, NM: SAR Press. Noble, W., and Davidson, I. (1996), Human Evolution, Language and Mind: A Psychological and Archaeological Inquiry, Cambridge, U.K.: Cambridge University Press. Pepperberg, I. (1987), ‘Evidence for Conceptual Quantitative Abilities in the African Gray Parrot: Labeling of Cardinal Sets’, Ethology 75, pp. 37–61. Tomarev, S., Callaerts, P., Kos, L., Zinovieva, R., Halder, G., Gehring, H., and Piatigorsky, J. (1997), ‘Squid Pax-6 and Eye Development’, Proceedings of the National Academy of Sciences 94, pp. 2421–2426. Turing, A. (1948), ‘Intelligent Machinery: A Report to the National Physical Laboratory’, in B. Meltzer and D. Michie (eds.), Machine Intelligence, Vol. 5, New York: American Elsevier, 1970, pp. 3–23. Turing, A. (1952), ‘On the Chemical Basis of Morphogenesis’, Philosophical Transactions of the Royal Society of London, Series B, 237, pp. 37–72. Walker, A., and Shipman, P. (1996), The Wisdom of the Bones: In Search of Human Origins, New York: Knopf.

Journal ArticleDOI
TL;DR: Readers who have previously tracked issues in artificial intelligence or cognitive science will discover little here that is new, and the book’s nine-point typeface and numerous typographical errors will surely daunt all but the most dogged inquirers.
Abstract: This collection consists of twenty essays, most of which originally appeared in the journal Informatica, Volume 19, Number 4 (abstracts are available on the Web at). The essays fall into two primary categories: (1) diagnoses of why artificial intelligence has not lived up to its early hype, along with proposals for new research directions, and (2) discussions of the relationship between computation, intentionality, and consciousness. Most of the material covers pretty familiar territory, and – given that much of it had already seen the light of day – the question naturally arises as to why the editors thought it worth reprinting in book form. Unfortunately, this question is not answered either by Terry Winograd’s preface (which, strangely, makes no specific mention of any of the book’s content) or by the editors’ brief introduction, which mainly provides meager (and often inaccurate) glosses on each article. Readers who have previously tracked issues in artificial intelligence or cognitive science will discover little here that is new, and the book’s nine-point typeface and numerous typographical errors will surely daunt all but the most dogged inquirers. Although the table of contents lists an Author Index, none was included in the copy received for review. Following is an overview of the essays, along with some critical commentary, organized by the two categories mentioned earlier.

Journal ArticleDOI
TL;DR: In general, the overall quality of the book is good and I would certainly recommend the book to anybody interested in the general issue of representation.
Abstract: the issue under consideration, and Sloman’s analysis is a first step towards a better philosophical understanding of the notion of a representation. As is so often the case with a book consisting of a collection of papers, the quality of papers varies greatly. Moreover, a judgement about the quality of the papers is to some extent coloured by one’s own interests. The fact that I discussed certain papers in greater detail than others implies a value judgement on my part. However, other reviewers might have chosen to concentrate on other papers in the collection. In general, the overall quality of the book is good and I would certainly recommend the book to anybody interested in the general issue of representation.

Journal ArticleDOI
TL;DR: Species of Mind has intelligent and challenging things to say about many of the problems that a study of animal minds must confront, and those with an interest in animal minds will want to read this book.
Abstract: I have focused this review on several of the large questions that Allen and Bekoff’s book addresses. Space limitations prevent me from examining other very interesting issues the book discusses. Particularly provocative is Allen and Bekoff’s argument that Ruth Millikan’s (1984) theory of content provides a better framework for cognitive ethology than Dennett’s (1983) well-known hierarchy of intentional orders. There is also Allen and Bekoff’s suggestion that the ability to detect misinformation provides evidence of consciousness. In short, Species of Mind has intelligent and challenging things to say about many of the problems that a study of animal minds must confront. Those with an interest in animal minds will want to read this book.