
Showing papers in "Minds and Machines in 2003"


Journal ArticleDOI
TL;DR: This paper provides a window on the historical sequence of contributions made to the overall project of naturalizing the mind by philosophers from Shannon, Wiener, and MacKay, to Dennett, Sayre, Dretske, Fodor, and Perry, among others.
Abstract: This paper traces the application of information theory to philosophical problems of mind and meaning from the earliest days of the creation of the mathematical theory of communication. The use of information theory to understand purposive behavior, learning, pattern recognition, and more marked the beginning of the naturalization of mind and meaning. From the inception of information theory, Wiener, Turing, and others began trying to show how to make a mind from informational and computational materials. Over the last 50 years, many philosophers saw different aspects of the naturalization of the mind, though few saw at once all of the pieces of the puzzle that we now know. Starting with Norbert Wiener himself, philosophers and information theorists used concepts from information theory to understand cognition. This paper provides a window on the historical sequence of contributions made to the overall project of naturalizing the mind by philosophers from Shannon, Wiener, and MacKay, to Dennett, Sayre, Dretske, Fodor, and Perry, among others. "At some time between 1928 and 1948, American engineers and mathematicians began to talk about 'Theory of Information' and 'Information Theory,' understanding by these terms approximately and vaguely a theory for which Hartley's 'amount of information' is a basic concept. I have been unable to find out when and by whom these names were first used. Hartley himself does not use them nor does he employ the term 'Theory of Transmission of Information,' from which the two other shorter terms presumably were derived. It seems that Norbert Wiener and Claude Shannon were using them in the Mid-Forties." (Yehoshua Bar-Hillel, 1955)

72 citations


Journal ArticleDOI
TL;DR: Syntactic semantics and Fodor and Lepore’s objections to holism are outlined; the nature of communication, miscommunication, and negotiation is discussed; Bruner's ideas about the negotiation of meaning are explored; and some observations on a problem for knowledge representation in AI raised by Winston are presented.
Abstract: Syntactic semantics is a holistic, conceptual-role-semantic theory of how computers can think. But Fodor and Lepore have mounted a sustained attack on holistic semantic theories. However, their major problem with holism (that, if holism is true, then no two people can understand each other) can be fixed by means of negotiating meanings. Syntactic semantics and Fodor and Lepore's objections to holism are outlined; the nature of communication, miscommunication, and negotiation is discussed; Bruner's ideas about the negotiation of meaning are explored; and some observations on a problem for knowledge representation in AI raised by Winston are presented.

71 citations


Journal ArticleDOI
TL;DR: It is argued that the existence of the device does not refute the Church–Turing thesis, but nevertheless may be a counterexample to Gandy's thesis.
Abstract: We describe a possible physical device that computes a function that cannot be computed by a Turing machine. The device is physical in the sense that it is compatible with General Relativity. We discuss some objections, focusing on those which deny that the device is either a computer or computes a function that is not Turing computable. Finally, we argue that the existence of the device does not refute the Church–Turing thesis, but nevertheless may be a counterexample to Gandy's thesis.

64 citations


Journal ArticleDOI
TL;DR: This work shows a natural model of neural computing that gives rise to hyper-computation, and proposes it as a standard in the field of analog computation, functioning in a role similar to that of the universal Turing machine in digital computation.
Abstract: "Neural computing" is a research field based on perceiving the human brain as an information system. This system reads its input continuously via the different senses, encodes data into various biophysical variables such as membrane potentials or neural firing rates, stores information using different kinds of memories (e.g., short-term memory, long-term memory, associative memory), performs some operations called "computation", and outputs onto various channels, including motor control commands, decisions, thoughts, and feelings. We show a natural model of neural computing that gives rise to hyper-computation. Rigorous mathematical analysis is applied, explicating our model's exact computational power and how it changes with the change of parameters. Our analog neural network allows for supra-Turing power while keeping track of computational constraints, and thus embeds a possible answer to the superiority of biological intelligence within the framework of classical computer science. We further propose it as a standard in the field of analog computation, functioning in a role similar to that of the universal Turing machine in digital computation. In particular, an analog of the Church–Turing thesis of digital computation is stated, in which the neural network takes the place of the Turing machine.
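The style of network at issue can be sketched in a few lines. The following is a minimal illustration of one update step of a saturated-linear recurrent network in the Siegelmann–Sontag tradition, not the paper's actual construction; the weight matrix, bias, and state below are made-up examples:

```python
# One synchronous update of an analog recurrent network: x' = sigma(W x + b),
# where sigma is the saturated-linear ("clipped") activation. With rational
# weights such networks match Turing machines; with arbitrary real weights
# they exceed them -- the "supra-Turing" power discussed above.
def sat(v):
    """Saturated-linear activation: clip v into [0, 1]."""
    return 0.0 if v < 0.0 else (1.0 if v > 1.0 else v)

def step(W, b, x):
    """Apply one update x -> sigma(W x + b) to the whole state vector."""
    return [sat(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Example: identity weights leave the in-range coordinate alone and
# saturate the out-of-range one.
state = step([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], [0.5, 2.0])
```

The saturation is what makes the dynamics piecewise-linear rather than smooth, which is the property the rigorous analyses of such models exploit.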

59 citations


Journal ArticleDOI
TL;DR: Of the authors' mundane and technical concepts, information is currently one of the most important, most widely used and least understood.
Abstract: Of our mundane and technical concepts, information is currently one of the most important, most widely used and least understood. So far, philosophers have done comparatively little work on information and its cognate concepts. This paradoxical situation may soon count as one more “scandal of philosophy”.

51 citations


Journal ArticleDOI
TL;DR: Logico-mathematical reasons, stemming from his own work, helped to convince Alan Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer.
Abstract: This paper concerns Alan Turing's ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the "mathematical objection" to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
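The diagonal construction behind these undecidability results can be shown directly: given any claimed halting decider, one builds a program the decider must misjudge. This is a toy sketch of the standard argument; the function names are invented for the example:

```python
def make_counterexample(halts):
    """Given a claimed decider halts(f) -> bool, build a program g
    that does the opposite of whatever the decider predicts for it."""
    def g():
        if halts(g):
            while True:      # decider said "halts", so loop forever
                pass
        return "halted"      # decider said "loops", so halt at once
    return g

# Any fixed decider is refuted. Take the one that always answers "loops":
always_false = lambda f: False
g = make_counterexample(always_false)
result = g()  # g halts immediately, contradicting the decider's verdict
```

Had we instead fed in a decider answering True, the constructed g would loop forever, refuting it the other way; no total decider escapes the construction.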

47 citations


Journal ArticleDOI
TL;DR: It is argued that embedded agents will possess or evolve local co-ordinate systems, or points of view, relative to their current positions in space and time, and have a capacity to develop an egocentric space.
Abstract: In this paper we consider the concept of a self-aware agent. In cognitive science agents are seen as embodied and interactively situated in worlds. We analyse the meanings attached to these terms in cognitive science and robotics, proposing a set of conditions for situatedness and embodiment, and examine the claim that internal representational schemas are largely unnecessary for intelligent behaviour in animats. We maintain that current situated and embodied animats cannot be ascribed even minimal self-awareness, and offer a six point definition of embeddedness, constituting minimal conditions for the evolution of a sense of self. This leads to further analysis of the nature of embodiment and situatedness, and a consideration of whether virtual animats in virtual worlds could count as situated and embodied. We propose that self-aware agents must possess complex structures of self-directed goals; multi-modal sensory systems and a rich repertoire of interactions with their worlds. Finally, we argue that embedded agents will possess or evolve local co-ordinate systems, or points of view, relative to their current positions in space and time, and have a capacity to develop an egocentric space. None of these capabilities are possible without powerful internal representational capacities.

39 citations


Journal ArticleDOI
TL;DR: This chapter discusses how issues of information and computation interact with logic today, and what might be a natural extended agenda of investigation in the future.
Abstract: We discuss how issues of information and computation interact with logic today, and what might be a natural extended agenda of investigation.

33 citations


Journal ArticleDOI
TL;DR: The HOTCO computational model of emotional coherence is applied to simulate a rich case of self-deception from Hawthorne's The Scarlet Letter and it is argued that this model is more psychologically realistic than other available accounts ofSelf-Deception.
Abstract: This paper proposes that self-deception results from the emotional coherence of beliefs with subjective goals. We apply the HOTCO computational model of emotional coherence to simulate a rich case of self-deception from Hawthorne's The Scarlet Letter. We argue that this model is more psychologically realistic than other available accounts of self-deception, and discuss related issues such as wishful thinking, intention, and the division of the self.
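HOTCO belongs to the family of connectionist coherence models, in which beliefs and goals are units linked by positive (consonant) and negative (conflicting) constraints, and the network is settled by repeated updating. The following is an illustrative toy in that general style, not the actual HOTCO code; the update rule and the three-unit network are invented for the example:

```python
def settle(weights, clamped, n_units, steps=200, rate=0.1):
    """Relax a coherence network: units joined by positive weights pull
    each other's activations together; negative weights push them apart.
    'clamped' maps unit index -> fixed activation (e.g. evidence units)."""
    a = [0.01] * n_units
    for i, v in clamped.items():
        a[i] = v
    for _ in range(steps):
        new = []
        for i in range(n_units):
            if i in clamped:
                new.append(a[i])
                continue
            net = sum(weights[i][j] * a[j] for j in range(n_units))
            # nudge toward the net input, clipped into [-1, 1]
            new.append(max(-1.0, min(1.0, a[i] + rate * net)))
        a = new
    return a

# Toy network: unit 0 is clamped evidence; unit 1 coheres with it,
# while unit 2 conflicts with unit 1 (e.g. a belief the agent resists).
W = [[0.0, 0.5, 0.0],
     [0.5, 0.0, -0.6],
     [0.0, -0.6, 0.0]]
acts = settle(W, {0: 1.0}, 3)
```

After settling, the coherent unit ends near +1 and the conflicting one near -1; in models of this family, self-deception corresponds to goal-driven weights tipping which beliefs win that competition.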

33 citations


Journal ArticleDOI
TL;DR: This paper investigates contemporary trends and the relation between the Philosophy of Science and the philosophy of Computing and Information, which is equivalent to the present relation between Philosophy of science and Philosophy of Physics.
Abstract: Computing is changing the traditional field of Philosophy of Science in a very profound way. First, as a methodological tool, computing makes possible "experimental Philosophy", which is able to provide practical tests for different philosophical ideas. At the same time, the ideal object of investigation of the Philosophy of Science is changing. For a long period of time the ideal science was Physics (e.g., Popper, Carnap, Kuhn, and Chalmers). Now the focus is shifting to the field of Computing/Informatics. There are many good reasons for this paradigm shift, one of them being the long-standing need for a new meeting between the sciences and the humanities, for which the new discipline of Computing/Informatics offers innumerable possibilities. Unlike Physics, Computing/Informatics is very much human-centered. It brings a potential for a new Renaissance, where Science and Humanities, Arts and Engineering can reach a new synthesis, so very much needed in our intellectually split culture. This paper investigates contemporary trends and the relation between the Philosophy of Science and the Philosophy of Computing and Information, which is analogous to the present relation between Philosophy of Science and Philosophy of Physics.

33 citations


Journal ArticleDOI
TL;DR: The meaning of physical computation is considered in some detail, and arguments in favour of physical hypercomputation are presented, and the relationship between versions of computability corresponding to different models of physics is considered.
Abstract: Does Nature permit the implementation of behaviours that cannot be simulated computationally? We consider the meaning of physical computation in some detail, and present arguments in favour of physical hypercomputation: for example, modern scientific method does not allow the specification of any experiment capable of refuting hypercomputation. We consider the implications of relativistic algorithms capable of solving the (Turing) Halting Problem. We also reject as a fallacy the argument that hypercomputation has no relevance because non-computable values are indistinguishable from sufficiently close computable approximations. In addition to considering the nature of computability relative to any given physical theory, we can consider the relationship between versions of computability corresponding to different models of physics. Deutsch and Penrose have argued on mathematical grounds that quantum computation and Turing computation have equivalent formal power. We suggest this equivalence is invalid when considered from the physical point of view, by highlighting a quantum computational behaviour that cannot meaningfully be considered feasible in the classical universe.

Journal ArticleDOI
TL;DR: It is argued that the important comparisons between the two models of computation are not so much mathematical as epistemological, and the need for new models of computing addressing issues orthogonal to those that have occupied the traditional theory of computation.
Abstract: It has been argued that neural networks and other forms of analog computation may transcend the limits of Turing-machine computation; proofs have been offered on both sides, subject to differing assumptions. In this article I argue that the important comparisons between the two models of computation are not so much mathematical as epistemological. The Turing-machine model makes assumptions about information representation and processing that are badly matched to the realities of natural computation (information representation and processing in or inspired by natural systems). This points to the need for new models of computation addressing issues orthogonal to those that have occupied the traditional theory of computation.

Journal ArticleDOI
TL;DR: This essay compares information as understood by Gibsonian, ecological psychologists with information as understanding in Barwise and Perry's situation semantics and argues that, with suitable massaging, these views of information can be brought into line.
Abstract: Do psychologists and computer/cognitive scientists mean the same thing by the term 'information'? In this essay, I answer this question by comparing information as understood by Gibsonian, ecological psychologists with information as understood in Barwise and Perry's situation semantics. I argue that, with suitable massaging, these views of information can be brought into line. I end by discussing some issues in (the philosophy of) cognitive science and artificial intelligence.

Journal ArticleDOI
TL;DR: It is stressed that while a function f(x) may be computable in the sense of recursive function theory, it may nevertheless have undecidable properties in the realm of Fourier analysis.
Abstract: We first discuss some technical questions which arise in connection with the construction of undecidable propositions in analysis, in particular in connection with the notion of the normal form of a function representing a predicate. Then it is stressed that while a function f(x) may be computable in the sense of recursive function theory, it may nevertheless have undecidable properties in the realm of Fourier analysis. This has an implication for a conjecture of Penrose's which states that classical physics is computable.

Journal ArticleDOI
TL;DR: In this article two undecidable problems belonging to the domain of analysis will be constructed, and it will be shown that certain logically characterised functions can be represented as limits of functions in the class M.
Abstract: In this article two undecidable problems belonging to the domain of analysis will be constructed. The basic idea is sketched as follows: let us imagine a class B of functions (rational functions, trigonometric and exponential functions) and certain operations (addition, multiplication, integration over finite or infinite domains, etc.), and consider the smallest class M of functions which contains B and is closed under the selected operations. The question will then be examined whether there is in M a function f(x) for which the predicate P(n) ≡ ∫ f(x) cos(nx) dx > 0 is not recursive. It will be shown that by suitably choosing the class B and the operations, the answer comes out positive. We will deal in general with complex functions of real variables, although one could, with somewhat more effort, carry out all considerations in the real domain. In the first example, new functions will be generated by means of the following operations: addition, multiplication, integration over finite intervals, and the solution of Fredholm integral equations of the second kind. Following this, it will be shown that certain logically characterised functions can be represented as limits of functions in the class M. In these constructions care will be taken that the number of integral equations to be solved remains as small as possible (namely two). In the second example, instead of the solution of Fredholm integral equations, we permit integration over infinite intervals, and then prove for this instance the same theorems as in the first example, in the course of which considerable use will be made of the result of M. Davis, H. Putnam and J. Robinson (cf. (1)) on the unsolvability of exponential diophantine equations.
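Each individual integral of the kind appearing in the predicate P(n) is numerically approximable even when the predicate as a whole, quantified over all n, is not recursive. A quick trapezoid-rule check for a sample computable f (an illustration of the quantity being tested, not of the paper's construction):

```python
import math

def fourier_cos_integral(f, n, a=0.0, b=2.0 * math.pi, steps=20000):
    """Trapezoid-rule approximation of the integral of f(x)*cos(n*x)
    over [a, b] -- the quantity whose sign the predicate P(n) tests."""
    h = (b - a) / steps
    total = 0.5 * (f(a) * math.cos(n * a) + f(b) * math.cos(n * b))
    for k in range(1, steps):
        x = a + k * h
        total += f(x) * math.cos(n * x)
    return total * h

# For f(x) = cos(3x), orthogonality gives pi at n = 3 and 0 elsewhere,
# so for this particular f the predicate holds at n = 3 and fails at n = 5.
c3 = fourier_cos_integral(lambda x: math.cos(3 * x), 3)
c5 = fourier_cos_integral(lambda x: math.cos(3 * x), 5)
```

The undecidability result says that for a suitably constructed f in M, no algorithm settles the sign of these integrals uniformly in n, even though each one can be estimated to any desired precision.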

Journal ArticleDOI
TL;DR: The conjecture explores the networked data-information-knowledge continuum as the subject of Turing's notions of search and intelligence, using analogous models from library systems theory; Floridi's philosophy of information is posed as a potential guide to applied information services design of the Turing type.
Abstract: Turing tersely mentioned a notion of "cultural search" while otherwise deeply engaged in the design and operations of one of the earliest computers. His idea situated the individual squarely within a collaborative intellectual environment, but did he mean to suggest this in the form of a general information system? In the same writing Turing forecast mechanizations of proofs and outlined genetical searches, much later implemented in cellular automata. The conjecture explores the networked data-information-knowledge continuum as the subject of Turing's notions of search and intelligence, using analogous models from library systems theory. Floridi's philosophy of information is posed as a potential guide to applied information services design of the Turing type. The initial problem is to identify a minimal set of assumptions from Turing's essay beyond the general context of computing. This set will form a bridge to an analogous set of principles in library systems models by eliciting supporting evidence in the literature relating the two. Finally it will be shown how Floridi's philosophy of information more fully encompasses Turing's insight in view of the conjecture.

Journal ArticleDOI
TL;DR: This paper defends a cognitive theory of those emotional reactions which motivate and constrain moral judgment, and tries to shed light on which moral modules there are, which of these modules the authors share with non-human primates, and on the (pre-)history and development of this modular system from pre-humans through gatherer-hunters and on to modern humans.
Abstract: This paper defends a cognitive theory of those emotional reactions which motivate and constrain moral judgment. On this theory, moral emotions result from mental faculties specialized for automatically producing feelings of approval or disapproval in response to mental representations of various social situations and actions. These faculties are modules in Fodor's sense, since they are informationally encapsulated, specialized, and contain innate information about social situations. The paper also tries to shed light on which moral modules there are, which of these modules we share with non-human primates, and on the (pre-)history and development of this modular system from pre-humans through gatherer-hunters and on to modern (i.e. arablist) humans. The theory is not, however, meant to explain all moral reasoning. It is plausible that a non-modular intelligence at least sometimes plays a role in conscious moral thought. However, even non-modular moral reasoning is initiated and constrained by moral emotions having modular sources.

Journal ArticleDOI
TL;DR: It is concluded that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.
Abstract: I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new "essentialist" reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.

Journal ArticleDOI
TL;DR: If the computational theory of mind is right, then minds are realized by machines; some finite machines realize finitely complex minds; some Turing machines realize potentially infinitely complex minds.
Abstract: If the computational theory of mind is right, then minds are realized by machines. There is an ordered complexity hierarchy of machines. Some finite machines realize finitely complex minds; some Turing machines realize potentially infinitely complex minds. There are many logically possible machines whose powers exceed the Church–Turing limit (e.g. accelerating Turing machines). Some of these supermachines realize superminds. Superminds perform cognitive supertasks. Their thoughts are formed in infinitary languages. They perceive and manipulate the infinite detail of fractal objects. They have infinitely complex bodies. Transfinite games anchor their social relations.
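The accelerating Turing machines mentioned here rest on a simple geometric series: if step n takes 2⁻ⁿ seconds, then infinitely many steps complete within 2 seconds. A sketch of the arithmetic (not of any actual machine):

```python
def elapsed_time(n_steps):
    """Total time for the first n_steps of an accelerating machine
    whose n-th step takes 2**-n seconds (n = 0, 1, 2, ...)."""
    return sum(2.0 ** -k for k in range(n_steps))

# Partial sums grow toward, but never exceed, the 2-second bound --
# so a supertask of infinitely many steps fits in finite time.
t10 = elapsed_time(10)
t60 = elapsed_time(60)
```

This is why such machines exceed the Church–Turing limit: after 2 seconds the machine has completed every one of its infinitely many steps, so it can, for example, report whether an unbounded search ever succeeded.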

Journal ArticleDOI
TL;DR: The paper deals with the question whether logical truths carry information, and several ways to deal with the dilemma are distinguished, especially syntactic and ontological solutions.
Abstract: The paper deals with the question whether logical truths carry information. On the one hand it seems that we gain new information by drawing inferences or arriving at some theorems. On the other hand the formal accounts of information and information content which are most widely known today say that logical truths carry no information at all. The latter is shown by considering these accounts. Then several ways to deal with the dilemma are distinguished, especially syntactic and ontological solutions. A version of a syntactical solution is favoured.

Journal ArticleDOI
TL;DR: It is argued that the designer’s philosophical position on truth, belief and knowledge has far reaching implications for the design and performance of the resulting agents and support for the view that epistemological theories have a particular relevance for artificial agent design is discussed.
Abstract: Unlike natural agents, artificial agents are, to varying extent, designed according to sets of principles or assumptions. We argue that the designer's philosophical position on truth, belief and knowledge has far-reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background, we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their control algorithms inspired by two strongly contrasting philosophical positions. A series of experiments on these agents shows that, despite having common tasks and goals, the behaviour of the agents is markedly different, and this can be attributed to their individual approaches to belief and knowledge. We discuss these findings and their support for the view that epistemological theories have a particular relevance for artificial agent design.

Journal ArticleDOI
TL;DR: It is suggested that some of the methods that philosophers have developed to address the problems of epistemology may be relevant to the problem of representing knowledge within artificial agents.
Abstract: A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then discuss ways in which philosophical problems of scepticism are related to the problems faced by knowledge representation. We suggest that some of the methods that philosophers have developed to address the problems of epistemology may be relevant to the problems of representing knowledge within artificial agents.

Journal ArticleDOI
TL;DR: This paper delineates a multi-layered general framework to which different contributions in the field may be traced back; the framework is then revised and extended in light of a type of multiagent system devoted to addressing the problem of scientific discovery.
Abstract: The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the foundations of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy in AI and the role of AI in philosophy must be considered. Moreover, this framework is revised and extended in light of a type of multiagent system devoted to addressing the problem of scientific discovery from both a conceptual and a practical point of view.

Journal ArticleDOI
Jordi Fernández1
TL;DR: It is argued that computer simulation may be understood as providing two different kinds of explanation, which makes the notion of explanation by computer simulation ambiguous.
Abstract: My purpose in this essay is to clarify the notion of explanation by computer simulation in artificial intelligence and cognitive science. My contention is that computer simulation may be understood as providing two different kinds of explanation, which makes the notion of explanation by computer simulation ambiguous. In order to show this, I shall draw a distinction between two possible ways of understanding the notion of simulation, depending on how one views the relation in which a computing system that performs a cognitive task stands to the program that the system runs while performing that task. Next, I shall suggest that the kind of explanation that results from simulation is radically different in each case. In order to illustrate the difference, I will point out some prima facie methodological difficulties that need to be addressed in order to ensure that simulation plays a legitimate explanatory role in cognitive science, and I shall emphasize how those difficulties are very different depending on the notion of explanation involved.

Journal ArticleDOI
Donald Levy1
TL;DR: It is argued that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes.
Abstract: According to a common philosophical distinction, the 'original' intentionality, or 'aboutness', possessed by our thoughts, beliefs and desires, is categorically different from the 'derived' intentionality manifested in some of our artifacts: our words, books and pictures, for example. Those making the distinction claim that the intentionality of our artifacts is 'parasitic' on the 'genuine' intentionality to be found in members of the former class of things. In Kinds of Minds: Toward an Understanding of Consciousness, Daniel Dennett criticizes that claim and the distinction it rests on, and seeks to show that "metaphysically original intentionality" is illusory by working out the implications he sees in the practical possibility of a certain type of robot, i.e., one that generates 'utterances' which are 'inscrutable to the robot's designers', so that we, and they, must consult the robot to discover the meaning of its utterances. I argue that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes.


Journal ArticleDOI
Jakob Hohwy1
TL;DR: Since dispositional properties can make statements about the meanings of words true, Kripke-Wittgenstein's arguments against dispositionalism about meaning are mistaken.
Abstract: A central part of Kripke's influential interpretation of Wittgenstein's sceptical argument about meaning is the rejection of dispositional analyses of what it is for a word to mean what it does (Kripke, 1982). In this paper I show that Kripke's arguments prove too much: if they were right, they would preclude not only the idea that dispositional properties can make statements about the meanings of words true, but also the idea that dispositional properties can make true statements about paradigmatic dispositional properties such as a cup's fragility or a person's bravery. However, since dispositional properties can make such statements true, Kripke-Wittgenstein's arguments against dispositionalism about meaning are mistaken.

Journal ArticleDOI
TL;DR: Fetzer’s book is a thorough and enjoyable collection that weaves a complex tapestry of ideas through difficult material, but it should, I think, be directed to an even wider audience, especially as regards the important work on program verification.
Abstract: plied a clear answer that provides a “theoretical alternative” to the computational conception. Nor do I agree that “Newell and Simon thought they had captured the necessary and sufficient conditions for mentality, but they manifestly had not” (p. 157). As Fetzer mentions elsewhere (p. 61), Newell and Simon proposed a working empirical hypothesis with which to direct AI research, albeit, one of which they were pretty confident, but empirical nonetheless. The real problem, I think, not just for Fetzer but for others too, is that not enough is currently known about the mental to warrant emphatic views either for or against the computational conception, or any other conception either. As Fetzer himself acknowledges at just one point, mentality is “a domain about which very little is known” (p. 125). Moreover, Newell and Simon and others notwithstanding, actual AI research is really concerned only with the aims of weak AI (with merely getting the performance right), so I also reject Fetzer’s foundational claim that “the nature of language and mentality is fundamental to research in artificial intelligence” (p. 73). Finally, Fetzer’s book is a thorough and enjoyable collection that weaves a complex tapestry of ideas through difficult material. While I do not agree with every thread, there is much with which I do. It will serve the stated target audience well, but it should, I think, be directed to an even wider audience, especially as regards the important work on program verification.

Journal ArticleDOI
TL;DR: Wooldridge's book describes from a theoretical point of view the peculiar characteristics of an agent system and discusses the role that logical theories of agency can play in the development of agent systems.
Abstract: There is much talk these days about agents both from the academic side and from the industrial side. Furthermore, halfway between academia and industry, we encounter efforts to establish agent specifications in international standard committees, e.g., the FIPA specifications (FIPA, 2000). The applications of agents range from control systems embedded in a dynamic (and often unpredictable) real world (a successful example is NASA’s Deep Space 1 probe (Muscettola et al., 1998); other examples are Ihara et al. (1984), Roberts (1989), Overgaard et al. (1996), and Ekberg (1997)) and knowledge-base management systems (Beck et al., 1994; Chung et al., 1997; Parunak et al., 1999). Indeed, nowadays, so-called software agents sort your mail, adaptively recommend Web pages, assist with scheduling, find people with interests similar to your own, and so on (Jennings et al., 1995; Cousins et al., 1995; Cheong, 1996; Joachims et al., 1997). In spite of this success, several authors have questioned and are still questioning the differences between agent systems and object-oriented systems. The question is whether agents represent a real shift in computer programming or are just a reformulation of object-oriented methodology and technology. In this respect, Wooldridge’s book can help since it describes from a theoretical point of view the peculiar characteristics of an agent system. The book can be roughly divided into five parts. The first part (Chapters 1 and 2) introduces the basic concepts of rational agents and Belief-Desire-Intention (BDI) agents. The second (Chapters 3 and 4) defines the formal theory proposed by the author (the logic LORA). The third (Chapter 5) shows how this framework can be used to capture some properties of individual rational agents. Conversely, the fourth part (Chapters 6, 7, and 8) is devoted to showing how LORA can be used to capture properties of multiagent, social systems. 
Finally, the fifth part (Chapter 9) discusses the role that logical theories of agency can play in the development of agent systems.