
Showing papers in "Synthese in 2007"


Journal ArticleDOI
28 Jul 2007-Synthese
TL;DR: It is argued that a lacuna in Wegner's most promising argument can be filled by a plausible a priori claim about the causal role of anything deserving to be called ‘a will’, with the result that there is no such thing as conscious willing: conscious will is, indeed, an illusion.
Abstract: Wegner (Wegner, D. (2002). The illusion of conscious will. MIT Press) argues that conscious will is an illusion, citing a wide range of empirical evidence. I shall begin by surveying some of his arguments. Many are unsuccessful. But one—an argument from the ubiquity of self-interpretation—is more promising. Yet it suffers from an obvious lacuna, opened up by so-called ‘dual process’ theories of reasoning and decision making (Evans, J., & Over, D. (1996). Rationality and reasoning. Psychology Press; Stanovich, K. (1999). Who is rational? Studies of individual differences in reasoning. Lawrence Erlbaum; Frankish, K. (2004). Mind and supermind. Cambridge University Press). I shall argue that this lacuna can be filled by a plausible a priori claim about the causal role of anything deserving to be called ‘a will.’ The result is that there is no such thing as conscious willing: conscious will is, indeed, an illusion.

1,032 citations


Journal ArticleDOI
26 Jul 2007-Synthese
TL;DR: Extensions of the logic S5 that can deal with public communications are defined, completeness, decidability and interpretability results are proved, and a general method is formulated that solves certain kinds of problems involving public communications.
Abstract: Multi-modal versions of propositional logics S5 or S4—commonly accepted as logics of knowledge—are capable of describing static states of knowledge but they do not reflect how the knowledge changes after communications among agents. In the present paper (part of broader research on logics of knowledge and communications) we define extensions of the logic S5 which can deal with public communications. The logics have natural semantics. We prove some completeness, decidability and interpretability results and formulate a general method that solves certain kinds of problems involving public communications—among them the well-known puzzles of the Muddy Children and Mr. Sum & Mr. Product. As the paper gives a formal logical treatment of the operation of restricting the universe of a Kripke model, it also contributes to investigations of semantics for modal logics.
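To make the key semantic operation concrete, here is a minimal sketch (in Python, not the paper's own formalism) of public announcement as restriction of the universe of a Kripke model: announcing a formula removes every world where it fails and restricts each agent's accessibility relation accordingly. The two-child Muddy Children setup, the world names and the atoms "ma"/"mb" are illustrative assumptions only.

class KripkeModel:
    """S5-style epistemic model: worlds, one accessibility relation per agent, valuation."""

    def __init__(self, worlds, relations, valuation):
        self.worlds = set(worlds)                                     # world names
        self.relations = {a: set(r) for a, r in relations.items()}    # agent -> set of (w, v) pairs
        self.valuation = dict(valuation)                              # world -> set of true atoms

    def knows(self, agent, phi, w):
        # K_agent(phi) holds at w iff phi holds at every world the agent cannot distinguish from w.
        return all(phi(self, v) for (u, v) in self.relations[agent] if u == w)

    def announce(self, phi):
        # Public announcement of phi: keep only the worlds satisfying phi and restrict the relations.
        keep = {w for w in self.worlds if phi(self, w)}
        return KripkeModel(
            keep,
            {a: {(u, v) for (u, v) in r if u in keep and v in keep}
             for a, r in self.relations.items()},
            {w: atoms for w, atoms in self.valuation.items() if w in keep},
        )

# Two muddy children: world "ab" = both muddy, "a" = only a, "b" = only b, "-" = neither.
valuation = {"ab": {"ma", "mb"}, "a": {"ma"}, "b": {"mb"}, "-": set()}
# Each child sees the other's forehead but not its own, so child a cannot tell apart
# worlds that agree on b's state (and vice versa).
rel_a = {("ab", "ab"), ("b", "b"), ("ab", "b"), ("b", "ab"),
         ("a", "a"), ("-", "-"), ("a", "-"), ("-", "a")}
rel_b = {("ab", "ab"), ("a", "a"), ("ab", "a"), ("a", "ab"),
         ("b", "b"), ("-", "-"), ("b", "-"), ("-", "b")}
model = KripkeModel(valuation.keys(), {"a": rel_a, "b": rel_b}, valuation)

someone_muddy = lambda m, w: bool(m.valuation[w])
a_knows_own_mud = lambda m, w: m.knows("a", lambda m2, v: "ma" in m2.valuation[v], w)

print(a_knows_own_mud(model, "a"))        # False: child a cannot rule out the world "-"
after = model.announce(someone_muddy)     # father announces "at least one of you is muddy"
print(a_knows_own_mud(after, "a"))        # True: the world "-" has been eliminated

In the full puzzle the interesting dynamics come from iterating such announcements (each child's "I don't know whether I am muddy"), every one of which further restricts the model.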

637 citations


Journal ArticleDOI
01 Dec 2007-Synthese
TL;DR: It is suggested that these perceptual processes are just one emergent property of systems that conform to a free-energy principle, and that the system’s state and structure encode an implicit and probabilistic model of the environment.
Abstract: If one formulates Helmholtz's ideas about perception in terms of modern-day theories one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. Using constructs from statistical physics it can be shown that the problems of inferring what causes our sensory input and learning causal regularities in the sensorium can be resolved using exactly the same principles. Furthermore, inference and learning can proceed in a biologically plausible fashion. The ensuing scheme rests on Empirical Bayes and hierarchical models of how sensory information is generated. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of the brain's organisation and responses. In this paper, we suggest that these perceptual processes are just one emergent property of systems that conform to a free-energy principle. The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free-energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception respectively and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system's state and structure encode an implicit and probabilistic model of the environment. We will look at models entailed by the brain and how minimisation of free-energy can explain its dynamics and structure.
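For readers who want the bound spelled out, the standard formulation (with q the recognition density encoded by the system's state and \vartheta the environmental causes of sensory samples y) is:

\[
F \;=\; \mathbb{E}_{q(\vartheta)}\!\big[-\ln p(y,\vartheta)\big] \;-\; \mathbb{E}_{q(\vartheta)}\!\big[-\ln q(\vartheta)\big]
\;=\; -\ln p(y) \;+\; \mathrm{KL}\!\big[\,q(\vartheta)\,\|\,p(\vartheta\mid y)\,\big] \;\ge\; -\ln p(y).
\]

Because the Kullback–Leibler divergence is non-negative, the free energy F is an upper bound on surprise, -\ln p(y). Minimising F with respect to q corresponds to perception (q approaches the posterior over causes), while minimising it through action changes which sensory samples y are obtained.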

498 citations


Journal ArticleDOI
21 Sep 2007-Synthese
TL;DR: An overview of an anti-luck epistemology, as set out in the book Epistemic Luck, is offered, and some of the ways in which the strategy of anti-luck epistemology can be developed in new directions are sketched.
Abstract: In this paper, I do three things. First, I offer an overview of an anti-luck epistemology, as set out in my book, Epistemic Luck (Oxford University Press, Oxford 2005). Second, I attempt to meet some of the main criticisms that one might level against the key theses that I propose in this work. And finally, third, I sketch some of the ways in which the strategy of anti-luck epistemology can be developed in new directions.

216 citations


Journal ArticleDOI
21 Sep 2007-Synthese
TL;DR: This paper argues that the general conception of knowledge found in the DCVK is fundamentally incorrect and shows that deserving credit cannot be what distinguishes knowledge from merely lucky true belief since knowledge is not something for which a subject always deserves credit.
Abstract: A view of knowledge—what I call the Deserving Credit View of Knowledge (DCVK)—found in much of the recent epistemological literature, particularly among so-called virtue epistemologists, centres around the thesis that knowledge is something for which a subject deserves credit. Indeed, this is said to be the central difference between those true beliefs that qualify as knowledge and those that are true merely by luck—the former, unlike the latter, are achievements of the subject and are thereby creditable to her. Moreover, it is often further noted that deserving credit is what explains the additional value that knowledge has over merely lucky true belief. In this paper, I argue that the general conception of knowledge found in the DCVK is fundamentally incorrect. In particular, I show that deserving credit cannot be what distinguishes knowledge from merely lucky true belief since knowledge is not something for which a subject always deserves credit.

200 citations


Journal ArticleDOI
24 Mar 2007-Synthese
TL;DR: It is argued that versions of the classical, logical, propensity and subjectivist interpretations of probability fall prey to their own variants of the reference class problem, and that conditional probability is the proper primitive of probability theory.
Abstract: The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all "no-theory" theories of probability - accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a "metaphysical" and an "epistemological" reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
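A toy illustration of the two-place proposal (the numbers are invented purely for illustration): suppose we ask for "the probability that Smith survives to 80". Relative to the class of smokers the relevant frequency might be 0.3; relative to the class of regular exercisers it might be 0.7; Smith belongs to both classes, and no unique unconditional value is singled out. On the two-place view there is no single quantity to recover in the first place: the well-defined objects are the conditional probabilities

P(survives to 80 | smoker) = 0.3,    P(survives to 80 | exerciser) = 0.7,

and P(A | B) is treated as primitive rather than defined by the ratio P(A ∧ B)/P(B), so it can remain meaningful even when P(B) = 0.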

184 citations


Journal ArticleDOI
08 Feb 2007-Synthese
TL;DR: The aim here is to question and refine the conceptual foundations of many theories of intentional action, and to provide a detailed and principled analysis of the role of beliefs in goal processing.
Abstract: In this article we strive to provide a detailed and principled analysis of the role of beliefs in goal processing—that is, the cognitive transition that leads from a mere desire to a proper intention. The resulting model of belief-based goal processing has also relevant consequences for the analysis of intentions, and constitutes the necessary core of a constructive theory of intentions, i.e. a framework that not only analyzes what an intention is, but also explains how it becomes what it is. We discuss similarities and differences between our approach and other standard accounts of intention, in particular Bratman’s planning theory. The aim here is to question and refine the conceptual foundations of many theories of intentional action: as a consequence, although our analysis is not formal in itself, it is ultimately meant to have deep consequences for formal models of intentional agency.

125 citations


Journal ArticleDOI
01 May 2007-Synthese
TL;DR: This paper shows that the Alternating-time Temporal Logic of Alur, Henzinger, and Kupferman provides an elegant and powerful framework within which to express and understand social laws for multiagent systems.
Abstract: Since it was first proposed by Moses, Shoham, and Tennenholtz, the social laws paradigm has proved to be one of the most compelling approaches to the offline coordination of multiagent systems. In this paper, we make four key contributions to the theory and practice of social laws in multiagent systems. First, we show that the Alternating-time Temporal Logic (ATL) of Alur, Henzinger, and Kupferman provides an elegant and powerful framework within which to express and understand social laws for multiagent systems. Second, we show that the effectiveness, feasibility, and synthesis problems for social laws may naturally be framed as ATL model checking problems, and that as a consequence, existing ATL model checkers may be applied to these problems. Third, we show that the feasibility problem in our framework is no more complex in the general case than the corresponding problem in the Shoham–Tennenholtz framework (it is NP-complete). Finally, we show how our basic framework can easily be extended to permit social laws in which constraints on the legality or otherwise of some action may be explicitly required. We illustrate the concepts and techniques developed by means of a running example.
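Schematically (an illustration of the kind of reduction described, with notation that is not necessarily the paper's own): if \eta is a social law, read as a constraint forbidding certain transitions, and \varphi is the objective the law is meant to guarantee, then effectiveness can be posed as the ATL model checking query

M_\eta \models \langle\langle \emptyset \rangle\rangle \Box \varphi,

where M_\eta is the system obtained by removing the forbidden transitions, and \langle\langle \emptyset \rangle\rangle \Box \varphi says that \varphi is maintained along every remaining computation whatever the agents do. Feasibility then asks whether some such law exists, and synthesis asks for one to be constructed.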

110 citations


Journal ArticleDOI
21 Sep 2007-Synthese
TL;DR: This paper critically evaluates both Pritchard's account of luck and another account to which Pritchard's discussion draws our attention, viz. that due to Nicholas Rescher.
Abstract: Luck looms large in numerous different philosophical subfields. Unfortunately, work focused exclusively on the nature of luck is in short supply on the contemporary analytic scene. In his highly impressive recent book Epistemic Luck, Duncan Pritchard helps rectify this neglect by presenting a partial account of luck that he uses to illuminate various ways luck can figure in cognition. In this paper, I critically evaluate both Pritchard’s account of luck and another account to which Pritchard’s discussion draws our attention—viz., that due to Nicholas Rescher. I also assess some novel analyses of luck that incorporate plausible elements of Pritchard’s and Rescher’s accounts.

95 citations


Journal ArticleDOI
28 Sep 2007-Synthese
TL;DR: In this article, the authors discuss the view that metacognition has metarepresentational structure and show that properties such as causal contiguity, epistemic transparency and procedural reflexivity are present in metacognition but missing in metarepresentation, while open-ended recursivity and inferential promiscuity only occur in metarepresentation.
Abstract: Metacognition is often defined as thinking about thinking. It is exemplified in all the activities through which one tries to predict and evaluate one's own mental dispositions, states and properties for their cognitive adequacy. This article discusses the view that metacognition has metarepresentational structure. Properties such as causal contiguity, epistemic transparency and procedural reflexivity are present in metacognition but missing in metarepresentation, while open-ended recursivity and inferential promiscuity only occur in metarepresentation. It is concluded that, although metarepresentations can redescribe metacognitive contents, metacognition and metarepresentation are functionally distinct.

93 citations


Journal ArticleDOI
24 Mar 2007-Synthese
TL;DR: This paper will survey some recent arguments and results in the dispute about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support, and propose some new work for an old probability puzzle: the “Monty Hall” problem.
Abstract: Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a “middle way” between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the “Monty Hall” problem.
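As a reminder of the probabilistic structure the puzzle turns on (a standard Bayes computation, not the paper's proposal), here is the exact posterior after the host opens a door, under the usual assumptions: you pick door 1, the host knowingly opens a goat door, and he chooses at random when he has a choice.

from fractions import Fraction

prior = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}   # P(prize behind door i)

# Likelihood that the host opens door 3, given that you picked door 1:
likelihood_open_3 = {
    1: Fraction(1, 2),   # prize behind your door: host opens door 2 or 3 at random
    2: Fraction(1),      # prize behind door 2: host is forced to open door 3
    3: Fraction(0),      # the host never reveals the prize
}

evidence = sum(prior[d] * likelihood_open_3[d] for d in prior)
posterior = {d: prior[d] * likelihood_open_3[d] / evidence for d in prior}

print(posterior)   # {1: 1/3, 2: 2/3, 3: 0}: switching doubles your chance of winning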

Journal ArticleDOI
31 Jan 2007-Synthese
TL;DR: Various strategies for defending the claim that entitlement can make acceptance of a proposition epistemically rational are examined; none succeeds, but the discussion yields some positive proposals as to how an epistemic consequentialist should characterize epistemic rationality.
Abstract: This paper takes the form of a critical discussion of Crispin Wright’s notion of entitlement of cognitive project. I examine various strategies for defending the claim that entitlement can make acceptance of a proposition epistemically rational, including one which appeals to epistemic consequentialism. Ultimately, I argue, none of these strategies is successful, but the attempt to isolate points of disagreement with Wright issues in some positive proposals as to how an epistemic consequentialist should characterize epistemic rationality.

Journal ArticleDOI
28 Sep 2007-Synthese
TL;DR: It is argued that an agent’s narrative self-conception has a role to play in explaining their agentive judgments, but that agentive experiences are explained by low-level comparator mechanisms that are grounded in the very machinery responsible for action-production.
Abstract: This paper contrasts two approaches to agentive self-awareness: a high-level, narrative-based account, and a low-level comparator-based account. We argue that an agent's narrative self-conception has a role to play in explaining their agentive judgments, but that agentive experiences are explained by low-level comparator mechanisms that are grounded in the very machinery responsible for action-production.

Journal ArticleDOI
21 Sep 2007-Synthese
TL;DR: It is argued that the best explanation for the consensus that luck undermines knowledge is that knowledge is, complications aside, credit-worthy true believing, and a theory of knowledge is sketched in those terms.
Abstract: It is nearly universally acknowledged among epistemologists that a belief, even if true, cannot count as knowledge if it is somehow largely a matter of luck that the person so arrived at the truth. A striking feature of this literature, however, is that while many epistemologists are busy arguing about which particular technical condition most effectively rules out the offensive presence of luck in true believing, almost no one is asking why it matters so much that knowledge be immune from luck in the first place. I argue that the best explanation for the consensus that luck undermines knowledge is that knowledge is, complications aside, credit-worthy true believing. To make this case, I develop both the notions of luck and credit, and sketch a theory of knowledge in those terms. Furthermore, this account also holds promise for being able to solve the “value problem” for knowledge, and it explains why both internal and external conditions are necessary to turn true belief into knowledge.

Journal ArticleDOI
14 Feb 2007-Synthese
TL;DR: This paper develops a simple model of an agent’s mental states, and defines intention revision operators, and develops a logic of intention dynamics, and investigates some of its properties.
Abstract: Although the change of beliefs in the face of new information has been widely studied with some success, the revision of other mental states has received little attention from the theoretical perspective. In particular, intentions are widely recognised as being a key attitude for rational agents, and while several formal theories of intention have been proposed in the literature, the logic of intention revision has been hardly considered. There are several reasons for this: perhaps most importantly, intentions are very closely connected with other mental states—in particular, beliefs about the future and the abilities of the agent. So, we cannot study them in isolation. We must consider the interplay between intention revision and the revision of other mental states, which complicates the picture considerably. In this paper, we present some first steps towards a theory of intention revision. We develop a simple model of an agent’s mental states, and define intention revision operators. Using this model, we develop a logic of intention dynamics, and then investigate some of its properties.

Journal ArticleDOI
Jakob Hohwy
26 Sep 2007-Synthese
TL;DR: How and why functional integration may matter for the mind is considered; a general theoretical framework, based on generative models, is discussed that may unify many of the debates surrounding functional integration and the mind.
Abstract: Different cognitive functions recruit a number of different, often overlapping, areas of the brain. Theories in cognitive and computational neuroscience are beginning to take this kind of functional integration into account. The contributions to this special issue consider what functional integration tells us about various aspects of the mind such as perception, language, volition, agency, and reward. Here, I consider how and why functional integration may matter for the mind; I discuss a general theoretical framework, based on generative models, that may unify many of the debates surrounding functional integration and the mind; and I briefly introduce each of the contributions.

Journal ArticleDOI
07 Sep 2007-Synthese
TL;DR: This essay discusses three specific empirical predictions regarding the resulting functional topography of the brain, and some of the evidence supporting them, and considers the implications of these findings for an account of the functional integration of cognitive operations.
Abstract: The massive redeployment hypothesis (MRH) is a theory about the functional topography of the human brain, offering a middle course between strict localization on the one hand, and holism on the other. Central to MRH is the claim that cognitive evolution proceeded in a way analogous to component reuse in software engineering, whereby existing components—originally developed to serve some specific purpose—were used for new purposes and combined to support new capacities, without disrupting their participation in existing programs. If the evolution of cognition was indeed driven by such exaptation, then we should be able to make some specific empirical predictions regarding the resulting functional topography of the brain. This essay discusses three such predictions, and some of the evidence supporting them. Then, using this account as a background, the essay considers the implications of these findings for an account of the functional integration of cognitive operations. For instance, MRH suggests that in order to determine the functional role of a given brain area it is necessary to consider its participation across multiple task categories, and not just focus on one, as has been the typical practice in cognitive neuroscience. This change of methodology will motivate (even perhaps necessitate) the development of a new, domain-neutral vocabulary for characterizing the contribution of individual brain areas to larger functional complexes, and direct particular attention to the question of how these various area roles are integrated and coordinated to result in the observed cognitive effect. Finally, the details of the mix of cognitive functions a given area supports should tell us something interesting not just about the likely computational role of that area, but about the nature of and relations between the cognitive functions themselves. For instance, growing evidence of the role of “motor” areas like M1, SMA and PMC in language processing, and of “language” areas like Broca’s area in motor control, offers the possibility for significantly reconceptualizing the nature both of language and of motor control.

Journal ArticleDOI
John Greco
21 Sep 2007-Synthese
TL;DR: I take issue with two claims that Duncan Pritchard makes in his recent book, Epistemic Luck: the first concerns his safety-based response to the lottery problem, the second his account of the relationship between safety and intellectual virtue.
Abstract: I take issue with two claims that Duncan Pritchard makes in his recent book, Epistemic Luck. The first concerns his safety-based response to the lottery problem; the second his account of the relationship between safety and intellectual virtue.

Journal ArticleDOI
07 Feb 2007-Synthese
TL;DR: It is argued that the problem of providing truthmakers for negative truths undermines truthmaker theory, and that truthmaker theory is, in any case, an under-motivated doctrine because the groundedness of truth can be explained without appeal to the truthmaker principle.
Abstract: This paper argues that a consideration of the problem of providing truthmakers for negative truths undermines truthmaker theory. Truthmaker theorists are presented with an uncomfortable dilemma. Either they must take up the challenge of providing truthmakers for negative truths, or else they must explain why negative truths are exceptions to the principle that every truth must have a truthmaker. The first horn is unattractive since the prospects of providing truthmakers for negative truths do not look good: neither absences, nor totality states of affairs, nor Graham Priest and J.C. Beall’s ‘polarities’ (Beall, 2000; Priest, 2000) are up to the job. The second horn, meanwhile, is problematic because restricting the truthmaker principle to atomic truths, or weakening it to the thesis that truth supervenes on being, undercuts truthmaker theory’s original motivation. The paper ends by arguing that truthmaker theory is, in any case, an under-motivated doctrine because the groundedness of truth can be explained without appeal to the truthmaker principle. This leaves us free to give the commonsensical and deflationary explanation of negative truths that common sense suggests.

Journal ArticleDOI
01 Nov 2007-Synthese
TL;DR: This article discusses the notion of a linguistic universal, and possible sources of such invariant properties of natural languages, and shows how computer simulations can be employed to study the large-scale, emergent consequences of psychologically motivated assumptions about the workings of horizontal language transmission.
Abstract: In this article we discuss the notion of a linguistic universal, and possible sources of such invariant properties of natural languages. In the first part, we explore the conceptual issues that arise. In the second part of the paper, we focus on the explanatory potential of horizontal evolution. We particularly focus on two case studies, concerning Zipf's Law and universal properties of color terms, respectively. We show how computer simulations can be employed to study the large-scale, emergent consequences of psychologically motivated assumptions about the workings of horizontal language transmission.
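To give a flavour of the kind of simulation mentioned, here is a generic sketch (not the authors' model) of Simon's classic reuse-plus-innovation process: each new token is a brand-new word with small probability alpha and otherwise repeats an earlier token chosen uniformly at random, hence with probability proportional to its frequency so far. The resulting rank-frequency distribution is approximately Zipfian.

import random
from collections import Counter

def simon_process(n_tokens=100_000, alpha=0.05, seed=1):
    # Reuse-plus-innovation: innovate with probability alpha, otherwise repeat a past
    # token drawn uniformly at random (i.e. in proportion to its frequency so far).
    rng = random.Random(seed)
    tokens = [0]
    next_word = 1
    for _ in range(n_tokens - 1):
        if rng.random() < alpha:
            tokens.append(next_word)
            next_word += 1
        else:
            tokens.append(rng.choice(tokens))
    return Counter(tokens)

freqs = sorted(simon_process().values(), reverse=True)
for rank in (1, 10, 100, 1000):
    if rank <= len(freqs):
        print(rank, freqs[rank - 1])   # frequency falls off roughly as a power of rank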

Journal ArticleDOI
24 May 2007-Synthese
TL;DR: This model appears to violate Bayesian conditionalization, but it is argued that this is not the case: by paying close attention to the details of conditionalization in contexts where indexical information is relevant, the hybrid model is shown to be consistent with Bayesian kinematics.
Abstract: The Sleeping Beauty problem is a touchstone for theories about self-locating belief, i.e. theories about how we should reason when data or theories contain indexical information. Opinion on this problem is split between two camps, those who defend the “1/2 view” and those who advocate the “1/3 view”. I argue that both these positions are mistaken. Instead, I propose a new “hybrid” model, which avoids the faults of the standard views while retaining their attractive properties. This model appears to violate Bayesian conditionalization, but I argue that this is not the case. By paying close attention to the details of conditionalization in contexts where indexical information is relevant, we discover that the hybrid model is in fact consistent with Bayesian kinematics. If the proposed model is correct, there are important lessons for the study of self-location, observation selection theory, and “anthropic reasoning”.
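For readers unfamiliar with why the two camps disagree, a small Monte Carlo illustration (of the standard views only, not of the hybrid model): counting per coin toss yields the halfer's 1/2, counting per awakening yields the thirder's 1/3, and the dispute is over which counting reflects the right way to take the indexical evidence into account.

import random

def sleeping_beauty(n_trials=100_000, seed=0):
    rng = random.Random(seed)
    heads_trials = heads_awakenings = awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        n_wake = 1 if heads else 2            # Heads: woken on Monday only; Tails: Monday and Tuesday
        heads_trials += 1 if heads else 0
        awakenings += n_wake
        heads_awakenings += n_wake if heads else 0
    print("Heads frequency per trial:    ", heads_trials / n_trials)        # about 1/2
    print("Heads frequency per awakening:", heads_awakenings / awakenings)  # about 1/3

sleeping_beauty()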

Journal ArticleDOI
26 Feb 2007-Synthese
TL;DR: The necessary motion twixt macroscopic and microscopic views of matter in modern chemistry leads to the coexistence of symbolic and iconic representations, and in another way to the deliberate, creative violation of categories.
Abstract: Had more philosophers of science come from chemistry, their thinking would have been different. I begin by looking at a typical chemical paper, in which making something is the leitmotif, and conjecture/refutation is pretty much irrelevant. What in fact might have been, might be, different? The realism of chemists is reinforced by their remarkable ability to transform matter; they buy into reductionism where it serves them, but make no real use of it. Incommensurability is taken without a blink, and actually serves. The preeminence of synthesis in chemistry could have led philosophers of science to take more seriously questions of aesthetics within science, and to find a place in aesthetics for utility. The necessary motion twixt macroscopic and microscopic views of matter in modern chemistry leads to the coexistence of symbolic and iconic representations. And in another way to the deliberate, creative violation of categories.

Journal ArticleDOI
21 Sep 2007-Synthese
TL;DR: This paper argues that the safety condition on knowledge is not a proper formulation of the intuition that knowledge excludes luck, and suggests an alternative proposal in the same spirit as safety, which is found lacking.
Abstract: There is some consensus that for S to know that p, it cannot be merely a matter of luck that S’s belief that p is true. This consideration has led Duncan Pritchard and others to propose a safety condition on knowledge. In this paper, we argue that the safety condition is not a proper formulation of the intuition that knowledge excludes luck. We suggest an alternative proposal in the same spirit as safety, and find it lacking as well.

Journal ArticleDOI
David H. Glass
26 Jul 2007-Synthese
TL;DR: By adopting a coherence measure to rank competing explanations in terms of their coherence with a piece of evidence, IBE can be made more precise and so a major objection to this mode of reasoning can be addressed.
Abstract: This paper considers an application of work on probabilistic measures of coherence to inference to the best explanation (IBE). Rather than considering information reported from different sources, as is usually the case when discussing coherence measures, the approach adopted here is to use a coherence measure to rank competing explanations in terms of their coherence with a piece of evidence. By adopting such an approach IBE can be made more precise and so a major objection to this mode of reasoning can be addressed. Advantages of the coherence-based approach are pointed out by comparing it with several other ways to characterize ‘best explanation’ and showing that it takes into account their insights while overcoming some of their problems. The consequences of adopting this approach for IBE are discussed in the context of recent discussions about the relationship between IBE and Bayesianism.
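A minimal sketch of the ranking idea (the overlap measure and all the numbers below are illustrative assumptions, not necessarily the measure or cases used in the paper): given probabilities for candidate explanations H and evidence E, rank the candidates by their coherence with E.

def overlap_coherence(p_h, p_e, p_h_and_e):
    """One simple probabilistic coherence measure: P(H and E) / P(H or E)."""
    return p_h_and_e / (p_h + p_e - p_h_and_e)

p_e = 0.2                       # P(E), illustrative
candidates = {                  # hypothesis -> (P(H), P(H and E)), illustrative
    "H1": (0.30, 0.10),
    "H2": (0.05, 0.04),
    "H3": (0.60, 0.06),
}

ranking = sorted(candidates,
                 key=lambda h: overlap_coherence(candidates[h][0], p_e, candidates[h][1]),
                 reverse=True)
for h in ranking:
    p_h, p_he = candidates[h]
    print(h, round(overlap_coherence(p_h, p_e, p_he), 3))   # H1: 0.25, H2: 0.19, H3: 0.081

Note that the ranking can diverge from a simple posterior ranking: here H2 outranks H3 on coherence even though H3 has the higher posterior probability given E, because nearly all of H2's probability lies inside E.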

Journal ArticleDOI
10 Feb 2007-Synthese
TL;DR: This work investigates the research programme of dynamic doxastic logic (DDL), analyzes its underlying methodology, and uses the Ramsey test for conditionals to characterize the logical and philosophical differences between two paradigmatic systems, AGM and KGM, which are developed and compared axiomatically and semantically.
Abstract: We investigate the research programme of dynamic doxastic logic (DDL) and analyze its underlying methodology. The Ramsey test for conditionals is used to characterize the logical and philosophical differences between two paradigmatic systems, AGM and KGM, which we develop and compare axiomatically and semantically. The importance of Gärdenfors's impossibility result on the Ramsey test is highlighted by a comparison with Arrow's impossibility result on social choice. We end with an outlook on the prospects and the future of DDL.
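The Ramsey test referred to here, in its standard formulation: a conditional 'if A then B' is accepted in a belief state K just in case B is accepted in the state that results from revising K by A,

A > B \in K \iff B \in K \ast A.

Gärdenfors's impossibility result shows that, in any sufficiently non-trivial belief revision system, this test cannot be combined with the AGM preservation postulate (if \neg A \notin K then K \subseteq K \ast A); the comparison drawn in the paper is with Arrow's theorem, where individually plausible conditions on social choice likewise prove jointly unsatisfiable.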

Journal ArticleDOI
02 Sep 2007-Synthese
TL;DR: This work adopts the Neural Engineering Framework (NEF) of Eliasmith & Anderson, which identifies implementational principles for neural models, and suggests that adopting statistical modeling methods for perception and action will be functionally sufficient for capturing biological behavior.
Abstract: To have a fully integrated understanding of neurobiological systems, we must address two fundamental questions: 1. What do brains do (what is their function)? and 2. How do brains do whatever it is that they do (how is that function implemented)? I begin by arguing that these questions are necessarily inter-related. Thus, addressing one without consideration of an answer to the other, as is often done, is a mistake. I then describe what I take to be the best available approach to addressing both questions. Specifically, to address 2, I adopt the Neural Engineering Framework (NEF) of Eliasmith & Anderson [Neural engineering: Computation, representation and dynamics in neurobiological systems. Cambridge, MA: MIT Press, 2003], which identifies implementational principles for neural models. To address 1, I suggest that adopting statistical modeling methods for perception and action will be functionally sufficient for capturing biological behavior. I show how these two answers will be mutually constraining, since the process of model selection for the statistical method in this approach can be informed by known anatomical and physiological properties of the brain, captured by the NEF. Similarly, the application of the NEF must be informed by functional hypotheses, captured by the statistical modeling approach.

Journal ArticleDOI
09 Jun 2007-Synthese
TL;DR: In this paper, the defeasible argumentation scheme for practical reasoning (Walton 1990) is revised and two new schemes are presented, each with a matching set of critical questions.
Abstract: In this paper, the defeasible argumentation scheme for practical reasoning (Walton 1990) is revised. To replace the old scheme, two new schemes are presented, each with a matching set of critical questions. One is a purely instrumental scheme, while the other is a more complex scheme that takes values into account. It is argued that a given instance of practical reasoning can be evaluated, using schemes and sets of critical questions, in three ways: by attacking one or more premises of the argument, by attacking the inferential link between the premises and conclusion, or by mounting a counter-argument. It is argued that such an evaluation can be carried out in many cases using an argument diagram structure in which all components of the practical reasoning in the case are represented as premises, conclusions, and inferential links between them that can be labeled as argumentation schemes. This system works if every critical question can be classified as an assumption of or an exception to the original argument. However, it is also argued that this system does not work in all cases, namely those where epistemic closure is problematic because of intractable disputes about burden of proof.

Journal ArticleDOI
24 Jul 2007-Synthese
TL;DR: This paper first investigates the Bourbaki-inspired assumption that structures are types of set-structured systems and next considers the extent to which this problematic assumption underpins both Suppes’ and recent semantic views of the structure of a scientific theory.
Abstract: Recent semantic approaches to scientific structuralism, aiming to make precise the concept of shared structure between models, formally frame a model as a type of set-structure. This framework is then used to provide a semantic account of (a) the structure of a scientific theory, (b) the applicability of a mathematical theory to a physical theory, and (c) the structural realist’s appeal to the structural continuity between successive physical theories. In this paper, I challenge the idea that, to be so used, the concept of a model and so the concept of shared structure between models must be formally framed within a single unified framework, set-theoretic or other. I first investigate the Bourbaki-inspired assumption that structures are types of set-structured systems and next consider the extent to which this problematic assumption underpins both Suppes’ and recent semantic views of the structure of a scientific theory. I then use this investigation to show that, when it comes to using the concept of shared structure, there is no need to agree with French that “without a formal framework for explicating this concept of ‘structure-similarity’ it remains vague, just as Giere’s concept of similarity between models does ...” (French, 2000, Synthese, 125, pp. 103–120, p. 114). Neither concept is vague; either can be made precise by appealing to the concept of a morphism, but it is the context (and not any set-theoretic type) that determines the appropriate kind of morphism. I make use of French’s (1999, From physics to philosophy (pp. 187–207). Cambridge: Cambridge University Press) own example from the development of quantum theory to show that, for both Weyl and Wigner’s programmes, it was the context of considering the ‘relevant symmetries’ that determined that the appropriate kind of morphism was the one that preserved the shared Lie-group structure of both the theoretical and phenomenological models.

Journal ArticleDOI
20 Apr 2007-Synthese
TL;DR: It will be clear that Newcomb problems are indeed counterexamples to evidential decision theory once it is recognized that deliberating agents are free to believe what they want about their own actions.
Abstract: Richard Jeffrey long held that decision theory should be formulated without recourse to explicitly causal notions. Newcomb problems stand out as putative counterexamples to this ‘evidential’ decision theory. Jeffrey initially sought to defuse Newcomb problems via recourse to the doctrine of ratificationism, but later came to see this as problematic. We will see that Jeffrey’s worries about ratificationism were not compelling, but that valid ratificationist arguments implicitly presuppose causal decision theory. In later work, Jeffrey argued that Newcomb problems are not decisions at all because agents who face them possess so much evidence about correlations between their actions and states of the world that they are unable to regard their deliberate choices as causes of outcomes, and so cannot see themselves as making free choices. Jeffrey’s reasoning goes wrong because it fails to recognize that an agent’s beliefs about her immediately available acts are so closely tied to the immediate causes of these actions that she can create evidence that outweighs any antecedent correlations between acts and states. Once we recognize that deliberating agents are free to believe what they want about their own actions, it will be clear that Newcomb problems are indeed counterexamples to evidential decision theory.
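For readers who want the standard contrast on the table (the payoffs and predictor reliability are illustrative, not figures from the paper): with a predictor of reliability 0.99, evidential decision theory compares conditional expectations,

V(one-box) = 0.99 x 1,000,000 + 0.01 x 0 = 990,000,
V(two-box) = 0.99 x 1,000 + 0.01 x 1,001,000 = 11,000,

and so recommends one-boxing, whereas causal decision theory holds the prediction causally fixed and recommends two-boxing by dominance. The abstract's conclusion is that, once deliberating agents can regard their choices as causes, Newcomb problems really are genuine decisions and genuine counterexamples to the evidential recommendation.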

Journal ArticleDOI
20 Oct 2007-Synthese
TL;DR: An attempt is made to defend a general approach to the spatial content of perception, an approach according to which perception is imbued with spatial content in virtue of certain kinds of connections between the perceiving organism's sensory input and its behavioral output.
Abstract: An attempt is made to defend a general approach to the spatial content of perception, an approach according to which perception is imbued with spatial content in virtue of certain kinds of connections between the perceiving organism's sensory input and its behavioral output. The most important aspect of the defense involves clearly distinguishing two kinds of perceptuo-behavioral skills—the formation of dispositions, and a capacity for emulation. The former, the formation of dispositions, is argued to be the central pivot of spatial content. I provide a neural information processing interpretation of what these dispositions amount to, and describe how dispositions, so understood, are an obvious implementation of Gareth Evans' proposal on the topic. Furthermore, I describe what sorts of contribution are made by emulation mechanisms, and I also describe exactly how the emulation framework differs from similar but distinct notions with which it is often unhelpfully confused, such as sensorimotor contingencies and forward models.