
Showing papers in "Synthese in 2011"


Journal ArticleDOI
11 Mar 2011-Synthese
TL;DR: A framework for building a unified science of cognition is sketched by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems.
Abstract: We sketch a framework for building a unified science of cognition. This unification is achieved by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems. The core idea is that functional analyses are sketches of mechanisms, in which some structural aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are seamlessly integrated with multilevel mechanistic explanations.

281 citations


Journal ArticleDOI
Alisa Bokulich
01 May 2011-Synthese
TL;DR: It is argued that there are circumstances under which such false models can offer genuine scientific explanations, and a more general account of model explanations is introduced, which specifies the conditions under which models can be counted as explanatory.
Abstract: Scientific models invariably involve some degree of idealization, abstraction, or fictionalization of their target system. Nonetheless, I argue that there are circumstances under which such false models can offer genuine scientific explanations. After reviewing three different proposals in the literature for how models can explain, I shall introduce a more general account of what I call model explanations, which specify the conditions under which models can be counted as explanatory. I shall illustrate this new framework by applying it to the case of Bohr’s model of the atom, and conclude by drawing some distinctions between phenomenological models, explanatory models, and fictional models.

206 citations


Journal ArticleDOI
01 Jun 2011-Synthese
TL;DR: The paper argues against the tendency in the philosophy of science literature to link abduction to inference to the best explanation (IBE), and in particular against the claim that Peircean abduction is a conceptual predecessor to IBE.
Abstract: I argue against the tendency in the philosophy of science literature to link abduction to the inference to the best explanation (IBE), and in particular, to claim that Peircean abduction is a conceptual predecessor to IBE. This is not to discount either abduction or IBE. Rather the purpose of this paper is to clarify the relation between Peircean abduction and IBE in accounting for ampliative inference in science. This paper aims at a proper classification—not justification—of types of scientific reasoning. In particular, I claim that Peircean abduction is an in-depth account of the process of generating explanatory hypotheses, while IBE, at least in Peter Lipton’s thorough treatment, is a more encompassing account of the processes both of generating and of evaluating scientific hypotheses. There is then a two-fold problem with the claim that abduction is IBE. On the one hand, it conflates abduction and induction, which are two distinct forms of logical inference, with two distinct aims, as shown by Charles S. Peirce; on the other hand it lacks a clear sense of the full scope of IBE as an account of scientific inference.

144 citations


Journal ArticleDOI
05 Jul 2011-Synthese
TL;DR: It is argued that computational models in this domain possess explanatory force to the extent that they describe the mechanisms responsible for producing a given phenomenon—paralleling how other mechanistic models explain.
Abstract: The central aim of this paper is to shed light on the nature of explanation in computational neuroscience. I argue that computational models in this domain possess explanatory force to the extent that they describe the mechanisms responsible for producing a given phenomenon—paralleling how other mechanistic models explain. Conceiving computational explanation as a species of mechanistic explanation affords an important distinction between computational models that play genuine explanatory roles and those that merely provide accurate descriptions or predictions of phenomena. It also serves to clarify the pattern of model refinement and elaboration undertaken by computational neuroscientists.

137 citations


Journal ArticleDOI
01 May 2011-Synthese
TL;DR: The paper suggests modifying Giere’s account without going all the way to purely pragmatic conceptions of truth—while giving pragmatics a prominent role in modeling and truth-acquisition.
Abstract: If models can be true, where is their truth located? Giere (Explaining Science, University of Chicago Press, Chicago, 1988) has suggested an account of theoretical models on which models themselves are not truth-valued. The paper suggests modifying Giere’s account without going all the way to purely pragmatic conceptions of truth—while giving pragmatics a prominent role in modeling and truth-acquisition. The strategy of the paper is to ask: if I want to relocate truth inside models, how do I get it, and what else do I need to accept and reject? In particular, what ideas about model and truth do I need? The case used as an illustration is the world’s first economic model, that of von Thünen (1826/1842) on agricultural land use in the highly idealized Isolated State.

134 citations


Journal ArticleDOI
23 Jun 2011-Synthese
TL;DR: It is argued that cognitive models of the systems that underlie psychological capacities, while superficially similar to mechanistic models, have a substantially more complex relation to the real underlying system.
Abstract: Mechanistic explanation has an impressive track record of advancing our understanding of complex, hierarchically organized physical systems, particularly biological and neural systems. But not every complex system can be understood mechanistically. Psychological capacities are often understood by providing cognitive models of the systems that underlie them. I argue that these models, while superficially similar to mechanistic models, in fact have a substantially more complex relation to the real underlying system. They are typically constructed using a range of techniques for abstracting the functional properties of the system, which may not coincide with its mechanistic organization. I describe these techniques and show that despite being non-mechanistic, these cognitive models can satisfy the normative constraints on good explanations.

106 citations


Journal ArticleDOI
01 Feb 2011-Synthese
TL;DR: An approach to action and practical deliberation is developed according to which the degree of epistemic warrant required for practical rationality varies with practical context; it may provide a strict invariantist account of cases that have been thought to motivate interest-relative or subject-sensitive theories of knowledge and warrant.
Abstract: I develop an approach to action and practical deliberation according to which the degree of epistemic warrant required for practical rationality varies with practical context. In some contexts of practical deliberation, very strong warrant is called for. In others, less will do. I set forth a warrant account, (WA), that captures this idea. I develop and defend (WA) by arguing that it is more promising than a competing knowledge account of action due to John Hawthorne and Jason Stanley. I argue that cases of warranted false belief speak in favor of (WA) and against the knowledge account. Moreover, I note some problems with an “excuse maneuver” that proponents of the knowledge account frequently invoke in response to cases of warranted false belief. Finally, I argue that (WA) may provide a strict invariantist account of cases that have been thought to motivate interest-relative or subject-sensitive theories of knowledge and warrant.

92 citations


Journal ArticleDOI
01 May 2011-Synthese
TL;DR: It is concluded that whether or not theoretical terms successfully refer is not the key to formulating the appropriate form of scientific realism in response to arguments from theory change, and that the case of phlogiston theory is readily accommodated by ontic structural realism.
Abstract: The aim of this paper is to revisit the phlogiston theory to see what can be learned from it about the relationship between scientific realism, approximate truth and successful reference. It is argued that phlogiston theory did to some extent correctly describe the causal or nomological structure of the world, and that some of its central terms can be regarded as referring. However, it is concluded that the issue of whether or not theoretical terms successfully refer is not the key to formulating the appropriate form of scientific realism in response to arguments from theory change, and that the case of phlogiston theory is shown to be readily accommodated by ontic structural realism.

64 citations


Journal ArticleDOI
01 Mar 2011-Synthese
TL;DR: It is concluded that there is no contradiction between classical logic and (the authors' dynamic reinterpretation of) quantum logic, and that the Dynamic-Logical perspective leads to a better and deeper understanding of the “non-classicality” of quantum behavior than any perspective based on static Propositional Logic.
Abstract: We address the old question whether a logical understanding of Quantum Mechanics requires abandoning some of the principles of classical logic. Against Putnam and others (among whom we may or may not count E. W. Beth, depending on how we interpret some of his statements), our answer is a clear “no”. Philosophically, our argument is based on combining a formal semantic approach, in the spirit of E. W. Beth’s proposal of applying Tarski’s semantical methods to the analysis of physical theories, with an empirical–experimental approach to Logic, as advocated by both Beth and Putnam, but understood by us in the light of the operational-realistic tradition of Jauch and Piron, i.e. as an investigation of “the logic of yes–no experiments” (or “questions”). Technically, we use the recently developed setting of Quantum Dynamic Logic (Baltag and Smets 2005, 2008) to make explicit the operational meaning of quantum-mechanical concepts in our formal semantics. Based on our recent results (Baltag and Smets 2005), we show that the correct interpretation of quantum-logical connectives is dynamical, rather than purely propositional. We conclude that there is no contradiction between classical logic and (our dynamic reinterpretation of) quantum logic. Moreover, we argue that the Dynamic-Logical perspective leads to a better and deeper understanding of the “non-classicality” of quantum behavior than any perspective based on static Propositional Logic.

64 citations


Journal ArticleDOI
01 Sep 2011-Synthese
TL;DR: This paper provides a restatement and defense of the data/phenomena distinction introduced by Jim Bogen and me several decades ago.
Abstract: This paper provides a restatement and defense of the data/phenomena distinction introduced by Jim Bogen and me several decades ago (e.g., Bogen and Woodward, The Philosophical Review, 303–352, 1988). Additional motivation for the distinction is introduced, ideas surrounding the distinction are clarified, and an attempt is made to respond to several criticisms.

63 citations


Journal ArticleDOI
01 Jul 2011-Synthese
TL;DR: This paper analyzes the structure of trumping cases from the perspective of contrastive causation, and argues that the case is much more complex than it first appears.
Abstract: Jonathan Schaffer introduced a new type of causal structure called ‘trumping’. According to Schaffer, trumping is a species of causal preemption. Both Schaffer and I have argued that causation has a contrastive structure. In this paper, I analyze the structure of trumping cases from the perspective of contrastive causation, and argue that the case is much more complex than it first appears. Nonetheless, there is little reason to regard trumping as a species of causal preemption.

Journal ArticleDOI
01 May 2011-Synthese
TL;DR: This paper outlines a defense of scientific realism against the pessimistic meta-induction which appeals to the phenomenon of the exponential growth of science and offers a framework through which scientific realism can be compared with two types of anti-realism.
Abstract: This paper outlines a defense of scientific realism against the pessimistic meta-induction which appeals to the phenomenon of the exponential growth of science. Here, scientific realism is defined as the view that our current successful scientific theories are mostly approximately true, and pessimistic meta-induction is the argument that projects the occurrence of past refutations of successful theories to the present, concluding that many or most current successful scientific theories are false. The defense starts with the observation that at least 80% of all scientific work ever done has been done since 1950, proceeds with the claim that practically all of our most successful theories were entirely stable during that period of time, and concludes that the projection of refutations of successful theories to the present is unsound. In addition to this defense, the paper offers a framework through which scientific realism can be compared with two types of anti-realism. The framework is also of help to examine the relationships between these three positions and the three main arguments offered respectively in their support (no-miracle argument, pessimistic meta-induction, underdetermination).

Journal ArticleDOI
01 Feb 2011-Synthese
TL;DR: The diagnosis of part-whole explanation in the biological sciences as well as in other domains exploring evolved, complex, and integrated systems cross-cuts standard philosophical categories of explanation: causal explanation and explanation as unification.
Abstract: A scientific explanatory project, part-whole explanation, and a kind of science, part-whole science, are premised on identifying, investigating, and using parts and wholes. In the biological sciences, mechanistic, structuralist, and historical explanations are part-whole explanations. Each expresses different norms, explananda, and aims. Each is associated with a distinct partitioning frame for abstracting kinds of parts. These three explanatory projects can be complemented in order to provide an integrative vision of the whole system, as is shown for a detailed case study: the tetrapod limb. My diagnosis of part-whole explanation in the biological sciences as well as in other domains exploring evolved, complex, and integrated systems (e.g., psychology and cognitive science) cross-cuts standard philosophical categories of explanation: causal explanation and explanation as unification. Part-whole explanation is itself one essential aspect of part-whole science.

Journal ArticleDOI
01 May 2011-Synthese
TL;DR: The problematic roles of heuristic fruitfulness and surplus structure in attempts to break these forms of underdetermination are discussed and an approach emphasizing the relevant structural commonalities is defended.
Abstract: Various forms of underdetermination that might threaten the realist stance are examined. That which holds between different ‘formulations’ of a theory (such as the Hamiltonian and Lagrangian formulations of classical mechanics) is considered in some detail, as is the ‘metaphysical’ underdetermination invoked to support ‘ontic structural realism’. The problematic roles of heuristic fruitfulness and surplus structure in attempts to break these forms of underdetermination are discussed and an approach emphasizing the relevant structural commonalities is defended.

Journal ArticleDOI
01 Jan 2011-Synthese
TL;DR: In the 2005 Kitzmiller v. Dover Area School Board case, a federal district court ruled that Intelligent Design creationism was not science, but a disguised religious view, and that teaching it in public schools is unconstitutional.
Abstract: In the 2005 Kitzmiller v. Dover Area School Board case, a federal district court ruled that Intelligent Design creationism was not science, but a disguised religious view, and that teaching it in public schools is unconstitutional. But creationists contend that it is illegitimate to distinguish science and religion, citing philosophers Quinn and especially Laudan, who had criticized a similar ruling in the 1981 McLean v. Arkansas creation-science case on the grounds that no necessary and sufficient demarcation criterion was possible and that demarcation was a dead pseudo-problem. This article discusses problems with those conclusions and their application to the quite different reasoning between these two cases. Laudan focused too narrowly on the problem of demarcation as Popper defined it. Distinguishing science from religion was and remains an important conceptual issue with significant practical import, and philosophers who say there is no difference have lost touch with reality in a profound and perverse way. The Kitzmiller case did not rely on a strict demarcation criterion, but appealed only to a “ballpark” demarcation that identifies methodological naturalism (MN) as a “ground rule” of science. MN is shown to be a distinguishing feature of science both in explicit statements from scientific organizations and in actual practice. There is good reason to think that MN is shared as a tacit assumption among philosophers who emphasize other demarcation criteria and even by Laudan himself.

Journal ArticleDOI
01 May 2011-Synthese
TL;DR: It is argued that there is a straightforward way to take the recent results establishing the weak discernibility of elementary particles as a reason in favour of OntSR rather than against it.
Abstract: One of the reasons provided for the shift away from an ontology for physical reality of material objects & properties towards one of physical structures & relations (Ontological Structural Realism: OntSR) is that the quantum-mechanical description of composite physical systems of similar elementary particles entails they are indiscernible. As material objects, they ‘wither away’, and when they wither away, structures emerge in their stead. We inquire into the question whether recent results establishing the weak discernibility of elementary particles pose a threat to this quantum-mechanical reason for OntSR, because precisely their newly discovered discernibility prevents them from ‘withering away’. We argue there is a straightforward manner to consider the recent results as a reason in favour of OntSR rather than against it.

Journal ArticleDOI
01 Aug 2011-Synthese
TL;DR: It is shown that in cases of extended cognition, the most salient feature explaining S’s believing the truth regarding p may well be external to S, that is, it might be a feature of S�'s (non-human, artifactual) environment.
Abstract: The Credit Theory of Knowledge (CTK)—as expressed by such figures as John Greco, Wayne Riggs, and Ernest Sosa—holds that knowing that p implies deserving epistemic credit for truly believing that p. Opponents have presented three sorts of counterexamples to CTK: S might know that p without deserving credit in cases of (1) innate knowledge (Lackey, Kvanvig); (2) testimonial knowledge (Lackey); or (3) perceptual knowledge (Pritchard). The arguments of Lackey, Kvanvig and Pritchard, however, are effective only in so far as one is willing to accept a set of controversial background assumptions (for instance, that innate knowledge exists or that doxastic voluntarism is wrong). In this paper I mount a fourth argument against CTK, one that doesn’t rest on any such controversial premise and that should therefore convince a much wider audience. In particular, I show that in cases of extended cognition (very broadly conceived), the most salient feature explaining S’s believing the truth regarding p may well be external to S, that is, it might be a feature of S’s (non-human, artifactual) environment. If so, the cognitive achievement of knowing that p is not (or only marginally) creditable to S, and hence, CTK is false.

Journal ArticleDOI
01 Apr 2011-Synthese
TL;DR: This article considers the case where available actions are public announcements, and where each agent has a (typically epistemic) goal formula that she would like to become true, and analyzes such public announcement games.
Abstract: Dynamic epistemic logic describes the possible information-changing actions available to individual agents, and their knowledge pre- and postconditions. For example, public announcement logic describes actions in the form of public, truthful announcements. However, little research so far has considered describing and analysing rational choice between such actions, i.e., predicting what rational self-interested agents actually will or should do. Since the outcome of information exchange ultimately depends on the actions chosen by all the agents in the system, and assuming that agents have preferences over such outcomes, this is a game theoretic scenario. This is, in our opinion, an interesting general research direction, combining logic and game theory in the study of rational information exchange. In this article we take some first steps in this direction: we consider the case where available actions are public announcements, and where each agent has a (typically epistemic) goal formula that she would like to become true. What will each agent announce? The truth of the goal formula also depends on the announcements made by other agents. We analyse such public announcement games.
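
The setup described in this abstract is easy to make concrete. Below is a minimal sketch in Python, not code from the paper: the two-atom model, the available announcements, and the agents’ goal formulas are all invented for illustration, with a truthful public announcement modeled in the usual way as deletion of the worlds where the announced formula fails.

```python
# Minimal public announcement game sketch (all specifics are illustrative).
from itertools import product

# Worlds assign truth values to the atoms p and q; one world is actual.
WORLDS = [{"p": p, "q": q} for p, q in product([True, False], repeat=2)]
ACTUAL = {"p": True, "q": True}

def announce(worlds, formula):
    """Public announcement: keep only the worlds where the formula holds."""
    return [w for w in worlds if formula(w)]

def known(worlds, formula):
    """After the announcements, a formula is known iff it holds in all remaining worlds."""
    return all(formula(w) for w in worlds)

# Hypothetical goal formulas: agent 1 wants q to become known, agent 2 wants p known.
goal_1 = lambda ws: known(ws, lambda w: w["q"])
goal_2 = lambda ws: known(ws, lambda w: w["p"])

# Each agent can truthfully announce the actual value of one atom, or stay silent.
actions = {
    "say p": lambda w: w["p"] == ACTUAL["p"],
    "say q": lambda w: w["q"] == ACTUAL["q"],
    "silent": lambda w: True,
}

# Enumerate strategy profiles and see which announcements achieve which goals.
for a1, a2 in product(actions, repeat=2):
    ws = announce(announce(WORLDS, actions[a1]), actions[a2])
    print(f"1: {a1}, 2: {a2} -> goal_1={goal_1(ws)}, goal_2={goal_2(ws)}")
```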

Journal ArticleDOI
01 Mar 2011-Synthese
TL;DR: It is argued that the examples show the need to refer to dynamic, in particular causal laws in an approach to their truth conditions, and it is claimed that a causal notion of consequence is needed.
Abstract: This paper deals with the truth conditions of conditional sentences. It focuses on a particular class of problematic examples for semantic theories of these sentences. I will argue that the examples show the need to refer to dynamic, in particular causal, laws in an approach to their truth conditions. More particularly, I will claim that we need a causal notion of consequence. The proposal subsequently made uses a representation of causal dependencies as proposed in Pearl (2000) to formalize a causal notion of consequence. This notion, inserted into premise semantics for counterfactuals in the style of Veltman (1976) and Kratzer (1979), will provide a new interpretation rule for conditionals. I will illustrate how this approach overcomes problems of previous proposals and end with some remarks on remaining questions.
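
To give a feel for the Pearl-style machinery the abstract draws on, here is a toy structural causal model with interventions. The rain/sprinkler equations are a stock textbook example, assumed here for illustration only; the sketch is not the paper’s formalism, but it shows how a do-style intervention underwrites a causal, rather than merely truth-functional, notion of consequence.

```python
# Toy structural causal model (illustrative; not the paper's formalism).

def model(rain, sprinkler):
    """Structural equation: the ground is wet iff it rains or the sprinkler runs."""
    return {"rain": rain, "sprinkler": sprinkler, "wet": rain or sprinkler}

def intervene(rain, sprinkler, **do):
    """Pearl-style do-operator: override chosen variables, then re-derive the rest."""
    rain = do.get("rain", rain)
    sprinkler = do.get("sprinkler", sprinkler)
    return model(rain, sprinkler)

# Actual world: no rain, sprinkler on, hence the ground is wet.
actual = model(rain=False, sprinkler=True)

# "If it had rained, the ground would (still) be wet": intervene on rain alone.
print(intervene(False, True, rain=True)["wet"])        # True

# "If the sprinkler had been off, the ground would be dry": true here because
# in the actual setting 'wet' causally depends on the sprinkler, not the rain.
print(intervene(False, True, sprinkler=False)["wet"])  # False
```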

Journal ArticleDOI
01 May 2011-Synthese
TL;DR: It is argued that no convincing reason has been given for thinking that theories are ‘underdetermined by the evidence’ in any way that should worry the scientific realist: a merely data equivalent rival is not equally well empirically supported, and hence poses no threat to realism.
Abstract: Are theories ‘underdetermined by the evidence’ in any way that should worry the scientific realist? I argue that no convincing reason has been given for thinking so. A crucial distinction is drawn between data equivalence and empirical equivalence. Duhem showed that it is always possible to produce a data equivalent rival to any accepted scientific theory. But there is no reason to regard such a rival as equally well empirically supported and hence no threat to realism. Two theories are empirically equivalent if they share all consequences expressed in purely observational vocabulary. This is a much stronger requirement than has hitherto been recognised—two such ‘rival’ theories must in fact agree on many claims that are clearly theoretical in nature. Given this, it is unclear how much of an impact on realism a demonstration that there is always an empirically equivalent ‘rival’ to any accepted theory would have—even if such a demonstration could be produced. Certainly in the case of the version of realism that I defend—structural realism—such a demonstration would have precisely no impact: two empirically equivalent theories are, according to structural realism, cognitively indistinguishable.

Journal ArticleDOI
01 Feb 2011-Synthese
TL;DR: The nature of mechanisms and the distinction between the relevant and irrelevant parts involved in a mechanism’s operation are examined and a novel account of the distinction that appeals to some resources from Mackie's theory of causation is offered.
Abstract: This paper will examine the nature of mechanisms and the distinction between the relevant and irrelevant parts involved in a mechanism’s operation. I first consider Craver’s account of this distinction in his book on the nature of mechanisms, and explain some problems. I then offer a novel account of the distinction that appeals to some resources from Mackie’s theory of causation. I end by explaining how this account enables us to better understand what mechanisms are and their various features.

Journal ArticleDOI
01 May 2011-Synthese
TL;DR: This paper argues that scientific realism can happily co-exist with models qua abstracta, and hence that fictionalism towards scientific theories is not inevitable.
Abstract: A natural way to think of models is as abstract entities. If theories employ models to represent the world, theories traffic in abstract entities much more widely than is often assumed. This kind of thought seems to create a problem for a scientific realist approach to theories. Scientific realists claim theories should be understood literally. Do they then imply (and are they committed to) the reality of abstract entities? Or are theories simply—and incurably—false (if there are no abstract entities)? Or has the very idea of literal understanding to be abandoned? Is then fictionalism towards scientific theories inevitable? This paper argues that scientific realism can happily co-exist with models qua abstracta.

Journal ArticleDOI
01 Nov 2011-Synthese
TL;DR: This paper investigates the status of the assumptions that Kitcher and Strevens make in their models, by first inquiring whether they are reasonable representations of reality, and then checking, through a series of agent-based simulations, the models’ robustness against weakenings of these assumptions.
Abstract: Scientific research is almost always conducted by communities of scientists of varying size and complexity. Such communities are effective, in part, because they divide their cognitive labor: not every scientist works on the same project. Philip Kitcher and Michael Strevens have pioneered efforts to understand this division of cognitive labor by proposing models of how scientists make decisions about which project to work on. For such models to be useful, they must be simple enough for us to understand their dynamics, but faithful enough to reality that we can use them to analyze real scientific communities. To satisfy the first requirement, we must employ idealizations to simplify the model. The second requirement demands that these idealizations not be so extreme that we lose the ability to describe real-world phenomena. This paper investigates the status of the assumptions that Kitcher and Strevens make in their models, by first inquiring whether they are reasonable representations of reality, and then by checking the models’ robustness against weakenings of these assumptions. To do this, we first argue against the reality of the assumptions, and then develop a series of agent-based simulations to systematically test their effects on model outcomes. We find that the models are not robust against weakenings of these idealizations. In fact, we find that under certain conditions this can lead to the model predicting outcomes that are qualitatively opposite of the original model outcomes.
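
The following toy simulation gestures at the kind of agent-based model the abstract describes. It is a sketch under assumptions of my own (two projects, an invented diminishing-returns success curve, rewards split evenly among a project’s workers, myopic best-response moves), not a reproduction of the Kitcher/Strevens models or of the paper’s simulations.

```python
# Toy division-of-cognitive-labor simulation (all parameters are invented).
import random

def success_prob(project, n):
    """Diminishing-returns chance that a project succeeds with n workers."""
    ceiling = {"A": 0.9, "B": 0.6}[project]
    return ceiling * (1 - 0.8 ** n) if n > 0 else 0.0

def expected_payoff(project, n):
    """Credit for success is assumed to be split evenly among the workers."""
    return success_prob(project, n) / n if n > 0 else 0.0

random.seed(0)
assignment = {i: random.choice("AB") for i in range(20)}  # 20 scientists

for _ in range(200):  # myopic best-response dynamics
    i = random.randrange(20)
    counts = {p: sum(1 for a in assignment.values() if a == p) for p in "AB"}
    here = assignment[i]
    there = "B" if here == "A" else "A"
    # Switch if joining the other project improves scientist i's expected payoff.
    if expected_payoff(there, counts[there] + 1) > expected_payoff(here, counts[here]):
        assignment[i] = there

counts = {p: sum(1 for a in assignment.values() if a == p) for p in "AB"}
print(counts)  # typically a non-trivial split of labor across both projects
```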

Journal ArticleDOI
01 Sep 2011-Synthese
TL;DR: It is argued that the distributed, expert-based approach to modeling the discipline of philosophy carries substantial practical and philosophical benefits over alternatives.
Abstract: The application of digital humanities techniques to philosophy is changing the way scholars approach the discipline. This paper seeks to open a discussion about the difficulties, methods, opportunities, and dangers of creating and utilizing a formal representation of the discipline of philosophy. We review our current project, the Indiana Philosophy Ontology (InPhO) project, which uses a combination of automated methods and expert feedback to create a dynamic computational ontology for the discipline of philosophy. We argue that our distributed, expert-based approach to modeling the discipline carries substantial practical and philosophical benefits over alternatives. We also discuss challenges facing our project (and any other similar project) as well as the future directions for digital philosophy afforded by formal modeling.

Journal ArticleDOI
01 Nov 2011-Synthese
TL;DR: It is argued in this paper that the derivation should be based on a model-theoretical relation of logical consequence rather than derivability by means of mechanical (recursive) rules.
Abstract: The modern notion of the axiomatic method developed as a part of the conceptualization of mathematics starting in the nineteenth century. The basic idea of the method is the capture of a class of structures as the models of an axiomatic system. The mathematical study of such classes of structures is not exhausted by the derivation of theorems from the axioms but normally includes the metatheory of the axiom system. This conception of axiomatization satisfies the crucial requirement that the derivation of theorems from axioms does not produce new information in the usual sense of the term, called depth information. It can produce new information in a different sense, called surface information. It is argued in this paper that the derivation should be based on a model-theoretical relation of logical consequence rather than derivability by means of mechanical (recursive) rules. Likewise, completeness must be understood by reference to a model-theoretical consequence relation. A correctly understood notion of axiomatization does not apply to purely logical theories. In the latter the only relevant kind of axiomatization amounts to recursive enumeration of logical truths. First-order “axiomatic” set theories are not genuine axiomatizations. The main reason is that their models are structures of particulars, not of sets. Axiomatization cannot usually be motivated epistemologically, but it is related to the idea of explanation.

Journal ArticleDOI
01 Jan 2011-Synthese
TL;DR: It is argued that the probabilities of theories like CSM and ET are neither chances nor subjective probabilities, but a third type of probability, counterfactual probability, a second concept of objective probability distinct from chance.
Abstract: Some have argued that chance and determinism are compatible in order to account for the objectivity of probabilities in theories that are compatible with determinism, like Classical Statistical Mechanics (CSM) and Evolutionary Theory (ET). Contrarily, some have argued that chance and determinism are incompatible, and so such probabilities are subjective. In this paper, I argue that both of these positions are unsatisfactory. I argue that the probabilities of theories like CSM and ET are not chances, but also that they are not subjective probabilities either. Rather, they are a third type of probability, which I call counterfactual probability. The main distinguishing feature of counterfactual probability is the role it plays in conveying important counterfactual information in explanations. This distinguishes counterfactual probability from chance as a second concept of objective probability.

Journal ArticleDOI
01 Jan 2011-Synthese
TL;DR: The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating.
Abstract: Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
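
A toy calculation can make the abstract’s central contrast vivid. The example below is my own, not the paper’s: on the outcome space {1, 2, 3}, with an evidential constraint E[X] = 2 that is assumed to persist after learning (as on the objective Bayesian picture), conditionalising the maximum-entropy prior on the event X ≠ 3 yields roughly (0.5, 0.5, 0), whereas re-maximising entropy under the combined constraints yields (0, 1, 0).

```python
# Toy contrast between conditionalisation and max-entropy updating
# on the outcome space {1, 2, 3} (example invented for illustration).
from math import log
from itertools import product

def entropy(p):
    return -sum(x * log(x) for x in p if x > 0)

def maxent(constraints, step=0.01):
    """Brute-force search for the max-entropy distribution meeting all constraints."""
    best, best_h = None, -1.0
    grid = [i * step for i in range(int(round(1 / step)) + 1)]
    for p1, p2 in product(grid, repeat=2):
        p3 = 1.0 - p1 - p2
        if p3 < -1e-9:
            continue
        p = (p1, p2, max(p3, 0.0))
        if all(c(p) for c in constraints) and entropy(p) > best_h:
            best, best_h = p, entropy(p)
    return best

mean_two = lambda p: abs(p[0] + 2 * p[1] + 3 * p[2] - 2) < 1e-6  # evidence: E[X] = 2
not_three = lambda p: p[2] < 1e-9                                # new evidence: X != 3

prior = maxent([mean_two])                           # ~ (1/3, 1/3, 1/3)
z = prior[0] + prior[1]
conditionalised = (prior[0] / z, prior[1] / z, 0.0)  # ~ (0.5, 0.5, 0)
maxent_updated = maxent([mean_two, not_three])       # (0, 1, 0): E[X]=2 forces all mass on 2

print(prior, conditionalised, maxent_updated)
```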

Journal ArticleDOI
01 Sep 2011-Synthese
TL;DR: This paper develops an analysis of the stabilization of phenomena that integrates two aspects that have largely been treated separately in the literature: one concerns the skills required for empirical work; the other concerns the strategies by which claims about phenomena are validated.
Abstract: The last two decades have seen a rising interest in (a) the notion of a scientific phenomenon as distinct from theories and data, and (b) the intricacies of experimentally producing and stabilizing phenomena. This paper develops an analysis of the stabilization of phenomena that integrates two aspects that have largely been treated separately in the literature: one concerns the skills required for empirical work; the other concerns the strategies by which claims about phenomena are validated. I argue that in order to make sense of the process of stabilization, we need to distinguish between two types of phenomena: phenomena as patterns in the data (“surface regularities”) and phenomena as underlying (or “hidden”) regularities. I show that the epistemic relationships that data bear to each of these types of phenomena are different: Data patterns are instantiated by individual data, whereas underlying regularities are indicated by individual data, insofar as they instantiate a data pattern. Drawing on an example from memory research, I argue that neither of these two kinds of phenomenon can be stabilized in isolation. I conclude that what is stabilized when phenomena are stabilized is the fit between surface regularities and hidden regularities.

Journal ArticleDOI
E. J. Lowe
01 Jan 2011-Synthese
TL;DR: It is argued that metaphysics, conceived as an inquiry into the ultimate nature of mind-independent reality, is a rationally indispensable intellectual discipline, with the a priori science of formal ontology at its heart.
Abstract: In this paper, it is argued that metaphysics, conceived as an inquiry into the ultimate nature of mind-independent reality, is a rationally indispensable intellectual discipline, with the a priori science of formal ontology at its heart. It is maintained that formal ontology, properly understood, is not a mere exercise in conceptual analysis, because its primary objective is a normative one, being nothing less than the attempt to grasp adequately the essences of things, both actual and possible, with a view to understanding as far as we can the fundamental structure of reality as a whole. Accordingly, it is urged, the deliverances of formal ontology have a modal and epistemic status akin to those of other a priori sciences, such as mathematics and logic, rather than constituting rivals to the claims of the empirical sciences, such as physics.

Journal ArticleDOI
01 Jan 2011-Synthese
TL;DR: It is argued that while scientific inquiry inevitably favours a high degree of consensus in the authors' choices of stance, there is no parallel constraint in the case of philosophical inquiry, such as that concerned with how scientific knowledge should be interpreted.
Abstract: The philosophy of science has produced numerous accounts of how scientific facts are generated, from very specific facilitators of belief, such as neo-Kantian constitutive principles, to global frameworks, such as Kuhnian paradigms. I consider a recent addition to this canon: van Fraassen’s notion of an epistemic stance—a collection of attitudes and policies governing the generation of factual beliefs—and his commitment to voluntarism in this context: the idea that contrary stances and sets of beliefs are rationally permissible. I argue that while scientific inquiry inevitably favours a high degree of consensus in our choices of stance, there is no parallel constraint in the case of philosophical inquiry, such as that concerned with how scientific knowledge should be interpreted. This leads, in the latter case, to a fundamental and apparently irresolvable mystery at the heart of stance voluntarism, regarding the grounds for choosing basic epistemic stances.