
Showing papers in "European journal for philosophy of science in 2013"


Journal ArticleDOI
TL;DR: In this paper, the authors argue that allegedly value-laden decisions can be systematically avoided by making uncertainties explicit and articulating findings carefully. Such uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change (IPCC).
Abstract: The ideal of value-free science states that the justification of scientific findings should not be based on non-epistemic (e.g. moral or political) values. It has been criticized on the grounds that scientists have to employ moral judgements in managing inductive risks. The paper seeks to defuse this methodological critique. Allegedly value-laden decisions can be systematically avoided, it argues, by making uncertainties explicit and articulating findings carefully. Such careful uncertainty articulation, understood as a methodological strategy, is exemplified by the current practice of the Intergovernmental Panel on Climate Change (IPCC).

149 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that ontic structural realism (OSR) faces a dilemma: either it remains on the general level of realism with respect to the structure of a given theory, but then it is, like epistemic structural realism, only a partial realism; or it is a complete realism, but then it has to answer the question of how the structure is implemented, instantiated or realized, and thus has to argue for a particular interpretation of the theory in question.
Abstract: This paper argues that ontic structural realism (OSR) faces a dilemma: either it remains on the general level of realism with respect to the structure of a given theory, but then it is, like epistemic structural realism, only a partial realism; or it is a complete realism, but then it has to answer the question how the structure of a given theory is implemented, instantiated or realized and thus has to argue for a particular interpretation of the theory in question. This claim is illustrated by examining how OSR fares with respect to the three main candidates for an ontology of quantum mechanics, namely many worlds-type interpretations, collapse-type interpretations and hidden variable-type interpretations. The result is that OSR as such is not sufficient to answer the question of what the world is like if quantum mechanics is correct.

45 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the extent to which a concept of emergence can be associated with Effective Field Theories (EFTs), and suggest that such a concept can be characterized by microphysicalism and novelty underwritten by the elimination of degrees of freedom from a high-energy theory.
Abstract: This essay considers the extent to which a concept of emergence can be associated with Effective Field Theories (EFTs). I suggest that such a concept can be characterized by microphysicalism and novelty underwritten by the elimination of degrees of freedom from a high-energy theory, and argue that this makes emergence in EFTs distinct from other concepts of emergence in physics that have appeared in the recent philosophical literature.

35 citations


Journal ArticleDOI
TL;DR: The "argument from measurability," as discussed by the authors, relies on the false proposition that measuring mental states requires the existence of an observable ordering satisfying conditions like transitivity. Its failure, however, does not translate into a defense of mental-state accounts as accounts of well-being, or of measures of happiness and satisfaction as measures of well-being.
Abstract: A ubiquitous argument against mental-state accounts of well-being is based on the notion that mental states like happiness and satisfaction simply cannot be measured. The purpose of this paper is to articulate and to assess this “argument from measurability.” My main thesis is that the argument fails: on the most charitable interpretation, it relies on the false proposition that measurement requires the existence of an observable ordering satisfying conditions like transitivity. The failure of the argument from measurability, however, does not translate into a defense of mental-state accounts as accounts of well-being or of measures of happiness and satisfaction as measures of well-being. Indeed, I argue, the ubiquity of the argument from measurability may have obscured other, very real problems associated with mental-state accounts of well-being – above all, that happiness and satisfaction fail to track well-being – and with measures of happiness and satisfaction – above all, the tendency toward reification. I conclude that the central problem associated with the measurement of, e.g., happiness as a subjectively experienced mental state is not that it is too hard to measure, but rather that it is too easy to measure.

27 citations


Journal ArticleDOI
TL;DR: In this article, the spontaneous breaking of local gauge symmetry in quantized gauge theories was investigated from a philosophical angle, and the relation between symmetry breaking and phase transitions was discussed and some general conclusions for the philosophical interpretation of gauge symmetries and their breaking were drawn.
Abstract: The paper investigates the spontaneous breaking of gauge symmetries in gauge theories from a philosophical angle. Local gauge symmetry itself cannot break spontaneously in quantized gauge theories according to Elitzur's theorem---even though the notion of a spontaneously broken local gauge symmetry is widely employed in textbook expositions of the Higgs mechanism, the standard account of mass generation for the weak gauge bosons in the standard model. Nevertheless, gauge symmetry can be broken in gauge theories, namely, in the form of the breaking of remnant subgroups of the original local gauge group under which the theories typically remain invariant after gauge fixing. The paper discusses the relation between these instances of symmetry breaking and phase transitions and draws some more general conclusions for the philosophical interpretation of gauge symmetries and their breaking.

21 citations


Journal ArticleDOI
TL;DR: Fodor, as discussed by the authors, has shown that the real consequence of rejecting a Darwinian approach to the mind is to reject a Darwinian theory of phylogenetic evolution; the right conclusion is not that Darwin's theory is mistaken but that Fodor's and any other non-Darwinian approaches to the mind are wrong.
Abstract: There is only one physically possible process that builds and operates purposive systems in nature: natural selection. What it does is build and operate systems that look to us purposive, goal-directed, teleological. There really are not any purposes in nature and no purposive processes either. It is just one vast network of linked causal chains. Darwinian natural selection is the only process that could produce the appearance of purpose. That is why natural selection must have built and must continually shape the intentional causes of purposive behavior. Fodor’s argument against Darwinian theory involves a biologist’s modus tollens which is a cognitive scientist’s modus ponens. Assuming his argument is valid, the right conclusion is not that Darwin’s theory is mistaken but that Fodor’s and any other non-Darwinian approaches to the mind are wrong. It shows how getting things wrong in the philosophy of biology leads to mistaken conclusions with the potential to damage the acceptance of a theory with harmful consequences for human well-being. Fodor has shown that the real consequence of rejecting a Darwinian approach to the mind is to reject a Darwinian theory of phylogenetic evolution. This forces us to take seriously a notion that otherwise would not have much of a chance: that when it comes to the nature of mental states, indeterminacy rules. This is an insight that should have the most beneficial impact on freeing cognitive neuroscience from demands on the adequacy of its theories that it could never meet.

14 citations


Journal ArticleDOI
TL;DR: One way that scientifically recognized properties are multiply realized is by compensatory differences among realizing properties, as discussed by the authors: if a property G is jointly realized by two properties F1 and F2, then G can be multiply realized by having changes in the property F1 offset changes in F2.
Abstract: One way that scientifically recognized properties are multiply realized is by “compensatory differences” among realizing properties. If a property G is jointly realized by two properties F1 and F2, then G can be multiply realized by having changes in the property F1 offset changes in the property F2. In some cases, there are scientific laws that articulate how distinct combinations of physical quantities can determine one and the same value of some other physical quantity. One moral to draw is that in such cases we have the multiple realization of a single determinate, “fine grained” property instance that is exactly similar to another instance. As simple as this moral is, it has ramifications for a number of recent discussions of multiple realization in science. Taken collectively, these ramifications indicate that multiple realization by compensatory adjustments merits greater attention in the philosophy of science literature than it has hitherto received.

14 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare the economics variant of network theory with those of other fields and show how the methodology employed by economists to model networks is shaped by two explanatory desiderata: that the explanandum phenomenon is based on micro-economic foundations and that the explanation is general.
Abstract: Network theory is applied across the sciences to study phenomena as diverse as the spread of SARS, the topology of the cell, the structure of the Internet and job search behaviour. Underlying the study of networks is graph theory. Whether the graph represents a network of neurons, cells, friends or firms, it displays features that exclusively depend on the mathematical properties of the graph itself. However, the way in which graph theory is applied to the modelling of networks differs significantly across scientific fields. This article compares the economics variant of network theory with those of other fields. It shows how the methodology employed by economists to model networks is shaped by two explanatory desiderata: that the explanandum phenomenon is based on micro-economic foundations and that the explanation is general.

12 citations


Journal ArticleDOI
TL;DR: The Lotka–Volterra predator–prey model is a widely known example of model-based science. As discussed in this paper, its development follows a trajectory from a "how possibly" to a "how actually" model, and the authors examine how and to what extent Volterra and D'Ancona were able to advance their model along that trajectory.
Abstract: The Lotka–Volterra predator–prey model is a widely known example of model-based science. Here we reexamine Vito Volterra’s and Umberto D’Ancona’s original publications on the model, and in particular their methodological reflections. On this basis we develop several ideas pertaining to the philosophical debate on the scientific practice of modeling. First, we show that Volterra and D’Ancona chose modeling because the problem in hand could not be approached by more direct methods such as causal inference. This suggests a philosophically insightful motivation for choosing the strategy of modeling. Second, we show that the development of the model follows a trajectory from a “how possibly” to a “how actually” model. We discuss how and to what extent Volterra and D’Ancona were able to advance their model along that trajectory. It turns out they were unable to establish that their model was fully applicable to any system. Third, we consider another instance of model-based science: Darwin’s model of the origin and distribution of coral atolls in the Pacific Ocean. Darwin argued more successfully that his model faithfully represents the causal structure of the target system, and hence that it is a “how actually” model.

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that although we do not need to invoke any Platonic insight to explain thought experimentation, Norton's eliminativist account fails to capture the unique epistemological importance of thought experiments qua thought experiments, and they then present their own account according to which thought experiments are a particular type of inductive inference that is uniquely suited to generate new breakthroughs.
Abstract: Several major breakthroughs in the history of physics have been prompted not by new empirical data but by thought experiments. James Robert Brown and John Norton have developed accounts of how thought experiments can yield such advances. Brown argues that knowledge gained via thought experiments demands a Platonic explanation; thought experiments for Brown are a window into the Platonic realm of the laws of nature. Norton argues that thought experiments are just cleverly disguised inductive or deductive arguments, so no new account of their epistemology is needed. In this paper, I argue that although we do not need to invoke any Platonic insight to explain thought experimentation, Norton’s eliminativist account fails to capture the unique epistemological importance of thought experiments qua thought experiments. I then present my own account, according to which thought experiments are a particular type of inductive inference that is uniquely suited to generate new breakthroughs.

9 citations


Journal ArticleDOI
TL;DR: In this paper, the author presents a new model of aesthetic evaluations by revising McAllister's core idea of the aesthetic induction; the new model is based on empirical findings about affection and emotion and on a naturalistic aesthetic theory.
Abstract: In Beauty and Revolution in Science, James McAllister advances a rationalistic picture of science in which scientific progress is explained in terms of aesthetic evaluations of scientific theories. Here I present a new model of aesthetic evaluations by revising McAllister’s core idea of the aesthetic induction. I point out that the aesthetic induction suffers from anomalies and theoretical inconsistencies and propose a model free from such problems. The new model is based, on the one hand, on McAllister’s original model and on further developments by Theo Kuipers in his “Beauty, a Road to the Truth?”. On the other hand, it is based on empirical findings about affection and emotion, and a naturalistic aesthetic theory. The new model is thus a naturalistic model with a wider explanatory range and much more internal consistency than McAllister’s.

Journal ArticleDOI
TL;DR: In this paper, evolutionary biological arguments for two versions of the Extended Mind Thesis (EMT) are critically assessed: an argument appealing to Dawkins's "Extended Phenotype Thesis" (EPT) and an argument appealing to "Developmental Systems Theory" (DST).
Abstract: I critically assess two widely cited evolutionary biological arguments for two versions of the ‘Extended Mind Thesis’ (EMT): namely, an argument appealing to Dawkins’s ‘Extended Phenotype Thesis’ (EPT) and an argument appealing to ‘Developmental Systems Theory’ (DST). Specifically, I argue that, firstly, appealing to the EPT is not useful for supporting the EMT (in either version), as it is structured and motivated too differently from the latter to be able to corroborate or elucidate it. Secondly, I extend and defend Rupert’s argument that DST also fails to support or elucidate the EMT (in either version) by showing that the considerations in favour of the former theory have no bearing on the truth of the latter. I conclude by noting that the relevance of this discussion goes beyond the debate surrounding the EMT, as it brings out some of the difficulties of introducing evolutionary biological considerations into debates in psychology and philosophy more generally.

Journal ArticleDOI
Jan Sprenger1
TL;DR: The paper argues that Bayesian model selection procedures are very diverse in their inferential target and their justification, and substantiates this claim by means of case studies on three selected procedures: MML, BIC and DIC.
Abstract: Bayesian model selection has frequently been the focus of philosophical inquiry (e.g., Forster, Br J Philos Sci 46:399–424, 1995; Bandyopadhyay and Boik, Philos Sci 66:S390–S402, 1999; Dowe et al., Br J Philos Sci 58:709–754, 2007). This paper argues that Bayesian model selection procedures are very diverse in their inferential target and their justification, and substantiates this claim by means of case studies on three selected procedures: MML, BIC and DIC. Hence, there is no tight link between Bayesian model selection and Bayesian philosophy. Consequently, arguments for or against Bayesian reasoning based on properties of Bayesian model selection procedures should be treated with great caution.

Journal ArticleDOI
TL;DR: In this article, it was shown that it is impossible to reproduce the phenomenological description of quantum mechanical measurements (in particular the collapse of the state of the measured system) by assuming a suitable mixed initial state.
Abstract: I present a very general and simple argument—based on the no-signalling theorem—showing that within the framework of the unitary Schrödinger equation it is impossible to reproduce the phenomenological description of quantum mechanical measurements (in particular the collapse of the state of the measured system) by assuming a suitable mixed initial state of the apparatus. The thrust of the argument is thus similar to that of the ‘insolubility theorems’ for the measurement problem of quantum mechanics (which, however, focus on the impossibility of reproducing the macroscopic measurement results). Although I believe this form of the argument is new, I argue it is essentially a variant of Einstein’s reasoning in the context of the EPR paradox—which is thereby illuminated from a new angle.

Journal ArticleDOI
TL;DR: In this paper, a non-Markovian causal model for multi-causal forks is proposed and shown to be mathematically tractable; the paper also gives a general discussion of the controversy about the Markov condition and of the related controversy about probabilistic causality.
Abstract: The development of causal modelling since the 1950s has been accompanied by a number of controversies, the most striking of which concerns the Markov condition. Reichenbach's conjunctive forks did satisfy the Markov condition, while Salmon's interactive forks did not. Subsequently some experts in the field have argued that adequate causal models should always satisfy the Markov condition, while others have claimed that non-Markovian causal models are needed in some cases. This paper argues for the second position by considering the multi-causal forks, which are widespread in contemporary medicine (Section 2). A non-Markovian causal model for such forks is introduced and shown to be mathematically tractable (Sections 6, 7, and 8). The paper also gives a general discussion of the controversy about the Markov condition (Section 1), and of the related controversy about probabilistic causality (Sections 3, 4, and 5).

Journal ArticleDOI
TL;DR: Carnap's work was instrumental to the liberalization of empiricism in the 1930s that transformed the logical positivism of the Vienna Circle to what came to be known as logical empiricism as mentioned in this paper.
Abstract: Carnap’s work was instrumental to the liberalization of empiricism in the 1930s that transformed the logical positivism of the Vienna Circle to what came to be known as logical empiricism. A central feature of this liberalization was the deployment of the Principle of Tolerance, originally introduced in logic, but now invoked in an epistemological context in “Testability and Meaning” (Carnap 1936a, 1937b). Immediately afterwards, starting with Foundations of Logic and Mathematics, Carnap (1939) embraced semantics and turned to interpretation to guide the choice of a theoretical language for science. The first thesis of this paper is that recourse to an intended interpretation led to a partial retrenchment of the conventionalism implied by the Principle of Tolerance. It required that the choice of a language be based on abstraction from a (typically empirical) context; this procedure later became a component of the process of explication that was distinctive to Carnap’s mature views. The (typically empirical) interpretive origin of formal systems also ensured their likely syntactic consistency, an issue on which Carnap was strongly criticized by figures such as Beth and Gödel. The second thesis of this paper is that this reliance on an intended interpretation enabled constructed formal systems to be relevant to the development of empirical science.

Journal ArticleDOI
TL;DR: The first part of this paper shows that the interventionist nature of variables cannot, in principle, be established based only on an interventionist notion of causation, and demonstrates that standard observational methods identify intervention variables only if they also answer the questions that can be answered by interventionist techniques—which are thus rendered dispensable.
Abstract: The essential precondition of implementing interventionist techniques of causal reasoning is that particular variables are identified as so-called intervention variables. While the pertinent literature standardly brackets the question how this can be accomplished in concrete contexts of causal discovery, the first part of this paper shows that the interventionist nature of variables cannot, in principle, be established based only on an interventionist notion of causation. The second part then demonstrates that standard observational methods that draw on Bayesian networks identify intervention variables only if they also answer the questions that can be answered by interventionist techniques—which are thus rendered dispensable. The paper concludes by suggesting a way of identifying intervention variables that allows for exploiting the whole inferential potential of interventionist techniques.