
Showing papers in "Synthese in 2013"


Journal ArticleDOI
01 May 2013-Synthese
TL;DR: It is argued that non-domination is best understood as a thoroughly generic liberal ideal of freedom to which even negative libertarians are implicitly committed, for non-domination is negative liberty as of right—secured non-interference.
Abstract: I shall first briefly revisit the broad idea of ‘epistemic injustice’, explaining how it can take either distributive or discriminatory form, in order to put the concepts of ‘testimonial injustice’ and ‘hermeneutical injustice’ in place. In previous work I have explored how the wrong of both kinds of epistemic injustice has both an ethical and an epistemic significance—someone is wronged in their capacity as a knower. But my present aim is to show that this wrong can also have a political significance in relation to non-domination, and so to freedom. While it is only the republican conception of political freedom that presents non-domination as constitutive of freedom, I shall argue that non-domination is best understood as a thoroughly generic liberal ideal of freedom to which even negative libertarians are implicitly committed, for non-domination is negative liberty as of right—secured non-interference. Crucially on this conception, non-domination requires that the citizen can contest interferences. Pettit specifies three conditions of contestation, each of which protects against a salient risk of the would-be contester not getting a ‘proper hearing’. But I shall argue that missing from this list is anything to protect against a fourth salient threat: the threat that either kind of epistemic injustice might disable contestation by way of an unjust deflation of either credibility or intelligibility. Thus we see that both testimonial and hermeneutical injustice can render a would-be contester dominated. Epistemic justice is thereby revealed as a constitutive condition of non-domination, and thus of a central liberal political ideal of freedom.

200 citations


Journal Article
30 Apr 2013-Synthese
TL;DR: In this article, the authors propose a fully Bayesian solution that incorporates prediction into a global framework, apply the proposed procedures to the Gaussian model, and reach an explicit form for the various error probabilities that the practitioner can make.
Abstract: This work is a generalization and systematization of the methodology for clinical trials in a Bayesian framework, using a purely Bayesian sequential approach. The article provides a fully Bayesian solution that incorporates prediction into a global framework. At an interim analysis, the predictive inference bears on all the data, both the data already available and the future data; in this way, the prediction error is not overvalued, as it is in an approach that takes only the future observations into account. We applied the proposed procedures to the Gaussian model, where it was possible to reach an explicit form for the various error probabilities that the practitioner can make. We can thus offer the user an implementable and fully Bayesian tool. The sequential treatment adopted in this paper is a particularly innovative element compared to existing methodology; it also lightens multiphase studies more ambitious than existing ones, which makes the analysis more ethical for the patient, since it allows the trial to be stopped earlier. Keywords: Predictive methods, Bayesian analysis, clinical trials, p-value.
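A minimal sketch of the kind of interim predictive computation described above, assuming a Gaussian model with known variance and a conjugate normal prior; the prior parameters, sample sizes, and the success criterion are illustrative choices of mine, not the authors' specification.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative assumptions: Normal(mu0, tau0^2) prior on the mean
# effect mu, known sampling sd sigma, and final success declared
# when P(mu > 0 | data) > 0.95.
mu0, tau0, sigma = 0.0, 1.0, 2.0
n_interim, n_final = 30, 100
y_interim = rng.normal(0.5, sigma, n_interim)   # data available so far

def posterior(y):
    """Conjugate normal update for the mean (variance known)."""
    prec = 1 / tau0**2 + len(y) / sigma**2
    mean = (mu0 / tau0**2 + y.sum() / sigma**2) / prec
    return mean, (1 / prec) ** 0.5

m, s = posterior(y_interim)

# Interim predictive probability of success: draw mu from the current
# posterior, simulate the remaining patients, redo the final analysis.
hits, n_sim = 0, 10_000
for _ in range(n_sim):
    mu_draw = rng.normal(m, s)
    y_future = rng.normal(mu_draw, sigma, n_final - n_interim)
    m_fin, s_fin = posterior(np.concatenate([y_interim, y_future]))
    hits += norm.cdf(m_fin / s_fin) > 0.95    # P(mu > 0 | all data)
print("predictive probability of final success:", hits / n_sim)
```

A low predictive probability at the interim look is what licenses the early stopping the abstract describes.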

181 citations


Journal ArticleDOI
01 Mar 2013-Synthese
TL;DR: An account of comparative similarity for impossible worlds is developed, and the view that counterpossibles are sometimes non-vacuously true and sometimes non-vacuously false is defended, while retaining a Lewisian semantics.
Abstract: Since the publication of David Lewis’ Counterfactuals, the standard line on subjunctive conditionals with impossible antecedents (or counterpossibles) has been that they are vacuously true. That is, a conditional of the form ‘If p were the case, q would be the case’ is trivially true whenever the antecedent, p, is impossible. The primary justification is that Lewis’ semantics best approximates the English subjunctive conditional, and that a vacuous treatment of counterpossibles is a consequence of that very elegant theory. Another justification derives from the classical lore that if an impossibility were true, then anything goes. In this paper we defend non-vacuism, the view that counterpossibles are sometimes non-vacuously true and sometimes non-vacuously false. We do so while retaining a Lewisian semantics, which is to say, the approach we favor does not require us to abandon classical logic or a similarity semantics. It does however require us to countenance impossible worlds. An impossible worlds treatment of counterpossibles is suggested (but not defended) by Lewis (Counterfactuals. Blackwell, Oxford, 1973), and developed by Nolan (Notre Dame J Formal Logic 38:325–527, 1997), Kment (Mind 115:261–310, 2006a: Philos Perspect 20:237–302, 2006b), and Vander Laan (In: Jackson F, Priest G (eds) Lewisian themes. Oxford University Press, Oxford, 2004). We follow this tradition, and develop an account of comparative similarity for impossible worlds.
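For orientation, the Lewis-style clause at issue can be stated as follows (a standard reconstruction, not a quotation from the paper). Vacuism results because an impossible antecedent triggers the first case; the non-vacuist move is to let the quantified worlds and the ordering $<_w$ range over impossible worlds, so that the second case can decide counterpossibles non-trivially.

```latex
% Lewis-style truth conditions for 'if p were the case, q would be'
\[
  w \models p \mathrel{\Box\!\!\rightarrow} q \iff
  \begin{cases}
    \text{no $p$-world is accessible from $w$; or}\\[2pt]
    \exists u\,\big(u \models p \wedge q \ \text{ and }\
      u <_w v \ \text{for every } v \models p \wedge \neg q\big).
  \end{cases}
\]
```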

143 citations


Journal ArticleDOI
01 Jul 2013-Synthese
TL;DR: An argument is developed for the leading idea behind the Toolbox Project, namely, that philosophical dialogue can improve cross-disciplinary science by effecting epistemic changes that lead to better group communication.
Abstract: In this article we argue that philosophy can facilitate improvement in cross-disciplinary science. In particular, we discuss in detail the Toolbox Project, an effort in applied epistemology that deploys philosophical analysis for the purpose of enhancing collaborative, cross-disciplinary scientific research through improvements in cross-disciplinary communication. We begin by sketching the scientific context within which the Toolbox Project operates, a context that features a growing interest in and commitment to cross-disciplinary research (CDR). We then develop an argument for the leading idea behind this effort, namely, that philosophical dialogue can improve cross-disciplinary science by effecting epistemic changes that lead to better group communication. On the heels of this argument, we describe our approach and its output; in particular, we emphasize the Toolbox instrument that generates philosophical dialogue and the Toolbox workshop in which that dialogue takes place. Together, these constitute a philosophical intervention into the life of CDR teams. We conclude by considering the philosophical implications of this intervention.

131 citations


Journal ArticleDOI
01 Jul 2013-Synthese
TL;DR: This paper groups some philosophers—mostly from the philosophy of science, social–political philosophy, and moral theory—and some non-philosophers together to provide three different, but related, answers to the question of interdisciplinary communication.
Abstract: In this paper I attempt to answer the question: What is interdisciplinary communication? I attempt to answer this question, rather than what some might consider the ontologically prior question—what is interdisciplinarity (ID)?—for two reasons: (1) there is no generally agreed-upon definition of ID; and (2) one’s views regarding interdisciplinary communication have a normative relationship with one’s other views of ID, including one’s views of its very essence. I support these claims with reference to the growing literature on ID, which has a marked tendency to favor the idea that interdisciplinary communication entails some kind of ‘integration’. The literature on ID does not yet include very many philosophers, but we have something valuable to offer in addressing the question of interdisciplinary communication. Playing somewhat fast-and-loose with traditional categories of the subdisciplines of philosophy, I group some philosophers—mostly from the philosophy of science, social–political philosophy, and moral theory—and some non-philosophers together to provide three different, but related, answers to the question of interdisciplinary communication. The groups are as follows: (1) Habermas–Klein, (2) Kuhn–MacIntyre, and (3) Bataille–Lyotard. These groups can also be thought of in terms of the types of answers they give to the question of interdisciplinary communication, especially in terms of the following key words (where the numbers correspond to the groups from the previous sentence): (1) consensus, (2) incommensurability, and (3) invention.

108 citations


Journal ArticleDOI
05 Jan 2013-Synthese
TL;DR: By designing non-standard Kripke semantics for the language of PAL, it is shown that the proof system based on this core set of axioms does not completely axiomatize PAL without additional axioms and rules.
Abstract: In the literature, different axiomatizations of Public Announcement Logic (PAL) have been proposed. Most of these axiomatizations share a "core set" of the so-called "reduction axioms". In this paper, by designing non-standard Kripke semantics for the language of PAL, we show that the proof system based on this core set of axioms does not completely axiomatize PAL without additional axioms and rules. In fact, many of the intuitive axioms and rules we took for granted could not be derived from the core set. Moreover, we also propose and advocate an alternative yet meaningful axiomatization of PAL without the reduction axioms. The completeness is proved directly by a detour method using the canonical model where announcements are treated as merely labels for modalities as in normal modal logics. This new axiomatization and its completeness proof may sharpen our understanding of PAL and can be adapted to other dynamic epistemic logics.
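For reference, the "core set" of reduction axioms at issue is standardly given as follows (in one common formulation), with $[!\varphi]$ the public announcement modality and $K_i$ the knowledge operator; this is the textbook presentation of PAL rather than anything specific to this paper.

```latex
\begin{align*}
  [!\varphi]\, p                 &\leftrightarrow (\varphi \to p)
      && \text{(atoms)}\\
  [!\varphi]\, \neg\psi          &\leftrightarrow (\varphi \to \neg [!\varphi]\psi)
      && \text{(negation)}\\
  [!\varphi]\, (\psi \wedge \chi) &\leftrightarrow ([!\varphi]\psi \wedge [!\varphi]\chi)
      && \text{(conjunction)}\\
  [!\varphi]\, K_i \psi          &\leftrightarrow (\varphi \to K_i\, [!\varphi]\psi)
      && \text{(knowledge)}
\end{align*}
```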

101 citations


Journal ArticleDOI
01 May 2013-Synthese
TL;DR: The paper makes the general case that, all other things being equal and under some reasonable assumptions, more is smarter; applied to deliberative assemblies with an upper limit on the number of participants, the argument supports a specific mode of selection: random selection.
Abstract: This paper argues in favor of the epistemic properties of inclusiveness in the context of democratic deliberative assemblies and derives the implications of this argument in terms of the epistemically superior mode of selection of representatives. The paper makes the general case that, all other things being equal and under some reasonable assumptions, more is smarter. When applied to deliberative assemblies of representatives, where there is an upper limit to the number of people that can be included in the group, the argument translates into a defense of a specific selection mode of participants: random selection.
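A jury-theorem-style toy simulation of the "more is smarter" claim; the uniform competence level and simple majority rule are illustrative assumptions of mine and do not capture the paper's diversity-based argument.

```python
import random

def majority_accuracy(n_voters, competence=0.55, trials=20_000):
    """P(a simple majority of independent voters is right), each voter
    being correct with probability `competence`."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < competence for _ in range(n_voters))
        wins += correct > n_voters / 2
    return wins / trials

for n in (1, 11, 101, 1001):
    print(n, round(majority_accuracy(n), 3))
# With independent, better-than-chance voters, accuracy climbs toward 1
# as the assembly grows: 'more is smarter' in the jury-theorem sense.
```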

94 citations


Journal ArticleDOI
01 Jul 2013-Synthese
TL;DR: An analysis is provided of the epistemic dependence between individual experts with different areas of expertise and of the cooperative activity they engage in when participating in interdisciplinary research in a group.
Abstract: In interdisciplinary research scientists have to share and integrate knowledge between people and across disciplinary boundaries. An important issue for philosophy of science is to understand how scientists who work in these kinds of environments exchange knowledge and develop new concepts and theories across diverging fields. There is a substantial literature within social epistemology that discusses the social aspects of scientific knowledge, but so far few attempts have been made to apply these resources to the analysis of interdisciplinary science. Further, much of the existing work either ignores the issue of differences in background knowledge, or it focuses explicitly on conflicting background knowledge. In this paper we provide an analysis of the interplay between the epistemic dependence of individual experts with different areas of expertise and the cooperative activity they engage in when participating in interdisciplinary research in a group, and we compare our findings with those of other studies of interdisciplinary research.

83 citations


Journal ArticleDOI
Boaz Miller1
01 May 2013-Synthese
TL;DR: It is argued that a consensus is likely to be knowledge based when knowledge is the best explanation of the consensus, and three conditions are identified (social calibration, apparent consilience of evidence, and social diversity) for knowledge being the best explanation of a consensus.
Abstract: Scientific consensus is widely deferred to in public debates as a social indicator of the existence of knowledge. However, it is far from clear that such deference to consensus is always justified. The existence of agreement in a community of researchers is a contingent fact, and researchers may reach a consensus for all kinds of reasons, such as fighting a common foe or sharing a common bias. Scientific consensus, by itself, does not necessarily indicate the existence of shared knowledge among the members of the consensus community. I address the question of under what conditions it is likely that a consensus is in fact knowledge based. I argue that a consensus is likely to be knowledge based when knowledge is the best explanation of the consensus, and I identify three conditions (social calibration, apparent consilience of evidence, and social diversity) for knowledge being the best explanation of a consensus.

82 citations


Journal ArticleDOI
01 Apr 2013-Synthese
TL;DR: An account (URM) of understanding as a certain representational capacity is built: understanding x involves possessing a representation of x that can be manipulated in useful ways. The account captures the insight that understanding is vitally connected to practice.
Abstract: Claims pertaining to understanding are made in a variety of contexts and ways. As a result, few in the philosophical literature have made an attempt to precisely characterize the state that is y understanding x. This paper builds an account that does just that. The account is motivated by two main observations. First, understanding x is somehow related to being able to manipulate x. Second, understanding is a mental phenomenon, and so what manipulations are required to be an understander must only be mental manipulations. Combining these two insights, the paper builds an account (URM) of understanding as a certain representational capacity—specifically, understanding x involves possessing a representation of x that could be manipulated in useful ways. By tying understanding to representation, the account correctly identifies that understanding is a fundamentally cognitive achievement. However, by also demanding that which representations count as understanding-conferring be determined by their practical effects, URM captures the insight that understanding is vitally connected to practice. URM is fully general, and can apply equally well to understanding states of affairs, understanding events, and even understanding people and works of art. The ultimate test of URM is its applicability in actual scientific and philosophical discourse. To that end the paper discusses the importance of understanding in the philosophy of science, psychology, and computer science.

79 citations


Journal ArticleDOI
05 Mar 2013-Synthese
TL;DR: This paper argues that the account of shared intentions this approach yields is less cognitively and conceptually demanding than other accounts and is thus applicable to the intentional joint actions performed by young children, and that it has limitations of its own.
Abstract: Philosophers have proposed accounts of shared intentions that aim at capturing what makes a joint action intentionally joint. On these accounts, having a shared intention typically presupposes cognitively and conceptually demanding theory of mind skills. Yet, young children engage in what appears to be intentional, cooperative joint action long before they master these skills. In this paper, I attempt to characterize a modest or 'lite' notion of shared intention, inspired by Michael Bacharach's approach to team-agency theory in terms of framing, group identification and team reasoning. I argue that the account of shared intentions this approach yields is less cognitively and conceptually demanding than other accounts and is thus applicable to the intentional joint actions performed by young children. I also argue that it has limitations of its own and that considering what these limitations are may help us understand why we sometimes need to take other routes to shared intentions.

Journal ArticleDOI
25 Apr 2013-Synthese
TL;DR: It is argued that propositions, in order to embody both informative and inquisitive content in a satisfactory way, should be defined as non-empty, downward closed sets of possibilities, where each possibility in turn is a set of possible worlds.
Abstract: In classical logic, the proposition expressed by a sentence is construed as a set of possible worlds, capturing the informative content of the sentence. However, sentences in natural language are not only used to provide information, but also to request information. Thus, natural language semantics requires a logical framework whose notion of meaning does not only embody informative content, but also inquisitive content. This paper develops the algebraic foundations for such a framework. We argue that propositions, in order to embody both informative and inquisitive content in a satisfactory way, should be defined as non-empty, downward closed sets of possibilities, where each possibility in turn is a set of possible worlds. We define a natural entailment order over such propositions, capturing when one proposition is at least as informative and inquisitive as another, and we show that this entailment order gives rise to a complete Heyting algebra, with meet, join, and relative pseudo-complement operators. Just as in classical logic, these semantic operators are then associated with the logical constants in a first-order language. We explore the logical properties of the resulting system and discuss its significance for natural language semantics. We show that the system essentially coincides with the simplest and most well-understood existing implementation of inquisitive semantics, and that its treatment of disjunction and existentials also concurs with recent work in alternative semantics. Thus, our algebraic considerations do not lead to a wholly new treatment of the logical constants, but rather provide more solid foundations for some of the existing proposals.
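The algebra described above is small enough to execute directly. The sketch below, with encoding choices of my own over a four-world toy space, renders possibilities as sets of worlds, propositions as non-empty downward closed sets of possibilities, entailment as inclusion, and meet, join, and relative pseudo-complement as the abstract describes them.

```python
from itertools import combinations

WORLDS = frozenset({'w1', 'w2', 'w3', 'w4'})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

STATES = powerset(WORLDS)          # information states: sets of worlds

def downward_close(alternatives):
    """The proposition generated by some maximal alternatives: the
    (non-empty) downward closed set of all their substates."""
    return frozenset(t for s in alternatives for t in powerset(s))

def entails(p, q):                 # p |= q  iff  p is a subset of q
    return p <= q

def meet(p, q):                    # conjunction: intersection
    return p & q

def join(p, q):                    # disjunction: union
    return p | q

def impl(p, q):                    # relative pseudo-complement
    return frozenset(s for s in STATES
                     if all(t not in p or t in q for t in powerset(s)))

# An inquisitive proposition with two alternatives ('which half?')
question = downward_close([frozenset({'w1', 'w2'}),
                           frozenset({'w3', 'w4'})])
info = downward_close([frozenset({'w1', 'w2'})])  # purely informative
print(entails(info, question))     # True: the information settles the issue
```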

Journal ArticleDOI
01 Apr 2013-Synthese
TL;DR: This article generalizes the explanationist account of inference to the best explanation (IBE), draws a clear distinction between IBE and abduction, and presents abduction as the first step of IBE.
Abstract: This article generalizes the explanationist account of inference to the best explanation (IBE). It draws a clear distinction between IBE and abduction and presents abduction as the first step of IBE. The second step amounts to the evaluation of explanatory power, which consists in the degree to which a hypothesis exhibits the explanatory virtues. Moreover, even though coherence is the most often cited explanatory virtue, on pain of circularity, it should not be treated as one of the explanatory virtues. Rather, coherence should be equated with explanatory power and considered to be derivable from the other explanatory virtues: unification, explanatory depth and simplicity.

Journal ArticleDOI
01 Oct 2013-Synthese
TL;DR: This paper considers the pessimistic induction construed as a deductive argument and as an inductive argument (specifically, inductive generalization) and argues that both formulations of the pessimistic induction are fallacious.
Abstract: In this paper, I consider the pessimistic induction construed as a deductive argument (specifically, reductio ad absurdum) and as an inductive argument (specifically, inductive generalization). I argue that both formulations of the pessimistic induction are fallacious. I also consider another possible interpretation of the pessimistic induction, namely, as pointing to counterexamples to the scientific realist’s thesis that success is a reliable mark of (approximate) truth. I argue that this interpretation of the pessimistic induction fails, too. If this is correct, then the pessimistic induction is an utter failure that should be abandoned by scientific anti-realists.

Journal ArticleDOI
01 Aug 2013-Synthese
TL;DR: It is argued that relevant information determines whether considerations of value may be treated as reasons for actions that realize them and against actions that don’t, and incorporating this normative fact requires a revision of the standard ordering semantics for weak (but not for strong) deontic necessity modals.
Abstract: This paper discusses an important puzzle about the semantics of indicative conditionals and deontic necessity modals (should, ought, etc.): the Miner Puzzle (Parfit, ms; Kolodny and MacFarlane, J Philos 107:115–143, 2010). Rejecting modus ponens for the indicative conditional, as others have proposed, seems to solve a version of the puzzle, but is actually orthogonal to the puzzle itself. In fact, I prove that the puzzle arises for a variety of sophisticated analyses of the truth-conditions of indicative conditionals. A comprehensive solution requires rethinking the relationship between relevant information (what we know) and practical rankings of possibilities and actions (what to do). I argue that (i) relevant information determines whether considerations of value may be treated as reasons for actions that realize them and against actions that don’t, (ii) incorporating this normative fact requires a revision of the standard ordering semantics for weak (but not for strong) deontic necessity modals, and (iii) an off-the-shelf semantics for weak deontic necessity modals, due to von Fintel and Iatridou, which distinguishes “basic” and “higher-order” ordering sources, and interprets weak deontic necessity modals relative to both, is well-suited to this task. The prominence of normative considerations in our proposal suggests a more general methodological lesson: formal semantic analysis of natural language modals expressing normative concepts demands that close attention be paid to the nature of the underlying normative phenomena.
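For orientation, the standard ordering semantics that the paper proposes to revise is usually stated roughly as follows, with modal base $f$, ordering source $g$, and $\mathrm{Best}$ selecting the worlds not strictly bettered under $\leq_{g(w)}$ (assuming, for simplicity, that such best worlds exist). This is the familiar Kratzer-style formulation, not the paper's revision.

```latex
% u is at least as good as v relative to the ordering source g(w):
\[
  u \leq_{g(w)} v \;\iff\;
  \{\, p \in g(w) : v \in p \,\} \subseteq \{\, p \in g(w) : u \in p \,\}
\]
% 'ought phi' holds at w iff phi holds throughout the best worlds
% of the modal base:
\[
  w \models \mathsf{ought}\,\varphi \;\iff\;
  \forall u \in \mathrm{Best}_{g(w)}\!\big(\textstyle\bigcap f(w)\big):\;
  u \models \varphi
\]
```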

Journal ArticleDOI
01 Jul 2013-Synthese
TL;DR: It is argued that a more robust model of interdisciplinary practice will lead to better science by providing resources for understanding the types of value decisions that are entrenched in research models and methods, and providing a lens for identifying the questions that are ignored, under-examined, and rendered invisible through scientific habit or lack of interest.
Abstract: The National Science Foundation (NSF) in the United States, like many other funding agencies all over the globe, has made large investments in interdisciplinary research in the sciences and engineering, arguing that interdisciplinary research is an essential resource for addressing emerging problems, resulting in important social benefits. Using NSF as a case study for a problem that might be relevant in other contexts as well, I argue that the NSF itself poses a significant barrier to such research in not sufficiently appreciating the value of the humanities as significant interdisciplinary partners. This essay focuses on the practices of philosophy as a highly valuable but currently under-appreciated partner in achieving the goals of interdisciplinary research. It advances a proposal for developing deeper and wider interdisciplinary research in the sciences through coupled ethical-epistemological research. I argue that this more robust model of interdisciplinary practice will lead to better science by providing resources for understanding the types of value decisions that are entrenched in research models and methods, offering resources for identifying the ethical implications of research decisions, and providing a lens for identifying the questions that are ignored, under-examined, and rendered invisible through scientific habit or lack of interest. In this way, we will have better science both in the traditional sense of advancing knowledge by building on and adding to our current knowledge as well as in the broader sense of science for the good of society, namely, scientific research that better benefits society.

Journal ArticleDOI
01 Feb 2013-Synthese
TL;DR: Semantic and epistemic questions relating to the problem of underdetermination of theories by data and the debate on realism concerning scientific theories are discussed.
Abstract: String theory promises to be able to provide us with a working theory of quantum gravity and a unified description of all fundamental forces. In string theory there are so-called ‘dualities’; i.e. different theoretical formulations that are physically equivalent. In this article these dualities are investigated from a philosophical point of view. Semantic and epistemic questions relating to the problem of underdetermination of theories by data and the debate on realism concerning scientific theories are discussed. Depending on one’s views on semantic issues and realism, different interpretations of the dualities are possible.

Journal ArticleDOI
01 Aug 2013-Synthese
TL;DR: A mathematical framework, based on fixpoints in continuous mappings between conceptual spaces, is outlined that can be used to model a semantics construed not as a mapping of language to the world but rather as a mapping between individual meaning spaces.
Abstract: We present an account of semantics that is not construed as a mapping of language to the world but rather as a mapping between individual meaning spaces. The meanings of linguistic entities are established via a “meeting of minds.” The concepts in the minds of communicating individuals are modeled as convex regions in conceptual spaces. We outline a mathematical framework, based on fixpoints in continuous mappings between conceptual spaces, that can be used to model such a semantics. If concepts are convex, it will in general be possible for interactors to agree on joint meaning even if they start out from different representational spaces. Language is discrete, while mental representations tend to be continuous—posing a seeming paradox. We show that the convexity assumption allows us to address this problem. Using examples, we further show that our approach helps explain the semantic processes involved in the composition of expressions.
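A toy illustration of the fixpoint idea, where everything concrete (the one-dimensional spaces, the particular contractive maps, the tolerance) is my own assumption: two agents repeatedly respond to each other's proposed meaning with continuous maps on a convex interval, and iteration settles on a joint fixpoint, a "meeting of minds."

```python
def meet_of_minds(f, g, x0=0.0, tol=1e-9, max_iter=1000):
    """Iterate x -> f(g(x)) until (x, g(x)) stabilizes.
    f: agent A's response to B's point; g: B's response to A's."""
    x = x0
    for _ in range(max_iter):
        x_new = f(g(x))
        if abs(x_new - x) < tol:
            return x_new, g(x_new)   # joint meaning: a fixpoint pair
        x = x_new
    raise RuntimeError("no convergence")

# Contractive 'accommodation' maps on the convex interval [0, 1]:
# each agent moves part-way toward the other's proposal.
f = lambda y: 0.5 * y + 0.2     # agent A, anchored near 0.4
g = lambda x: 0.5 * x + 0.3     # agent B, anchored near 0.6

print(meet_of_minds(f, g))      # converges to a shared pair of points
```

With continuous maps on a compact convex region a fixpoint is guaranteed to exist, which is the mathematical point the abstract leans on.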

Journal ArticleDOI
01 Sep 2013-Synthese
TL;DR: It is argued that BT gives the right verdict on the cases that seem to be counterexamples to CDT and EDT, and gives a prominent role to the notion of a “benchmark” for each state of nature, by comparison with which the value of the available options within states of nature is measured.
Abstract: This article proposes a new theory of rational decision, distinct from both causal decision theory (CDT) and evidential decision theory (EDT). First, some intuitive counterexamples to CDT and EDT are presented. Then the motivation for the new theory is given: the correct theory of rational decision will resemble CDT in that it will not be sensitive to any comparisons of absolute levels of value across different states of nature, but only to comparisons of the differences in value between the available options within states of nature; however, the correct theory will also resemble EDT in that it will rely on conditional probabilities (not unconditional probabilities). The new theory gives a prominent role to the notion of a “benchmark” for each state of nature, by comparison with which the value of the available options in that state of nature is measured, and so it has been called the Benchmark Theory (BT). It is argued that BT gives the right verdict on the cases that seem to be counterexamples to CDT and EDT. Finally, some objections to BT are considered and answered.
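A compact sketch of a benchmark-style evaluation, using Newcomb-like numbers for concreteness; taking the benchmark of a state to be the mean value of the options in that state is one simple choice of mine, and the paper's own benchmark notion may differ.

```python
# Value matrix V[option][state] and conditional probabilities
# P(state | option). All numbers are illustrative.
values = {
    'one-box': {'predicted-one': 1_000_000, 'predicted-two': 0},
    'two-box': {'predicted-one': 1_001_000, 'predicted-two': 1_000},
}
prob = {
    'one-box': {'predicted-one': 0.99, 'predicted-two': 0.01},
    'two-box': {'predicted-one': 0.01, 'predicted-two': 0.99},
}
states = ['predicted-one', 'predicted-two']

# Benchmark for a state: here, the mean value of the options in it.
benchmark = {s: sum(values[o][s] for o in values) / len(values)
             for s in states}

def bt_value(option):
    """Conditional-probability-weighted sum of within-state value
    differences from the benchmark: EDT-style probabilities, CDT-style
    insensitivity to cross-state comparisons of absolute value."""
    return sum(prob[option][s] * (values[option][s] - benchmark[s])
               for s in states)

for o in values:
    print(o, bt_value(o))
```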

Journal ArticleDOI
01 Aug 2013-Synthese
TL;DR: A structural analysis of simulation is built on to provide an evaluative account of the variety of ways in which simulations do fail and the scientific importance of those various forms of failure.
Abstract: ‘The problem with simulations is that they are doomed to succeed.’ So runs a common criticism of simulations—that they can be used to ‘prove’ anything and are thus of little or no scientific value. While this particular objection represents a minority view, especially among those who work with simulations in a scientific context, it raises a difficult question: what standards should we use to differentiate a simulation that fails from one that succeeds? In this paper we build on a structural analysis of simulation developed in previous work to provide an evaluative account of the variety of ways in which simulations do fail. We expand the structural analysis in terms of the relationship between a simulation and its real-world target emphasizing the important role of aspects intended to correspond and also those specifically intended not to correspond to reality. The result is an outline both of the ways in which simulations can fail and the scientific importance of those various forms of failure.

Journal ArticleDOI
01 Nov 2013-Synthese
TL;DR: It is suggested that models should be regarded as a specific kind of sign according to the sign theory put forward by Charles S. Peirce, and, more precisely, as icons, i.e. as signs which are characterized by a similarity relation between sign (model) and object (original).
Abstract: In this paper, we try to shed light on the ontological puzzle pertaining to models and to contribute to a better understanding of what models are. Our suggestion is that models should be regarded as a specific kind of sign according to the sign theory put forward by Charles S. Peirce, and, more precisely, as icons, i.e. as signs which are characterized by a similarity relation between sign (model) and object (original). We argue for this (1) by analyzing from a semiotic point of view the representational relation which is characteristic of models. We then corroborate our hypothesis (2) by discussing the conceptual differences between icons, i.e. models, and indexical and symbolic signs, and (3) by putting forward a general classification of all icons into three functional subclasses (images, diagrams, and metaphors). Subsequently, we (4) integratively refine our results by resorting to two influential and, as can be shown, complementary philosophy of science approaches to models. This yields the following result: models are determined by a semiotic structure in which a subject intentionally uses an object, i.e. the model, as a sign for another object, i.e. the original, in the context of a chosen theory or language in order to attain a specific end, by instituting a representational relation in which the syntactic structure of the model, i.e. its attributes and relations, represents by way of a mapping the properties of the original, which hence are regarded as similar in a relevant manner.

Journal ArticleDOI
01 Apr 2013-Synthese
TL;DR: A novel way of reconstructing conceptual change in empirical theories is offered, exemplified and applied in a case study on the development within physics from the original Newtonian mechanics to special relativity theory.
Abstract: This paper offers a novel way of reconstructing conceptual change in empirical theories. Changes occur in terms of the structure of the dimensions—that is to say, the conceptual spaces—underlying the conceptual framework within which a given theory is formulated. Five types of changes are identified: (1) addition or deletion of special laws, (2) change in scale or metric, (3) change in the importance of dimensions, (4) change in the separability of dimensions, and (5) addition or deletion of dimensions. Given this classification, the conceptual development of empirical theories becomes more gradual and rationalizable. Only the most extreme type—replacement of dimensions—comes close to a revolution. The five types are exemplified and applied in a case study on the development within physics from the original Newtonian mechanics to special relativity theory.

Journal ArticleDOI
01 Jan 2013-Synthese
TL;DR: This article discusses how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries; techniques and ideas from non-standard analysis are brought to bear on the problem.
Abstract: This article discusses how the concept of a fair finite lottery can best be extended to denumerably infinite lotteries. Techniques and ideas from non-standard analysis are brought to bear on the problem.

Journal ArticleDOI
01 Jan 2013-Synthese
TL;DR: This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment and argues for an intimate relationship between mathematical symbols and mathematical cognition.
Abstract: Recent experimental evidence from developmental psychology and cognitive neuroscience indicates that humans are equipped with unlearned elementary mathematical skills. However, formal mathematics has properties that cannot be reduced to these elementary cognitive capacities. The question then arises how human beings cognitively deal with more advanced mathematical ideas. This paper draws on the extended mind thesis to suggest that mathematical symbols enable us to delegate some mathematical operations to the external environment. In this view, mathematical symbols are not only used to express mathematical concepts—they are constitutive of the mathematical concepts themselves. Mathematical symbols are epistemic actions, because they enable us to represent concepts that are literally unthinkable with our bare brains. Using case-studies from the history of mathematics and from educational psychology, we argue for an intimate relationship between mathematical symbols and mathematical cognition.

Journal ArticleDOI
Ian Evans1
01 Sep 2013-Synthese
TL;DR: After establishing some data and arguing that traditional accounts of basing are unsatisfying, this work introduces a novel theory of the basing relation: the dispositional theory.
Abstract: In days past, epistemologists expended a good deal of effort trying to analyze the basing relation—the relation between a belief and its basis. No satisfying account was offered, and the project was largely abandoned. Younger epistemologists, however, have begun to yearn for an adequate theory of basing. I aim to deliver one. After establishing some data and arguing that traditional accounts of basing are unsatisfying, I introduce a novel theory of the basing relation: the dispositional theory. It begins with the pedestrian observation that beliefs stand or fall with their bases. The theory I offer is an elucidation and refinement of this thought.

Journal ArticleDOI
01 Jun 2013-Synthese
TL;DR: This essay considers how a category-theoretic formulation of structure can be developed that denies (ii), and can be made to do work in the context of formulating theories in physics.
Abstract: Radical ontic structural realism (ROSR) claims that structure exists independently of objects that may instantiate it. Critics of ROSR contend that this claim is conceptually incoherent, insofar as (i) it entails there can be relations without relata, and (ii) there is a conceptual dependence between relations and relata. In this essay I suggest that (ii) is motivated by a set-theoretic formulation of structure, and that adopting a category-theoretic formulation may provide ROSR with more support. In particular, I consider how a category-theoretic formulation of structure can be developed that denies (ii), and can be made to do work in the context of formulating theories in physics.

Journal ArticleDOI
01 Nov 2013-Synthese
TL;DR: It is concluded that, at least as far as these arguments are concerned, there is no good reason why the topic of reasoning with degrees of belief has received so little attention.
Abstract: In this paper I am concerned with the question of whether degrees of belief can figure in reasoning processes that are executed by humans. It is generally accepted that outright beliefs and intentions can be part of reasoning processes, but the role of degrees of belief remains unclear. The literature on subjective Bayesianism, which seems to be the natural place to look for discussions of the role of degrees of belief in reasoning, does not address the question of whether degrees of belief play a role in real agents’ reasoning processes. On the other hand, the philosophical literature on reasoning, which relies much less heavily on idealizing assumptions about reasoners than Bayesianism, is almost exclusively concerned with outright belief. One possible explanation for why no philosopher has yet developed an account of reasoning with degrees of belief is that reasoning with degrees of belief is not possible for humans. In this paper, I will consider three arguments for this claim. I will show why these arguments are flawed, and conclude that, at least as far as these arguments are concerned, it seems like there is no good reason why the topic of reasoning with degrees of belief has received so little attention.

Journal ArticleDOI
01 Nov 2013-Synthese
TL;DR: It is argued that experiments, computer simulations and thought experiments can contribute to answering the same questions by playing the same epistemic role when they are used to unfold the content of a well-described scenario.
Abstract: Experiments, computer simulations and thought experiments are usually seen as playing different roles in science and as having different epistemologies. Accordingly, they are usually analyzed separately. We argue in this paper that these activities can contribute to answering the same questions by playing the same epistemic role when they are used to unfold the content of a well-described scenario. We emphasize that in such cases, these three activities can be described by means of the same conceptual framework – even if each of them, because they involve different types of processes, falls under these concepts in different ways. We further illustrate our claims by presenting a threefold case study describing how a thought experiment, a computer simulation and an experiment were indeed used in the same role at different periods to answer the same questions about the possibility of a physical Maxwellian demon. We also point at fluid dynamics as another field where these activities seem to be playing the same unfolding role. We analyze the importance of unfolding as a general task of science and highlight how our description in terms of epistemic functions articulates in a noncommittal way with the epistemology of these three activities and accounts for their similarities and the existence of hybrid forms of activities. We finally emphasize that picturing these activities as functionally substitutable does not imply that they are epistemologically substitutable.

Journal ArticleDOI
01 Aug 2013-Synthese
TL;DR: This paper enhances Dung’s well-known abstract argumentation framework with explanatory capabilities and shows that an explanatory argumentation framework (EAF) obtained in this way is a useful tool for the modeling of scientific debates.
Abstract: Abstract argumentation has been shown to be a powerful tool within many fields such as artificial intelligence, logic and legal reasoning. In this paper we enhance Dung’s well-known abstract argumentation framework with explanatory capabilities. We show that an explanatory argumentation framework (EAF) obtained in this way is a useful tool for the modeling of scientific debates. On the one hand, EAFs allow for the representation of explanatory and justificatory arguments constituting rivaling scientific views. On the other hand, different procedures for selecting arguments, corresponding to different methodological and epistemic requirements of theory evaluation, can be formulated in view of our framework.
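For readers unfamiliar with the underlying machinery, Dung's basic framework is easy to render executable: arguments, an attack relation, and (for instance) the grounded extension computed as the least fixpoint of the characteristic function. The sketch below models only the standard framework, not the explanatory extensions of the paper's EAF.

```python
def grounded_extension(args, attacks):
    """Least fixpoint of F(S) = {a : every attacker of a is attacked by S}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    s = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in s)
                           for b in attackers[a])}
        if defended == s:
            return s
        s = defended

# Toy scientific debate: theory T1 is attacked by anomaly A,
# which is in turn undercut by auxiliary argument X.
args = {'T1', 'A', 'X'}
attacks = {('A', 'T1'), ('X', 'A')}
print(grounded_extension(args, attacks))   # {'X', 'T1'}
```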

Journal ArticleDOI
01 Nov 2013-Synthese
TL;DR: It is concluded that it is both possible and desirable to invoke norms for rational argument, and that a Bayesian approach provides solid normative principles with which to do so.
Abstract: Norms—that is, specifications of what we ought to do—play a critical role in the study of informal argumentation, as they do in studies of judgment, decision-making and reasoning more generally. Specifically, they guide a recurring theme: are people rational? Though rules and standards have been central to the study of reasoning, and behavior more generally, there has been little discussion within psychology about why (or indeed if) they should be considered normative despite the considerable philosophical literature that bears on this topic. In the current paper, we ask what makes something a norm, with consideration both of norms in general and a specific example: norms for informal argumentation. We conclude that it is both possible and desirable to invoke norms for rational argument, and that a Bayesian approach provides solid normative principles with which to do so.