
Showing papers in "Synthese in 2017"


Journal ArticleDOI
01 Oct 2017-Synthese
TL;DR: A probabilistic account of the vagueness and context-sensitivity of scalar adjectives is derived from a Bayesian approach to communication and interpretation and integrated with a promising new approach to pragmatics and other areas of cognitive science.
Abstract: We derive a probabilistic account of the vagueness and context-sensitivity of scalar adjectives from a Bayesian approach to communication and interpretation. We describe an iterated-reasoning architecture for pragmatic interpretation and illustrate it with a simple scalar implicature example. We then show how to enrich the apparatus to handle pragmatic reasoning about the values of free variables, explore its predictions about the interpretation of scalar adjectives, and show how this model implements Edgington’s (Analysis 2:193–204, 1992; Keefe and Smith (eds.) Vagueness: a reader, 1997) account of the sorites paradox, with variations. The Bayesian approach has a number of explanatory virtues: in particular, it does not require any special-purpose machinery for handling vagueness, and it is integrated with a promising new approach to pragmatics and other areas of cognitive science.
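The iterated-reasoning architecture described here is in the style of Rational Speech Act models. The following is a minimal sketch of the scalar implicature example under simplifying assumptions of ours, not the paper's: two worlds, two utterances, a uniform prior, and softmax parameter α = 1. All names and numbers are illustrative.

```python
# Minimal iterated-reasoning (RSA-style) sketch of a scalar implicature.
# Worlds, utterance meanings, and the prior are illustrative assumptions.
WORLDS = ["some_not_all", "all"]
MEANING = {"some": {"some_not_all", "all"}, "all": {"all"}}  # worlds where u is true
PRIOR = {w: 0.5 for w in WORLDS}  # uniform prior over worlds

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def literal_listener(u):
    # L0: condition the prior on the literal truth of u
    return normalize({w: PRIOR[w] * (w in MEANING[u]) for w in WORLDS})

def speaker(w, alpha=1.0):
    # S1: choose among true utterances in proportion to L0(w | u) ** alpha
    return normalize({u: literal_listener(u)[w] ** alpha if w in m else 0.0
                      for u, m in MEANING.items()})

def pragmatic_listener(u, alpha=1.0):
    # L1: Bayesian inference over worlds, with the speaker model as likelihood
    return normalize({w: PRIOR[w] * speaker(w, alpha)[u] for w in WORLDS})

print(pragmatic_listener("some"))  # roughly 0.75 on "some but not all"
```

Although "some" is literally true in both worlds, the pragmatic listener reasons that a speaker in the "all" world would probably have said "all", and so shifts most of its probability to the "some but not all" world: the implicature falls out of the iterated reasoning with no dedicated machinery.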

110 citations


Journal ArticleDOI
01 Oct 2017-Synthese
TL;DR: It is argued that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
Abstract: According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are regarded as undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence. Traditionally, independence is defined in terms of unconditional probabilities (the factorization of the relevant joint unconditional probabilities). Various “equivalent” formulations of independence can be given using conditional probabilities. But these “equivalences” break down if conditional probabilities are permitted to have conditions with zero unconditional probability. We reconsider probabilistic independence in this more general setting. We argue that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
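The ratio definition and its breakdown for zero-probability conditions can be made concrete on a finite space. The toy example below is ours, not the authors': the factorization test for independence is satisfied only trivially by a zero-probability event, while the corresponding conditional probability is simply undefined.

```python
# Orthodox (Kolmogorov) conditional probability on a tiny finite space.
# Illustrative example, not from the paper; point "c" carries zero mass.
prob = {"a": 0.5, "b": 0.5, "c": 0.0}

def P(event):
    # unconditional probability of a set of points
    return sum(prob[x] for x in event)

def P_cond(A, B):
    # ratio definition: P(A | B) = P(A ∩ B) / P(B), undefined when P(B) = 0
    if P(B) == 0:
        return None
    return P(A & B) / P(B)

A, B = {"a"}, {"c"}
print(P(A & B) == P(A) * P(B))  # True: factorization holds, but only trivially (0 = 0)
print(P_cond(A, B))             # None: the "equivalent" conditional form is undefined
```

This is the gap a Popperian theory closes: by taking conditional probability as primitive, it can assign a value to P(A | B) even when P(B) = 0, so that the conditional formulations of independence no longer collapse in such cases.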

99 citations


Journal ArticleDOI
01 Feb 2017-Synthese
TL;DR: The Jeffreys–Lindley paradox shows how the use of a p value in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s.
Abstract: The Jeffreys–Lindley paradox displays how the use of a p value (or number of standard deviations z) in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s and common today. The setting is the test of a well-specified null hypothesis (such as the Standard Model of elementary particle physics, possibly with “nuisance parameters”) versus a composite alternative (such as the Standard Model plus a new force of nature of unknown strength). The p value, as well as the ratio of the likelihood under the null hypothesis to the maximized likelihood under the alternative, can strongly disfavor the null hypothesis, while the Bayesian posterior probability for the null hypothesis can be arbitrarily large. The academic statistics literature contains many impassioned comments on this paradox, yet there is no consensus either on its relevance to scientific communication or on its correct resolution. The paradox is quite relevant to frontier research in high energy physics. This paper is an attempt to explain the situation to both physicists and statisticians, in the hope that further progress can be made.
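The numbers behind the paradox are easy to reproduce. The sketch below is ours, not the paper's: it tests a normal mean (H0: μ = 0 against an H1 that puts a unit-normal prior on μ) from n unit-variance observations, using the closed-form Bayes factor for that conjugate setup. At z = 3, the p value rejects H0 at the 1% level regardless of n, while for n = 10^6 the Bayes factor favours H0 by an order of magnitude.

```python
# Jeffreys–Lindley paradox, illustrated numerically. Assumptions (ours, not the
# paper's): unit-variance data, N(0, 1) prior on the mean under H1.
from math import sqrt, exp, erfc

def two_sided_p(z):
    # frequentist two-sided p value for |Z| >= z under H0
    return erfc(z / sqrt(2))

def bayes_factor_01(z, n):
    # closed-form Bayes factor in favour of H0 for this conjugate normal setup
    return sqrt(1 + n) * exp(-(z * z / 2) * (n / (n + 1)))

z = 3.0
print(two_sided_p(z))             # ~0.0027: H0 "rejected" at the 1% level
print(bayes_factor_01(z, 10**6))  # ~11: at large n the same z favours H0
print(bayes_factor_01(z, 10))     # ~0.06: at small n the data disfavour H0
```

The sketch matches the abstract's point: for fixed z the Bayes factor in favour of the null grows like √n, so a result that is "significant" by the p-value criterion can lend arbitrarily strong Bayesian support to the null as the sample size grows.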

71 citations


Journal ArticleDOI
01 Jan 2017-Synthese
TL;DR: This paper reconsiders Carnapian explication by comparing it to a different approach: the ‘formalisms as cognitive tools’ conception (Formal languages in logic), noting that formalization is an inherently paradoxical enterprise in general, but one worth engaging in given the ’cognitive boost’ it affords as a tool for discovery.
Abstract: Explication is the conceptual cornerstone of Carnap’s approach to the methodology of scientific analysis. From a philosophical point of view, it gives rise to a number of questions that need to be addressed, but which do not seem to have been fully addressed by Carnap himself. This paper reconsiders Carnapian explication by comparing it to a different approach: the ‘formalisms as cognitive tools’ conception (Formal languages in logic. Cambridge University Press, Cambridge 2012a). The comparison allows us to discuss a number of aspects of the Carnapian methodology, as well as issues pertaining to formalization in general. We start by introducing Carnap’s conception of explication, arguing that there is a tension between his proposed criteria of fruitfulness and similarity; we also argue that his further desideratum of exactness is less crucial than might appear at first. We then bring in the general idea of formalisms as cognitive tools, mainly by discussing the reliability of so-called statistical prediction rules (SPRs), i.e. simple algorithms used to make predictions across a range of areas. SPRs allow for a concrete instantiation of Carnap’s fruitfulness desideratum, which is arguably the most important desideratum for him. Finally, we elaborate on what we call the ‘paradox of adequate formalization’, which for the Carnapian corresponds to the tension between similarity and fruitfulness. We conclude by noting that formalization is an inherently paradoxical enterprise in general, but one worth engaging in given the ‘cognitive boost’ it affords as a tool for discovery.

70 citations


Journal ArticleDOI
01 Apr 2017-Synthese
TL;DR: It is argued that biological organisation can be legitimately conceived of as an intrinsically teleological causal regime; not every kind of circular regime realises self-determination, which should be specifically understood as self-constraint.
Abstract: This paper argues that biological organisation can be legitimately conceived of as an intrinsically teleological causal regime. The core of the argument consists in establishing a connection between organisation and teleology through the concept of self-determination: biological organisation determines itself in the sense that the effects of its activity contribute to determine its own conditions of existence. We suggest that not any kind of circular regime realises self-determination, which should be specifically understood as self-constraint: in biological systems, in particular, self-constraint takes the form of closure, i.e. a network of mutually dependent constitutive constraints. We then explore the occurrence of intrinsic teleology in the biological domain and beyond. On the one hand, the organisational account might possibly concede that supra-organismal biological systems (such as symbioses or ecosystems) could realise closure, and hence be teleological. On the other hand, the realisation of closure beyond the biological realm appears to be highly unlikely. In turn, the occurrence of simpler forms of self-determination remains a controversial issue, in particular with respect to the case of self-organising dissipative systems.

67 citations


Journal ArticleDOI
01 Aug 2017-Synthese
TL;DR: It is argued that there are cases in which a subject, S, should have known that p, even though, given her state of evidence at the time, she was in no position to know it.
Abstract: In this paper I will be arguing that there are cases in which a subject, S, should have known that p, even though, given her state of evidence at the time, she was in no position to know it. My argument for this result will involve making two claims. The uncontroversial claim is this: S should have known that p when (1) another person has, or would have, legitimate expectations regarding S’s epistemic condition, (2) the satisfaction of these expectations would require that S knows that p, and (3) S fails to know that p. The controversial claim is that these three conditions are sometimes jointly satisfied. I will spend the majority of my time defending the controversial claim. I will argue that there are (at least) two main sources of legitimate expectations regarding another’s epistemic condition: participation in a legitimate social practice (where one’s role entitles others to expect things of one); and moral and epistemic expectations more generally (the institutions of morality and epistemic assessment being such as to entitle us to expect various things of one another). In developing my position on this score, I will have an opportunity (i) to defend the doctrine that there are “practice-generated entitlements” to expect certain things, where it can happen that the satisfaction of these expectations requires another’s having certain pieces of knowledge; (ii) to contrast practice-generated entitlements to expect with epistemic reasons to believe; (iii) to defend the idea that moral and epistemic standards themselves can be taken to reflect legitimate expectations we have of each other; (iv) to compare the “should have known” phenomenon with a widely-discussed phenomenon in the ethics literature—that of culpable ignorance; and finally (v) to suggest the bearing of the “should have known” phenomenon to epistemology itself (in particular, the theory of epistemic justification).

63 citations


Journal ArticleDOI
01 Aug 2017-Synthese
TL;DR: It is argued that external information can be constitutive of one’s autobiographical memory and thus also of one's diachronic self, and the complex web of cognitive relations the authors develop and maintain with other people and technological artifacts partly determines their self.
Abstract: This paper explores the implications of extended and distributed cognition theory for our notions of personal identity. On an extended and distributed approach to cognition, external information is under certain conditions constitutive of memory. On a narrative approach to personal identity, autobiographical memory is constitutive of our diachronic self. In this paper, I bring these two approaches together and argue that external information can be constitutive of one’s autobiographical memory and thus also of one’s diachronic self. To develop this claim, I draw on recent empirical work in human-computer interaction, looking at lifelogging technologies in both healthcare and everyday contexts. I argue that personal identity can neither be reduced to psychological structures instantiated by the brain nor by biological structures instantiated by the organism, but should be seen as an environmentally-distributed and relational construct. In other words, the complex web of cognitive relations we develop and maintain with other people and technological artifacts partly determines our self. This view has conceptual, methodological, and normative implications: we should broaden our concepts of the self as to include social and artifactual structures, focus on external memory systems in the (empirical) study of personal identity, and not interfere with people’s distributed minds and selves.

54 citations


Journal ArticleDOI
01 Apr 2017-Synthese
TL;DR: It is shown that with four modifications, Grünbaum’s definition provides a defensible account of placebos for the purpose of constructing placebo controls within clinical trials.
Abstract: Debates rage about the ethics and effects of placebos, and about whether ‘placebos’ in clinical trials of complex treatments such as acupuncture are adequate (and hence whether acupuncture is ‘truly’ effective or a ‘mere placebo’). Yet there is currently no widely accepted definition of the ‘placebo’. A definition of the placebo is likely to inform these controversies. Grünbaum’s (1981, 1986) characterization of placebos and placebo effects has been touted by some authors as the best attempt thus far, but has not won widespread acceptance, largely because Grünbaum failed to specify what he means by a therapeutic theory and because he does not stipulate a special role for expectation effects. Grünbaum claims that placebos are treatments whose ‘characteristic features’ do not have therapeutic effects on the target disorder. I show that with four modifications, Grünbaum’s definition provides a defensible account of placebos for the purpose of constructing placebo controls within clinical trials. The modifications I introduce are: adding a special role for expectations, insisting that placebo controls control for all and only the effects of the incidental treatment features, relativizing the definition of placebos to patients, and introducing harmful interventions and nocebos to the definitional scheme. I also provide guidance for classifying treatment features as characteristic or incidental.

53 citations


Journal ArticleDOI
03 Jun 2017-Synthese
TL;DR: A pluralistic stance towards uses of the term ‘cognition’ is advocated that eschews the urge to treat cognition as a metaphysically well-defined “natural” kind.
Abstract: Should cognitive scientists be any more embarrassed about their lack of a discipline-fixing definition of cognition than biologists are about their inability to define “life”? My answer is “no”. Philosophers seeking a unique “mark of the cognitive”, or less onerous but nevertheless categorical characterizations of cognition, are working at a level of analysis upon which hangs nothing that either cognitive scientists or philosophers of cognitive science should care about. In contrast, I advocate a pluralistic stance towards uses of the term ‘cognition’ that eschews the urge to treat cognition as a metaphysically well-defined “natural” kind.

52 citations


Journal ArticleDOI
01 Mar 2017-Synthese
TL;DR: It is claimed that empathy plays a distinctive epistemological role: it alone allows us to know how others feel, independent of the plausibility of simulationism more generally.
Abstract: The concept of empathy has received much attention from philosophers and also from both cognitive and social psychologists. It has, however, been given widely conflicting definitions, with some taking it primarily as an epistemological notion and others as a social one. Recently, empathy has been closely associated with the simulationist approach to social cognition and, as such, it might be thought that the concept's utility stands or falls with that of simulation itself. I suggest that this is a mistake. Approaching the question of what empathy is via the question of what it is for, I claim that empathy plays a distinctive epistemological role: it alone allows us to know how others feel. This is independent of the plausibility of simulationism more generally. With this in view I propose an inclusive definition of empathy, one likely consequence of which is that empathy is not a natural kind. It follows that, pace a number of empathy researchers, certain experimental paradigms tell us not about the nature of empathy but about certain ways in which empathy can be achieved. I end by briefly speculating that empathy, so conceived, may also play a distinctive social role, enabling what I term 'transparent fellow-feeling'.

52 citations


Journal ArticleDOI
01 Nov 2017-Synthese
TL;DR: This paper will survey some of the answers that have been (implicitly or explicitly) given in the embodied, enactive, and extended cognition literature, then suggest reasons to believe that the authors should answer both questions in the negative.
Abstract: An important question in the debate over embodied, enactive, and extended cognition has been what has been meant by “cognition”. What is this cognition that is supposed to be embodied, enactive, or extended? Rather than undertake a frontal assault on this question, however, this paper will take a different approach. In particular, we may ask how cognition is supposed to be related to behavior. First, we could ask whether cognition is supposed to be (a type of) behavior. Second, we could ask whether we should attempt to understand cognitive processes in terms of antecedently understood cognitive behaviors. This paper will survey some of the answers that have been (implicitly or explicitly) given in the embodied, enactive, and extended cognition literature, then suggest reasons to believe that we should answer both questions in the negative.

Journal ArticleDOI
01 Mar 2017-Synthese
TL;DR: It is argued that a number of convergent recent findings with adults have been interpreted as evidence of the existence of two distinct systems for mindreading that draw on separate conceptual resources, but that these findings admit of a more parsimonious explanation.
Abstract: A number of convergent recent findings with adults have been interpreted as evidence of the existence of two distinct systems for mindreading that draw on separate conceptual resources: one that is fast, automatic, and inflexible; and one that is slower, controlled, and flexible. The present article argues that these findings admit of a more parsimonious explanation. This is that there is a single set of concepts made available by a mindreading system that operates automatically where it can, but which frequently needs to function together with domain-specific executive procedures (such as visually rotating an image to figure out what someone else can see) as well as domain-general resources (including both long-term and working memory). This view, too, can be described as a two-systems account. But in this case one of the systems encompasses the other, and the conceptual resources available to each are the same.

Journal ArticleDOI
01 Mar 2017-Synthese
TL;DR: On the basis of new evidence, the orthodox claim can actually be strengthened, corroborated and refined: what emerges around age 4 is an explicit, unified, flexibly conceptual capacity to ascribe propositional attitudes.
Abstract: When do children acquire a propositional attitude folk psychology or theory of mind? The orthodox answer to this central question of developmental ToM research had long been that around age 4 children begin to apply “belief” and other propositional attitude concepts. This orthodoxy has recently come under serious attack, though, from two sides: Scoffers complain that it over-estimates children’s early competence and claim that a proper understanding of propositional attitudes emerges only much later. Boosters criticize the orthodoxy for underestimating early competence and claim that even infants ascribe beliefs. In this paper, the orthodoxy is defended on empirical grounds against these two kinds of attacks. On the basis of new evidence, not only can the two attacks safely be countered, but the orthodox claim can actually be strengthened, corroborated and refined: what emerges around age 4 is an explicit, unified, flexibly conceptual capacity to ascribe propositional attitudes. This unified conceptual capacity contrasts with the less sophisticated, less unified implicit forms of tracking simpler mental states present in ontogeny long before. This refined version of the orthodoxy can thus most plausibly be spelled out in some form of 2-systems-account of theory of mind.

Journal ArticleDOI
01 Dec 2017-Synthese
TL;DR: This paper will try to explain the shift from ecological to cognitive niches and their actual and theoretical overlaps, and take two concepts expressing loose forms of causation in the interaction between organisms and their environment: the biological notion of “enablement” and the psycho-cognitive one of ”affordance”.
Abstract: Cognitive niche theories consist in a theoretical framework that is proving extremely profitable in bridging evolutionary biology, philosophy, cognitive science, and anthropology by offering an inter-disciplinary ground, laden with novel approaches and debates. At the same time, cognitive niche theories are multiple, and differently related to niche theories in theoretical and evolutionary biology. The aim of this paper is to clarify the theoretical and epistemological relationships between cognitive and ecological niche theories. Also, by adopting a constructionist approach (namely by referring principally to ecological and cognitive niche construction theories) we will try to explain the shift from ecological to cognitive niches and their actual and theoretical overlaps. In order to do so, we will take two concepts expressing loose forms of causation in the interaction between organisms and their environment: the biological notion of “enablement” and the psycho-cognitive one of “affordance”.

Journal ArticleDOI
Neil Levy1
01 Feb 2017-Synthese
TL;DR: It is argued that the intellectualist account of knowledge-how, according to which agents have the knowledge-how to φ in virtue of standing in an appropriate relation to a proposition, is only half right.
Abstract: I argue that the intellectualist account of knowledge-how, according to which agents have the knowledge-how to φ in virtue of standing in an appropriate relation to a proposition, is only half right. On the composition view defended here, knowledge-how at least typically requires both propositional knowledge and motor representations. Motor representations are not mere dispositions to behavior (so the older dispositionalist view isn’t even half right) because they have representational content, and they play a central role in realizing the intelligence in knowledge-how. But since motor representations are not propositional, propositional knowledge is not sufficient for knowledge-how.

Journal ArticleDOI
01 Apr 2017-Synthese
TL;DR: It is argued that ecosystems, by forming more or less resilient assemblages, can evolve even while they do not reproduce and form lineages, and a Persistence Enhancing Propensity account of role functions in ecology is proposed to account for this overlap of evolutionary and ecological processes.
Abstract: We argue that ecology in general and biodiversity and ecosystem function (BEF) research in particular need an understanding of functions which is both ahistorical and evolutionarily grounded. A natural candidate in this context is Bigelow and Pargetter’s (1987) evolutionary forward-looking account which, like the causal role account, assigns functions to parts of integrated systems regardless of their past history, but supplements this with an evolutionary dimension that relates functions to their bearers’ ability to thrive and perpetuate themselves. While Bigelow and Pargetter’s account focused on functional organization at the level of organisms, we argue that such an account can be extended to functional organization at the community and ecosystem levels in a way that broadens the scope of the reconciliation between ecosystem ecology and evolutionary biology envisioned by many BEF researchers (e.g. Holt 1995; Loreau 2010a). By linking an evolutionary forward-looking account of functions to the persistence-based understanding of evolution defended by Bouchard (2008, 2011) and others (e.g. Bourrat 2014; Doolittle 2014), and to the theoretical research on complex adaptive systems (Levin 1999, 2005; Norberg 2004), we argue that ecosystems, by forming more or less resilient assemblages, can evolve even while they do not reproduce and form lineages. We thus propose a Persistence Enhancing Propensity (PEP) account of role functions in ecology to account for this overlap of evolutionary and ecological processes.

Journal ArticleDOI
01 Nov 2017-Synthese
TL;DR: The reasons against defining cognition in representational terms are that doing so needlessly restricts the authors' theorizing, it undermines the empirical status of the representational theory of mind, and it encourages wildly deflationary and explanatorily vacuous conceptions of representation.
Abstract: In various contexts and for various reasons, writers often define cognitive processes and architectures as those involving representational states and structures. Similarly, cognitive theories are also often delineated as those that invoke representations. In this paper, I present several reasons for rejecting this way of demarcating the cognitive. Some of the reasons against defining cognition in representational terms are that doing so needlessly restricts our theorizing, it undermines the empirical status of the representational theory of mind, and it encourages wildly deflationary and explanatorily vacuous conceptions of representation. After criticizing this outlook, I sketch alternative ways we might try to capture what is distinctive about cognition and cognitive theorizing.

Journal ArticleDOI
19 Sep 2017-Synthese
TL;DR: An artifactual approach to models that also addresses their fictional features, and focuses on the culturally established external representational tools that enable, embody, and extend scientific imagination and reasoning.
Abstract: This paper presents an artifactual approach to models that also addresses their fictional features. It discusses first the imaginary accounts of models and fiction that set model descriptions apart from imagined-objects, concentrating on the latter (e.g., Frigg in Synthese 172(2):251–268, 2010; Frigg and Nguyen in The Monist 99(3):225–242, 2016; Godfrey-Smith in Biol Philos 21(5):725–740, 2006; Philos Stud 143(1):101–116, 2009). While the imaginary approaches accommodate surrogative reasoning as an important characteristic of scientific modeling, they simultaneously raise difficult questions concerning how the imagined entities are related to actual representational tools, and coordinated among different scientists, and with real-world phenomena. The artifactual account focuses, in contrast, on the culturally established external representational tools that enable, embody, and extend scientific imagination and reasoning. While there are commonalities between models and fictions, it is argued that the focus should be on the fictional uses of models rather than considering models as fictions.

Journal ArticleDOI
01 Nov 2017-Synthese
TL;DR: It’s argued that believing that p implies having a credence of 1 in p, which is true because the belief that p involves representing p as being the case, and representing p involves not allowing for the possibility of not-p.
Abstract: I argue that believing that p implies having a credence of 1 in p. This is true because the belief that p involves representing p as being the case, representing p as being the case involves not allowing for the possibility of not-p, while having a credence that’s greater than 0 in not-p involves regarding not-p as a possibility.

Journal ArticleDOI
01 May 2017-Synthese
TL;DR: Recent philosophical and psychological work that attempts to account semantically for the apparent oddness of conditionals lacking an internal connection between their parts are discussed.
Abstract: Conditionals whose antecedent and consequent are not somehow internally connected tend to strike us as odd. The received doctrine is that this felt oddness is to be explained pragmatically. Exactly how the pragmatic explanation is supposed to go has remained elusive, however. This paper discusses recent philosophical and psychological work that attempts to account semantically for the apparent oddness of conditionals lacking an internal connection between their parts.

Journal ArticleDOI
01 Sep 2017-Synthese
TL;DR: This paper defends the definition of scientific progress as increasing truthlikeness or verisimilitude, but critical realists turn this argument into an optimistic view about progressive science.
Abstract: Scientific realists use the “no miracle argument” to show that the empirical and pragmatic success of science is an indicator of the ability of scientific theories to give true or truthlike representations of unobservable reality. While antirealists define scientific progress in terms of empirical success or practical problem-solving, realists characterize progress by using some truth-related criteria. This paper defends the definition of scientific progress as increasing truthlikeness or verisimilitude. Antirealists have tried to rebut realism with the “pessimistic metainduction”, but critical realists turn this argument into an optimistic view about progressive science.

Journal ArticleDOI
01 Jul 2017-Synthese
TL;DR: It is argued that such a sophisticated form of naturalism, which preserves the autonomy of metaphysics as an a priori enterprise yet pays due attention to the indications coming from the authors' best science, is not only workable but recommended.
Abstract: The present paper discusses different approaches to metaphysics and defends a specific, non-deflationary approach that nevertheless qualifies as scientifically-grounded and, consequently, as acceptable from the naturalistic viewpoint. By critically assessing some recent work on science and metaphysics, we argue that such a sophisticated form of naturalism, which preserves the autonomy of metaphysics as an a priori enterprise yet pays due attention to the indications coming from our best science, is not only workable but recommended.

Journal ArticleDOI
01 Nov 2017-Synthese
TL;DR: The results suggest that social learning and cognitive diversity produce epistemic benefits only when the epistemic community is faced with problems of sufficient difficulty.
Abstract: When should a scientific community be cognitively diverse? This article presents a model for studying how the heterogeneity of learning heuristics used by scientist agents affects the epistemic efficiency of a scientific community. By extending the epistemic landscapes modeling approach introduced by Weisberg and Muldoon, the article casts light on the micro-mechanisms mediating cognitive diversity, coordination, and problem-solving efficiency. The results suggest that social learning and cognitive diversity produce epistemic benefits only when the epistemic community is faced with problems of sufficient difficulty.

Journal ArticleDOI
01 Sep 2017-Synthese
TL;DR: This paper explores the sources of interpretative injustice, and considers some of the harms to which it gives rise, and argues that if Miranda Fricker’s strategy for treating testimonial injustice is implemented in absence of a treatment ofinterpretative injustice then the authors risk epistemically harming the hearer with little benefit to the speaker.
Abstract: There has been much recent discussion of the harmful role prejudicial stereotypes play in our communicative exchanges. For example, Fricker (Epistemic injustice: power and ethics of knowing, 2007) explores a type of injustice (testimonial injustice) which arises when the credibility judgments we make about speakers are informed by prejudicial stereotypes. This discussion has so far focused on the role stereotypes play in our epistemic assessments of communicative actions, rather than our interpretations of such actions. However, the same prejudicial stereotypes that infect credibility judgments can also infect our interpretation of the speaker, leading to uncharitable interpretation (call this ‘interpretative injustice’). This paper explores the sources of interpretative injustice, and considers some of the harms to which it gives rise. There are several harms caused by interpretative injustice. Firstly, it constitutes a form of silencing. It prevents certain groups from being able to efficiently communicate knowledge to other (perhaps more powerful) groups. Secondly it results in speakers being held epistemically responsible for propositions they never intended to communicate. And thirdly, it contributes to the illusion that prejudicial low credibility judgments are epistemically justified. I close by arguing that if Miranda Fricker’s strategy for treating testimonial injustice is implemented in absence of a treatment of interpretative injustice then we risk epistemically harming the hearer with little benefit to the speaker. Thus testimonial injustice and interpretative injustice are best treated in tandem.

Journal ArticleDOI
01 Apr 2017-Synthese
TL;DR: It is proposed that IBE should be regarded as a theory of when one ought to “accept” H; on this view, IBE and Bayesianism can be made compatible, and thus the Bayesian and the proponent of IBE can be friends.
Abstract: In this paper, I consider the relationship between Inference to the Best Explanation (IBE) and Bayesianism, both of which are well-known accounts of the nature of scientific inference. In Sect. 2, I give a brief overview of Bayesianism and IBE. In Sect. 3, I argue that IBE in its most prominently defended forms is difficult to reconcile with Bayesianism because not all of the items that feature on popular lists of “explanatory virtues”—by means of which IBE ranks competing explanations—have confirmational import. Rather, some of the items that feature on these lists are “informational virtues”—properties that do not make a hypothesis \(H_1\) more probable than some competitor \(H_2\) given evidence E, but that, roughly speaking, give that hypothesis greater informative content. In Sect. 4, I consider as a response to my argument a recent version of compatibilism which argues that IBE can provide further normative constraints on the objectively correct probability function. I argue that this response does not succeed, owing to the difficulty of defending with any generality such further normative constraints. Lastly, in Sect. 5, I propose that IBE should be regarded, not as a theory of scientific inference, but rather as a theory of when we ought to “accept” H, where the acceptability of H is fixed by the goals of science and concerns whether H is worthy of commitment as a research program. In this way, IBE and Bayesianism, as I will show, can be made compatible, and thus the Bayesian and the proponent of IBE can be friends.

Journal ArticleDOI
Karen Neander1
01 Apr 2017-Synthese
TL;DR: It is argued that both a minimal notion of function and a notion of normal-proper function are used in explaining how bodies and brains operate, and that adverting to such functions can play a significant scientific role despite their lack of relevant causal efficacy.
Abstract: This paper argues that a minimal notion of function and a notion of normal-proper function are used in explaining how bodies and brains operate. Neither is Cummins’ (1975) notion, as originally defined, and yet his is often taken to be the clearly relevant notion for such an explanatory context. This paper also explains how adverting to normal-proper functions, even if these are selected functions, can play a significant scientific role in the operational explanations of complex systems that physiologists and neurophysiologists provide, despite a lack of relevant causal efficacy on the part of such functions.

Journal ArticleDOI
Peter Vickers1
01 Sep 2017-Synthese
TL;DR: This paper aims to establish two results: (i) sometimes a proposition is, in an important sense, ‘doing work’, and yet does not warrant realist commitment, and (ii) the realist will be able to respond to PMI-style historical challenges if she can merely show that certain selected posits do not require realistcommitment.
Abstract: One of the popular realist responses to the pessimistic meta-induction (PMI) is the ‘selective’ move, where a realist only commits to the ‘working posits’ of a successful theory, and withholds commitment to ‘idle posits’. Antirealists often criticise selective realists for not being able to articulate exactly what is meant by ‘working’ and/or not being able to identify the working posits except in hindsight. This paper aims to establish two results: (i) sometimes a proposition is, in an important sense, ‘doing work’, and yet does not warrant realist commitment, and (ii) the realist will be able to respond to PMI-style historical challenges if she can merely show that certain selected posits do not require realist commitment (ignoring the question of which posits do). These two results significantly adjust the dialectic vis-à-vis PMI-style challenges to selective realism.

Journal ArticleDOI
01 Jun 2017-Synthese
TL;DR: This paper aims to give a new robust virtue epistemological account of knowledge based on a different understanding of the nature and structure of the kind of abilities that give rise to knowledge.
Abstract: What is the nature of knowledge? A popular answer to that long-standing question comes from robust virtue epistemology, whose key idea is that knowing is just a matter of succeeding cognitively—i.e., coming to believe a proposition truly—due to an exercise of cognitive ability. Versions of robust virtue epistemology further developing and systematizing this idea offer different accounts of the relation that must hold between an agent’s cognitive success and the exercise of her cognitive abilities as well as of the very nature of those abilities. This paper aims to give a new robust virtue epistemological account of knowledge based on a different understanding of the nature and structure of the kind of abilities that give rise to knowledge.

Journal ArticleDOI
01 Sep 2017-Synthese
TL;DR: An alternative view of the relationship between cognition and perception, the “external effect view” (EEV), is introduced, and it is argued that EEV captures the kinds of cases philosophers have thought to be evidence for the “internal effect view” (IEV), and a wide range of other cases as well.
Abstract: I argue that discussions of cognitive penetration have been insufficiently clear about (i) what distinguishes perception and cognition, and (ii) what kind of relationship between the two is supposed to be at stake in the debate. A strong reading, which is compatible with many characterizations of penetration, posits a highly specific and directed influence on perception. According to this view, which I call the “internal effect view” (IEV), a cognitive state penetrates a perceptual process if the presence of the cognitive state causes a change to the computation performed by the process, with the result being a distinct output. I produce a novel argument that this strong reading is false. On one well-motivated way of drawing the distinction between perceptual states and cognitive states, cognitive representations cannot play the computational role posited for them by IEV, vis-à-vis perception. This does not mean, however, that there are not important causal relationships between cognitive and perceptual states. I introduce an alternative view of these relationships, the “external effect view” (EEV). EEV posits that each cognitive state is associated with a broad range of possible perceptual outcomes, and biases perception towards any of those perceptual outcomes without determining specific perceptual contents. I argue that EEV captures the kinds of cases philosophers have thought to be evidence for IEV, and a wide range of other cases as well.

Journal ArticleDOI
01 Feb 2017-Synthese
TL;DR: It is concluded that the ATLAS experiment illustrates that, contrary to what previous studies have suggested, there are cases of experimentation in which exploration serves to test theoretical predictions, and that theory-ladenness plays an essential role in experimentation being exploratory.
Abstract: In this paper, I propose an account that accommodates the possibility of experimentation being exploratory in cases where the procedures necessary to plan and perform an experiment are dependent on the theoretical accounts of the phenomena under investigation. The present account suggests that experimental exploration requires the implementation of an exploratory procedure that serves to extend the range of possible outcomes of an experiment, thereby enabling it to pursue its objectives. Furthermore, I argue that the present account subsumes the notion of exploratory experimentation, which is often attributed in the relevant literature to the works of Friedrich Steinle and Richard Burian, as a particular type of experimental exploration carried out in the special cases where no well-formed theoretical framework of the phenomena under investigation (yet) exists. I illustrate the present account in the context of the ATLAS experiment at CERN’s Large Hadron Collider, where the long-sought Higgs boson was discovered in 2012. I argue that the data selection procedure carried out in the ATLAS experiment illustrates an exploratory procedure in the sense suggested by the present account. I point out that this particular data selection procedure is theory-laden in that its implementation is crucially dependent on the theoretical models of high energy particle physics that the ATLAS experiment aims to test. However, I argue that the foregoing procedure is not driven by the above-mentioned theoretical models, but rather by a particular data selection strategy. I conclude that the ATLAS experiment illustrates that, contrary to what previous studies have suggested, there are cases of experimentation in which exploration serves to test theoretical predictions, and that theory-ladenness plays an essential role in experimentation being exploratory.