
Showing papers in "Synthese in 2014"


Journal ArticleDOI
01 Jan 2014-Synthese
TL;DR: It is claimed that remembering is a particular operation of a cognitive system that permits the flexible recombination of different components of encoded traces into representations of possible past events that might or might not have occurred, in the service of constructing mental simulations of possible future events.
Abstract: Misremembering is a systematic and ordinary occurrence in our daily lives. Since it is commonly assumed that the function of memory is to remember the past, misremembering is typically thought to happen because our memory system malfunctions. In this paper I argue that not all cases of misremembering are due to failures in our memory system. In particular, I argue that many ordinary cases of misremembering should not be seen as instances of memory’s malfunction, but rather as the normal result of a larger cognitive system that performs a different function, and for which remembering is just one operation. Building upon extant psychological and neuroscientific evidence, I offer a picture of memory as an integral part of a larger system that supports not only thinking of what was the case and what potentially could be the case, but also what could have been the case. More precisely, I claim that remembering is a particular operation of a cognitive system that permits the flexible recombination of different components of encoded traces into representations of possible past events that might or might not have occurred, in the service of constructing mental simulations of possible future events. “So that imagination and memory are but one thing, which for diverse considerations hath diverse names” (Thomas Hobbes, Leviathan 1.2).

172 citations


Journal ArticleDOI
28 Feb 2014-Synthese
TL;DR: It is submitted that an adequate philosophical theory of skill must account for the controlled part of skilled action, that is, the part of an action that accounts for the exact, nuanced ways in which a skilled performer modifies, adjusts and guides her performance.
Abstract: In this paper, I submit that it is the controlled part of skilled action, that is, that part of an action that accounts for the exact, nuanced ways in which a skilled performer modifies, adjusts and guides her performance for which an adequate philosophical theory of skill must account. I will argue that neither Jason Stanley nor Hubert Dreyfus has an adequate account of control. Further, and perhaps surprisingly, I will argue that both Stanley and Dreyfus relinquish an account of control for precisely the same reason: each reduces control to a passive, mechanistic, automatic process, which then prevents them from producing a substantive account of how controlled processes can be characterized by seemingly intelligent features and integrated with personal-level states. I will end by introducing three different kinds of control, which are constitutive of skilled action: strategic control, selective, top–down, automatic attention, and motor control. It will become clear that Dreyfus cannot account for any of these three kinds of control while Stanley has difficulty tackling the latter two kinds.

116 citations


Journal ArticleDOI
27 Mar 2014-Synthese
TL;DR: It is shown that at a qualitative level many aspects of social belief change can be obtained from a very simple model, called ‘threshold influence’; the paper focuses in particular on the question of what makes the beliefs of a community stable under various dynamical situations.
Abstract: In this paper we explore the relationship between norms of belief revision that may be adopted by members of a community and the resulting dynamic properties of the distribution of beliefs across that community. We show that at a qualitative level many aspects of social belief change can be obtained from a very simple model, which we call ‘threshold influence’. In particular, we focus on the question of what makes the beliefs of a community stable under various dynamical situations. We also consider refinements and alternatives to the ‘threshold’ model, the most significant of which is to consider changes to plausibility judgements rather than mere beliefs. We show first that some such change is mandated by difficult problems with belief-based dynamics related to the need to decide on an order in which different beliefs are considered. Secondly, we show that the resulting plausibility-based account results in a deterministic dynamical system that is non-deterministic at the level of beliefs.
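
The threshold idea lends itself to a compact illustration. Below is a minimal sketch in Python, under assumptions the abstract does not fix (a static influence network, synchronous updating, and a single uniform adoption threshold); the paper's own model may differ on all three counts.

```python
# Minimal sketch of a threshold-influence dynamic. Assumptions not fixed by
# the abstract: a static network, synchronous updates, one uniform threshold.

def step(beliefs, neighbours, threshold):
    """One synchronous update: an agent holds the belief after the update
    iff at least `threshold` of its neighbours held it before."""
    return {
        agent: sum(beliefs[n] for n in nbrs) / len(nbrs) >= threshold
        for agent, nbrs in neighbours.items()
    }

def run_until_stable(beliefs, neighbours, threshold, max_steps=100):
    """Iterate until the distribution of beliefs is a fixed point."""
    for _ in range(max_steps):
        new = step(beliefs, neighbours, threshold)
        if new == beliefs:  # the community is stable under the dynamics
            return new
        beliefs = new
    return beliefs

# A triangle of agents: with a 0.6 threshold, a lone believer is overruled.
neighbours = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
beliefs = {"a": True, "b": False, "c": False}
print(run_until_stable(beliefs, neighbours, threshold=0.6))
# -> {'a': False, 'b': False, 'c': False}
```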

91 citations


Journal ArticleDOI
01 Jan 2014-Synthese
TL;DR: It is argued that computational neuroscience frequently employs a distinct explanatory style, namely, efficient coding explanation, which cannot be assimilated into the mechanistic framework but do bear interesting similarities with evolutionary and optimality explanations elsewhere in biology.
Abstract: In a recent paper, Kaplan (Synthese 183:339–373, 2011) takes up the task of extending Craver’s (Explaining the brain, 2007) mechanistic account of explanation in neuroscience to the new territory of computational neuroscience. He presents the model to mechanism mapping (3M) criterion as a condition for a model’s explanatory adequacy. This mechanistic approach is intended to replace earlier accounts which posited a level of computational analysis conceived as distinct and autonomous from underlying mechanistic details. In this paper I discuss work in computational neuroscience that creates difficulties for the mechanist project. Carandini and Heeger (Nat Rev Neurosci 13:51–62, 2012) propose that many neural response properties can be understood in terms of canonical neural computations. These are “standard computational modules that apply the same fundamental operations in a variety of contexts.” Importantly, these computations can have numerous biophysical realisations, and so straightforward examination of the mechanisms underlying these computations carries little explanatory weight. Through a comparison between this modelling approach and minimal models in other branches of science, I argue that computational neuroscience frequently employs a distinct explanatory style, namely, efficient coding explanation. Such explanations cannot be assimilated into the mechanistic framework but do bear interesting similarities with evolutionary and optimality explanations elsewhere in biology.
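
Carandini and Heeger's paradigm example of a canonical neural computation is divisive normalization, in which each unit's driving input is divided by the pooled activity of its neighbours. A minimal sketch of the normalization equation (parameter values here are illustrative only):

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0):
    """Canonical normalization equation (Carandini & Heeger 2012):
    R_i = drive_i**n / (sigma**n + sum_j drive_j**n).
    The same functional form recurs across sensory systems with different
    biophysical realisations, which is exactly the point at issue above."""
    d = np.asarray(drive, dtype=float) ** n
    return d / (sigma ** n + d.sum())

print(divisive_normalization([1.0, 2.0, 4.0]))  # responses sum to < 1
```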

73 citations


Journal ArticleDOI
16 Apr 2014-Synthese
TL;DR: This paper presents two case studies in the influence of values on scientific inquiry: feminist values in archaeology and commercial values in pharmaceutical research, and turns to three major approaches to distinguish legitimate from illegitimate influences of values.
Abstract: The controversy over the old ideal of “value-free science” has cooled significantly over the past decade. Many philosophers of science now agree that even ethical and political values may play a substantial role in all aspects of scientific inquiry. Consequently, in the last few years, work in science and values has become more specific: Which values may influence science, and in which ways? Or, how do we distinguish legitimate from illegitimate kinds of influence? In this paper, I argue that this problem requires philosophers of science to take a new direction. I present two case studies in the influence of values on scientific inquiry: feminist values in archaeology and commercial values in pharmaceutical research. I offer a preliminary assessment of these cases: the influence of values was legitimate in the feminist case, but not in the pharmaceutical case. I then turn to three major approaches to distinguishing legitimate from illegitimate influences of values, including the distinction between epistemic and non-epistemic values and Heather Douglas’ distinction between direct and indirect roles for values. I argue that none of these three approaches gives an adequate analysis of the two cases. In the concluding section, I briefly sketch my own approach, which draws more heavily on ethics than the others, and is more promising as a solution to the current problem. This is the new direction in which I think science and values should move.

61 citations


Journal ArticleDOI
01 May 2014-Synthese
TL;DR: It is argued that cognitive integration can provide a minimalist yet adequate epistemic norm of subjective justification: so long as the agent’s belief-forming process has been integrated in his cognitive character, the agent can be justified in holding the resulting beliefs merely by lacking any doubts there was something wrong in the way he arrived at them.
Abstract: Cognitive integration is a defining yet overlooked feature of our intellect that may nevertheless have substantial effects on the process of knowledge-acquisition. To bring those effects to the fore, I explore the topic of cognitive integration both from the perspective of virtue reliabilism within externalist epistemology and the perspective of extended cognition within externalist philosophy of mind and cognitive science. On the basis of this interdisciplinary focus, I argue that cognitive integration can provide a minimalist yet adequate epistemic norm of subjective justification: so long as the agent’s belief-forming process has been integrated in his cognitive character, the agent can be justified in holding the resulting beliefs merely by lacking any doubts there was something wrong in the way he arrived at them. Moreover, since both externalist philosophy of mind and externalist epistemology treat the process of cognitive integration in the same way, we can claim that epistemic cognitive characters may extend beyond our organismic cognitive capacities to the artifacts we employ or even to other agents we interact with. This move is not only necessary for accounting for advanced cases of knowledge that is the product of the operation of epistemic artifacts or the interactive activity of research teams, but it can further lead to interesting ramifications both for social epistemology and philosophy of science.

59 citations


Journal ArticleDOI
Alex Morgan
01 Jan 2014-Synthese
TL;DR: It is argued that when the notions of structural and receptor representation are properly explicated, there turns out to be no distinction between them, and to explain the kinds of offline cognitive capacities that have motivated talk of mental models, richer conceptions of mental representation must be developed.
Abstract: Many philosophers and psychologists have attempted to elucidate the nature of mental representation by appealing to notions like isomorphism or abstract structural resemblance. The ‘structural representations’ that these theorists champion are said to count as representations by virtue of functioning as internal models of distal systems. In his 2007 book, Representation Reconsidered, William Ramsey endorses the structural conception of mental representation, but uses it to develop a novel argument against representationalism, the widespread view that cognition essentially involves the manipulation of mental representations. Ramsey argues that although theories within the ‘classical’ tradition of cognitive science once posited structural representations, these theories are being superseded by newer theories, within the tradition of connectionism and cognitive neuroscience, which rarely if ever appeal to structural representations. Instead, these theories seem to be explaining cognition by invoking so-called ‘receptor representations’, which, Ramsey claims, aren’t genuine representations at all—despite being called representations, these mechanisms function more as triggers or causal relays than as genuine stand-ins for distal systems. I argue that when the notions of structural and receptor representation are properly explicated, there turns out to be no distinction between them. There only appears to be a distinction between receptor and structural representations because the latter are tacitly conflated with the ‘mental models’ ostensibly involved in offline cognitive processes such as episodic memory and mental imagery. While structural representations might count as genuine representations, they aren’t distinctively mental representations, for they can be found in all sorts of non-intentional systems such as plants. Thus to explain the kinds of offline cognitive capacities that have motivated talk of mental models, we must develop richer conceptions of mental representation than those provided by the notions of structural and receptor representation.

59 citations


Journal ArticleDOI
01 Jul 2014-Synthese
TL;DR: An alternative method for developing scenarios is considered: a systems dynamics approach called ‘Cross-Impact Balance’ (CIB) analysis. Seven distinct meanings of ‘objective’ are distinguished, and it is demonstrated that CIB analysis is more objective than traditional subjective approaches.
Abstract: Climate change assessments rely upon scenarios of socioeconomic developments to conceptualize alternative outcomes for global greenhouse gas emissions. These are used in conjunction with climate models to make projections of future climate. Specifically, the estimations of greenhouse gas emissions based on socioeconomic scenarios constrain climate models in their outcomes of temperatures, precipitation, etc. Traditionally, the fundamental logic of the socioeconomic scenarios—that is, the logic that makes them plausible—is developed and prioritized using methods that are very subjective. This introduces a fundamental challenge for climate change assessment: the veracity of projections of future climate currently rests on subjective ground. We elaborate on these subjective aspects of scenarios in climate change research. We then consider an alternative method for developing scenarios, a systems dynamics approach called ‘Cross-Impact Balance’ (CIB) analysis. We discuss notions of ‘objective’ and ‘objectivity’ as criteria for distinguishing appropriate scenario methods for climate change research. We distinguish seven distinct meanings of ‘objective,’ and demonstrate that CIB analysis is more objective than traditional subjective approaches. However, we also consider criticisms concerning which of the seven meanings of ‘objective’ are appropriate for scenario work. Finally, we arrive at conclusions regarding which meanings of ‘objective’ and ‘objectivity’ are relevant for climate change research. Because scientific assessments uncover knowledge relevant to the responses of a real, independently existing climate system, the scenario methodologies employed in such studies must also uphold the seven meanings of ‘objective’ and ‘objectivity.’
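
The core of CIB analysis can be made concrete: given expert-judged impact scores between the variants of scenario descriptors, a scenario counts as internally consistent when no descriptor's chosen variant receives less support from the other chosen variants than one of its alternatives does. A toy sketch with hypothetical descriptors and scores (real CIB studies elicit these from experts):

```python
# Toy sketch of Cross-Impact Balance consistency checking. The descriptors
# and impact scores below are hypothetical, purely for illustration.
# impacts[(src_descriptor, src_variant, tgt_descriptor, tgt_variant)] = score

def impact_balance(scenario, impacts, descriptor, variant):
    """Total impact received by `variant` of `descriptor` from the
    variants chosen for all other descriptors in the scenario."""
    return sum(
        impacts.get((d, v, descriptor, variant), 0)
        for d, v in scenario.items() if d != descriptor
    )

def is_consistent(scenario, impacts, variants):
    """A scenario is consistent iff, for every descriptor, the chosen
    variant's impact balance is maximal among that descriptor's variants."""
    for d, chosen in scenario.items():
        balances = {v: impact_balance(scenario, impacts, d, v)
                    for v in variants[d]}
        if balances[chosen] < max(balances.values()):
            return False
    return True

variants = {"economy": ["growth", "stagnation"], "policy": ["strict", "lax"]}
impacts = {
    ("economy", "growth", "policy", "lax"): 2,
    ("economy", "stagnation", "policy", "strict"): 1,
    ("policy", "lax", "economy", "growth"): 1,
    ("policy", "strict", "economy", "stagnation"): 2,
}
print(is_consistent({"economy": "growth", "policy": "lax"},
                    impacts, variants))  # True: mutually reinforcing choices
```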

57 citations


Journal ArticleDOI
01 Jan 2014-Synthese
TL;DR: It is argued that scientific knowledge is collective knowledge, in a sense to be specified and defended, which makes it so that satisfaction of the justification condition on knowledge ineliminably requires a collective.
Abstract: I argue that scientific knowledge is collective knowledge, in a sense to be specified and defended. I first consider some existing proposals for construing collective knowledge and argue that they are unsatisfactory, at least for scientific knowledge as we encounter it in actual scientific practice. Then I introduce an alternative conception of collective knowledge, on which knowledge is collective if there is a strong form of mutual epistemic dependence among scientists, which makes it so that satisfaction of the justification condition on knowledge ineliminably requires a collective. Next, I show how features of contemporary science support the conclusion that scientific knowledge is collective knowledge in this sense. Finally, I consider implications of my proposal and defend it against objections.

54 citations


Journal ArticleDOI
Arnon Keren
23 Feb 2014-Synthese
TL;DR: The paper argues that acceptance of a preemptive account of reasons for trust supports the adoption of a doxastic account of trust, for acceptance of such an account both neutralizes central objections to doxastic accounts of trust and provides independent reasons supporting a doxastic account.
Abstract: According to doxastic accounts of trust, trusting a person to $\varPhi$ involves, among other things, holding a belief about the trusted person: either the belief that the trusted person is trustworthy or the belief that she actually will $\varPhi$. In recent years, several philosophers have argued against doxastic accounts of trust. They have claimed that the phenomenology of trust suggests that rather than such a belief, trust involves some kind of non-doxastic mental attitude towards the trusted person, or a non-doxastic disposition to rely upon her. This paper offers a new account of reasons for trust and employs the account to defend a doxastic account of trust. The paper argues that reasons for trust are preemptive reasons for action or belief. Thus the Razian concept of preemptive reasons, which arguably plays a key role in our understanding of relations of authority, is also central to our understanding of relations of trust. Furthermore, the paper argues that acceptance of a preemptive account of reasons for trust supports the adoption of a doxastic account of trust, for acceptance of such an account both neutralizes central objections to doxastic accounts of trust and provides independent reasons supporting a doxastic account.

51 citations


Journal ArticleDOI
22 May 2014-Synthese
TL;DR: This work challenges the idea that the consideration of cognition regarding the absent and the abstract can move the debate about representationalism along.
Abstract: According to a standard representationalist view cognitive capacities depend on internal content-carrying states. Recent alternatives to this view have been met with the reaction that they have, at best, limited scope, because a large range of cognitive phenomena—those involving absent and abstract features—require representational explanations. Here we challenge the idea that the consideration of cognition regarding the absent and the abstract can move the debate about representationalism along. Whether or not cognition involving the absent and the abstract requires the positing of representations depends upon whether more basic forms of cognition require the positing of representations.

Journal ArticleDOI
20 May 2014-Synthese
TL;DR: An account of epistemic justification suitable for the context of theory pursuit, which will adjust the scope of Bonjour’s standards—consistency, inferential density, and explanatory power, and complement them by the requirement of a programmatic character to allow for the evaluation of the “potential coherence” of the given epistemic system.
Abstract: The aim of this paper is to offer an account of epistemic justification suitable for the context of theory pursuit, that is, for the context in which new scientific ideas, possibly incompatible with the already established theories, emerge and are pursued by scientists. We will frame our account paradigmatically on the basis of one of the influential systems of epistemic justification: Laurence Bonjour’s coherence theory of justification. The idea underlying our approach is to develop a set of criteria which indicate that the pursued system promises to contribute to the epistemic goal of robustness of scientific knowledge and to develop into a candidate for acceptance. In order to realize this we will (a) adjust the scope of Bonjour’s standards—consistency, inferential density, and explanatory power—and (b) complement them by the requirement of a programmatic character. In this way we allow for the evaluation of the “potential coherence” of the given epistemic system.

Journal ArticleDOI
06 Apr 2014-Synthese
TL;DR: It is argued that explanations just ARE those sorts of things that, under the right circumstances and in the right sort of way, bring about understanding.
Abstract: In this paper, I argue that explanations just ARE those sorts of things that, under the right circumstances and in the right sort of way, bring about understanding. This raises the question of why such a seemingly simple account of explanation, if correct, would not have been identified and agreed upon decades ago. The answer is that only recently has it been made possible to analyze explanation in terms of understanding without the risk of collapsing both to merely phenomenological states. For the most part, theories of explanation were for 50 years held hostage to the historical accident that they far outstripped in sophistication corresponding accounts of understanding.

Journal ArticleDOI
01 Feb 2014-Synthese
TL;DR: 7-year-olds pass a verbal false-belief reasoning task, but fail on an equally complex low-verbal task, which suggests that language supports explicit reasoning about beliefs, perhaps by helping the cognitive system keep track of beliefs attributed by people to other people.
Abstract: We can understand and act upon the beliefs of other people, even when these conflict with our own beliefs. Children's development of this ability, known as Theory of Mind, typically happens around age 4. Research using a looking-time paradigm, however, established that toddlers at the age of 15 months pass a non-verbal false-belief task (Onishi and Baillargeon in Science 308:255-258, 2005). This is well before the age at which children pass any of the verbal false-belief tasks. In this study we present a more complex case of false-belief reasoning with older children. We tested second-order reasoning, probing children's ability to handle the belief of one person about the belief of another person. We find just the opposite: 7-year-olds pass a verbal false-belief reasoning task, but fail on an equally complex low-verbal task. This finding suggests that language supports explicit reasoning about beliefs, perhaps by helping the cognitive system keep track of beliefs attributed by people to other people.

Journal ArticleDOI
01 Jun 2014-Synthese
TL;DR: An account of the cognitive attitude of trust is provided that explains the role trust plays in the planning of rational agents, and is defended against objections that it provides insufficient rational constraints on trust, conflates trust and pretense of trust, and cannot account for the rationality of back-up planning.
Abstract: I provide an account of the cognitive attitude of trust that explains the role trust plays in the planning of rational agents. Many authors have dismissed choosing to trust as either impossible or irrational; however, this fails to account for the role of trust in practical reasoning. A can have therapeutic, coping, or corrective reasons to trust B to $\phi$, even in the absence of evidence that B will $\phi$. One can choose to engage in therapeutic trust to inspire trustworthiness, coping trust to simplify one’s planning, or corrective trust to avoid doing a testimonial injustice. To accommodate such types of trust, without accepting doxastic voluntarism, requires an account of the cognitive attitude of trust broader than belief alone. I argue that trust involves taking the proposition that someone will do something as a premise in one’s practical reasoning, which can be a matter of believing or accepting the proposition. I defend this account against objections that it (i) provides insufficient rational constraints on trust, (ii) conflates trust and pretense of trust, and (iii) cannot account for the rationality of back-up planning.

Journal ArticleDOI
23 Jul 2014-Synthese
TL;DR: The contention is that the ever-present moral/evaluative dimension in gossip—be it tacit or explicit, concerning the objects or the partners of gossip—is best analyzed through the epistemological framework of abduction.
Abstract: Gossip has been the object of a number of different studies in the past 50 years, rehabilitating it not only as something worth being studied, but also as a pivotal informational and social structure of human cognition: Dunbar (Rev Gen Psychol 8(2):100–110, 2004) interestingly linked the emergence of language to nothing less than its ability to afford gossip. Different facets of gossip were analyzed by anthropologists, linguists, psychologists and philosophers, but few attempts were made to frame gossip within an epistemological framework (for instance Ayim, Good gossip, pp. 85–99, 1994). Our intention in this paper is to provide a consistent epistemological (applied and social) account of gossip, understood as broadly evaluative talk between two or more people, comfortably acquainted with each other, about an absent third party they are both at least acquainted with. Hence, relying on the most recent multidisciplinary literature about the topic, the first part of this paper will concern the epistemic dynamics of gossip: whereas the sociobiological tradition locates in gossip the clue to the (theoretically cumbersome) group mind and group-level adaptations (Wilson et al., The evolution of cognition, pp. 347–365, 2002), we will suggest the more parsimonious modeling of gossip as a soft-assembled epistemic synergy, understood as a function-dominant interaction able to project a higher organizational level—in our case, the group as group-of-gossips. We will argue that the aim of this synergy is indeed to update a Knowledge Base of social information between the group (as a projected whole) and its members. The second and third part will instead focus on the epistemological labeling of the inferences characterizing gossip: our contention is that the ever-present moral/evaluative dimension in gossip—be it tacit or explicit, concerning the objects or the partners of gossip—is best analyzed through the epistemological framework of abduction. Consequently, we will suggest that a significant role of gossip is to function as a group-based abductive appraisal of social matter, enacted at various levels.

Journal ArticleDOI
01 Mar 2014-Synthese
TL;DR: It is argued that the celebrated “problem of induction” can no longer be set up and is thereby dissolved.
Abstract: In a formal theory of induction, inductive inferences are licensed by universal schemas. In a material theory of induction, inductive inferences are licensed by facts. With this change in the conception of the nature of induction, I argue that the celebrated “problem of induction” can no longer be set up and is thereby dissolved. Attempts to recreate the problem in the material theory of induction fail. They require relations of inductive support to conform to an unsustainable, hierarchical empiricism.

Journal ArticleDOI
28 Jun 2014-Synthese
TL;DR: The argument shows that only instrumentalists can avoid positing an embarrassing coincidence between the practical value of believing in accordance with one’s evidence, and the existence of reasons so to believe.
Abstract: According to epistemic instrumentalists the normativity of evidence for belief is best explained in terms of the practical utility of forming evidentially supported beliefs. Traditional arguments for instrumentalism—arguments based on naturalism and motivation—lack suasive force against opponents. A new argument for the view—the Argument from Coincidence—is presented. The argument shows that only instrumentalists can avoid positing an embarrassing coincidence between the practical value of believing in accordance with one’s evidence, and the existence of reasons so to believe. Responses are considered and shown to be inadequate.

Journal ArticleDOI
01 May 2014-Synthese
TL;DR: This paper considers in detail Beall and Restall’s Logical Pluralism—which seeks to accommodate radically different logics by stressing the way that they each fit a general form, the Generalised Tarski Thesis (GTT)—arguing against the claim that different instances of GTT are admissible precisifications of logical consequence.
Abstract: Logical Pluralists maintain that there is more than one genuine/true logical consequence relation. This paper seeks to understand what the position could amount to and some of the challenges faced by its formulation and defence. I consider in detail Beall and Restall’s Logical Pluralism—which seeks to accommodate radically different logics by stressing the way that they each fit a general form, the Generalised Tarski Thesis (GTT)—arguing against the claim that different instances of GTT are admissible precisifications of logical consequence. I then consider what it is to endorse a logic within a pluralist framework and criticise the options Beall and Restall entertain. A case study involving many-valued logics is examined. I next turn to issues of the applications of different logics and questions of which logic a pluralist should use in particular contexts. A dilemma regarding the applicability of admissible logics is tackled and it is argued that application is a red herring in relation to both understanding and defending a plausible form of logical pluralism. In the final section, I consider other ways to be and not to be a logical pluralist by examining analogous positions in debates over religious pluralism: this, I maintain, illustrates further limitations and challenges for a very general logical pluralism. Certain less wide-ranging pluralist positions are more plausible in both cases, I suggest, but assessment of those positions needs to be undertaken on a case-by-case basis.
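
The many-valued case study admits a compact illustration. The sketch below (my construction, not Beall and Restall's own example) defines validity as preservation of designated values over the strong Kleene tables, and shows how the same tables yield two consequence relations: explosion holds in K3, where only 'true' is designated, but fails in LP, where the middle value is designated as well.

```python
from itertools import product

# Strong Kleene three-valued semantics: 0 = false, 0.5 = the middle value
# ('neither' in K3, 'both' in LP), 1 = true. Negation flips the value.
def neg(a):
    return 1 - a

def valid(premises, conclusion, designated, atoms=("p", "q")):
    """Validity as preservation of designated value: in every valuation in
    which all premises take a designated value, so does the conclusion."""
    for values in product((0, 0.5, 1), repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(prem(v) in designated for prem in premises):
            if conclusion(v) not in designated:
                return False
    return True

# Explosion: p, not-p |= q. Same truth tables, two consequence relations.
premises = [lambda v: v["p"], lambda v: neg(v["p"])]
conclusion = lambda v: v["q"]

print(valid(premises, conclusion, designated={1}))       # K3: True (vacuously)
print(valid(premises, conclusion, designated={1, 0.5}))  # LP: False (p = 0.5, q = 0)
```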

Journal ArticleDOI
22 Oct 2014-Synthese
TL;DR: This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that it can be solved by combining two sorts of solution strategy in a judicious way.
Abstract: Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
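
One generic strategy for squaring cycles with acyclic formalisms, familiar from dynamic Bayesian networks, is to unroll a feedback loop over time so that 'A causes B causes A' becomes a chain of time-indexed dependencies. The sketch below illustrates that generic move only; it is not the specific combined solution the paper defends.

```python
# Minimal sketch: unrolling a (possibly cyclic) set of causal edges into a
# time-indexed acyclic graph, as in a dynamic Bayesian network. Every edge
# crosses to the next time slice, so time strictly increases along any path
# and the unrolled graph cannot contain a cycle.

def unroll(edges, steps):
    """edges: list of (cause, effect) pairs; returns cause@t -> effect@t+1
    edges for t = 0 .. steps-1."""
    return [(f"{c}@{t}", f"{e}@{t + 1}") for t in range(steps) for c, e in edges]

# Feedback loop: A influences B, and B feeds back on A.
print(unroll([("A", "B"), ("B", "A")], steps=2))
# [('A@0', 'B@1'), ('B@0', 'A@1'), ('A@1', 'B@2'), ('B@1', 'A@2')]
```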

Journal ArticleDOI
01 Apr 2014-Synthese
TL;DR: It is shown that two distinct explanatory strategies are employed in narratives, simple and complex; a simple narrative has minimal causal detail and is embedded in a regularity, whereas a complex narrative is more detailed and not embedded.
Abstract: Geologists, Paleontologists and other historical scientists are frequently concerned with narrative explanations targeting single cases. I show that two distinct explanatory strategies are employed in narratives, simple and complex. A simple narrative has minimal causal detail and is embedded in a regularity, whereas a complex narrative is more detailed and not embedded. The distinction is illustrated through two case studies: the ‘snowball earth’ explanation of Neoproterozoic glaciation and recent attempts to explain gigantism in Sauropods. This distinction is revelatory of historical science. I argue that at least sometimes which strategy is appropriate is not a pragmatic issue, but turns on the nature of the target. Moreover, the distinction reveals a counterintuitive pattern of progress in some historical explanation: shifting from simple to complex. Sometimes, historical scientists rightly abandon simple, unified explanations in favour of disunified, complex narratives. Finally I compare narrative and mechanistic explanation, arguing that mechanistic approaches are inappropriate for complex narrative explanations.

Journal ArticleDOI
29 Aug 2014-Synthese
TL;DR: It is proposed that thought experiments be treated primarily as aesthetic objects, specifically fictions, and it is argued that the best way to understand their content is to treat them as props for imagining fictional worlds.
Abstract: This paper motivates, explains, and defends a new account of the content of thought experiments. I begin by briefly surveying and critiquing three influential accounts of thought experiments: James Robert Brown’s Platonist account, John Norton’s deflationist account that treats them as picturesque arguments, and a cluster of views that I group together as mental model accounts. I use this analysis to motivate a set of six desiderata for a new approach. I propose that we treat thought experiments primarily as aesthetic objects, specifically fictions, and then use this analysis to characterize their content and ultimately assess their epistemic success. Taking my starting point from Kendall Walton’s account of representation (Mimesis as make-believe, Harvard University Press, Cambridge, 1990), I argue that the best way to understand the content of thought experiments is to treat them as props for imagining fictional worlds. Ultimately, I maintain that, in terms of their form and content, thought experiments share more with literary fictions and pictorial representations than with either argumentation or observations of the Platonic realm. Moreover, while they inspire imaginings, thought experiments themselves are not mental kinds. My approach redirects attention towards what fixes the content of any given thought experiment and scrutinizes the assumptions, cognitive capacities and conventions that generate them. This view helps to explain what seems plausible about Brown’s, Norton’s, and the mental modelers’ views.

Journal ArticleDOI
20 Jun 2014-Synthese
TL;DR: It is shown that there can be no coherence measure that satisfies all constraints, and that subsets of these adequacy constraints motivate two different classes of coherence measures.
Abstract: The debate on probabilistic measures of coherence has been flourishing for about 15 years now. It was initiated by papers published around the turn of the millennium, and many different proposals have since been put forward. This contribution is partly devoted to a reassessment of extant coherence measures. Focusing on a small number of reasonable adequacy constraints I show that (i) there can be no coherence measure that satisfies all constraints, and that (ii) subsets of these adequacy constraints motivate two different classes of coherence measures. These classes do not coincide with the common distinction between coherence as mutual support and coherence as relative set-theoretic overlap. Finally, I put forward arguments to the effect that for each such class of coherence measures there is an outstanding measure that outperforms all other extant proposals. One of these measures has recently been put forward in the literature, while the other one is based on a novel probabilistic measure of confirmation.
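
The two traditions the paper mentions can be illustrated for the simplest, two-proposition case: Shogenji's measure captures coherence as mutual support, while the Olsson/Glass measure captures coherence as relative overlap. A minimal sketch (the measures the paper ultimately favours are generalizations not reproduced here):

```python
# Two classic probabilistic coherence measures for a pair of propositions,
# computed from marginal and joint probabilities.

def shogenji(p_a, p_b, p_ab):
    """Mutual-support coherence: C(A,B) = P(A & B) / (P(A) * P(B)).
    Values above 1 indicate positive relevance between A and B."""
    return p_ab / (p_a * p_b)

def overlap(p_a, p_b, p_ab):
    """Relative-overlap coherence (Olsson/Glass):
    C(A,B) = P(A & B) / P(A or B), i.e. how much of the propositions'
    combined probability mass they share."""
    return p_ab / (p_a + p_b - p_ab)

p_a, p_b, p_ab = 0.5, 0.4, 0.3
print(shogenji(p_a, p_b, p_ab))  # 1.5 -> A and B mutually support each other
print(overlap(p_a, p_b, p_ab))   # 0.5 -> half their combined mass is shared
```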

Journal ArticleDOI
01 May 2014-Synthese
TL;DR: It is argued that mereological nihilism fails because it cannot answer the special arrangement question: when is it true that the xs are arranged F-wise?
Abstract: I argue that mereological nihilism fails because it cannot answer (what I describe as) the special arrangement question: when is it true that the xs (the mereological simples) are arranged F-wise? I suggest that the answers given in the literature fail and that the obvious responses that could be made look to undermine the motivations for adopting nihilism in the first place.

Journal ArticleDOI
01 Jan 2014-Synthese
TL;DR: The collective dimension of science has been discussed by philosophers of science in various ways, but the use of formal methods has been restricted to some particular areas, such as the treatment of the division of scientific labor, the study of reward schemes or the effects of network structures on the production of scientific knowledge.
Abstract: Scientists are not isolated agents: they collaborate in laboratories, research networks and large-scale international projects. Apart from direct collaboration, scientists interact with each other in various ways: they follow entrenched research programs, trust their peers, embed their work into an existing paradigm, exchange concepts, methods and results, compete for grants or prestige, etc. The collective dimension of science has been discussed by philosophers of science in various ways, but until recently, the use of formal methods has been restricted to some particular areas, such as the treatment of the division of scientific labor, the study of reward schemes or the effects of network structures on the production of scientific knowledge. Given the great promise of these methods for modeling and understanding of the dynamics of scientific research, this blind spot struck us as surprising. At the same time, social aspects of the production and diffusion of knowledge have been

Journal ArticleDOI
01 Jun 2014-Synthese
TL;DR: The paper explores how and whether friendship gives us reasons to trust our friends, reasons which may outstrip or conflict with our epistemic reasons, and sketches some related questions concerning trust based on the trustee’s race, gender, or other social identity.
Abstract: You can trust your friends. You should trust your friends. Not all of your friends all of the time: you can reasonably trust different friends to different degrees, and in different domains. Still, we often trust our friends, and it is often reasonable to do so. Why is this? In this paper I explore how and whether friendship gives us reasons to trust our friends, reasons which may outstrip or conflict with our epistemic reasons. In the final section, I will sketch some related questions concerning trust based on the trustee’s race, gender, or other social identity.

Journal ArticleDOI
01 Jan 2014-Synthese
TL;DR: The incentivized action view is developed by extending it to institutions like property, promises and complex financial organisations like companies, and by highlighting exactly how the incentivized action view differs from the Searlean view.
Abstract: Contemporary discussion concerning institutions focus on, and mostly accept, the Searlean view that institutional objects, i.e. money, borders and the like, exist in virtue of the fact that we collectively represent them as existing. A dissenting note has been sounded by Smit et al. (Econ Philos 27:1–22, 2011), who proposed the incentivized action view of institutional objects. On the incentivized action view, understanding a specific institution is a matter of understanding the specific actions that are associated with the institution and how we are incentivized to perform these actions. In this paper we develop the incentivized action view by extending it to institutions like property, promises and complex financial organisations like companies. We also highlight exactly how the incentivized action view differs from the Searlean view, discuss the method appropriate to such study and discuss some of the virtues of the incentivized action view.

Journal ArticleDOI
01 Mar 2014-Synthese
TL;DR: A dynamic logic of lying is proposed, wherein a ‘lie that $\varphi$’ (where $\varphi$ is a formula in the logic) is an action in the sense of dynamic modal logic that is interpreted as a state transformer relative to the formula $\varphi$.
Abstract: We propose a dynamic logic of lying, wherein a ‘lie that $\varphi$’ (where $\varphi$ is a formula in the logic) is an action in the sense of dynamic modal logic that is interpreted as a state transformer relative to the formula $\varphi$. The states that are being transformed are pointed Kripke models encoding the uncertainty of agents about their beliefs. Lies can be about factual propositions but also about modal formulas, such as the beliefs of other agents or the belief consequences of the lies of other agents. We distinguish two speaker perspectives: (Obs) an outside observer who is lying to an agent that is modelled in the system, and (Ag) an agent who is lying to another agent, and where both are modelled in the system. We distinguish three addressee perspectives: (Cred) the credulous agent who believes everything that it is told (even at the price of inconsistency), (Skep) the skeptical agent who only believes what it is told if that is consistent with its current beliefs, and (Rev) the belief revising agent who believes everything that it is told by consistently revising its current, possibly conflicting, beliefs. The logics have complete axiomatizations, which can most elegantly be shown by way of their embedding in what is known as action model logic or in the extension of that logic to belief revision.
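
The credulous case (Cred) admits a small illustration: a lie that $\varphi$ from an outside observer redirects the agent's accessibility arrows to $\varphi$-worlds, so the agent comes to believe $\varphi$ whatever the actual world is. The toy sketch below ignores the action-model machinery the paper uses for the multi-agent and belief-revising cases.

```python
# Toy sketch of an outside-observer lie to a credulous agent on a Kripke
# model: the agent's accessibility relation is restricted to phi-worlds.
# This is an illustration, not the paper's full action-model construction.

def lie(access, phi_worlds):
    """After a lie that phi, the credulous agent considers possible only
    the phi-worlds it considered possible before. If none remain, its
    beliefs trivialize (the 'price of inconsistency' noted above)."""
    return {w: {v for v in succ if v in phi_worlds}
            for w, succ in access.items()}

# Worlds: w1 (phi true), w2 (phi false); the agent initially considers
# both possible from every world; the actual world is w2.
access = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}
print(lie(access, phi_worlds={"w1"}))  # {'w1': {'w1'}, 'w2': {'w1'}}
# From the actual world w2 the agent now accesses only phi-worlds,
# so it (falsely) believes phi.
```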

Journal ArticleDOI
25 Mar 2014-Synthese
TL;DR: It is argued that the phenomenon can indeed arise in groups of perfectly rational agents, which ensures that the tools of formal epistemology can be fully utilized to reason about pluralistic ignorance.
Abstract: Pluralistic ignorance is a socio-psychological phenomenon that involves a systematic discrepancy between people’s private beliefs and public behavior in certain social contexts. Recently, pluralistic ignorance has gained increased attention in formal and social epistemology. But to get clear on what precisely a formal and social epistemological account of pluralistic ignorance should look like, we need answers to at least the following two questions: What exactly is the phenomenon of pluralistic ignorance? And can the phenomenon arise among perfectly rational agents? In this paper, we propose answers to both these questions. First, we characterize different versions of pluralistic ignorance and define the version that we claim most adequately captures the examples cited as paradigmatic cases of pluralistic ignorance in the literature. In doing so, we will stress certain key epistemic and social interactive aspects of the phenomenon. Second, given our characterization of pluralistic ignorance, we argue that the phenomenon can indeed arise in groups of perfectly rational agents. This, in turn, ensures that the tools of formal epistemology can be fully utilized to reason about pluralistic ignorance.
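
The core discrepancy is easy to display: each agent privately rejects a norm, everyone publicly complies, and each agent, reading acceptance off the public conduct of the others, takes itself to be the lone dissenter. A toy sketch (an illustration only, not the authors' formal characterization):

```python
# Toy sketch of pluralistic ignorance: private beliefs and public conduct
# systematically come apart, and each agent misreads the others.

agents = range(5)
private_belief = {a: False for a in agents}  # nobody privately accepts the norm
public_conduct = {a: True for a in agents}   # everybody publicly complies

for a in agents:
    # Each agent only observes public conduct and reads acceptance off it.
    inferred_acceptance = all(public_conduct[b] for b in agents if b != a)
    ignorance = inferred_acceptance and not private_belief[a]
    print(f"agent {a}: infers others accept = {inferred_acceptance}, "
          f"privately accepts = {private_belief[a]}, ignorance = {ignorance}")
```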

Journal ArticleDOI
01 Apr 2014-Synthese
TL;DR: A new theory of what it is for a physical system to implement an abstract computational model is articulated and defended and arguments, propounded by Putnam and Searle, that computational implementation is trivial are rebutted.
Abstract: I articulate and defend a new theory of what it is for a physical system to implement an abstract computational model. According to my descriptivist theory, a physical system implements a computational model just in case the model accurately describes the system. Specifically, the system must reliably transit between computational states in accord with mechanical instructions encoded by the model. I contrast my theory with an influential approach to computational implementation espoused by Chalmers, Putnam, and others. I deploy my theory to illuminate the relation between computation and representation. I also rebut arguments, propounded by Putnam and Searle, that computational implementation is trivial.
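
The descriptivist criterion suggests a simple operational test: given a mapping from physical to computational states, check that every observed physical transition matches the transition the model's instructions prescribe. A minimal sketch under obvious simplifications (deterministic, finite-state; all names here are hypothetical):

```python
# Minimal sketch of the descriptivist criterion for a deterministic
# finite-state model: a physical trace implements the model just in case,
# under the given physical-to-computational state mapping, every observed
# transition is the one the model's transition function prescribes.

def implements(trace, mapping, delta):
    """trace: sequence of observed physical states; mapping: physical ->
    computational state; delta: the model's transition function as a dict."""
    comp = [mapping[p] for p in trace]
    return all(delta.get(comp[i]) == comp[i + 1] for i in range(len(comp) - 1))

# Hypothetical toy model: a two-state clock (tick -> tock -> tick -> ...).
delta = {"tick": "tock", "tock": "tick"}
mapping = {"voltage_high": "tick", "voltage_low": "tock"}
trace = ["voltage_high", "voltage_low", "voltage_high"]
print(implements(trace, mapping, delta))  # True: the description is accurate
```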