
Showing papers in "Synthese in 2010"


Journal ArticleDOI
01 May 2010-Synthese
TL;DR: It is argued that the vagueness-based model developed here provides a useful perspective both on the universals governing how the mass/count contrast is encoded and on the major dimensions along which languages may vary on this score, and that vagueness is part of the architecture of Universal Grammar rather than merely an interface phenomenon arising in its interaction with the Conceptual/Intentional System.
Abstract: The mass/count distinction attracts a lot of attention among cognitive scientists, possibly because it involves in fundamental ways the relation between language (i.e. grammar), thought (i.e. extralinguistic conceptual systems) and reality (i.e. the physical world). In the present paper, I explore the view that the mass/count distinction is a matter of vagueness. While every noun/concept may in a sense be vague, mass nouns/concepts are vague in a way that systematically impairs their use in counting. This idea has never been systematically pursued, to the best of my knowledge. I make it precise relying on supervaluations (more specifically, ‘data semantics’) to model it. I identify a number of universals pertaining to how the mass/count contrast is encoded in the languages of the world, along with some of the major dimensions along which languages may vary on this score. I argue that the vagueness based model developed here provides a useful perspective on both. The outcome (besides shedding light on semantic variation) seems to suggest that vagueness is not just an interface phenomenon that arises in the interaction of Universal Grammar (UG) with the Conceptual/Intentional System (to adopt Chomsky’s terminology), but it is actually part of the architecture of UG.
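
A minimal sketch of the supervaluational idea the abstract relies on (schematic notation of mine, not the paper's own data-semantic system): a vague sentence is evaluated across a set P of admissible precisifications, and only what holds across all of them counts as settled.

\[
\text{supertrue}(\varphi) \iff \forall p \in P:\ V_p(\varphi) = 1,
\qquad
\text{superfalse}(\varphi) \iff \forall p \in P:\ V_p(\varphi) = 0,
\]

with \(\varphi\) indeterminate otherwise. Roughly, counting requires a stable set of minimal parts, and on the proposal summarized above mass nouns are vague in a way that leaves no such stable set across precisifications, which is what impairs their use in counting.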

259 citations


Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: An intentional conception of representation in science is argued for, one that requires bringing scientific agents and their intentions into the picture, and the recently much discussed idea that claims involving scientific models are really fictions is criticized.
Abstract: I argue for an intentional conception of representation in science that requires bringing scientific agents and their intentions into the picture. So the formula is: Agents (1) intend; (2) to use model, M; (3) to represent a part of the world, W; (4) for some purpose, P. This conception legitimates using similarity as the basic relationship between models and the world. Moreover, since just about anything can be used to represent anything else, there can be no unified ontology of models. This whole approach is further supported by a brief exposition of some recent work in cognitive, or usage-based, linguistics. Finally, with all the above as background, I criticize the recently much discussed idea that claims involving scientific models are really fictions.

234 citations


Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: The authors argue that models share important aspects with literary fiction, and that therefore theories of fiction can be brought to bear on these questions, in particular the pretence theory as developed by Walton (1990, Mimesis as make-believe).
Abstract: Most scientific models are not physical objects, and this raises important questions. What sort of entity are models, what is truth in a model, and how do we learn about models? In this paper I argue that models share important aspects in common with literary fiction, and that therefore theories of fiction can be brought to bear on these questions. In particular, I argue that the pretence theory as developed by Walton (1990, Mimesis as make-believe: on the foundations of the representational arts. Harvard University Press, Cambridge/MA) has the resources to answer these questions. I introduce this account, outline the answers that it offers, and develop a general picture of scientific modelling based on it.

218 citations


Journal ArticleDOI
12 Mar 2010-Synthese
TL;DR: It is argued that all theories of knowledge need to accommodate the ability intuition that knowledge involves cognitive ability, but that once this requirement is understood correctly there is no reason why one could not have a conception of cognitive ability that was consistent with the extended cognition thesis.
Abstract: This paper explores the ramifications of the extended cognition thesis in the philosophy of mind for contemporary epistemology. In particular, it argues that all theories of knowledge need to accommodate the ability intuition that knowledge involves cognitive ability, but that once this requirement is understood correctly there is no reason why one could not have a conception of cognitive ability that was consistent with the extended cognition thesis. There is thus, surprisingly, a straightforward way of developing our current thinking about knowledge such that it incorporates the extended cognition thesis.

140 citations


Journal ArticleDOI
03 Dec 2010-Synthese
TL;DR: This paper argues that besides mechanistic explanations, there is a kind of explanation that relies upon “topological” properties of systems in order to derive the explanandum as a consequence, and which does not consider mechanisms or causal processes.
Abstract: This paper argues that besides mechanistic explanations, there is a kind of explanation that relies upon "topological" properties of systems in order to derive the explanandum as a consequence, and which does not consider mechanisms or causal processes. I first investigate topological explanations in the case of ecological research on the stability of ecosystems. Then I contrast them with mechanistic explanations, thereby distinguishing the kind of realization they involve from the realization relations entailed by mechanistic explanations, and explain how both kinds of explanations may be articulated in practice. The second section, expanding on the case of ecological stability, considers the phenomenon of robustness at all levels of the biological hierarchy in order to show that topological explanations are indeed pervasive there. Reasons are suggested for this, in which "neutral network" explanations are singled out as a form of topological explanation that spans across many levels. Finally, I appeal to the distinction of explanatory regimes to cast light on a controversy in philosophy of biology, the issue of contingence in evolution, which is shown to essentially involve issues about realization.

131 citations


Journal ArticleDOI
01 May 2010-Synthese
TL;DR: Empirical arguments are provided that color adjectives are in fact ambiguous between gradable and nongradable interpretations, and that this simple ambiguity accounts for the Travis facts in a simpler, more constrained, and thus ultimately more successful fashion than recent contextualist analyses.
Abstract: Color adjectives have played a central role in work on language typology and variation, but there has been relatively little investigation of their meanings by researchers in formal semantics. This is surprising given the fact that color terms have been at the center of debates in the philosophy of language over foundational questions, in particular whether the idea of a compositional, truth-conditional theory of natural language semantics is even coherent. The challenge presented by color terms is articulated in detail in the work of Charles Travis. Travis argues that structurally isomorphic sentences containing color adjectives can shift truth value from context to context depending on how they are used and in the absence of effects of vagueness or ambiguity/polysemy, and concludes that a deterministic mapping from structures to truth conditions is impossible. The goal of this paper is to provide a linguistic perspective on this issue, which we believe defuses Travis’ challenge. We provide empirical arguments that color adjectives are in fact ambiguous between gradable and nongradable interpretations, and that this simple ambiguity, together with independently motivated options concerning scalar dimension within the gradable reading, accounts for the Travis facts in a simpler, more constrained, and thus ultimately more successful fashion than recent contextualist analyses such as those in Szabo (Perspectives on semantics, pragmatics and discourse: A festschrift for Ferenc Kiefer, 2001) or Rothschild and Segal (Mind Lang, 2009).

107 citations


Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: This paper axiomatizes the theory of choice functions, shows that these axioms are necessary for coherence and sufficient for coherence using a set of probability/almost-state-independent utility pairs, and gives sufficient conditions under which a choice function satisfying the axioms is represented by a set of probability/state-independent utility pairs with a common utility.
Abstract: We discuss several features of coherent choice functions—where the admissible options in a decision problem are exactly those that maximize expected utility for some probability/utility pair in a fixed set S of probability/utility pairs. In this paper we consider, primarily, normal form decision problems under uncertainty—where only the probability component of S is indeterminate and utility for two privileged outcomes is determinate. Coherent choice distinguishes between each pair of sets of probabilities regardless of the “shape” or “connectedness” of the sets of probabilities. We axiomatize the theory of choice functions and show these axioms are necessary for coherence. The axioms are sufficient for coherence using a set of probability/almost-state-independent utility pairs. We give sufficient conditions under which a choice function satisfying our axioms is represented by a set of probability/state-independent utility pairs with a common utility.
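
As a schematic gloss on the rule described in the opening sentence (notation mine, not the authors'): an option is admissible in a menu A just in case some probability/utility pair in the fixed set S makes it an expected-utility maximizer over A,

\[
C_S(A) \;=\; \{\, a \in A \;:\; \exists (p,u) \in S \ \text{such that}\ \mathbb{E}_{p,u}[a] \,\ge\, \mathbb{E}_{p,u}[b] \ \text{for all}\ b \in A \,\}.
\]

Because admissibility quantifies existentially over S, two distinct sets of probabilities can license different choices even when they share a convex hull, which is one way to read the remark that coherent choice is sensitive to the “shape” or “connectedness” of the set.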

95 citations


Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: This paper critically examines arguments by some functionalists to the effect that informational theories are flawed, and contends that, as it turns out, informational and functional theories are importantly complementary.
Abstract: Recent work in the philosophy of science has generated an apparent conflict between theories attempting to explicate the nature of scientific representation. On one side, there are what one might call ‘informational’ views, which emphasize objective relations (such as similarity, isomorphism, and homomorphism) between representations (theories, models, simulations, diagrams, etc.) and their target systems. On the other side, there are what one might call ‘functional’ views, which emphasize cognitive activities performed in connection with these targets, such as interpretation and inference. The main sources of the impression of conflict here are arguments by some functionalists to the effect that informational theories are flawed: it is suggested that relations typically championed by informational theories are neither necessary nor sufficient for scientific representation, and that any theory excluding functions is inadequate. In this paper I critically examine these arguments, and contend that, as it turns out, informational and functional theories are importantly complementary.

86 citations


Journal ArticleDOI
01 Nov 2010-Synthese
TL;DR: In this paper, the authors present a framework of concepts intended to account for the rationality of semantic change and variation, suggesting that each scientific concept consists of three components of content: (1) reference, (2) inferential role, and (3) the epistemic goal pursued with the concept's use.
Abstract: The discussion presents a framework of concepts that is intended to account for the rationality of semantic change and variation, suggesting that each scientific concept consists of three components of content: (1) reference, (2) inferential role, and (3) the epistemic goal pursued with the concept’s use. I argue that in the course of history a concept can change in any of these components, and that change in the concept’s inferential role and reference can be accounted for as being rational relative to the third component, the concept’s epistemic goal. This framework is illustrated and defended by application to the history of the gene concept. It is explained how the molecular gene concept grew rationally out of the classical gene concept despite a change in reference, and why the use and reference of the contemporary molecular gene concept may legitimately vary from context to context.

85 citations


Journal ArticleDOI
06 Aug 2010-Synthese
TL;DR: This paper uses a static base from the existing awareness literature, extending it into a dynamic system that includes traditional acts of observation as well as the adding and dropping of formulas from the current ‘awareness’ set; it gives a completeness theorem and shows how this dynamics updates explicit knowledge.
Abstract: Classical epistemic logic describes implicit knowledge of agents about facts and knowledge of other agents based on semantic information. The latter is produced by acts of observation or communication that are described well by dynamic epistemic logics. What these logics do not describe, however, is how significant information is also produced by acts of inference—and key axioms of the system merely postulate “deductive closure”. In this paper, we take the view that all information is produced by acts, and hence we also need a dynamic logic of inference steps showing what effort on the part of the agent makes a conclusion explicit knowledge. Strong omniscience properties of agents should be seen not as static idealizations, but as the result of dynamic processes that agents engage in. This raises two questions: (a) how to define suitable information states of agents and matching notions of explicit knowledge, (b) how to define natural processes over these states that generate new explicit knowledge. To this end, we use a static base from the existing awareness literature, extending it into a dynamic system that includes traditional acts of observation, but also adding and dropping formulas from the current ‘awareness’ set. We give a completeness theorem, and we show how this dynamics updates explicit knowledge. Then we extend our approach to multi-agent scenarios where awareness changes may happen privately. Finally, we mention further directions and related approaches. Our contribution can be seen as a ‘dynamification’ of existing awareness logics.
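
A rough rendering of the static base from the awareness literature that the paper extends (notation mine; the paper's exact system may differ): explicit knowledge is implicit knowledge plus awareness,

\[
\mathrm{Ex}_i\,\varphi \;:=\; K_i\,\varphi \,\wedge\, A_i\,\varphi,
\]

where \(K_i\) is the usual implicit-knowledge operator over semantic information and \(A_i\,\varphi\) says that \(\varphi\) belongs to agent \(i\)'s current awareness set. The dynamics described in the abstract then acts on both components: observation and communication update the \(K_i\) part, while inference steps and acts of adding or dropping formulas update the awareness set, turning implicit into explicit knowledge.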

84 citations


Journal ArticleDOI
01 Jul 2010-Synthese
TL;DR: This essay provides a contrary analysis by introducing a formal account of Euclid's proofs, termed Eu, which specifies what diagrams Euclid’s diagrams are, in a precise formal sense, and defines generality-preserving proof rules in terms of them.
Abstract: Though pictures are often used to present mathematical arguments, they are not typically thought to be an acceptable means for presenting mathematical arguments rigorously. With respect to the proofs in the Elements in particular, the received view is that Euclid’s reliance on geometric diagrams undermines his efforts to develop a gap-free deductive theory. The central difficulty concerns the generality of the theory. How can inferences made from a particular diagram license general mathematical results? After surveying the history behind the received view, this essay provides a contrary analysis by introducing a formal account of Euclid’s proofs, termed Eu. Eu solves the puzzle of generality surrounding Euclid’s arguments. It specifies what diagrams Euclid’s diagrams are, in a precise formal sense, and defines generality-preserving proof rules in terms of them. After the central principles behind the formalization are laid out, its implications with respect to the question of what does and does not constitute a genuine picture proof are explored.

Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: It is argued that current discussions of criteria for actual causation are ill-posed in several respects, that standard methods will not lead to a satisfactory account of actual causation, and that a different approach is required.
Abstract: We argue that current discussions of criteria for actual causation are ill-posed in several respects. (1) The methodology of current discussions is by induction from intuitions about an infinitesimal fraction of the possible examples and counterexamples; (2) cases with larger numbers of causes generate novel puzzles; (3) "neuron" and causal Bayes net diagrams are, as deployed in discussions of actual causation, almost always ambiguous; (4) actual causation is (intuitively) relative to an initial system state since state changes are relevant, but most current accounts ignore state changes through time; (5) more generally, there is no reason to think that philosophical judgements about these sorts of cases are normative; but (6) there is a dearth of relevant psychological research that bears on whether various philosophical accounts are descriptive. Our skepticism is not directed towards the possibility of a …

Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: The paper highlights the importance of developing a satisfactory understanding of descriptions of missing systems and the face value practice, puts pressure on philosophical accounts which rely on the practice, and helps us assess the viability of certain approaches to thinking about models, theory structure, and scientific representation.
Abstract: Call a bit of scientific discourse a description of a missing system when (i) it has the surface appearance of an accurate description of an actual, concrete system (or kind of system) from the domain of inquiry, but (ii) there are no actual, concrete systems in the world around us fitting the description it contains, and (iii) that fact is recognised from the outset by competent practitioners of the scientific discipline in question. Scientific textbooks, classroom lectures, and journal articles abound with such passages; and there is a widespread practice of talking and thinking as though there are systems which fit the descriptions they contain perfectly, despite the recognition that no actual, concrete systems do so—call this the face value practice. There are, furthermore, many instances in which philosophers engage in the face value practice whilst offering answers to epistemological and methodological questions about the sciences. Three questions, then: (1) How should we interpret descriptions of missing systems? (2) How should we make sense of the face value practice? (3) Is there a set of plausible answers to (1) and (2) which legitimates reliance on the face value practice in our philosophical work, and can support the weight of the accounts which are entangled with that practice? In this paper I address these questions by considering three answers to the first: that descriptions of missing systems are straightforward descriptions of abstract objects, that they are indirect descriptions of “property-containing” abstracta, and that they are (in a different way) indirect descriptions of mathematical structures. All three proposals are present in the literature, but I find them wanting. The result is to highlight the importance of developing a satisfactory understanding of descriptions of missing systems and the face value practice, to put pressure on philosophical accounts which rely on the practice, and to help us assess the viability of certain approaches to thinking about models, theory structure, and scientific representation.

Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: A game theoretic model of the simplest case, where one sender and one receiver have pure common interest, is introduced, and the question of how hard or easy it is for evolution to achieve information transfer turns out to involve surprising subtleties.
Abstract: Transfer of information between senders and receivers, of one kind or another, is essential to all life. David Lewis introduced a game theoretic model of the simplest case, where one sender and one receiver have pure common interest. How hard or easy is it for evolution to achieve information transfer in Lewis signaling? The answers involve surprising subtleties. We discuss some of these in terms of evolutionary dynamics in both finite and infinite populations, with and without mutation.
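
To make the evolutionary question concrete, the following is a small, self-contained simulation sketch (my construction, not the authors' code) of two-population replicator dynamics for the simplest Lewis signaling game: two equiprobable states, two signals, two acts, and a common payoff of 1 whenever the receiver's act matches the state.

```python
import numpy as np

# A minimal sketch (not the authors' model) of two-population replicator
# dynamics for the simplest Lewis signaling game: 2 equiprobable states,
# 2 signals, 2 acts, payoff 1 when the receiver's act matches the state.

n = 2  # number of states = signals = acts
sender_strats = [(s0, s1) for s0 in range(n) for s1 in range(n)]    # state -> signal
receiver_strats = [(a0, a1) for a0 in range(n) for a1 in range(n)]  # signal -> act

def payoff(sender, receiver):
    """Expected common payoff of a pure sender/receiver strategy pair."""
    return sum(1.0 for state in range(n) if receiver[sender[state]] == state) / n

# Payoff matrix: rows index sender strategies, columns index receiver strategies.
U = np.array([[payoff(s, r) for r in receiver_strats] for s in sender_strats])

def replicator_step(x, y, dt=0.05):
    """One Euler step of the two-population (sender/receiver) replicator dynamics."""
    fx = U @ y                      # fitness of each sender strategy vs. receiver mix y
    fy = U.T @ x                    # fitness of each receiver strategy vs. sender mix x
    x = x + dt * x * (fx - x @ fx)  # strategies doing better than average grow
    y = y + dt * y * (fy - y @ fy)
    return x / x.sum(), y / y.sum() # renormalize against numerical drift

rng = np.random.default_rng(0)
x = rng.dirichlet(np.ones(len(sender_strats)))   # random initial sender population
y = rng.dirichlet(np.ones(len(receiver_strats))) # random initial receiver population
for _ in range(20000):
    x, y = replicator_step(x, y)

print("sender mix:  ", np.round(x, 3))
print("receiver mix:", np.round(y, 3))
print("expected information-transfer payoff:", round(float(x @ U @ y), 3))
```

In this two-state, equiprobable setting the dynamics typically settles on one of the two signaling systems (expected payoff 1); the subtleties the authors discuss become visible with more states, unequal state probabilities, or finite populations with mutation.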

Journal ArticleDOI
01 May 2010-Synthese
TL;DR: It is claimed that particularly fruitful insights are gained by seeing themes such as the analytic-synthetic distinction, the axiomatic method, the hierarchical order of sciences and the status of logic as a science against the background of the Classical Model of Science.
Abstract: Throughout more than two millennia philosophers adhered massively to ideal standards of scientific rationality going back ultimately to Aristotle’s Analytica posteriora. These standards got progressively shaped by and adapted to new scientific needs and tendencies. Nevertheless, a core of conditions capturing the fundamentals of what a proper science should look like remained remarkably constant all along. Call this cluster of conditions the Classical Model of Science. In this paper we will do two things. First of all, we will propose a general and systematized account of the Classical Model of Science. Secondly, we will offer an analysis of the philosophical significance of this model at different historical junctures by giving an overview of the connections it has had with a number of important topics. The latter include the analytic-synthetic distinction, the axiomatic method, the hierarchical order of sciences and the status of logic as a science. Our claim is that particularly fruitful insights are gained by seeing themes such as these against the background of the Classical Model of Science. In an appendix we deal with the historiographical background of this model by considering the systematizations of Aristotle’s theory of science offered by Heinrich Scholz, and in his footsteps by Evert W. Beth.

Journal ArticleDOI
13 Mar 2010-Synthese
TL;DR: A useful context is provided by Floridi’s account of the relationship between ‘ontic’ and ‘epistemic’ structural realisms, and some brief remarks are made on possible extensions of OSR into other scientific domains.
Abstract: According to ‘Ontic Structural Realism’ (OSR), physical objects—qua metaphysical entities—should be reconceptualised, or, more strongly, eliminated in favour of the relevant structures. In this paper I shall attempt to articulate the relationship between these putative objects and structures in terms of certain accounts of metaphysical dependence currently available. This will allow me to articulate the differences between the different forms of OSR and to argue in favour of the ‘eliminativist’ version. A useful context is provided by Floridi’s account of the relationship between ‘ontic’ and ‘epistemic’ structural realisms and I shall conclude with some brief remarks on possible extensions of OSR into other scientific domains.

Journal ArticleDOI
07 Sep 2010-Synthese
TL;DR: Trust is a central concept in the philosophy of science, and it is argued that philosophers of science should extend their efforts to develop normative conceptions of trust that can serve to facilitate trust between scientific experts and ordinary citizens.
Abstract: Trust is a central concept in the philosophy of science. We highlight how trust is important in the wide variety of interactions between science and society. We claim that examining and clarifying the nature and role of trust (and distrust) in relations between science and society is one principal way in which the philosophy of science is socially relevant. We argue that philosophers of science should extend their efforts to develop normative conceptions of trust that can serve to facilitate trust between scientific experts and ordinary citizens. The first project is the development of a rich normative theory of expertise and experience that can explain why the various epistemic insights of diverse actors should be trusted in certain contexts and how credibility deficits can be bridged. The second project is the development of concepts that explain why, in certain cases, ordinary citizens may distrust science, which should inform how philosophers of science conceive of the formulation of science policy when conditions of distrust prevail. The third project is the analysis of cases of successful relations of trust between scientists and non-scientists that leads to understanding better how ‘postnormal’ science interactions are possible using trust.

Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: This paper distinguishes scientific models into three kinds on the basis of their ontological status—material models, mathematical models and fictional models, and develops and defends an account of fictional models as fictional objects.
Abstract: In this paper, I distinguish scientific models into three kinds on the basis of their ontological status—material models, mathematical models and fictional models, and develop and defend an account of fictional models as fictional objects—i.e. abstract objects that stand for possible concrete objects.

Journal ArticleDOI
01 Jan 2010-Synthese
TL;DR: This paper proposes an alternative account of theoretical modelling that draws upon Kendall Walton’s ‘make-believe’ theory of representation in art, an account that allows us to understand theoretical modelling without positing any object of which scientists’ modelling assumptions are true.
Abstract: The descriptions and theoretical laws scientists write down when they model a system are often false of any real system. And yet we commonly talk as if there were objects that satisfy the scientists’ assumptions and as if we may learn about their properties. Many attempt to make sense of this by taking the scientists’ descriptions and theoretical laws to define abstract or fictional entities. In this paper, I propose an alternative account of theoretical modelling that draws upon Kendall Walton’s ‘make-believe’ theory of representation in art. I argue that this account allows us to understand theoretical modelling without positing any object of which scientists’ modelling assumptions are true.

Journal ArticleDOI
01 Jul 2010-Synthese
TL;DR: It is claimed that it is on the ground of the interaction with a measurement system that a partition can be induced on the domain of entities under measurement and that relations among such entities can be established, and that the usage of measurement systems guarantees a degree of objectivity and intersubjectivity to measurement results.
Abstract: Measurement is a process aimed at acquiring and codifying information about properties of empirical entities. In this paper we provide an interpretation of such a process comparing it with what is nowadays considered the standard measurement theory, i.e., representational theory of measurement. It is maintained here that this theory has its own merits but it is incomplete and too abstract, its main weakness being the scant attention reserved to the empirical side of measurement, i.e., to measurement systems and to the ways in which the interactions of such systems with the entities under measurement provide a structure to an empirical domain. In particular it is claimed that (1) it is on the ground of the interaction with a measurement system that a partition can be induced on the domain of entities under measurement and that relations among such entities can be established, and that (2) it is the usage of measurement systems that guarantees a degree of objectivity and intersubjectivity to measurement results. As modeled in this paper, measurement systems link the abstract theory of measuring, as developed in representational terms, and the practice of measuring, as coded in standard documents such as the International Vocabulary of Metrology.
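
For orientation, the representational core that the paper takes as its starting point can be glossed in the usual textbook way (a standard formulation, not the paper's own wording): a scale is a homomorphism from an empirical relational structure into a numerical one,

\[
f : \langle A, R_1, \dots, R_k \rangle \to \langle \mathbb{R}, S_1, \dots, S_k \rangle
\quad\text{with}\quad
R_i(a_1, \dots, a_{m_i}) \iff S_i\big(f(a_1), \dots, f(a_{m_i})\big),
\]

and the theory's representation and uniqueness theorems say when such an \(f\) exists and up to which transformations it is unique. The authors' complaint, as summarized above, is that this picture is silent about the measurement systems whose interactions with the entities under measurement are what actually induce the relations \(R_i\) on the empirical domain.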

Journal ArticleDOI
10 Sep 2010-Synthese
TL;DR: The paper focuses on expectations of knowledge sharing, using examples of “knowledge-sharing whistleblowers” to illustrate how failures in knowledge sharing with lay communities can erode epistemic trust in scientific communities, particularly in the case of marginalized communities.
Abstract: Feminist philosophers of science have been prominent amongst social epistemologists who draw attention to communal aspects of knowing. As part of this work, I focus on the need to examine the relations between scientific communities and lay communities, particularly marginalized communities, for understanding the epistemic merit of scientific practices. I draw on Naomi Scheman’s argument (2001) that science earns epistemic merit by rationally grounding trust across social locations. Following this view, more turns out to be relevant to epistemic assessment than simply following the standards of “normal science”. On such an account, philosophers of science need to attend to the relations between scientific communities and various lay communities, especially marginalized communities, to understand how scientific practices can rationally ground trust and thus earn their status as “good ways of knowing”. Trust turns out to involve a wide set of expectations on behalf of lay communities. In this paper I focus on expectations of knowledge sharing, using examples of “knowledge-sharing whistleblowers” to illustrate how failures in knowledge sharing with lay communities can erode epistemic trust in scientific communities, particularly in the case of marginalized communities.

Journal ArticleDOI
07 Dec 2010-Synthese
TL;DR: This paper illustrates various ways in which SRPOS can provide social benefits, as well as benefits to scientific practice and philosophy itself, and calls for an expansion of philosophy of science to include more of this type of work.
Abstract: This paper provides an argument for a more socially relevant philosophy of science (SRPOS). Our aims in this paper are to characterize this body of work in philosophy of science, to argue for its importance, and to demonstrate that there are significant opportunities for philosophy of science to engage with and support this type of research. The impetus of this project was a keen sense of missed opportunities for philosophy of science to have a broader social impact. We illustrate various ways in which SRPOS can provide social benefits, as well as benefits to scientific practice and philosophy itself. Also, SRPOS is consistent with some historical and contemporary goals of philosophy of science. We’re calling for an expansion of philosophy of science to include more of this type of work. In order to support this expansion, we characterize philosophy of science as an epistemic community and examine the culture and practices of philosophy of science that can help or hinder research in this area.

Journal ArticleDOI
01 Aug 2010-Synthese
TL;DR: This paper offers a different approach to the species problem, suggesting skepticism about the species category, but not about the existence of those taxa biologists call ‘species.’
Abstract: Biologists and philosophers that debate the existence of the species category fall into two camps. Some believe that the species category does not exist and the term ‘species’ should be eliminated from biology. Others believe that with new biological insights or the application of philosophical ideas, we can be confident that the species category exists. This paper offers a different approach to the species problem. We should be skeptical of the species category, but not skeptical of the existence of those taxa biologists call ‘species.’ And despite skepticism over the species category, there are pragmatic reasons for keeping the word ‘species.’ This approach to the species problem is not new. Darwin employed a similar strategy to the species problem 150 years ago.

Journal ArticleDOI
01 Jul 2010-Synthese
TL;DR: In this paper, the authors propose a new and systematic route through the long-lasting debate on creative hypothesis formation and argue that the debate is not about factual issues, but about the interpretation of these factual issues in Darwinian terms.
Abstract: Over the last four decades arguments for and against the claim that creative hypothesis formation is based on Darwinian ‘blind’ variation have been put forward. This paper offers a new and systematic route through this long-lasting debate. It distinguishes between undirected, random, and unjustified variation, to prevent widespread confusions regarding the meaning of undirected variation. These misunderstandings concern Lamarckism, equiprobability, developmental constraints, and creative hypothesis formation. The paper then introduces and develops the standard critique that creative hypothesis formation is guided rather than blind, integrating developments from contemporary research on creativity. On that basis, I discuss three compatibility arguments that have been used to answer the critique. These arguments do not deny guided variation but insist that an important analogy exists nonetheless. These compatibility arguments all fail, even though they do so for different reasons: trivialisation, conceptual confusion, and lack of evidence respectively. Revisiting the debate in this manner not only allows us to see where exactly a ‘Darwinian’ account of creative hypothesis formation goes wrong, but also to see that the debate is not about factual issues, but about the interpretation of these factual issues in Darwinian terms.

Journal ArticleDOI
01 May 2010-Synthese
TL;DR: It is claimed that different adjectives are associated with different types of measures whose special characteristics, together with features of the relations denoted by unit names, explain the puzzling limited distribution of measure phrases, as well as unit-based comparisons between predicates.
Abstract: This paper presents a novel semantic analysis of unit names (like pound and meter) and gradable adjectives (like tall, short and happy), inspired by measurement theory (Krantz et al. In Foundations of measurement: Additive and Polynomial Representations, 1971). Based on measurement theory’s four-way typology of measures, I claim that different adjectives are associated with different types of measures whose special characteristics, together with features of the relations denoted by unit names, explain the puzzling limited distribution of measure phrases, as well as unit-based comparisons between predicates (as in the table is longer than it is wide). All considered, my analyses support the view that the grammar of natural languages is sensitive to features of measurement theory.
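
The ‘four-way typology’ invoked here can be illustrated with the standard classification of scale types by their admissible transformations (a textbook summary; the paper's own typology, built on Krantz et al. 1971, may carve the space somewhat differently):

\[
\begin{aligned}
\text{nominal:} &\quad \text{any one-to-one } \phi\\
\text{ordinal:} &\quad \phi \ \text{strictly increasing}\\
\text{interval:} &\quad \phi(x) = \alpha x + \beta,\ \alpha > 0\\
\text{ratio:} &\quad \phi(x) = \alpha x,\ \alpha > 0
\end{aligned}
\]

Unit-based measure phrases plausibly require at least this kind of additive, interval-or-stronger structure, which gives a first sense of why they should combine felicitously with only a restricted class of adjectives.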

Journal ArticleDOI
01 Nov 2010-Synthese
TL;DR: It is argued that an alternative approach combining an end-relational theory of normativity with a comparative probabilistic semantics for ‘ought’ provides a more satisfactory solution to vexing ‘detaching problems’.
Abstract: Some intuitive normative principles raise vexing ‘detaching problems’ by their failure to license modus ponens. I examine three such principles (a self-reliance principle and two different instrumental principles) and recent strategies employed to resolve their detaching problems. I show that solving these problems necessitates postulating an indefinitely large number of senses for ‘ought’. The semantics for ‘ought’ that is standard in linguistics offers a unifying strategy for solving these problems, but I argue that an alternative approach combining an end-relational theory of normativity with a comparative probabilistic semantics for ‘ought’ provides a more satisfactory solution.

Journal ArticleDOI
26 Oct 2010-Synthese
TL;DR: It is argued that engaged case study work and interdisciplinarity have been central to the success of feminist philosophy of science in producing socially relevant scholarship, and that its future lies in the continued development of robust and dynamic philosophical frameworks for modeling social values in science.
Abstract: Feminist philosophy of science has led to improvements in the practices and products of scientific knowledge-making, and in this way it exemplifies socially relevant philosophy of science. It has also yielded important insights and original research questions for philosophy. Feminist scholarship on science thus presents a worthy thought-model for considering how we might build a more socially relevant philosophy of science—the question posed by the editors of this special issue. In this analysis of the history, contributions, and challenges faced by feminist philosophy of science, I argue that engaged case study work and interdisciplinarity have been central to the success of feminist philosophy of science in producing socially relevant scholarship, and that its future lies in the continued development of robust and dynamic philosophical frameworks for modeling social values in science. Feminist philosophers of science, however, have often encountered marginalization and persistent misunderstandings, challenges that must be addressed within the institutional and intellectual culture of American philosophy.

Journal ArticleDOI
01 Oct 2010-Synthese
TL;DR: It is argued that philosophy of science as a field can learn from the successes as well as the mistakes of bioethics and begin to develop a new model that includes robust contributions to the science classroom, research collaborations with scientists, and a role for public philosophy through involvement in science policy development.
Abstract: The goal of this paper is to articulate and advocate for an enhanced role for philosophers of science in the domain of science policy as well as within the science curriculum. I argue that philosophy of science as a field can learn from the successes as well as the mistakes of bioethics and begin to develop a new model that includes robust contributions to the science classroom, research collaborations with scientists, and a role for public philosophy through involvement in science policy development. Through an analysis of two case studies, I illustrate how philosophers of science can make effective and productive contributions to science education as well as to interdisciplinary scientific research, and argue for the essential role of philosophers of science in the realm of science policy.

Journal ArticleDOI
01 Aug 2010-Synthese
TL;DR: A way of incorporating the role played by intentions into a character-based semantics for indexicals is developed, and it is argued that this framework is superior to an alternative which has been proposed by others.
Abstract: A number of authors have argued that the fact that certain indexicals depend for their reference-determination on the speaker’s referential intentions demonstrates the inadequacy of associating such expressions with functions from contexts to referents (characters). By distinguishing between different uses to which the notion of context is put in these arguments, I show that this line of argument fails. In the course of doing so, I develop a way of incorporating the role played by intentions into a character-based semantics for indexicals and I argue that the framework I prefer is superior to an alternative which has been proposed by others.

Journal ArticleDOI
01 Feb 2010-Synthese
TL;DR: The paper proposes a quantitative likeness-definition of verisimilitude based on relevant elements, which provably agrees with the qualitative relevant content-definition on all pairs of comparable theories, and explains how the shortcomings of Popper’s original definition can be repaired by the relevant element approach.
Abstract: Zwart and Franssen’s impossibility theorem reveals a conflict between the possible-world-based content-definition and the possible-world-based likeness-definition of verisimilitude. In Sect. 2 we show that the possible-world-based content-definition violates four basic intuitions of Popper’s consequence-based content-account to verisimilitude, and therefore cannot be said to be in the spirit of Popper’s account, although this is the opinion of some prominent authors. In Sect. 3 we argue that in consequence-accounts, content-aspects and likeness-aspects of verisimilitude are not in conflict with each other, but in agreement. We explain this fact by pointing towards the deep difference between possible-world- and the consequence-accounts, which does not lie in the difference between syntactic (object-language) versus semantic (meta-language) formulations, but in the difference between ‘disjunction-of-possible-worlds’ versus ‘conjunction-of-parts’ representations of theories. Drawing on earlier work, we explain in Sect. 4 how the shortcomings of Popper’s original definition can be repaired by what we call the relevant element approach. We propose a quantitative likeness-definition of verisimilitude based on relevant elements which provably agrees with the qualitative relevant content-definition of verisimilitude on all pairs of comparable theories. We conclude the paper with a plea for consequence-accounts and a brief analysis of the problem of language-dependence (Sect. 6).
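
For context, the original definition of Popper's that is being repaired here is his qualitative, consequence-based one (standard formulation; notation mine): where \(A_T\) and \(A_F\) are the sets of true and of false consequences of theory \(A\),

\[
A <_{vs} B \;\iff\; A_T \subseteq B_T \ \text{and}\ B_F \subseteq A_F, \ \text{with at least one inclusion proper}.
\]

The well-known Tichý–Miller result shows that on this definition no false theory can come out strictly closer to the truth than any other theory, which is the kind of shortcoming the relevant element approach described in the abstract is designed to repair.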