
Showing papers in "Synthese in 2012"


Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: This article provides an introductory review of the theory of judgment aggregation and the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility, and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation.
Abstract: This paper provides an introductory review of the theory of judgment aggregation. It introduces the paradoxes of majority voting that originally motivated the field, explains several key results on the impossibility of propositionwise judgment aggregation, presents a pedagogical proof of one of those results, discusses escape routes from the impossibility and relates judgment aggregation to some other salient aggregation problems, such as preference aggregation, abstract aggregation and probability aggregation. The present review is illustrative rather than exhaustive and is intended to give readers who are new to the field of judgment aggregation a sense of this rapidly growing research area.

125 citations
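
To see the paradox that motivates the field, here is a minimal sketch (my illustration, not the paper's notation) of the discursive dilemma: each judge holds individually consistent judgments on p, q, and their conjunction, yet propositionwise majority voting produces an inconsistent collective judgment set.

    # Discursive dilemma in miniature (toy illustration, not from the paper).
    judges = [
        {"p": True,  "q": True,  "p&q": True},   # judge 1: consistent
        {"p": True,  "q": False, "p&q": False},  # judge 2: consistent
        {"p": False, "q": True,  "p&q": False},  # judge 3: consistent
    ]

    def majority(prop):
        return sum(j[prop] for j in judges) > len(judges) / 2

    collective = {prop: majority(prop) for prop in ("p", "q", "p&q")}
    print(collective)  # {'p': True, 'q': True, 'p&q': False}

The collective accepts p and accepts q but rejects p & q, which no consistent individual could do; the impossibility results reviewed in the paper generalize exactly this failure.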


Journal ArticleDOI
01 Mar 2012-Synthese
TL;DR: It is concluded that scientists, philosophers, and ethicists should discard the project of defining life, because the project is either impossible (for the folk concept of life) or pointless (for a scientific concept).
Abstract: In several disciplines within science—evolutionary biology, molecular biology, astrobiology, synthetic biology, artificial life—and outside science—primarily ethics—efforts to define life have recently multiplied. However, no consensus has emerged. In this article, I argue that this is no accident. I propose a dilemma showing that the project of defining life is either impossible or pointless. The notion of life at stake in this project is either the folk concept of life or a scientific concept. In the former case, empirical evidence shows that life cannot be defined. In the latter case, I argue that, although defining life may be possible, it is pointless. I conclude that scientists, philosophers, and ethicists should discard the project of defining life.

124 citations


Journal ArticleDOI
01 Nov 2012-Synthese
TL;DR: The goal here is to present a more compelling version of Wittgenstein’s account of the structure of reasons which can evade difficulties of the traditional scepticism/anti-scepticism debate.
Abstract: In his final notebooks, published as On Certainty, Wittgenstein offers a distinctive conception of the nature of reasons. Central to this conception is the idea that at the heart of our rational practices are essentially arational commitments. This proposal marks a powerful challenge to the standard picture of the structure of reasons. In particular, it has been thought that this account might offer us a resolution of the traditional scepticism/anti-scepticism debate. It is argued, however, that some standard ways of filling out the details of this proposal ultimately lead to an epistemology which is highly problematic. The goal here is to present a more compelling version of Wittgenstein’s account of the structure of reasons which can evade these difficulties.

107 citations


Journal ArticleDOI
01 Mar 2012-Synthese
TL;DR: It is argued that life on Earth today provides an empirically inadequate foundation for theorizing about life considered generally, and a strategy for procuring the needed additional examples of life without the guidance of a definition or theory of life is sketched.
Abstract: The question ‘what is life?’ has long been a source of philosophical debate and in recent years has taken on increasing scientific importance. The most popular approach among both philosophers and scientists for answering this question is to provide a “definition” of life. In this article I explore a variety of different definitional approaches, both traditional and non-traditional, that have been used to “define” life. I argue that all of them are deeply flawed. It is my contention that a scientifically compelling understanding of the nature of life presupposes an empirically adequate scientific theory (vs. definition) of life; as I argue, scientific theories are not the sort of thing that can be encapsulated in definitions. Unfortunately, as I also discuss, scientists are currently in no position to formulate even a tentative version of such a theory. Recent discoveries in biology and biochemistry have revealed that familiar Earth life represents a single example that may not be representative of life. If this is the case, life on Earth today provides an empirically inadequate foundation for theorizing about life considered generally. I sketch a strategy for procuring the needed additional examples of life without the guidance of a definition or theory of life, and close with an application to NASA’s fledgling search for extraterrestrial life.

87 citations


Journal ArticleDOI
27 Mar 2012-Synthese
TL;DR: By respecting the theoretical distinction between the objects of assertion and compositional values certain conflicts between compositionality and contextualism are avoided and the conflict between eternalism and the semantics of tense is investigated.
Abstract: This essay investigates whether the things we say are identical to the things our sentences mean. It is argued that these theoretical notions should be distinguished, since assertoric content does not respect the compositionality principle. As a paradigmatic example, Kaplan’s formal language LD is shown to exemplify a failure of compositionality. It is demonstrated that by respecting the theoretical distinction between the objects of assertion and compositional values certain conflicts between compositionality and contextualism are avoided. This includes the conflict between eternalism and the semantics of tense, the embedding problems for contextualism about epistemic modals and taste claims, and the conflict between direct reference and the semantics of bound pronouns (and monstrous operators). After presenting the theoretical picture which distinguishes assertoric content from compositional semantic value, some objections to the picture are addressed. In so doing, the objection from King (Philos Perspect 17(1):195–246, 2003) stemming from apparent complications with the interaction of temporal expressions and attitude reports is assessed and shown to be non-threatening.

83 citations


Journal ArticleDOI
01 Mar 2012-Synthese
TL;DR: A possible general principle that underlies those evolutionary transitions, which allow for the open-ended redefinition of autonomous systems: namely, the relative dynamic decoupling that must be articulated among distinct parts, modules or modes of operation in these systems.
Abstract: Our aim in the present paper is to approach the nature of life from the perspective of autonomy, showing that this perspective can be helpful for overcoming the traditional Cartesian gap between the physical and cognitive domains. We first argue that, although the phenomenon of life manifests itself as highly complex and multidimensional, requiring various levels of description, individual organisms constitute the core of this multifarious phenomenology. Thereafter, our discussion focuses on the nature of the organization of individual living entities, proposing autonomy as the main concept to grasp it. In the second part of the article we show how autonomy is also fundamental to explaining major evolutionary transitions, in an attempt to rethink evolution from the point of view of the organizational structure of the entities/organisms involved. This gives further support to the idea of autonomy as a key to understanding not only life in general but also the complex expressions of it that we observe on our planet. Finally, we suggest a possible general principle that underlies those evolutionary transitions, which allow for the open-ended redefinition of autonomous systems: namely, the relative dynamic decoupling that must be articulated among distinct parts, modules or modes of operation in these systems.

80 citations


Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: It is argued that the Bayesian approach is neither sufficient nor necessary for the rationality of beliefs and that there are many situations in which there is not sufficient information for an individual to generate a Bayesian prior.
Abstract: Economic theory reduces the concept of rationality to internal consistency. As far as beliefs are concerned, rationality is equated with having a prior belief over a “Grand State Space”, describing all possible sources of uncertainties. We argue that this notion is too weak in some senses and too strong in others. It is too weak because it does not distinguish between rational and irrational beliefs. Relatedly, the Bayesian approach, when applied to the Grand State Space, is inherently incapable of describing the formation of prior beliefs. On the other hand, this notion of rationality is too strong because there are many situations in which there is not sufficient information for an individual to generate a Bayesian prior. It follows that the Bayesian approach is neither sufficient nor necessary for the rationality of beliefs.

79 citations
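
A toy calculation (mine, not the authors') makes the "too strong" horn vivid: a Grand State Space over n elementary yes/no uncertainties contains 2^n states, each of which a Bayesian prior must weigh.

    # Number of states a "Grand State Space" prior must cover grows
    # exponentially with the number of binary uncertainties (toy illustration).
    for n in (10, 100, 300):
        print(n, 2 ** n)
    # 300 yes/no questions already yield more than 1e90 states -- far beyond
    # anything an agent could have information about.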


Journal ArticleDOI
07 Mar 2012-Synthese
TL;DR: It is argued that diagrammatic interpretations of Kant's theory of geometrical intuition can, at best, capture only part of what Kant’s conception involves and that, for example, they cannot explain why Kant takes geometric constructions in the style of Euclid to provide us with an a priori framework for physical space.
Abstract: I use recent work on Kant and diagrammatic reasoning to develop a reconsideration of central aspects of Kant’s philosophy of geometry and its relation to spatial intuition. In particular, I reconsider in this light the relations between geometrical concepts and their schemata, and the relationship between pure and empirical intuition. I argue that diagrammatic interpretations of Kant’s theory of geometrical intuition can, at best, capture only part of what Kant’s conception involves and that, for example, they cannot explain why Kant takes geometrical constructions in the style of Euclid to provide us with an a priori framework for physical space. I attempt, along the way, to shed new light on the relationship between Kant’s theory of space and the debate between Newton and Leibniz to which he was reacting, and also on the role of geometry and spatial intuition in the transcendental deduction of the categories.

75 citations


Journal ArticleDOI
01 Feb 2012-Synthese
TL;DR: A crucial improvement to the traditional view is proposed, relying on an intuitive and independently plausible metaphysical distinction pertaining to the manifestation of intellectual powers, which supplements the traditional components of justification, truth and belief.
Abstract: Is knowledge justified true belief? Most philosophers believe that the answer is clearly ‘no’, as demonstrated by Gettier cases. But Gettier cases don’t obviously refute the traditional view that knowledge is justified true belief (JTB). There are ways of resisting Gettier cases, at least one of which is partly successful. Nevertheless, when properly understood, Gettier cases point to a flaw in JTB, though it takes some work to appreciate just what it is. The nature of the flaw helps us better understand the nature of knowledge and epistemic justification. I propose a crucial improvement to the traditional view, relying on an intuitive and independently plausible metaphysical distinction pertaining to the manifestation of intellectual powers, which supplements the traditional components of justification, truth and belief.

74 citations


Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: The paper shows why and how an empirical study of fast-and-frugal heuristics can provide norms of good reasoning, and thus how rationality can be naturalized, and argues that in uncertain environments, more information and computation are not always better (the “less-can-be-more” doctrine).
Abstract: The paper shows why and how an empirical study of fast-and-frugal heuristics can provide norms of good reasoning, and thus how (and how far) rationality can be naturalized. We explain the heuristics that humans often rely on in solving problems, for example, choosing investment strategies or apartments, placing bets in sports, or making library searches. We then show that heuristics can lead to judgments that are as accurate as or even more accurate than strategies that use more information and computation, including optimization methods. A standard way to defend the use of heuristics is by reference to accuracy-effort trade-offs. We take a different route, emphasizing ecological rationality (the relationship between cognitive heuristics and environment), and argue that in uncertain environments, more information and computation are not always better (the “less-can-be-more” doctrine). The resulting naturalism about rationality is thus normative because it not only describes what heuristics people use, but also in which specific environments one should rely on a heuristic in order to make better inferences. While we desist from claiming that the scope of ecological rationality is unlimited, we think it is of wide practical use.

74 citations
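
As a concrete instance, here is a sketch (my own toy code; the cue names are hypothetical) of take-the-best, a paradigmatic fast-and-frugal heuristic: check cues in order of validity and let the first discriminating cue decide, ignoring all remaining information.

    # Take-the-best, sketched (toy version, not the authors' code).
    def take_the_best(obj_a, obj_b, cues_by_validity):
        """Infer which object scores higher; objects are dicts of binary cues."""
        for cue in cues_by_validity:       # most valid cue first
            va, vb = obj_a[cue], obj_b[cue]
            if va != vb:                   # first discriminating cue decides
                return "a" if va else "b"
        return "guess"                     # no cue discriminates

    # Which of two cities is larger? Hypothetical cue profiles.
    city_a = {"capital": False, "soccer_team": True,  "on_river": True}
    city_b = {"capital": False, "soccer_team": False, "on_river": True}
    print(take_the_best(city_a, city_b, ["capital", "soccer_team", "on_river"]))  # 'a'

Such lexicographic rules ignore most of the available information, yet in the right environments they match or beat information-hungry competitors, which is the ecological-rationality point.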


Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: It is maintained, contrary to Arntzenius, that an agent facing Egan's decisions can rationally choose actions that she knows she will later regret, and CDT gets Egan’s cases exactly right.
Abstract: Andy Egan has recently produced a set of alleged counterexamples to causal decision theory (CDT) in which agents are forced to decide among causally unratifiable options, thereby making choices they know they will regret. I show that, far from being counterexamples, CDT gets Egan’s cases exactly right. Egan thinks otherwise because he has misapplied CDT by requiring agents to make binding choices before they have processed all available information about the causal consequences of their acts. I elucidate CDT in a way that makes it clear where Egan goes wrong, and which explains why his examples pose no threat to the theory. My approach has similarities to a modification of CDT proposed by Frank Arntzenius, but it differs in the significance that it assigns to potential regrets. I maintain, contrary to Arntzenius, that an agent facing Egan’s decisions can rationally choose actions that she knows she will later regret. All rationality demands of agents is that they maximize unconditional causal expected utility from an epistemic perspective that accurately reflects all the available evidence about what their acts are likely to cause. This yields correct answers even in outlandish cases in which one is sure to regret whatever one does.
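
For reference, the quantity whose maximization is at issue can be written in the standard Lewis-style form (notation mine, not the article's):

    \[
    U(A) \;=\; \sum_{K} P(K)\, V(A \wedge K)
    \]

where the K are dependency hypotheses (complete specifications of what the agent's options would causally bring about), P is a credence function reflecting all available evidence, and V is utility. On the author's telling, the dispute with Egan concerns when P is evaluated, not the formula itself.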

Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: This paper briefly reviews studies that have documented the description–experience gap, offers several explanations for this gap, and discusses to what extent people’s decisions from experience are in conflict with benchmarks of rationality.
Abstract: Most investigations into how people make risky choices have employed a simple drosophila: monetary gambles involving stated outcomes and probabilities. People are asked to make decisions from description. When people decide whether to back up their computer hard drive, cross a busy street, or go out on a date, however, they do not enjoy the convenience of stated outcomes and probabilities. People make such decisions either in the void of ignorance or in the twilight of their own often limited experience of such real-world options. In the latter case, they make decisions from experience. Recent research has consistently documented that decisions from description and decisions from experience can lead to substantially different choices. Key in this description–experience gap is people’s treatment of rare events. In this paper, I briefly review studies that have documented the description–experience gap, offer several explanations for this gap, and discuss to what extent people’s decisions from experience are in conflict with benchmarks of rationality.
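
A small simulation (my illustration) exposes the sampling mechanism behind the gap: with modest samples, a rare event is often never encountered at all.

    import random

    random.seed(0)
    p_rare, n_draws, n_agents = 0.05, 10, 10_000

    # Fraction of small-sample explorers who never observe the rare outcome.
    never_saw_rare = sum(
        all(random.random() > p_rare for _ in range(n_draws))
        for _ in range(n_agents)
    )
    print(never_saw_rare / n_agents)  # ~0.60, close to (1 - 0.05)**10 = 0.599

A majority of such explorers behave as if the rare event had probability zero, one mechanism by which experienced and described probabilities come apart.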

Journal ArticleDOI
01 Apr 2012-Synthese
TL;DR: The logical structure underlying the conditions that trigger emotions is studied and then hierarchically organized, and the insights gained are used to guide a formalization of emotion triggers, which proceeds in three stages to provide different levels of commitment to formalisms.
Abstract: This paper formalizes part of a well-known psychological model of emotions. In particular, the logical structure underlying the conditions that trigger emotions is studied and then hierarchically organized. The insights gained therefrom are used to guide a formalization of emotion triggers, which proceeds in three stages. The first stage captures the conditions that trigger emotions in a semiformal way, i.e., without committing to an underlying formalism and semantics. The second stage captures the main psychological notions used in the emotion model in dynamic doxastic logic. The third stage introduces a BDI-based framework (belief-desire-intention) with achievement goals, which is used to firmly ground the preceding stages. The result is a formalization of emotion triggers for BDI agents with achievement goals. The idea of proceeding in these stages is to provide different levels of commitment to formalisms, so that it remains relatively easy to extend or replace the used formalisms without having to start from scratch. Finally, we show that the formalization renders properties of emotions that are in line with the psychological model on which it is based.
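
A rough sketch (mine, not the paper's formalism; the emotion names and trigger conditions are simplified stand-ins) of what a stage-one, semiformal trigger rule for a BDI agent might look like: emotions fire when beliefs, expectations, and goals line up in the right pattern.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        beliefs: set = field(default_factory=set)       # events believed actual
        expectations: set = field(default_factory=set)  # events believed prospective
        goals: set = field(default_factory=set)         # achievement goals

    def triggered_emotions(agent):
        emotions = []
        for g in sorted(agent.goals):
            if g in agent.beliefs:
                emotions.append(("joy", g))   # desired event believed actual
            elif g in agent.expectations:
                emotions.append(("hope", g))  # desired event merely prospective
        return emotions

    a = Agent(beliefs={"paper_accepted"}, expectations={"grant_funded"},
              goals={"grant_funded", "paper_accepted"})
    print(triggered_emotions(a))  # [('hope', 'grant_funded'), ('joy', 'paper_accepted')]

The later stages replace such ad hoc sets with dynamic doxastic logic and a BDI semantics, which is the point of staging the commitment to formalisms.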

Journal ArticleDOI
01 Aug 2012-Synthese
TL;DR: It is shown here that the apparent problem arises from an objectionable notion of derivability from assumptions in an axiomatic system, and that a suitably restricted necessitation rule permits a proof of the deduction theorem in its usual formulation.
Abstract: Various sources in the literature claim that the deduction theorem does not hold for normal modal or epistemic logic, whereas others present versions of the deduction theorem for several normal modal systems. It is shown here that the apparent problem arises from an objectionable notion of derivability from assumptions in an axiomatic system. When a traditional Hilbert-type system of axiomatic logic is generalized into a system for derivations from assumptions, the necessitation rule has to be modified in a way that restricts its use to cases in which the premiss does not depend on assumptions. This restriction is entirely analogous to the restriction of the rule of universal generalization of first-order logic. A necessitation rule with this restriction permits a proof of the deduction theorem in its usual formulation. Other suggestions presented in the literature to deal with the problem are reviewed, and the present solution is argued to be preferable to the other alternatives. A contraction- and cut-free sequent calculus equivalent to the Hilbert system for basic modal logic shows the standard failure argument untenable by proving the underivability of □A from A.
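
In outline, the failure argument the paper diagnoses runs as follows (standard notation):

    \[
    A \vdash \Box A
    \qquad\Longrightarrow\qquad
    \vdash A \rightarrow \Box A \quad \text{(by the deduction theorem)}
    \]

Since A → □A is not a theorem of normal modal logics, something must give. The paper's diagnosis is that the first step is illegitimate: necessitation was applied to a premiss depending on an assumption, exactly as universal generalization may not be applied to a variable occurring free in the assumptions.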

Journal ArticleDOI
04 Oct 2012-Synthese
TL;DR: This paper presents a novel approach of creating hybrid methods that incorporate features from psychological models in conjunction with machine learning in order to create significantly improved models for predicting people’s decisions.
Abstract: Creating agents that proficiently interact with people is critical for many applications. Towards creating these agents, models are needed that effectively predict people's decisions in a variety of problems. To date, two approaches have been suggested to generally describe people's decision behavior. One approach creates a-priori predictions about people's behavior, either based on theoretical rational behavior or based on psychological models, including bounded rationality. A second type of approach focuses on creating models based exclusively on observations of people's behavior. At the forefront of these types of methods are various machine learning algorithms. This paper explores how these two approaches can be compared and combined in different types of domains. In relatively simple domains, both psychological models and machine learning yield clear prediction models with nearly identical results. In more complex domains, the exact action predicted by psychological models is not even clear, and machine learning models are even less accurate. Nonetheless, we present a novel approach of creating hybrid methods that incorporate features from psychological models in conjunction with machine learning in order to create significantly improved models for predicting people's decisions.
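
A schematic sketch (my reconstruction of the general idea, not the authors' system; the aspiration-threshold rule and all numbers are hypothetical) of the hybrid move: feed a psychological model's prediction to a standard learner as an additional feature.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def psych_model(offer):
        # Hypothetical bounded-rationality rule: accept offers above an
        # aspiration threshold; stands in for whatever model fits the domain.
        return 1.0 if offer >= 0.4 else 0.0

    rng = np.random.default_rng(0)
    offers = rng.uniform(0, 1, 500)                                    # synthetic problems
    accepted = (offers + rng.normal(0, 0.15, 500) > 0.45).astype(int)  # noisy choices

    raw = offers.reshape(-1, 1)
    hybrid = np.column_stack([offers, [psych_model(o) for o in offers]])

    for name, X in [("raw only", raw), ("hybrid", hybrid)]:
        clf = LogisticRegression().fit(X[:400], accepted[:400])
        print(name, clf.score(X[400:], accepted[400:]))  # held-out accuracy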

Journal ArticleDOI
01 Mar 2012-Synthese
TL;DR: The standard philosophical notion of emergence posits the wrong dichotomies, confuses compositional physicalism with explanatory physicalism, and is unable to represent the type of dynamic processes that both generate emergent properties and express downward causation.
Abstract: Philosophical accounts of emergence have been explicated in terms of logical relationships between statements (derivation) or static properties (function and realization). Jaegwon Kim is a modern proponent. A property is emergent if it is not explainable by (or reducible to) the properties of lower level components. This approach, I will argue, is unable to make sense of the kinds of emergence that are widespread in scientific explanations of complex systems. The standard philosophical notion of emergence posits the wrong dichotomies, confuses compositional physicalism with explanatory physicalism, and is unable to represent the type of dynamic processes (self-organizing feedback) that both generate emergent properties and express downward causation.

Journal ArticleDOI
01 Jun 2012-Synthese
TL;DR: This work presents a streamlined axiom system of special relativity in first-order logic and “derives” an axiom system of general relativity in two natural steps, in order to make general relativity more accessible for the non-specialist.
Abstract: We present a streamlined axiom system of special relativity in first-order logic. From this axiom system we “derive” an axiom system of general relativity in two natural steps. We will also see how the axioms of special relativity transform into those of general relativity. This way we hope to make general relativity more accessible for the non-specialist.

Journal ArticleDOI
01 Jan 2012-Synthese
TL;DR: A formal measure of epistemic justification motivated by the dual goal of cognition, which is to increase true beliefs and reduce false beliefs, is described and an explanation of the conjunction fallacy is proposed.
Abstract: This paper describes a formal measure of epistemic justification motivated by the dual goal of cognition, which is to increase true beliefs and reduce false beliefs. From this perspective the degree of epistemic justification should not be the conditional probability of the proposition given the evidence, as it is commonly thought. It should be determined instead by the combination of the conditional probability and the prior probability. This is also true of the degree of incremental confirmation, and I argue that any measure of epistemic justification is also a measure of incremental confirmation. However, the degree of epistemic justification must meet an additional condition, and all known measures of incremental confirmation fail to meet it. I describe this additional condition as well as a measure that meets it. The paper then applies the measure to the conjunction fallacy and proposes an explanation of the fallacy.
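
A toy instance (my numbers; the log-ratio measure below is a familiar stand-in, not the measure the paper defends) of the explanatory idea: a conjunction can be less probable than its conjunct yet more strongly supported by the evidence.

    from math import log

    # H2 entails H1 (e.g., "teller and feminist" vs. "teller"), so its
    # probabilities are lower; but evidence E raises H2 by a larger factor.
    prior = {"H1": 0.10, "H2": 0.02}
    posterior = {"H1": 0.12, "H2": 0.08}

    for h in ("H1", "H2"):
        support = log(posterior[h] / prior[h])  # log-ratio confirmation
        print(h, f"P(h|E)={posterior[h]:.2f}", f"support={support:.2f}")
    # H1 has the higher posterior; H2 the higher support (log 4 > log 1.2).

If people report degree of support rather than probability, ranking the conjunction above its conjunct is predicted, which is the shape of the proposed explanation.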

Journal ArticleDOI
01 Mar 2012-Synthese
TL;DR: It is argued that emergent phenomena represent, in effect, a subset of a larger universe of cooperative, synergistic effects in the natural world.
Abstract: Despite its current popularity, “emergence” is a concept with a venerable history and an elusive, ambiguous standing in contemporary evolutionary theory. This paper briefly recounts the history of the term and details some of its current usages. Not only are there radically varying interpretations about how to define emergence but “reductionist” and “holistic” theorists hold very different views about the issue of causation. However, these two seemingly polar positions are not irreconcilable. Reductionism, or detailed analysis of the parts and their interactions, is essential for answering the “how” question in evolution—how does a complex living system work? But holism is equally necessary for answering the “why” question—why did a particular arrangement of parts evolve? In order to answer the “why” question, a broader, multi-leveled paradigm is required. The reductionist approach to explaining emergent complexity has entailed a search for underlying “laws of emergence.” In contrast, the “Synergism Hypothesis” focuses on the “economics”—the functional effects produced by emergent wholes and their selective consequences in evolutionary change. This paper also argues that emergent phenomena represent, in effect, a subset of a larger universe of cooperative, synergistic effects in the natural world.

Journal ArticleDOI
01 Apr 2012-Synthese
TL;DR: The thesis that an event is good or bad luck for an individual only if it is significant for that individual is explored, showing that it raises questions about interests, well-being, and the philosophical uses of luck.
Abstract: Recent work on the nature of luck widely endorses the thesis that an event is good or bad luck for an individual only if it is significant for that individual. In this paper, I explore this thesis, showing that it raises questions about interests, well-being, and the philosophical uses of luck. In Sect. 1, I examine several accounts of significance, due to Pritchard (2005), Coffman (2007), and Rescher (1995). Then in Sect. 2 I consider what some theorists want to ‘do’ with luck, taking important examples from epistemology (explaining Gettier-style examples) and political philosophy (offering a rationale for the just distribution of resources in society), while suggesting implications for significance. Drawing together lessons from Sects. 1 and 2, I develop a new account of significance in Sect. 3 before concluding with reflections on the debate in Sect. 4.

Journal ArticleDOI
01 May 2012-Synthese
TL;DR: An organized review of the literature on independence for sets of probability distributions is offered; new results on graphoid properties and on the justification of “strong independence” (using exchangeability) are presented; and the connection between Kyburg and Pittarelli's results and recent developments on the axiomatization of non-binary preferences is described.
Abstract: This paper analyzes concepts of independence and assumptions of convexity in the theory of sets of probability distributions. The starting point is Kyburg and Pittarelli’s discussion of “convex Bayesianism” (in particular their proposals concerning E-admissibility, independence, and convexity). The paper offers an organized review of the literature on independence for sets of probability distributions; new results on graphoid properties and on the justification of “strong independence” (using exchangeability) are presented. Finally, the connection between Kyburg and Pittarelli’s results and recent developments on the axiomatization of non-binary preferences, and its impact on “complete” independence, are described.

Journal ArticleDOI
01 Feb 2012-Synthese
TL;DR: The article argues that, for semantic information to be upgraded to knowledge, it is necessary and sufficient that it be embedded in a network of questions and answers that correctly accounts for it, and shows that an information flow network of type A fulfils such a requirement, by warranting that the erotetic deficit is correctly satisfied by the information flow of correct answers provided by an informational source.
Abstract: The article addresses the problem of how semantic information can be upgraded to knowledge. The introductory section explains the technical terminology and the relevant background. Section 2 argues that, for semantic information to be upgraded to knowledge, it is necessary and sufficient to be embedded in a network of questions and answers that correctly accounts for it. Section 3 shows that an information flow network of type A fulfils such a requirement, by warranting that the erotetic deficit, characterising the target semantic information t by default, is correctly satisfied by the information flow of correct answers provided by an informational source s. Section 4 illustrates some of the major advantages of such a Network Theory of Account (NTA) and clears the ground of a few potential difficulties. Section 5 clarifies why NTA and an informational analysis of knowledge, according to which knowledge is accounted semantic information, is not subject to Gettier-type counterexamples. A concluding section briefly summarises the results obtained.

Journal ArticleDOI
Matthias Steup
01 Sep 2012-Synthese
TL;DR: This paper begins with a critical examination of William Alston’s defense of involuntarism and then focuses on the question of whether belief is intentional.
Abstract: In this paper, I argue that the rejection of doxastic voluntarism is not as straightforward as its opponents take it to be. I begin with a critical examination of William Alston’s defense of involuntarism and then focus on the question of whether belief is intentional.

Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: It is argued that heuristics as algorithmic-level explanations of how cognizers compute f are actually dysfunctional, and computational-level theory revision is proposed as a principled and workable alternative.
Abstract: Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.

Journal ArticleDOI
24 May 2012-Synthese
TL;DR: It is argued that neural selection should be construed, by the selected effect theorist, as a distinct type of function-bestowing process in addition to natural selection.
Abstract: A common misunderstanding of the selected effects theory of function is that natural selection operating over an evolutionary time scale is the only function-bestowing process in the natural world. This construal of the selected effects theory conflicts with the existence and ubiquity of neurobiological functions that are evolutionarily novel, such as structures underlying reading ability. This conflict has suggested to some that, while the selected effects theory may be relevant to some areas of evolutionary biology, its relevance to neuroscience is marginal. This line of reasoning, however, neglects the fact that synapses, entire neurons, and potentially groups of neurons can undergo a type of selection analogous to natural selection operating over an evolutionary time scale. In the following, I argue that neural selection should be construed, by the selected effect theorist, as a distinct type of function-bestowing process in addition to natural selection. After explicating a generalized selected effects theory of function and distinguishing it from similar attempts to extend the selected effects theory, I do four things. First, I show how it allows one to identify neural selection as a distinct function-bestowing process, in contrast to other forms of neural structure formation such as neural construction. Second, I defend the view from one major criticism, and in so doing I clarify the content of the view. Third, I examine drug addiction to show the potential relevance of neural selection to neuroscientific and psychological research. Finally, I endorse a modest pluralism of function concepts within biology.

Journal ArticleDOI
01 Jun 2012-Synthese
TL;DR: It is proved that the Heyting algebra thus associated to A arises as a basis for the internal Gelfand spectrum (in the sense of Banaschewski–Mulvey) of the “Bohrification” of A, which is a commutative Rickart C*-algebra in the topos of functors from C(A) to the category of sets.
Abstract: Following Birkhoff and von Neumann, quantum logic has traditionally been based on the lattice of closed linear subspaces of some Hilbert space, or, more generally, on the lattice of projections in a von Neumann algebra A. Unfortunately, the logical interpretation of these lattices is impaired by their nondistributivity and by various other problems. We show that a possible resolution of these difficulties, suggested by the ideas of Bohr, emerges if instead of single projections one considers elementary propositions to be families of projections indexed by a partially ordered set C(A) of appropriate commutative subalgebras of A. In fact, to achieve both maximal generality and ease of use within topos theory, we assume that A is a so-called Rickart C*-algebra and that C(A) consists of all unital commutative Rickart C*-subalgebras of A. Such families of projections form a Heyting algebra in a natural way, so that the associated propositional logic is intuitionistic: distributivity is recovered at the expense of the law of the excluded middle. Subsequently, generalizing an earlier computation for n × n matrices, we prove that the Heyting algebra thus associated to A arises as a basis for the internal Gelfand spectrum (in the sense of Banaschewski–Mulvey) of the “Bohrification” A of A, which is a commutative Rickart C*-algebra in the topos of functors from C(A) to the category of sets. We explain the relationship of this
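
For concreteness, the smallest instance of the nondistributivity complaint (a standard example; the choice of vectors is mine): in the lattice of closed subspaces of C^2, let a = span(e1), b = span(e2), and c = span(e1 + e2). Then

    \[
    a \wedge (b \vee c) \;=\; a \wedge \mathbb{C}^2 \;=\; a,
    \qquad
    (a \wedge b) \vee (a \wedge c) \;=\; 0 \vee 0 \;=\; 0,
    \]

so the distributive law fails. Passing to families of projections indexed by commutative subalgebras trades this defect for intuitionistic logic: distributivity returns, excluded middle goes.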

Journal ArticleDOI
01 Jul 2012-Synthese
TL;DR: It is argued that indeterminate probabilities are not only rationally permissible for a Bayesian agent, but they may even be rationally required.
Abstract: We argue that indeterminate probabilities are not only rationally permissible for a Bayesian agent, but they may even be rationally required. Our first argument begins by assuming a version of interpretivism: your mental state is the set of probability and utility functions that rationalize your behavioral dispositions as well as possible. This set may consist of multiple probability functions. Then according to interpretivism, this makes it the case that your credal state is indeterminate. Our second argument begins with our describing a world that plausibly has indeterminate chances. Rationality requires a certain alignment of your credences with corresponding hypotheses about the chances. Thus, if you hypothesize the chances to be indeterminate, you will inherit their indeterminacy in your corresponding credences. Our third argument is motivated by a dilemma. Epistemic rationality requires you to stay open-minded about contingent matters about which your evidence has not definitively legislated. Practical rationality requires you to be able to act decisively at least sometimes. These requirements can conflict with each other, for thanks to your open-mindedness, some of your options may have undefined expected utility, and if you are choosing among them, decision theory has no advice to give you. Such an option is playing Nover and Hajek's Pasadena Game, and indeed any option for which there is a positive probability of playing the Pasadena Game. You can serve both masters, epistemic rationality and practical rationality, with an indeterminate credence to the prospect of playing the Pasadena game. You serve epistemic rationality by making your upper probability positive: it ensures that you are open-minded. You serve practical rationality by making your lower probability 0: it provides guidance to your decision-making. No sharp credence could do both.
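
To see why the Pasadena Game has undefined expected utility (the payoff schedule is Nover and Hajek's; the code is my illustration): the expectation's terms form the alternating harmonic series, which converges only conditionally, so rearranging the sum changes its value.

    # Pasadena game: with probability 2**-n (first heads on toss n) the payoff
    # is ((-1)**(n-1)) * 2**n / n, so the n-th expected-utility term is
    # ((-1)**(n-1)) / n -- the alternating harmonic series.
    natural = sum((-1) ** (k - 1) / k for k in range(1, 100_001))
    print(natural)  # ~0.6931 = ln 2

    # Same terms, rearranged (two positives per negative): different limit.
    pos = (1 / k for k in range(1, 10**7, 2))    # +1, +1/3, +1/5, ...
    neg = (-1 / k for k in range(2, 10**7, 2))   # -1/2, -1/4, -1/6, ...
    rearranged = []
    for _ in range(100_000):
        rearranged += [next(pos), next(pos), next(neg)]
    print(sum(rearranged))  # ~1.0397 = (3/2) ln 2

With no order-independent sum, there is no well-defined expectation, and any option with positive probability of the game inherits the trouble, which is what creates the dilemma the authors exploit.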

Journal ArticleDOI
01 Jun 2012-Synthese
TL;DR: In this paper, a graphical framework for Bayesian inference is introduced, which is sufficiently general to accommodate not only the standard case but also recent proposals for a theory of quantum Bayesian inference wherein one considers density operators rather than probability distributions as representative of degrees of belief.
Abstract: We introduce a graphical framework for Bayesian inference that is sufficiently general to accommodate not just the standard case but also recent proposals for a theory of quantum Bayesian inference wherein one considers density operators rather than probability distributions as representative of degrees of belief. The diagrammatic framework is stated in the graphical language of symmetric monoidal categories and of compact structures and Frobenius structures therein, in which Bayesian inversion boils down to transposition with respect to an appropriate compact structure. We characterize classical Bayesian inference in terms of a graphical property and demonstrate that our approach eliminates some purely conventional elements that appear in common representations thereof, such as whether degrees of belief are represented by probabilities or entropic quantities. We also introduce a quantum-like calculus wherein the Frobenius structure is noncommutative and show that it can accommodate Leifer’s calculus of ‘conditional density operators’. The notion of conditional independence is also generalized to our graphical setting and we make some preliminary connections to the theory of Bayesian networks. Finally, we demonstrate how to construct a graphical Bayesian calculus within any dagger compact category.
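
In the classical case the framework generalizes, Bayesian inversion is already transpose-shaped; a minimal numpy sketch (mine, with made-up numbers):

    import numpy as np

    prior = np.array([0.7, 0.3])            # p(x)
    channel = np.array([[0.9, 0.1],         # rows x, columns y: P(y|x)
                        [0.2, 0.8]])

    joint = channel * prior[:, None]                  # P(x, y)
    posterior = joint.T / joint.sum(axis=0)[:, None]  # P(x|y): transpose, renormalize
    print(posterior)                                  # rows y, columns x

The graphical calculus turns exactly this build-the-joint, flip, renormalize pattern into transposition with respect to a compact structure, independent of whether beliefs are coded as probabilities or density operators.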

Journal ArticleDOI
01 May 2012-Synthese
TL;DR: The geometrical/logical methods developed in the paper are applied to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
Abstract: We defend a set of acceptance rules that avoids the lottery paradox, that is closed under classical entailment, and that accepts uncertain propositions without ad hoc restrictions. We show that the rules we recommend provide a semantics that validates exactly Adams’ conditional logic and are exactly the rules that preserve a natural, logical structure over probabilistic credal states that we call probalogic. To motivate probalogic, we first expand classical logic to geo-logic, which fills the entire unit cube, and then we project the upper surfaces of the geo-logical cube onto the plane of probabilistic credal states by means of standard, linear perspective, which may be interpreted as an extension of the classical principle of indifference. Finally, we apply the geometrical/logical methods developed in the paper to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
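
The lottery paradox the rules are designed around, in toy form (my illustration): probability-threshold acceptance is not closed under classical entailment.

    n, threshold = 1000, 0.99

    p_each_loses = 1 - 1 / n              # each "ticket i loses": P = 0.999
    accept_each = p_each_loses >= threshold

    p_all_lose = 0.0                      # some ticket certainly wins
    accept_conjunction = p_all_lose >= threshold

    print(accept_each, accept_conjunction)  # True False
    # Every conjunct is accepted, yet their classical consequence ("no ticket
    # wins") is rejected: threshold acceptance violates closure.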

Journal ArticleDOI
01 Aug 2012-Synthese
TL;DR: Inference versus consequence, an invited lecture at the LOGICA 1997 conference at Castle Liblice, was part of a series of articles for which I did research during a Stockholm sabbatical in the autumn of 1995 and is republished here with only bibliographical changes and an afterword.
Abstract: Inference versus consequence, an invited lecture at the LOGICA 1997 conference at Castle Liblice, was part of a series of articles for which I did research during a Stockholm sabbatical in the autumn of 1995. The article seems to have been fairly effective in getting its point across and addresses a topic highly germane to the Uppsala workshop. Owing to its appearance in the LOGICA Yearbook 1997, Filosofia Publishers, Prague, 1998, it has been rather inaccessible. Accordingly it is republished here with only bibliographical changes and an afterword.