
Showing papers in "Synthese in 2009"


Journal ArticleDOI
01 Aug 2009-Synthese
TL;DR: It is argued that simulations fundamentally differ from experiments with regard to the background knowledge that is invoked to argue for the “external validity” of the investigation.
Abstract: Simulations (both digital and analog) and experiments share many features. But what essential features distinguish them? I discuss two proposals in the literature. On one proposal, experiments investigate nature directly, while simulations merely investigate models. On another proposal, simulations differ from experiments in that simulationists manipulate objects that bear only a formal (rather than material) similarity to the targets of their investigations. Both of these proposals are rejected. I argue that simulations fundamentally differ from experiments with regard to the background knowledge that is invoked to argue for the “external validity” of the investigation.

350 citations


Journal ArticleDOI
01 Aug 2009-Synthese
TL;DR: Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science.
Abstract: Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.

226 citations


Journal ArticleDOI
Wendy S. Parker1
01 Aug 2009-Synthese
TL;DR: The extent to which materiality (in a particular sense) is important when it comes to making justified inferences about target systems on the basis of experimental results is considered.
Abstract: A number of recent discussions comparing computer simulation and traditional experimentation have focused on the significance of “materiality.” I challenge several claims emerging from this work and suggest that computer simulation studies are material experiments in a straightforward sense. After discussing some of the implications of this material status for the epistemology of computer simulation, I consider the extent to which materiality (in a particular sense) is important when it comes to making justified inferences about target systems on the basis of experimental results.

210 citations


Journal ArticleDOI
01 Feb 2009-Synthese
TL;DR: It is shown how substance assumptions block genuine ontological emergence, especially the emergence of normativity, and how a process framework permits a thermodynamic-based account of normative emergence.
Abstract: A shift from a metaphysical framework of substance to one of process enables an integrated account of the emergence of normative phenomena. I show how substance assumptions block genuine ontological emergence, especially the emergence of normativity, and how a process framework permits a thermodynamic-based account of normative emergence. The focus is on two foundational forms of normativity, that of normative function and of representation as emergent in a particular kind of function. This process model of representation, called interactivism, compels changes in many related domains. The discussion ends with brief attention to three domains in which changes are induced by the representational model: perception, learning, and language.

180 citations


Journal ArticleDOI
25 Jun 2009-Synthese
TL;DR: It is argued that claims that computer simulations call into question philosophical understanding of scientific ontology, the epistemology and semantics of models and theories, and the relation between experimentation and theorising are overblown, and that simulations, far from demanding a new metaphysics, epistemology, semantics and methodology, raise few if any new philosophical problems.
Abstract: Computer simulations are an exciting tool that plays important roles in many scientific disciplines. This has attracted the attention of a number of philosophers of science. The main tenor in this literature is that computer simulations not only constitute interesting and powerful new science, but that they also raise a host of new philosophical issues. The protagonists in this debate claim no less than that simulations call into question our philosophical understanding of scientific ontology, the epistemology and semantics of models and theories, and the relation between experimentation and theorising, and submit that simulations demand a fundamentally new philosophy of science in many respects. The aim of this paper is to critically evaluate these claims. Our conclusion will be sober. We argue that these claims are overblown and that simulations, far from demanding a new metaphysics, epistemology, semantics and methodology, raise few if any new philosophical problems. The philosophical problems that do come up in connection with simulations are not specific to simulations and most of them are variants of problems that have been discussed in other contexts before.

135 citations


Journal ArticleDOI
24 Feb 2009-Synthese
TL;DR: It is argued that modal logics are particularly well adapted to representing agents’ mental attitudes and reasoning about them, and a specific modal logic, called the Logic of Emotions, is used to provide logical definitions of all but two of the 22 emotions in Ortony, Clore, and Collins’s theory.
Abstract: In this paper, we provide a logical formalization of the emotion triggering process and of its relationship with mental attitudes, as described in Ortony, Clore, and Collins’s theory. We argue that modal logics are particularly adapted to represent agents’ mental attitudes and to reason about them, and use a specific modal logic that we call Logic of Emotions in order to provide logical definitions of all but two of their 22 emotions. While these definitions may be subject to debate, we show that they allow us to reason about emotions and to draw interesting conclusions from the theory.
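
To convey the style of definition at stake, here is a simplified sketch of OCC-style clauses built from belief (Bel), probabilistic expectation (Prob) and desirability (Des) operators for an agent i. These clauses are illustrative reconstructions in the spirit of such formalizations, not necessarily the authors’ exact definitions.

```latex
% Illustrative OCC-style emotion clauses (simplified; not the paper's exact definitions)
\mathrm{Joy}_i\,\varphi      \;\equiv\; \mathrm{Bel}_i\,\varphi  \land \mathrm{Des}_i\,\varphi
\mathrm{Distress}_i\,\varphi \;\equiv\; \mathrm{Bel}_i\,\varphi  \land \mathrm{Des}_i\,\lnot\varphi
\mathrm{Hope}_i\,\varphi     \;\equiv\; \mathrm{Prob}_i\,\varphi \land \mathrm{Des}_i\,\varphi
\mathrm{Fear}_i\,\varphi     \;\equiv\; \mathrm{Prob}_i\,\varphi \land \mathrm{Des}_i\,\lnot\varphi
```

Once emotions are defined in this way, reasoning about them reduces to deduction in the underlying modal logic, which is what allows conclusions to be drawn from the theory.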

123 citations


Journal ArticleDOI
01 Jul 2009-Synthese
TL;DR: It is argued that the assumption that concepts are a single, uniform kind of mental representation is responsible for existing theories’ shortcomings; a pluralist theory of concepts that rejects this assumption is outlined, and it is argued that endorsing pluralism does not lead to eliminativism about concepts as an object of scientific interest.
Abstract: Traditionally, theories of concepts in psychology assume that concepts are a single, uniform kind of mental representation. But no single kind of representation can explain all of the empirical data for which concepts are responsible. I argue that the assumption that concepts are uniformly the same kind of mental structure is responsible for these theories’ shortcomings, and outline a pluralist theory of concepts that rejects this assumption. On pluralism, concepts should be thought of as being constituted by multiple representational kinds, with the particular kind of concept used on an occasion being determined by the context. I argue that endorsing pluralism does not lead to eliminativism about concepts as an object of scientific interest.

122 citations


Journal ArticleDOI
01 Sep 2009-Synthese
TL;DR: This paper characterizes three types of tradeoffs theorists may confront, uses these characterizations to examine the relationships between parameter precision and two types of generality, shows that several of these relationships exhibit tradeoffs, and discusses what consequences those tradeoffs have for theoretical practice.
Abstract: Despite their best efforts, scientists may be unable to construct models that simultaneously exemplify every theoretical virtue. One explanation for this is the existence of tradeoffs: relationships of attenuation that constrain the extent to which models can have such desirable qualities. In this paper, we characterize three types of tradeoffs theorists may confront. These characterizations are then used to examine the relationships between parameter precision and two types of generality. We show that several of these relationships exhibit tradeoffs and discuss what consequences those tradeoffs have for theoretical practice.

120 citations


Journal ArticleDOI
01 Jan 2009-Synthese

114 citations


Journal ArticleDOI
01 Apr 2009-Synthese
TL;DR: This paper investigates those areas of the neuroscience of learning and memory from which the examples used to substantiate these models are culled, and argues that the multiplicity of experimental protocols used in these research areas presents specific challenges for both models.
Abstract: Descriptive accounts of the nature of explanation in neuroscience and the global goals of such explanation have recently proliferated in the philosophy of neuroscience (e.g., Bechtel, Mental mechanisms: Philosophical perspectives on cognitive neuroscience. New York: Lawrence Erlbaum, 2007; Bickle, Philosophy and neuroscience: A ruthlessly reductive account. Dordrecht: Kluwer Academic Publishing, 2003; Bickle, Synthese, 151, 411–434, 2006; Craver, Explaining the brain: Mechanisms and the mosaic unity of neuroscience. Oxford: Oxford University Press, 2007) and with them new understandings of the experimental practices of neuroscientists have emerged. In this paper, I consider two models of such practices; one that takes them to be reductive; another that takes them to be integrative. I investigate those areas of the neuroscience of learning and memory from which the examples used to substantiate these models are culled, and argue that the multiplicity of experimental protocols used in these research areas presents specific challenges for both models. In my view, these challenges have been overlooked largely because philosophers have hitherto failed to pay sufficient attention to fundamental features of experimental practice. I demonstrate that when we do pay attention to such features, evidence for reduction and integrative unity in neuroscience is simply not borne out. I end by suggesting some new directions for the philosophy of neuroscience that pertain to taking a closer look at the nature of neuroscientific experiments.

113 citations


Journal ArticleDOI
01 Aug 2009-Synthese
TL;DR: It is argued that the continuum idealizations are explanatorily ineliminable and that a full understanding of certain physical phenomena cannot be obtained through completely detailed, nonidealized representations.
Abstract: This paper examines the role of mathematical idealization in describing and explaining various features of the world. It examines two cases: first, briefly, the modeling of shock formation using the idealization of the continuum. Second, and in more detail, the breaking of droplets from the points of view of both analytic fluid mechanics and molecular dynamical simulations at the nano-level. It argues that the continuum idealizations are explanatorily ineliminable and that a full understanding of certain physical phenomena cannot be obtained through completely detailed, nonidealized representations.
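
For the shock-formation case, a standard textbook illustration of how the continuum idealization itself predicts singular behaviour is the inviscid Burgers equation (this is a generic example, not necessarily the one analyzed in the paper):

```latex
% Inviscid Burgers equation with smooth initial data u(x,0) = u_0(x):
\partial_t u + u\,\partial_x u = 0
% Characteristics x(t) = x_0 + u_0(x_0)\,t carry the constant value u_0(x_0).
% If u_0' takes negative values, characteristics cross and a shock (a jump
% discontinuity in u) forms at the finite time
t^{*} = -\,\frac{1}{\min_{x_0} u_0'(x_0)}
```

The discontinuity is a feature of the idealized continuum description itself, which is the kind of explanatorily ineliminable idealization the paper is concerned with.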

Journal ArticleDOI
01 Mar 2009-Synthese
TL;DR: It is shown how Hodges’ semantics can be seen as a special case of a general construction, which provides a context for a useful completeness theorem with respect to a wider class of models; a full abstraction result in the style of Hodges is also proved, in which the intuitionistic implication plays a very natural role.
Abstract: We take a fresh look at the logics of informational dependence and independence of Hintikka and Sandu and Väänänen, and their compositional semantics due to Hodges. We show how Hodges’ semantics can be seen as a special case of a general construction, which provides a context for a useful completeness theorem with respect to a wider class of models. We shed some new light on each aspect of the logic. We show that the natural propositional logic carried by the semantics is the logic of Bunched Implications due to Pym and O’Hearn, which combines intuitionistic and multiplicative connectives. This introduces several new connectives not previously considered in logics of informational dependence, but which we show play a very natural role, most notably intuitionistic implication. As regards the quantifiers, we show that their interpretation in the Hodges semantics is forced, in that they are the image under the general construction of the usual Tarski semantics; this implies that they are adjoints to substitution, and hence uniquely determined. As for the dependence predicate, we show that this is definable from a simpler predicate, of constancy or dependence on nothing. This makes essential use of the intuitionistic implication. The Armstrong axioms for functional dependence are then recovered as a standard set of axioms for intuitionistic implication. We also prove a full abstraction result in the style of Hodges, in which the intuitionistic implication plays a very natural role.
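
For readers unfamiliar with the setting, the team semantics for constancy and dependence can be sketched as follows; the reduction in the last line paraphrases the definability claim in the abstract (the notation is a standard presentation, not necessarily the paper’s own):

```latex
% A team X is a set of assignments; M \models_X \phi is satisfaction by the team.
M \models_X {=}(y)    \iff \forall s, s' \in X:\; s(y) = s'(y)                               % constancy
M \models_X {=}(x, y) \iff \forall s, s' \in X:\; s(x) = s'(x) \;\Rightarrow\; s(y) = s'(y)  % dependence
% With the intuitionistic implication (evaluated over subteams), dependence
% reduces to constancy in the simplest case:
{=}(x, y) \;\equiv\; {=}(x) \to {=}(y)
```

Under this reading, the Armstrong axioms for functional dependence come out as familiar facts about intuitionistic implication, as the abstract notes.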

Journal ArticleDOI
01 Jan 2009-Synthese
TL;DR: It is argued that the correct interpretation falls out naturally from a relativist theory, but requires special stipulation in a theory which appeals instead to the use of hidden indexicals; and that a hidden indexical analysis presents problems for contemporary syntactic theory.
Abstract: Recent arguments for relativist semantic theories have centered on the phenomenon of “faultless disagreement.” This paper offers independent motivation for such theories, based on the interpretation of predicates of personal taste in certain attitude contexts and presuppositional constructions. It is argued that the correct interpretation falls out naturally from a relativist theory, but requires special stipulation in a theory which appeals instead to the use of hidden indexicals; and that a hidden indexical analysis presents problems for contemporary syntactic theory.

Journal ArticleDOI
01 Aug 2009-Synthese
TL;DR: It is claimed that physicality is not necessary for computer simulations' representational and predictive capacities and that the explanation of why computer simulations generate desired information about their target system is only to be found in the detailed analysis of their semantic levels.
Abstract: Whereas computer simulations involve no direct physical interaction between the machine they are run on and the physical systems they are used to investigate, they are often used as experiments and yield data about these systems. It is commonly argued that they do so because they are implemented on physical machines. We claim that physicality is not necessary for their representational and predictive capacities and that the explanation of why computer simulations generate desired information about their target system is only to be found in the detailed analysis of their semantic levels. We provide such an analysis and we determine the actual consequences of physical implementation for simulations.

Journal ArticleDOI
18 May 2009-Synthese
TL;DR: The “dynamic” nature of the proposed concept of rationality explains why the epistemic condition for backward induction given here avoids the apparent circularity of the “backward induction paradox”: it is consistent to (continue to) believe in a player’s rationality after updating with his irrationality.
Abstract: We formalise a notion of dynamic rationality in terms of a logic of conditional beliefs on (doxastic) plausibility models. Similarly to other epistemic statements (e.g. negations of Moore sentences and of Muddy Children announcements), dynamic rationality changes its meaning after every act of learning, and it may become true after players learn it is false. Applying this to extensive games, we “simulate” the play of a game as a succession of dynamic updates of the original plausibility model: the epistemic situation when a given node is reached can be thought of as the result of a joint act of learning (via public announcements) that the node is reached. We then use the notion of “stable belief”, i.e. belief that is preserved during the play of the game, in order to give an epistemic condition for backward induction: rationality and common knowledge of stable belief in rationality. This condition is weaker than Aumann’s and compatible with the implicit assumptions (the “epistemic openness of the future”) underlying Stalnaker’s criticism of Aumann’s proof. The “dynamic” nature of our concept of rationality explains why our condition avoids the apparent circularity of the “backward induction paradox”: it is consistent to (continue to) believe in a player’s rationality after updating with his irrationality.
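
As a reminder of the underlying machinery, conditional belief on plausibility models is standardly evaluated at the most plausible worlds satisfying the condition (this is a generic presentation of the framework, not the paper’s specific contribution):

```latex
% Each agent i has an epistemic range Epi_i(w) and a plausibility pre-order <=_i.
w \models B_i^{\varphi}\,\psi \;\iff\;
  \min_{\leq_i}\bigl(\, [\![\varphi]\!] \cap \mathrm{Epi}_i(w) \,\bigr) \subseteq [\![\psi]\!]
% A public announcement of \varphi deletes the non-\varphi worlds; this is how
% "the node is reached" can be modelled as a joint act of learning.
```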

Journal ArticleDOI
01 Mar 2009-Synthese
TL;DR: It is argued that van Fraassen’s argument is not so misguided after all and that it causes more trouble for compatibilists than is typically thought; if Bayesianism and IBE are to be fit together, a strongly objective Bayesianism is the preferred option.
Abstract: Inference to the Best Explanation (IBE) and Bayesianism are our two most prominent theories of scientific inference. Are they compatible? Van Fraassen famously argued that they are not, concluding that IBE must be wrong since Bayesianism is right. Writers since then, from both the Bayesian and explanationist camps, have usually considered van Fraassen’s argument to be misguided, and have plumped for the view that Bayesianism and IBE are actually compatible. I argue that van Fraassen’s argument is actually not so misguided, and that it causes more trouble for compatibilists than is typically thought. Bayesianism in its dominant, subjectivist form, can only be made compatible with IBE if IBE is made subservient to conditionalization in a way that robs IBE of much of its substance and interest. If Bayesianism and IBE are to be fit together, I argue, a strongly objective Bayesianism is the preferred option. I go on to sketch this objectivist, IBE-based Bayesianism, and offer some preliminary suggestions for its development.
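
The tension can be displayed with the relevant update rules (a schematic reconstruction of van Fraassen’s worry, not the paper’s own formalism; the bonus term ε is illustrative):

```latex
% Strict conditionalization on new evidence E:
P_{\mathrm{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
% An IBE rule that awards the best explanation H^{*} an extra boost,
P_{\mathrm{new}}(H^{*}) \;=\; P(H^{*} \mid E) + \epsilon \quad (\epsilon > 0),
% departs from conditionalization and is diachronically incoherent by Bayesian
% lights; hence the dilemma the abstract describes: either make IBE subservient
% to conditionalization or move to a more objective Bayesianism.
```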

Journal ArticleDOI
01 Jan 2009-Synthese
TL;DR: The aim of this paper is to examine the kind of evidence that might be adduced in support of relativist semantics of a kind that have recently been proposed for predicates of personal taste, for epistemic modals, for knowledge attributions and for other cases.
Abstract: The aim of this paper is to examine the kind of evidence that might be adduced in support of relativist semantics of a kind that have recently been proposed for predicates of personal taste, for epistemic modals, for knowledge attributions and for other cases. I shall concentrate on the case of taste predicates, but what I have to say is easily transposed to the other cases just mentioned. I shall begin by considering in general the question of what kind of evidence can be offered in favour of some semantic theory or framework of semantic theorizing. In other words, I shall begin with the difficult question of the empirical significance of semantic theorizing. In Sect. 2, I outline a relativist semantic theory, and in Sect. 3, I review four types of evidence that might be offered in favour of a relativistic framework. I show that the evidence is not conclusive because a sophisticated form of contextualism (or indexical relativism) can stand up to the evidence. However, the evidence can be taken to support the view that either relativism or the sophisticated form of contextualism is correct.
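
The two shapes of theory left standing can be sketched schematically (simplified truth conditions in the style of the relativism literature, not the author’s own formulations):

```latex
% Sophisticated contextualism / hidden indexical: the standard is fixed by the
% context of use c_u:
[\![ x \text{ is tasty} ]\!]^{c_u} = \mathrm{true}
  \iff x \text{ is pleasing by the taste standard salient in } c_u
% Relativism: truth is additionally relative to a context of assessment c_a:
[\![ x \text{ is tasty} ]\!]^{c_u,\,c_a} = \mathrm{true}
  \iff x \text{ is pleasing by the taste standard of the assessor in } c_a
```

On the paper’s assessment, the evidence reviewed favours this pair of theories but does not decide between them.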

Journal ArticleDOI
01 May 2009-Synthese
TL;DR: The paper argues that digital ontology (the ultimate nature of reality is digital, and the universe is a computational system equivalent to a Turing Machine) should be carefully distinguished from informational ontology (the ultimate nature of reality is structural), in order to abandon the former and retain only the latter as a promising line of research.
Abstract: The paper argues that digital ontology (the ultimate nature of reality is digital, and the universe is a computational system equivalent to a Turing Machine) should be carefully distinguished from informational ontology (the ultimate nature of reality is structural), in order to abandon the former and retain only the latter as a promising line of research. Digital vs. analogue is a Boolean dichotomy typical of our computational paradigm, but digital and analogue are only “modes of presentation” of Being (to paraphrase Kant), that is, ways in which reality is experienced or conceptualised by an epistemic agent at a given level of abstraction. A preferable alternative is provided by an informational approach to structural realism, according to which knowledge of the world is knowledge of its structures. The most reasonable ontological commitment turns out to be in favour of an interpretation of reality as the totality of structures dynamically interacting with each other. The paper is the first part (the pars destruens) of a two-part piece of research. The pars construens, entitled “A Defence of Informational Structural Realism”, is developed in a separate article, also published in this journal.

Journal ArticleDOI
Johanna Seibt1
01 Feb 2009-Synthese
TL;DR: The paper offers a brief introduction to GPT in order to provide ontological foundations for research programs such as interactivism that centrally rely on the notions of ‘process,’ ‘interaction,’ and ‘emergence.’
Abstract: General Process Theory (GPT) is a new (non-Whiteheadian) process ontology. According to GPT the domains of scientific inquiry and everyday practice consist of configurations of ‘goings-on’ or ‘dynamics’ that can be technically defined as concrete, dynamic, non-particular individuals called general processes. The paper offers a brief introduction to GPT in order to provide ontological foundations for research programs such as interactivism that centrally rely on the notions of ‘process,’ ‘interaction,’ and ‘emergence.’ I begin with an analysis of our common sense concept of activities, which plays a crucial heuristic role in the development of the notion of a general process. General processes are not individuated in terms of their location but in terms of ‘what they do,’ i.e., in terms of their dynamic relationships in the basic sense of one process being part of another. The formal framework of GPT is thus an extensional mereology, albeit a non-classical theory with a non-transitive part-relation. After a brief sketch of basic notions and strategies of the GPT-framework I show how the latter may be applied to distinguish between causal, mechanistic, functional, self-maintaining, and recursively self-maintaining interactions, all of which involve ‘emergent phenomena’ in various senses of the term.
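
For orientation, the classical extensional mereology that GPT modifies treats parthood as a partial order; the transitivity clause below is the one the abstract says GPT gives up (a textbook reconstruction, not Seibt’s own axiomatization):

```latex
% Classical parthood P(x, y): "x is part of y"
\forall x\; P(x, x)                                                            % reflexivity
\forall x\,\forall y\;\bigl(P(x, y) \land P(y, x) \to x = y\bigr)              % antisymmetry
\forall x\,\forall y\,\forall z\;\bigl(P(x, y) \land P(y, z) \to P(x, z)\bigr) % transitivity -- rejected in GPT
```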

Journal ArticleDOI
01 Aug 2009-Synthese
TL;DR: The paper investigates whether artificial societies provide potential explanations and shows that these potential explanations, if they contribute to our understanding, differ considerably from potential causal explanations.
Abstract: It is often claimed that artificial society simulations contribute to the explanation of social phenomena. Using a particular example, this paper argues that artificial societies often cannot provide full explanations, because their models are not or cannot be validated. Despite that, many feel that such simulations somehow contribute to our understanding. This paper tries to clarify this intuition by investigating whether artificial societies provide potential explanations. It is shown that these potential explanations, if they contribute to our understanding, considerably differ from potential causal explanations. Instead of possible causal histories, simulations offer possible functional analyses of the explanandum. The paper discusses how these two kinds of explanatory strategies differ, and how potential functional explanations can be appraised.

Journal ArticleDOI
01 Mar 2009-Synthese
TL;DR: This article proposes a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic, and argues that this hierarchy can be used to represent increasing levels of “depth” or “informativeness” of Boolean reasoning.
Abstract: Deductive inference is usually regarded as being “tautological” or “analytical”: the information conveyed by the conclusion is contained in the information conveyed by the premises. This idea, however, clashes with the undecidability of first-order logic and with the (likely) intractability of Boolean logic. In this article, we address the problem both from the semantic and the proof-theoretical point of view. We propose a hierarchy of propositional logics that are all tractable (i.e. decidable in polynomial time), although by means of growing computational resources, and converge towards classical propositional logic. The underlying claim is that this hierarchy can be used to represent increasing levels of “depth” or “informativeness” of Boolean reasoning. Special attention is paid to the most basic logic in this hierarchy, the pure “intelim logic”, which satisfies all the requirements of a natural deduction system (allowing both introduction and elimination rules for each logical operator) while admitting of a feasible (quadratic) decision procedure. We argue that this logic is “analytic” in a particularly strict sense, in that it rules out any use of “virtual information”, which is chiefly responsible for the combinatorial explosion of standard classical systems. As a result, analyticity and tractability are reconciled and growing degrees of computational complexity are associated with the depth at which the use of virtual information is allowed.
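
To convey the flavour of inference without “virtual information”, here is a toy forward-chaining sketch: it saturates a set of premises under a handful of elimination-style rules and never reasons by cases. It illustrates the general idea only; it is not the authors’ intelim system or its quadratic decision procedure, and the formula encoding and rule set are invented for the example.

```python
# Toy sketch: derive consequences by applying a few elimination-style rules to
# formulas already at hand, with no case splitting ("virtual information").
# Formulas: atoms are strings; compound formulas are tuples such as
# ('and', A, B), ('or', A, B), ('imp', A, B), ('not', A).

def neg(f):
    """Complement of f, collapsing a double negation."""
    return f[1] if isinstance(f, tuple) and f[0] == 'not' else ('not', f)

def step(known):
    """Apply the rules once and return the newly derived formulas."""
    new = set()
    for f in known:
        if not isinstance(f, tuple):
            continue
        if f[0] == 'and':                                    # A & B       =>  A, B
            new.update({f[1], f[2]})
        elif f[0] == 'not' and isinstance(f[1], tuple) and f[1][0] == 'not':
            new.add(f[1][1])                                  # ~~A         =>  A
        elif f[0] == 'imp' and f[1] in known:                 # A -> B, A   =>  B
            new.add(f[2])
        elif f[0] == 'or':                                    # A v B, ~A   =>  B
            if neg(f[1]) in known:
                new.add(f[2])
            if neg(f[2]) in known:
                new.add(f[1])
    return new - known

def derivable(premises, conclusion):
    """True if the conclusion (or an outright contradiction) is reached."""
    known = set(premises)
    while True:
        new = step(known)
        if not new:
            break
        known |= new
    inconsistent = any(neg(f) in known for f in known)
    return conclusion in known or inconsistent

# From p, p -> (q & r), r -> s the chain reaches s without any case split ...
print(derivable({'p', ('imp', 'p', ('and', 'q', 'r')), ('imp', 'r', 's')}, 's'))  # True
# ... but s is NOT reached from q v s together with q -> s, although it follows
# classically: getting it would require reasoning by cases.
print(derivable({('or', 'q', 's'), ('imp', 'q', 's')}, 's'))                      # False
```

The second call is the point: s follows classically from q ∨ s and q → s, but only by reasoning through the two cases; the depth-0 style of inference sketched here cannot reach it, and that use of merely hypothesized case information is the “virtual information” the most basic logic in the hierarchy excludes.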

Journal ArticleDOI
01 Jan 2009-Synthese
TL;DR: A view on indicative conditionals, called “indexical relativism”, is set out and defended: which proposition is (semantically) expressed by an utterance of a conditional is a function of (among other things) the speaker’s context and the assessor’s context.
Abstract: I set out and defend a view on indicative conditionals that I call “indexical relativism”. The core of the view is that which proposition is (semantically) expressed by an utterance of a conditional is a function of (among other things) the speaker’s context and the assessor’s context. This implies a kind of relativism, namely that a single utterance may be correctly assessed as true by one assessor and false by another.

Journal ArticleDOI
01 Jul 2009-Synthese
TL;DR: The verdict on the question of whether it is possible to operate a time machine by manipulating matter and energy so as to manufacture closed timelike curves is that no result of sufficient generality to underwrite a confident “yes” has been proven.
Abstract: We address the question of whether it is possible to operate a time machine by manipulating matter and energy so as to manufacture closed timelike curves. This question has received a great deal of attention in the physics literature, with attempts to prove no-go theorems based on classical general relativity and various hybrid theories serving as steps along the way towards quantum gravity. Despite the effort put into these no-go theorems, there is no widely accepted definition of a time machine. We explain the conundrum that must be faced in providing a satisfactory definition and propose a resolution. Roughly, we require that all extensions of the time machine region contain closed timelike curves; the actions of the time machine operator are then sufficiently “potent” to guarantee that closed timelike curves appear. We then review no-go theorems based on classical general relativity, semi-classical quantum gravity, quantum field theory on curved spacetime, and Euclidean quantum gravity. Our verdict on the question of our title is that no result of sufficient generality to underwrite a confident “yes” has been proven. Our review of the no-go results does, however, highlight several foundational problems at the intersection of general relativity and quantum physics that lend substance to the search for an answer.

Journal ArticleDOI
01 Jul 2009-Synthese
TL;DR: This paper defends the theory that knowledge is credit-worthy true belief against a family of objections by drawing a distinction between credit as praiseworthiness and credit as attributability.
Abstract: This paper defends the theory that knowledge is credit-worthy true belief against a family of objections, two instances of which were leveled against it in a recent paper by Jennifer Lackey. Lackey argues that both innate knowledge (if there is any) and testimonial knowledge are too easily come by for it to be plausible that the knower deserves credit for it. If this is correct, then knowledge would appear not to be a matter of credit for true belief. I will attempt to neutralize these objections by drawing a distinction between credit as praiseworthiness and credit as attributability.

Journal ArticleDOI
Brian Epstein1
01 Jan 2009-Synthese
TL;DR: It is argued that ontological individualism is false and that, even when individualistic facts are expanded to include people’s local environments and practices, those facts still underdetermine the social facts that obtain; this has implications for explanation as well as ontology.
Abstract: The thesis of methodological individualism in social science is commonly divided into two different claims—explanatory individualism and ontological individualism. Ontological individualism is the thesis that facts about individuals exhaustively determine social facts. Initially taken to be a claim about the identity of groups with sets of individuals or their properties, ontological individualism has more recently been understood as a global supervenience claim. While explanatory individualism has remained controversial, ontological individualism thus understood is almost universally accepted. In this paper I argue that ontological individualism is false. Only if the thesis is weakened to the point that it is equivalent to physicalism can it be true, but then it fails to be a thesis about the determination of social facts by facts about individual persons. Even when individualistic facts are expanded to include people’s local environments and practices, I shall argue, those still underdetermine the social facts that obtain. If true, this has implications for explanation as well as ontology. I first consider arguments against the local supervenience of social facts on facts about individuals, correcting some flaws in existing arguments and affirming that local supervenience fails for a broad set of social properties. I subsequently apply a similar approach to defeat a particularly weak form of global supervenience, and consider potential responses. Finally, I explore why it is that people have taken ontological individualism to be true.
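
The supervenience claim at issue can be stated schematically (one standard way of putting global supervenience, not necessarily Epstein’s exact formulation):

```latex
% Social facts globally supervene on individualistic facts iff no two possible
% worlds differ socially without differing individualistically:
\forall w\,\forall w'\;\bigl(\, w \approx_{\mathrm{ind}} w' \;\to\; w \approx_{\mathrm{soc}} w' \,\bigr)
% where w \approx_{ind} w' (resp. \approx_{soc}) says that w and w' are
% indiscernible with respect to individualistic (resp. social) facts.
```

The argument summarized above proceeds by exhibiting worlds alike in the individualistic respects (even enriched with local environments and practices) that nonetheless differ in their social facts, which would falsify this conditional.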

Journal ArticleDOI
01 Jun 2009-Synthese
TL;DR: It is argued that traditional engineering practices such as safety factors and multiple safety barriers avoid this fallacy and that they therefore manage uncertainty better than probabilistic risk analysis (PRA).
Abstract: Clear-cut cases of decision-making under risk (known probabilities) are unusual in real life. The gambler’s decisions at the roulette table are as close as we can get to this type of decision-making. In contrast, decision-making under uncertainty (unknown probabilities) can be exemplified by a decision whether to enter a jungle that may contain unknown dangers. Life is usually more like an expedition into an unknown jungle than a visit to the casino. Nevertheless, it is common in decision-supporting disciplines to proceed as if reasonably reliable probability estimates were available for all possible outcomes, i.e. as if the prevailing epistemic conditions were analogous to those of gambling at the roulette table. This mistake can be called the tuxedo fallacy. It is argued that traditional engineering practices such as safety factors and multiple safety barriers avoid this fallacy and that they therefore manage uncertainty better than probabilistic risk analysis (PRA). PRA is a useful tool, but it must be supplemented with other methods in order not to limit the analysis to dangers that can be assigned meaningful probability estimates.
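
For readers unfamiliar with the engineering practice mentioned, the arithmetic of a safety factor is simple (a generic textbook illustration, not an example from the paper):

```latex
% Safety factor: ratio of a design's capacity (resistance) R to the maximum
% load L it is expected to bear:
\mathrm{SF} = \frac{R}{L}
% e.g. a component expected to carry at most 10\,\mathrm{kN} but dimensioned to
% withstand 30\,\mathrm{kN} has \mathrm{SF} = 3; the margin is meant to absorb
% uncertainties (material flaws, unforeseen load cases) for which no meaningful
% probability estimate may be available.
```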

Journal ArticleDOI
01 Sep 2009-Synthese
TL;DR: A third view is defended that explains certain relevant data from neuroscience and experimental psychology by positing the double dissociation of knowledge-that and knowledge-how, and that is also able to do explanatory work elsewhere.
Abstract: In this article I have two primary goals. First, I present two recent views on the distinction between knowledge-that and knowledge-how (Stanley and Williamson, The Journal of Philosophy 98(8):411–444, 2001; Hetherington, Epistemology futures, 2006). I contend that neither of these provides conclusive arguments against the distinction. Second, I discuss studies from neuroscience and experimental psychology that relate to this distinction. Having examined these studies, I then defend a third view that explains certain relevant data from these studies by positing the double dissociation of knowledge-that and knowledge-how and that is also able to do explanatory work elsewhere.

Journal ArticleDOI
01 Nov 2009-Synthese
TL;DR: The goal of this paper is to explore the most important of these controversies, namely, the controversy about the nature of the conjunction fallacy.
Abstract: In a seminal work, Tversky and Kahneman showed that in some contexts people tend to believe that a conjunction of events (e.g., Linda is a bank teller and is active in the feminist movement) is more likely to occur than one of the conjuncts (e.g., Linda is a bank teller). This belief violates the conjunction rule in probability theory. Tversky and Kahneman called this phenomenon the “conjunction fallacy”. Since the discovery of the phenomenon in 1983, researchers in psychology and philosophy have engaged in important controversies around the conjunction fallacy. The goal of this paper is to explore the most important of these controversies, namely, the controversy about the nature of the conjunction fallacy. Is the conjunction fallacy mainly due to a misunderstanding of the problem by participants (misunderstanding hypothesis) or is it mainly due to a genuine reasoning bias (reasoning bias hypothesis)? A substantial portion of research on the topic has been directed to test the misunderstanding hypothesis. I review this literature and argue that a stronger case can be made against the misunderstanding hypothesis. Thus, I indirectly provide support for the reasoning bias hypothesis.
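
The probabilistic point is elementary and worth displaying (a standard illustration; the numbers are invented):

```latex
% Since A \land B entails A, any probability function satisfies
P(A \land B) \;\le\; P(A)
% Linda case: even if the description makes "feminist" very likely given
% "bank teller", say P(F \mid T) = 0.95 and P(T) = 0.05, we still have
P(T \land F) \;=\; P(T)\,P(F \mid T) \;=\; 0.05 \times 0.95 \;=\; 0.0475 \;<\; 0.05 \;=\; P(T)
```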

Journal ArticleDOI
01 Sep 2009-Synthese
TL;DR: A Boolean procedure that uncovers deterministic causal structures is introduced; contrary to existing Boolean methodologies, the procedure advanced here successfully analyzes structures of arbitrary complexity.
Abstract: While standard procedures of causal reasoning as procedures analyzing causal Bayesian networks are custom-built for (non-deterministic) probabilistic structures, this paper introduces a Boolean procedure that uncovers deterministic causal structures. Contrary to existing Boolean methodologies, the procedure advanced here successfully analyzes structures of arbitrary complexity. It roughly involves three parts: first, deterministic dependencies are identified in the data; second, these dependencies are suitably minimalized in order to eliminate redundancies; and third, one or—in case of ambiguities—more than one causal structure is assigned to the minimalized deterministic dependencies.
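
As a concrete, much-simplified illustration of the first two parts, the sketch below finds sufficient conditions for an outcome in binary data and then eliminates redundant conjuncts. The data, factor names, and code are invented for the example; the paper’s full procedure, including the third part that assigns causal structures, is not reproduced here.

```python
# Toy illustration: find conjunctions of factor values sufficient for an
# outcome in crisp-set data, then keep only the minimal ones.
from itertools import combinations, product

# Each row assigns 0/1 to the putative cause factors A, B, C and the outcome E.
rows = [
    {'A': 1, 'B': 1, 'C': 0, 'E': 1},
    {'A': 1, 'B': 0, 'C': 1, 'E': 1},
    {'A': 0, 'B': 1, 'C': 1, 'E': 1},
    {'A': 0, 'B': 0, 'C': 1, 'E': 1},
    {'A': 0, 'B': 1, 'C': 0, 'E': 0},
    {'A': 0, 'B': 0, 'C': 0, 'E': 0},
]
factors = ['A', 'B', 'C']

def sufficient(conj, data, outcome='E'):
    """conj maps factors to required values; it is sufficient for the outcome if
    it is instantiated in the data and every row instantiating it has E = 1."""
    matching = [r for r in data if all(r[f] == v for f, v in conj.items())]
    return bool(matching) and all(r[outcome] == 1 for r in matching)

# Part 1: identify all sufficient conjunctions of factor values.
suff = []
for size in range(1, len(factors) + 1):
    for combo in combinations(factors, size):
        for values in product([0, 1], repeat=size):
            conj = dict(zip(combo, values))
            if sufficient(conj, rows):
                suff.append(conj)

# Part 2: minimalize -- discard any sufficient conjunction that properly extends
# another sufficient conjunction (its extra conjuncts are redundant).
def extends(big, small):
    return small != big and all(big.get(f) == v for f, v in small.items())

minimal = [c for c in suff if not any(extends(c, s) for s in suff)]
print(minimal)   # [{'A': 1}, {'C': 1}] -- A=1 or C=1 each suffices for E
```

On this toy data the surviving minimally sufficient conditions are A=1 and C=1, matching the structure E = A ∨ C used to generate the rows; the redundancy-elimination step corresponds to what the abstract calls minimalizing the dependencies.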

Journal ArticleDOI
01 Jan 2009-Synthese
TL;DR: The concept of general design is introduced, specifying design with respect to its ontogenetic role, which allows function to be based on design without making reference to the history of the design, or to the phylogeny of an organism, while retaining the normative aspect of function ascriptions.
Abstract: Looking for an adequate explication of the concept of a biological function, several authors have proposed to link function to design. Unfortunately, known explications of biological design in turn refer to functions. The concept of general design I will introduce here breaks up this circle. I specify design with respect to its ontogenetic role. This allows function to be based on design without making reference to the history of the design, or to the phylogeny of an organism, while retaining the normative aspect of function ascriptions. The concept is applicable to the function and design of technical artifacts as well. Several problems well known with other definitions can be overcome by this approach.