
Showing papers in "Synthese in 2005"


Journal ArticleDOI
01 Mar 2005-Synthese
TL;DR: This analysis provides a general account of how understanding is provided by scientific explanations of diverse types and reconciles conflicting views of explanatory understanding, such as the causal-mechanical and the unificationist conceptions.
Abstract: Achieving understanding of nature is one of the aims of science. In this paper we offer an analysis of the nature of scientific understanding that accords with actual scientific practice and accommodates the historical diversity of conceptions of understanding. Its core idea is a general criterion for the intelligibility of scientific theories that is essentially contextual: which theories conform to this criterion depends on contextual factors, and can change in the course of time. Our analysis provides a general account of how understanding is provided by scientific explanations of diverse types. In this way, it reconciles conflicting views of explanatory understanding, such as the causal-mechanical and the unificationist conceptions.

239 citations


Journal ArticleDOI
01 May 2005-Synthese
TL;DR: An explicit case study drawn from molecular cell physiology shows that biochemical networks display this kind of emergence even though they deploy only mechanistic explanations, illustrating emergence and its place in nature.
Abstract: We will show that there is a strong form of emergence in cell biology. Beginning with C.D. Broad’s classic discussion of emergence, we distinguish two conditions sufficient for emergence. Emergence in biology must be compatible with the thought that all explanations of systemic properties are mechanistic explanations and with their sufficiency. Explanations of systemic properties are always in terms of the properties of the parts within the system. Nonetheless, systemic properties can still be emergent. If the properties of the components within the system cannot be predicted, even in principle, from the behavior of the system’s parts within simpler wholes, then there also will be systemic properties which cannot be predicted, even in principle, on the basis of the behavior of these parts. We show in an explicit case study drawn from molecular cell physiology that biochemical networks display this kind of emergence, even though they deploy only mechanistic explanations. This illustrates emergence and its place in nature.

142 citations


Journal ArticleDOI
01 Nov 2005-Synthese
TL;DR: In this paper, the authors propose to model belief revision in a dynamic epistemic logic: a theory is revised with a formula ϕ by a program that transforms an information state in which the agent believes ¬ϕ into one in which, if the revision succeeds, the agent believes ϕ.
Abstract: In ‘belief revision’ a theory \( \mathcal{K} \) is revised with a formula ϕ resulting in a revised theory \( \mathcal{K}*\phi \). Typically, ¬ϕ is in \( \mathcal{K} \), one has to give up belief in ¬ϕ by a process of retraction, and ϕ is in \( \mathcal{K}*\phi \). We propose to model belief revision in a dynamic epistemic logic. In this setting, we typically have an information state (pointed Kripke model) for the theory \( \mathcal{K} \) wherein the agent believes the negation of the revision formula, i.e., wherein B¬ϕ is true. The revision with ϕ is a program *ϕ that transforms this information state into a new information state. The transformation is described by a dynamic modal operator [*ϕ], which is interpreted as a binary relation 〚*ϕ〛 between information states. The next information state is computed from the current information state and the belief revision formula. If the revision is successful, the agent believes ϕ in the resulting state, i.e., Bϕ is then true. To make this work, as information states we propose ‘doxastic epistemic models’ that represent both knowledge and degrees of belief. These are multi-modal and multi-agent Kripke models. They are constructed from preference relations for agents, and they satisfy various characterizable multi-agent frame properties. Iterated, revocable, and higher-order belief revision are all quite natural in this setting. We present, as an example, five different ways of performing such dynamic belief revision. One can also see such a revision as a non-deterministic epistemic action with two alternatives, where one is preferred over the other; there is a natural generalization to general epistemic actions with preferences.
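The transformation the abstract describes can be illustrated with a minimal Python sketch. This is my own simplification, not the paper's doxastic epistemic models: worlds carry a plausibility rank, belief means truth in all most-plausible worlds, and revision by ϕ promotes ϕ-worlds, so B¬ϕ holds before the revision and Bϕ after a successful one. The helper names `believes` and `revise` are hypothetical.

```python
# A minimal sketch (not the paper's formal system): worlds carry valuations,
# a plausibility rank encodes graded belief, and revision by phi promotes
# phi-worlds to the most plausible rank (a "lexicographic"-style revision).

def believes(model, phi):
    """The agent believes phi iff phi holds in all minimally ranked worlds."""
    best = min(rank for rank, _ in model)
    return all(phi(world) for rank, world in model if rank == best)

def revise(model, phi):
    """Revision *phi: phi-worlds become strictly more plausible than the rest."""
    max_rank = max(rank for rank, _ in model)
    return [(rank if phi(world) else rank + max_rank + 1, world)
            for rank, world in model]

# Worlds are dicts of atomic facts; lower rank = more plausible.
model = [(0, {"p": False}), (1, {"p": True})]
p = lambda w: w["p"]
not_p = lambda w: not w["p"]

assert believes(model, not_p)   # B¬p holds before revision
model = revise(model, p)
assert believes(model, p)       # Bp holds after a successful revision
```

Because revision only reorders the ranks rather than deleting worlds, iterated and revocable revision come for free in this toy setting, mirroring the naturalness claim in the abstract.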

141 citations


Journal ArticleDOI
Troy Cross1
01 Apr 2005-Synthese
TL;DR: The categorical/dispositional distinction should not be abandoned; it underpins important metaphysical disputes and should be taken as a primitive, after which the doomed attempts at reductive explanation can be transformed into circular but interesting accounts.
Abstract: Attempts to capture the distinction between categorical and dispositional states in terms of more primitive modal notions – subjunctive conditionals, causal roles, or combinatorial principles – are bound to fail. Such failure is ensured by a deep symmetry in the ways dispositional and categorical states alike carry modal import. But the categorical/dispositional distinction should not be abandoned; it underpins important metaphysical disputes. Rather, it should be taken as a primitive, after which the doomed attempts at reductive explanation can be transformed into circular but interesting accounts.

112 citations


Journal ArticleDOI
01 Jul 2005-Synthese
TL;DR: Selected cases of incomplete arguments in natural language discourse are studied to determine what is required for filling in unstated premises and conclusions in a systematic and useful way; the approach is shown to work reasonably well even for weak arguments.
Abstract: The aim of this investigation is to explore the role of argumentation schemes in enthymeme reconstruction. This aim is pursued by studying selected cases of incomplete arguments in natural language discourse to see what the requirements are for filling in the unstated premises and conclusions in some systematic and useful way. Some of these cases are best handled using deductive tools, while others respond best to an analysis based on defeasible argumentation schemes. The approach is also shown to work reasonably well for weak arguments, a class of arguments that has always been difficult to analyze without the principle of charity producing a straw man.

81 citations


Journal ArticleDOI
01 Apr 2005-Synthese
TL;DR: In considering the nature of properties, four controversial decisions must be made, among them whether properties are universals or tropes and whether particulars are just bundles of properties.
Abstract: In considering the nature of properties, four controversial decisions must be made. (1) Are properties universals or tropes? (2) Are properties attributes of particulars, or are particulars just bundles of properties? (3) Are properties categorical (qualitative) in nature, or are they powers? (4) If a property attaches to a particular, is this predication contingent, or is it necessary? These choices seem to be to a great degree independent of each other. The author indicates his own choices.

78 citations


Journal ArticleDOI
01 Jan 2005-Synthese
TL;DR: Four experiments investigate three rules of System P, namely the AND, the LEFT LOGICAL EQUIVALENCE, and the OR rule, and find relatively good agreement between human reasoning and principles of nonmonotonic reasoning.
Abstract: Nonmonotonic reasoning is often claimed to mimic human common sense reasoning. Only a few studies, though, have investigated this claim empirically. We report four experiments which investigate three rules of System P, namely the AND, the LEFT LOGICAL EQUIVALENCE, and the OR rule. The actual inferences of the subjects are compared with the coherent normative upper and lower probability bounds derived from a non-infinitesimal probability semantics of System P. We found relatively good agreement between human reasoning and the principles of nonmonotonic reasoning. Contrary to the results reported in the ‘heuristics and biases’ tradition, the subjects committed relatively few upper bound violations (conjunction fallacies).
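The coherent bounds mentioned in the abstract can be made concrete for the AND rule. The Python helper below is my own illustration, not the authors' experimental materials: given premise probabilities P(β|α) = x and P(γ|α) = y, coherence constrains the conclusion P(β∧γ|α) only to the Fréchet interval, and answers above the upper bound are conjunction fallacies of the kind the study counts.

```python
# Coherent probability bounds for the AND rule of System P (a hedged
# illustration; the function name is my own).  Given P(beta|alpha) = x and
# P(gamma|alpha) = y, P(beta & gamma | alpha) lies in the Frechet interval.

def and_rule_bounds(x, y):
    """Lower/upper coherent bounds on P(beta & gamma | alpha)."""
    lower = max(0.0, x + y - 1.0)   # the conjuncts may overlap as little as possible
    upper = min(x, y)               # the conjunction can never exceed either conjunct
    return lower, upper

# With both premises held at probability 0.75, coherence only forces the
# conjunction into [0.5, 0.75]; any answer above 0.75 violates the upper bound.
assert and_rule_bounds(0.75, 0.75) == (0.5, 0.75)
```

Note that the interval is genuinely wide: high-probability premises do not pin down a unique probability for the conclusion, which is why the experiments compare subjects' answers against an interval rather than a point value.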

68 citations


Journal ArticleDOI
01 Nov 2005-Synthese
TL;DR: This formalization builds on work of Halpern and Moses (1984) on the concept of ‘only knowing’, generalized by Hoek et al., (1999, 2000), and Zimmermann’s approach to competence.
Abstract: In this paper, a pragmatic approach to the phenomenon of free choice permission is proposed. Free choice permission is explained as due to taking the speaker (i) to obey certain Gricean maxims of conversation and (ii) to be competent on the deontic options, i.e. to know the valid obligations and permissions. The approach differs from other pragmatic approaches to free choice permission in giving a formally precise description of the class of inferences that can be derived based on these two assumptions. This formalization builds on work of Halpern and Moses (1984) on the concept of ‘only knowing’, generalized by Hoek et al., (1999, 2000), and Zimmermann’s (2000) approach to competence.

64 citations


Journal ArticleDOI
01 Oct 2005-Synthese
TL;DR: In 1888, when Hilbert made his Rundreise from Königsberg to other German university towns, he arrived in Berlin just as Dedekind’s Was sind und was sollen die Zahlen? was published.

51 citations


Journal ArticleDOI
01 Jul 2005-Synthese
TL;DR: It is observed that free choice inferences are defeasible, and a semantics of free choice permission is defended as strong permission expressed in terms of a modal conditional in a nonmonotonic logic.
Abstract: Free choice permission, a crucial test case concerning the semantics/pragmatics boundary, usually receives a pragmatic treatment. But its pragmatic features follow from its semantics. We observe that free choice inferences are defeasible, and defend a semantics of free choice permission as strong permission expressed in terms of a modal conditional in a nonmonotonic logic.

50 citations


Journal ArticleDOI
01 Apr 2005-Synthese
TL;DR: This paper argues that accounts of causal relevance that are the most plausible, for independent reasons, render the verdict that dispositions are causally relevant.
Abstract: To determine whether dispositions are causally relevant, we have to get clear about what causal relevance is. Several characteristics of causal relevance have been suggested, including Explanatory Power, Counterfactual Dependence, Lawfulness, Exclusion, Independence, and Minimal Sufficiency. Different accounts will yield different answers about the causal relevance of dispositions. However, accounts of causal relevance that are the most plausible, for independent reasons, render the verdict that dispositions are causally relevant.

Journal ArticleDOI
01 Jul 2005-Synthese
TL;DR: It is shown that, even granting the fine tuning of the universe, it does not follow that the universe is improbable, thus no explanation of the fine tuning, theistic or otherwise, is required.
Abstract: The argument from fine tuning is supposed to establish the existence of God from the fact that the evolution of carbon-based life requires the laws of physics and the boundary conditions of the universe to be more or less as they are. We demonstrate that this argument fails. In particular, we focus on problems associated with the role probabilities play in the argument. We show that, even granting the fine tuning of the universe, it does not follow that the universe is improbable, thus no explanation of the fine tuning, theistic or otherwise, is required.

Journal ArticleDOI
01 Aug 2005-Synthese
TL;DR: An empirical study of three desirable properties for a consequence relation that capture default reasoning: Rationality, Property Inheritance and Ambiguity Preservation is presented.
Abstract: This paper first provides a brief survey of a possibilistic handling of default rules. A set of default rules of the form, “generally, from α deduce β”, is viewed as the family of possibility distributions satisfying constraints expressing that the situation where α and β are true has a greater plausibility than the one where α and ¬β are true. When considering only the subset of linear possibility distributions, the well-known System P of postulates proposed by Kraus, Lehmann and Magidor is obtained. We also present two rational extensions: one based on the minimum specificity principle and the other on the lexicographic ordering. The second part of the paper presents an empirical study of three desirable properties for a consequence relation that capture default reasoning: Rationality, Property Inheritance and Ambiguity Preservation. An experiment is conducted to investigate 13 patterns of inference for the test of these properties. Our experimental results confirm previous findings on the relevance of System P and reinforce the psychological relevance of the studied properties.

Journal ArticleDOI
01 Jul 2005-Synthese
TL;DR: In this paper, it is argued that Yablo's paradox is not strictly paradoxical, but rather "ω-paradoxical" and that the derivation of an inconsistency requires a uniform fixed-point construction.
Abstract: It is argued that Yablo’s Paradox is not strictly paradoxical, but rather ‘ω-paradoxical’. Under a natural formalization, the list of Yablo sentences may be constructed using a diagonalization argument and can be shown to be ω-inconsistent, but nonetheless consistent. The derivation of an inconsistency requires a uniform fixed-point construction. Moreover, the truth-theoretic disquotational principle required is also uniform, rather than the local disquotational T-scheme. The theory with the local disquotation T-scheme applied to individual sentences from the Yablo list is also consistent.

Journal ArticleDOI
01 Dec 2005-Synthese
TL;DR: A literature review and a new experiment are presented to show that the reduction of cognitive psychology to neuroscience is implausible and a good deal of object exploration research is potentially confounded precisely because it assumes that psychological generalizations can be reduced to neuroscientific ones.
Abstract: The purpose of this paper is to use neuroscientific evidence to address the philosophical issue of intertheoretic reduction. In particular, we present a literature review and a new experiment to show that the reduction of cognitive psychology to neuroscience is implausible. To make this case, we look at research using object exploration, an important experimental paradigm in neuroscience, behavioral genetics and psychopharmacology. We show that a good deal of object exploration research is potentially confounded precisely because it assumes that psychological generalizations can be reduced to neuroscientific ones.

Journal ArticleDOI
01 Aug 2005-Synthese
TL;DR: It is shown that system P is a tool for reasoning with normic laws which satisfies two important evolutionary standards: it is probabilistically reliable, and it has rules of low complexity.
Abstract: In the first part I argue that normic laws are the phenomenological laws of evolutionary systems. If this is true, then intuitive human reasoning should be fit in reasoning from normic laws. In the second part I show that system P is a tool for reasoning with normic laws which satisfies two important evolutionary standards: it is probabilistically reliable, and it has rules of low complexity. In the third part I finally report results of an experimental study which demonstrate that intuitive human reasoning accords well with basic argument patterns of system P.

Journal ArticleDOI
01 Jul 2005-Synthese
TL;DR: It is argued that the medieval form of dialectical disputation known as obligationes can be viewed as a logical game of consistency maintenance and the primacy of inferential (syntactic) relations over semantic aspects and the dynamic character of obligations are outlined.
Abstract: I argue that the medieval form of dialectical disputation known as obligationes can be viewed as a logical game of consistency maintenance. The game has two participants, Opponent and Respondent. Opponent puts forward a proposition P; Respondent must concede, deny or doubt, on the basis of inferential relations between P and previously accepted or denied propositions, or, in case there is none, on the basis of the common set of beliefs. Respondent loses the game if he concedes a contradictory set of propositions. Opponent loses the game if Respondent is able to maintain consistency during the stipulated period of time. The obligational rules are here formalised by means of familiar notational devices, and the application of some game-theoretical concepts, such as (winning) strategy, moves, motivation, allows for an analysis of some crucial properties of the game. In particular, the primacy of inferential (syntactic) relations over semantic aspects and the dynamic character of obligationes are outlined.
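The game-theoretic reading of obligationes can be sketched in a few lines of Python. This toy version is my own simplification over propositional literals (the names `respond` and `consistent` are hypothetical, and real obligationes involve full inferential relations, not just literal matching): the Respondent concedes what follows from prior commitments, denies what contradicts them, otherwise answers from common beliefs, and loses upon conceding a contradiction.

```python
# A toy rendering of the Respondent's rule in an obligationes disputation,
# restricted to literals such as "p" and "~p" (my own simplification).

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def respond(commitments, common_beliefs, proposal):
    """Return the Respondent's answer and the updated commitment set."""
    if proposal in commitments:            # follows from prior concessions
        return "concede", commitments | {proposal}
    if negate(proposal) in commitments:    # is repugnant to prior concessions
        return "deny", commitments | {negate(proposal)}
    if proposal in common_beliefs:         # irrelevant: answer from common beliefs
        return "concede", commitments | {proposal}
    if negate(proposal) in common_beliefs:
        return "deny", commitments | {negate(proposal)}
    return "doubt", commitments

def consistent(commitments):
    """Respondent loses iff some literal and its negation are both conceded."""
    return all(negate(lit) not in commitments for lit in commitments)

# The positum ~p has been granted; when Opponent then proposes p, the rules
# force denial even though p is commonly believed: inference trumps semantics.
commitments = {"~p"}
answer, commitments = respond(commitments, {"p", "q"}, "p")
assert answer == "deny" and consistent(commitments)
```

The final example displays the primacy of inferential relations the paper emphasizes: the commonly believed p must be denied because its negation was already conceded as the positum.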

Journal ArticleDOI
01 Aug 2005-Synthese
TL;DR: It is argued that ordinary people so cleverly and effortlessly use default reasoning to solve interesting cognitive tasks that nonmonotonic formalisms were introduced into AI, and this is a form of psychologism, despite the fact that it is not usually recognized as such in AI.
Abstract: Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion "by default" on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and ordinary situations in which we find ourselves. Formally speaking, 'nonmonotonic reasoning' refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged existence of "exceptional instances" of the class. In the absence of explicit information that an object is one of the exceptions we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning. The informal notion has been seen as central to a number of areas of scholarly investigation, and we canvass some of them before turning our attention to its role in AI. It is because ordinary people so cleverly and effortlessly use default reasoning to solve interesting cognitive tasks that nonmonotonic formalisms were introduced into AI, and we argue that this is a form of psychologism, despite the fact that it is not usually recognized as such in AI. We close by mentioning some of the results from our empirical investigations that we believe should be incorporated into nonmonotonic formalisms.

Journal ArticleDOI
01 Feb 2005-Synthese
TL;DR: Reflection on the apparently trivial character of T-sentences should not incline us to deflationism; it is argued that there is no need for a disquotational truth-predicate, that the word ‘true’, in ordinary language, is not a disquotational truth-predicate, and that it is not at all clear that it is even possible to introduce a disquotational truth-predicate into ordinary language.
Abstract: Hartry Field has suggested that we should adopt at least a methodological deflationism: “[W]e should assume full-fledged deflationism as a working hypothesis. That way, if full-fledged deflationism should turn out to be inadequate, we will at least have a clearer sense than we now have of just where it is that inflationist assumptions ... are needed”. I argue here that we do not need to be methodological deflationists. More precisely, I argue that we have no need for a disquotational truth-predicate; that the word ‘true’, in ordinary language, is not a disquotational truth-predicate; and that it is not at all clear that it is even possible to introduce a disquotational truth-predicate into ordinary language. If so, then we have no clear sense how it is even possible to be a methodological deflationist. My goal here is not to convince a committed deflationist to abandon his or her position. My goal, rather, is to argue, contrary to what many seem to think, that reflection on the apparently trivial character of T-sentences should not incline us to deflationism.


Journal ArticleDOI
01 May 2005-Synthese
TL;DR: By recognizing that the failure to explain the truth of disparate propositions often stems from inflationary approaches’ allegiance to alethic monism, pluralist approaches are able to avoid this explanatory inadequacy and the resulting skepticism, though at the cost of inviting other conceptual difficulties.
Abstract: Traditional inflationary approaches that specify the nature of truth are attractive in certain ways; yet, while many of these theories successfully explain why propositions in certain domains of discourse are true, they fail to adequately specify the nature of truth because they run up against counterexamples when attempting to generalize across all domains. One popular consequence is skepticism about the efficaciousness of inflationary approaches altogether. Yet, by recognizing that the failure to explain the truth of disparate propositions often stems from inflationary approaches’ allegiance to alethic monism, pluralist approaches are able to avoid this explanatory inadequacy and the resulting skepticism, though at the cost of inviting other conceptual difficulties. A novel approach, alethic functionalism, attempts to circumvent the problems faced by pluralist approaches while preserving their main insights. Unfortunately, it too generates additional problems – namely, with its suspect appropriation of the multiple realizability paradigm and its platitude-based strategy – that need to be dissolved before it can constitute an adequate inflationary approach to the nature of truth.

Journal ArticleDOI
01 May 2005-Synthese
TL;DR: It is argued that alethic functionalism will collapse either into deflationism or into a view that takes “true” as simply ambiguous, but this work rejects both claims.
Abstract: According to alethic functionalism, truth is a higher-order multiply realizable property of propositions. After briefly presenting the view’s main principles and motivations, I defend alethic functionalism from recent criticisms raised against it by Cory Wright. Wright argues that alethic functionalism will collapse either into deflationism or into a view that takes “true” as simply ambiguous. I reject both claims.

Journal ArticleDOI
01 Apr 2005-Synthese
TL;DR: A metaphysical position is developed that is both lawless and anti-Humean and contrasts with both Humean lawlessness and nomological realism – the claim that there are laws in nature.
Abstract: I develop a metaphysical position that is both lawless and anti-Humean. The position is called realist lawlessness and contrasts with both Humean lawlessness and nomological realism – the claim that there are laws in nature. While the Humean view also allows no laws, realist lawlessness is not Humean because it accepts some necessary connections in nature between distinct properties. Realism about laws, on the other hand, faces a central dilemma. Either laws govern the behaviour of properties from the outside or from the inside. If the former, an unacceptable quidditist view of properties follows. But no plausible account of laws within properties can be developed that permits a governing role specifically for laws. I conclude in favour of eliminativism about laws. At the conceptual core, the notion of a law in nature is misleading. It is suggestive of an otherwise static world in need of animation.

Book ChapterDOI
01 Nov 2005-Synthese
TL;DR: It is shown how a formalization based on modal logic can incorporate those distinctive aspects introduced by David Lewis in Convention.
Abstract: In this paper, I provide a logical framework for defining conventions, elaborating on the game-theoretic model proposed by David Lewis. The philosophical analysis of some of the key concepts in Lewis's model reveals that a modal logic formalization may be a natural one. The paper will develop on the analysis and critique of such concepts as those of common knowledge, indication, and the distinction between epistemic and practical rationality. In particular: (i) the analysis of Lewis's definition of common knowledge reveals that a suitable formalization can be obtained by adopting an approach analogous to that of awareness structures in modal logic; moreover (ii) the analysis of the notion of indication reveals that the agents may be required to make inductive inferences yielding probabilistic beliefs. I shall stress that such aspects, however, pertain to the sphere of epistemic rationality (i.e., they deal with the justification of the agents' beliefs) rather than to the sphere of practical rationality. Confounding the two spheres may lead to the wrong conclusion that, in order to make sense of, say, salience as a coordination device, one should incorporate psychological assumptions into an undivided notion of rationality. On the contrary, practical rationality stands as the usual notion of game-theoretic rationality, whereas epistemic rationality incorporates those aspects pointed out in (i) and (ii) above. This attempt to provide a formal framework for Lewis's theory of convention follows those of Vanderschraaf (1995, 1998) and Cubitt and Sugden (2003). In his work on Lewis, Vanderschraaf provides a characterization of convention as correlated equilibrium, adopting a formal framework close to the set-theoretical one proposed by Aumann (1976). Cubitt and Sugden point out that such a framework does not take into account certain elements that are however present in Lewis's original theory, and propose a different formal setup altogether.
In this paper, I show how a formalization based on modal logic can incorporate those distinctive aspects introduced by David Lewis in Convention.

Journal ArticleDOI
01 Apr 2005-Synthese
TL;DR: This work presents a means of distinguishing the laws (and their logical consequences) from the accidents, in terms of their range of invariance under counterfactual antecedents, that does not appeal to physical modalities in delimiting the relevant range of counterfactual perturbations.
Abstract: Many philosophers have believed that the laws of nature differ from the accidental truths in their invariance under counterfactual perturbations. Roughly speaking, the laws would still have held had q been the case, for any q that is consistent with the laws. (Trivially, no accident would still have held under every such counterfactual supposition.) The main problem with this slogan (even if it is true) is that it uses the laws themselves to delimit q’s range. I present a means of distinguishing the laws (and their logical consequences) from the accidents, in terms of their range of invariance under counterfactual antecedents, that does not appeal to physical modalities (or any cognate notion) in delimiting the relevant range of counterfactual perturbations. I then argue that this approach explicates the sense in which the laws possess a kind of necessity.

Journal ArticleDOI
01 Nov 2005-Synthese
TL;DR: In this paper, the authors define a simple graded version of doxastic logic KD45 as the basis for the definition of belief-based programs, and study the way the agent's belief state is maintained when executing such programs, which calls for revising belief states by observations (possibly unreliable or imprecise).
Abstract: Knowledge-based programs (KBPs) are a powerful notion for expressing action policies in which branching conditions refer to implicit knowledge and call for a deliberation task at execution time. However, branching conditions in KBPs cannot refer to possibly erroneous beliefs or to graded belief, such as "if my belief that φ holds is high then do some action a else perform some sensing action β". The purpose of this paper is to build a framework where such programs can be expressed. In this paper we focus on the execution of such a program (a companion paper investigates issues relevant to the off-line evaluation and construction of such programs). We define a simple graded version of doxastic logic KD45 as the basis for the definition of belief-based programs. Then we study the way the agent's belief state is maintained when executing such programs, which calls for revising belief states by observations (possibly unreliable or imprecise) and progressing belief states by physical actions (which may have normal as well as exceptional effects).

Journal ArticleDOI
01 Aug 2005-Synthese
TL;DR: It is argued that cognitive states of biological systems are inherently temporal, and three adequacy conditions for neuronal models of representation are vindicated: the compositionality of meaning, the compositionality of content, and the co-variation with content.
Abstract: The paper argues that cognitive states of biological systems are inherently temporal. Three adequacy conditions for neuronal models of representation are vindicated: the compositionality of meaning, the compositionality of content, and the co-variation with content. Classicist and connectionist approaches are discussed and rejected. Based on recent neurobiological data, oscillatory networks are introduced as a third alternative. A mathematical description in a Hilbert space framework is developed. The states of this structure can be regarded as conceptual representations satisfying the three conditions. 1. CONCEPTS, COMPOSITIONALITY AND CO-VARIATION The view that cognition takes place in the cortex constitutes a common ground for most contemporary philosophers and cognitive scientists. Highly controversial, however, is the question how this can be. Cognition is not just any form of information processing. Only processes that are defined over conceptual structures, (i) which have content and (ii) which are expressible by predicate languages, are properly called cognition. The first condition derives from the fact that cognitive processes are essentially epistemic: The criterion of truth-conduciveness, which is exclusive to bearers of content, i.e., representations, applies to them. The second condition grounds in the assumption that cognition presupposes categorization. Truth-conducive processes would be practically useless and without any evolutionary benefit if they did not subsume objects under categories. Non-categorial processes would not be about anything. Categories, however, are just what concepts are and predicates express. While the neuronal structure of the cortex, to this day, has been perceived as radically different from conceptual structure, this paper, using the dimension of time, will show how it is nevertheless possible to reduce the latter to the former.
Cognition is systematic in the sense that there are systematic correlations between representational capacities: If a mind is capable of certain cognitive states, it most probably is also capable of other cognitive states with related contents. The capacity to think that a red square is in a green circle, e.g., is statistically highly correlated with the capacity to think that

Journal ArticleDOI
01 Jan 2005-Synthese
TL;DR: It is shown that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively.
Abstract: Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary systems are instances of such interpreted dynamical systems, and thus our results entail that each of them may be described correctly and, in a sense, even completely by qualitative laws that obey the rules of a nonmonotonic logic system.

Journal ArticleDOI
01 May 2005-Synthese
TL;DR: The two-envelope ‘problem’ of evaluating the ‘factual’ information provided to us in the form of the value contained by the envelope chosen first does not allow a satisfactory solution.
Abstract: After explaining the well-known two-envelope ‘paradox’ by indicating the fallacy involved, we consider the two-envelope ‘problem’ of evaluating the ‘factual’ information provided to us in the form of the value contained by the envelope chosen first. We try to provide a synthesis of contributions from economics, psychology, logic, probability theory (in the form of Bayesian statistics), mathematical statistics (in the form of a decision-theoretic approach) and game theory. We conclude that the two-envelope problem does not allow a satisfactory solution. An interpretation is made for statistical science at large.
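The fallacy behind the ‘paradox’ is easy to check numerically. The Monte Carlo sketch below is my own illustration, under an assumed uniform prior on the smaller amount (the paper itself argues the problem has no fully satisfactory solution): unconditional switching confers no advantage, contrary to the naive argument that switching yields an expected 1.25 times the amount held.

```python
import random

# Monte Carlo check of the two-envelope setup: one envelope holds x, the
# other 2x; we pick one at random and compare always-keeping with
# always-switching (my own illustration, assuming a uniform prior on x).

def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    keep_total = switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1.0, 100.0)        # smaller amount, drawn from the prior
        envelopes = [x, 2 * x]
        chosen = rng.randrange(2)          # pick an envelope at random
        keep_total += envelopes[chosen]
        switch_total += envelopes[1 - chosen]
    return keep_total / trials, switch_total / trials

keep, switch = simulate()
# Both strategies average 1.5x the smaller amount; their ratio is near 1,
# not the 1.25 the fallacious expectation argument predicts.
assert abs(keep / switch - 1.0) < 0.02
```

The simulation only addresses the unconditional question; the harder ‘problem’ the abstract discusses, conditioning on the observed value in the chosen envelope, depends on the prior and is precisely where the authors find no satisfactory solution.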

Journal ArticleDOI
01 Sep 2005-Synthese
TL;DR: It is argued that a complete concept of causation must also account for dispositions whose manifestations involve no changes at all, and that a causal theory that fails to include these ‘static’ dispositions alongside the dynamic ones renders static occurrences miraculous.
Abstract: When it comes to scientific explanation, our parsimonious tendencies mean that we focus almost exclusively on those dispositions whose manifestations result in some sort of change – changes in properties, locations, velocities and so on. Following this tendency, our notion of causation is one that is inherently dynamic, as if the maintenance of the status quo were merely a given. Contrary to this position, I argue that a complete concept of causation must also account for dispositions whose manifestations involve no changes at all, and that a causal theory that fails to include these ‘static’ dispositions alongside the dynamic ones renders static occurrences miraculous.