
Showing papers in "Synthese in 1996"


Journal ArticleDOI
01 Aug 1996-Synthese
TL;DR: Fuzzy logic is used in this paper to describe an imprecise logical system, FL, in which the truth-values are fuzzy subsets of the unit interval with linguistic labels such as true, false, not true, very true, quite true, not very true and not very false, etc.
Abstract: The term fuzzy logic is used in this paper to describe an imprecise logical system, FL, in which the truth-values are fuzzy subsets of the unit interval with linguistic labels such as true, false, not true, very true, quite true, not very true and not very false, etc. The truth-value set, ℐ, of FL is assumed to be generated by a context-free grammar, with a semantic rule providing a means of computing the meaning of each linguistic truth-value in ℐ as a fuzzy subset of [0, 1]. Since ℐ is not closed under the operations of negation, conjunction, disjunction and implication, the result of an operation on truth-values in ℐ requires, in general, a linguistic approximation by a truth-value in ℐ. As a consequence, the truth tables and the rules of inference in fuzzy logic are (i) inexact and (ii) dependent on the meaning associated with the primary truth-value true as well as the modifiers very, quite, more or less, etc. Approximate reasoning is viewed as a process of approximate solution of a system of relational assignment equations. This process is formulated as a compositional rule of inference which subsumes modus ponens as a special case. A characteristic feature of approximate reasoning is the fuzziness and nonuniqueness of consequents of fuzzy premisses. Simple examples of approximate reasoning are: (a) Most men are vain; Socrates is a man; therefore, it is very likely that Socrates is vain. (b) x is small; x and y are approximately equal; therefore y is more or less small, where italicized words are labels of fuzzy sets.
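A minimal sketch of how such linguistic truth-values and the compositional rule of inference can be computed, assuming Zadeh's usual operator choices (min/max for conjunction/disjunction, squaring for "very", square root for "more or less", sup-min composition). The domains, membership functions and variable names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Discretised unit interval on which fuzzy truth-values live.
U = np.linspace(0.0, 1.0, 101)

# A primary truth-value "true" as a fuzzy subset of [0, 1]
# (one common, assumed choice: membership rising towards 1).
true = U ** 2

# Zadeh-style linguistic modifiers.
very = lambda mu: mu ** 2              # concentration
more_or_less = lambda mu: np.sqrt(mu)  # dilation
not_ = lambda mu: 1.0 - mu

very_true = very(true)
not_very_true = not_(very_true)        # a derived linguistic truth-value

# Compositional rule of inference: from "x is A" and a fuzzy relation R
# ("x and y are approximately equal"), infer B(y) = sup_x min(A(x), R(x, y)).
def compose(A, R):
    return np.max(np.minimum(A[:, None], R), axis=0)

# Toy version of example (b): "x is small", "x and y are approximately equal".
X = np.arange(5)
small = np.array([1.0, 0.8, 0.4, 0.1, 0.0])
approx_equal = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)

print(compose(small, approx_equal))  # the inferred fuzzy set for y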

1,273 citations


Journal ArticleDOI
01 Sep 1996-Synthese
TL;DR: An account of implementation is developed, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system.
Abstract: Hilary Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the class of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind.
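The implementation relation at issue can be pictured, very roughly, as the existence of a mapping from physical states to automaton states under which the system's observed transitions mirror the automaton's transition function. The sketch below illustrates only that rough idea, not Chalmers' own definition; the automaton, trajectory and mapping names are made up:

```python
# Toy illustration: does a mapping f from physical states to automaton
# states make a recorded trajectory respect the FSA's transition table?
fsa_transitions = {("s0", "a"): "s1", ("s1", "a"): "s0"}  # hypothetical FSA

def implements(trajectory, inputs, f, transitions):
    """trajectory: successive physical states; inputs: symbols read;
    f: mapping from physical states to automaton states."""
    for (p, q), symbol in zip(zip(trajectory, trajectory[1:]), inputs):
        if transitions.get((f[p], symbol)) != f[q]:
            return False
    return True

physical_trajectory = ["p0", "p1", "p2", "p3"]
inputs = ["a", "a", "a"]
f = {"p0": "s0", "p1": "s1", "p2": "s0", "p3": "s1"}
print(implements(physical_trajectory, inputs, f, fsa_transitions))  # True
```

Putnam-style worries arise because, for a short enough record, some such mapping can usually be found; the much stronger constraints argued for in the paper are meant to rule this kind of triviality out.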

211 citations


Journal ArticleDOI
01 Sep 1996-Synthese
TL;DR: The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.
Abstract: To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase ‘of a certain kind’. I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind.

169 citations


Journal ArticleDOI
01 Jun 1996-Synthese
TL;DR: Relations of ‘screening-off’, long familiar to researchers in probabilistic causality, play a central role in this account of contrastive stress.
Abstract: Following Dretske (1977), there has been a considerable body of literature on the role of contrastive stress in causal claims. Following van Fraassen (1980), there has been a considerable body of literature on the role of contrastive stress in explanations and explanation-requesting why-questions. Amazingly, the two bodies of literature have remained almost entirely disjoint. With an understanding of the contrastive nature of ordinary causal claims, and of the linguistic roles of contrastive stress, it is possible to provide a unified account of both phenomena. I provide such an account from within the framework of a probabilistic theory of causation. Relations of 'screening-off', long familiar to researchers in probabilistic causality, play a central role in this account.
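For reference, 'screening off' has a standard probabilistic definition (stated here in generic notation, not the paper's): a factor C screens off A from E when, given C, A makes no further difference to the probability of E,

\[
P(E \mid A \wedge C) = P(E \mid C).
\]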

113 citations


Journal ArticleDOI
01 Sep 1996-Synthese
TL;DR: This paper outlines Turing's connectionist project of 1948, which proposed simulating both the behaviour of the network and the training process by means of a computer program.
Abstract: It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks ‘unorganised machines’. By the application of what he described as ‘appropriate interference, mimicking education’, an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of ‘neurons’ is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
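On the common reading of Turing's A-type units as two-input NAND gates updated in lockstep, a randomly wired unorganised machine can be sketched as follows. The network size, random seed, wiring and initial state are arbitrary illustrative choices, not details from the paper:

```python
import random

# Illustrative sketch of a Turing-style A-type "unorganised machine":
# two-input NAND-like units connected at random and updated synchronously.
random.seed(0)
N = 8
wiring = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
state = [random.randint(0, 1) for _ in range(N)]

def step(state, wiring):
    # Each unit outputs the NAND of its two incoming signals;
    # all units are updated simultaneously by a central clock.
    return [1 - (state[i] & state[j]) for (i, j) in wiring]

for t in range(5):
    print(t, state)
    state = step(state, wiring)
```

Turing's B-type machines add modifiable connections to networks like this, which is what the ‘appropriate interference, mimicking education’ acts on.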

73 citations


Journal ArticleDOI
01 Nov 1996-Synthese
TL;DR: It is argued that the best solution to the lottery paradox is that a ticket holder is not justified in believing any of the tickets are losers, which avoids the paradoxical result of the standard solution.
Abstract: The lottery paradox has been discussed widely. The standard solution to the lottery paradox is that a ticket holder is justified in believing each ticket will lose but the ticket holder is also justified in believing not all of the tickets will lose. If the standard solution is true, then we get the paradoxical result that it is possible for a person to have a justified set of beliefs that she knows is inconsistent. In this paper, I argue that the best solution to the paradox is that a ticket holder is not justified in believing any of the tickets are losers. My solution avoids the paradoxical result of the standard solution. The solution I defend has been hastily rejected by other philosophers because it appears to lead to skepticism. I defend my solution from the threat of skepticism and give two arguments in favor of my conclusion that the ticket holder in the original lottery case is not justified in believing that his ticket will lose.

69 citations


Journal ArticleDOI
01 Oct 1996-Synthese
TL;DR: This paper argues for the idea that the logic of questions should focus its attention on the analysis of arguments in which questions play the role of conclusions.
Abstract: This paper argues for the idea that the logic of questions should focus its attention on the analysis of arguments in which questions play the role of conclusions. The relevant concepts of validity are discussed and the concept of the logic of questions of a semantically interpreted formalized language is introduced.

53 citations


Journal ArticleDOI
01 Dec 1996-Synthese
TL;DR: This paper provides an interpretation of the Routley-Meyer semantics for a weak negation-free relevant logic using Israel and Perry's theory of information; in particular, the ternary accessibility relation is read in information-theoretic terms.
Abstract: This paper provides an interpretation of the Routley-Meyer semantics for a weak negation-free relevant logic using Israel and Perry's theory of information. In particular, Routley and Meyer's ternary accessibility relation is given an interpretation in information-theoretic terms.
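The semantic clause being interpreted is the standard Routley-Meyer truth condition for relevant implication, in which a ternary relation R on points replaces the binary accessibility relation of modal logic:

\[
x \Vdash A \rightarrow B \quad\text{iff}\quad \forall y\,\forall z\,\bigl(Rxyz \text{ and } y \Vdash A \ \Rightarrow\ z \Vdash B\bigr).
\]

One natural information-theoretic gloss on Rxyz, offered here only as a paraphrase and not as the paper's own wording, is that the information carried by x, applied to the information carried by y, is contained in z.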

45 citations


Journal ArticleDOI
01 Jan 1996-Synthese
TL;DR: Rejecting the sentential paradigm, the author proposes a topological (geometric) model of mental representations based on the theory of conceptual spaces.
Abstract: Rejecting the propositional paradigm developed by P. S. Churchland (the "sentential paradigm"), the author proposes a topological (or geometric) model of mental representations, founded on the theory of conceptual spaces, and applies it to the epistemological problem of the projectibility of concepts, as well as to the semantic problem of the transfer of meaning by metaphors.

41 citations


Journal ArticleDOI
01 Aug 1996-Synthese
TL;DR: A new deontic operator for representing what an agent ought to do is explored, which is cast against the background of a modal treatment of action developed by Nuel Belnap and Michael Perloff, which itself relies on Arthur Prior's indeterministic tense logic.
Abstract: The purpose of this paper is to explore a new deontic operator for representing what an agent ought to do; the operator is cast against the background of a modal treatment of action developed by Nuel Belnap and Michael Perloff, which itself relies on Arthur Prior's indeterministic tense logic. The analysis developed here of what an agent ought to do is based on a dominance ordering adapted from the decision theoretic study of choice under uncertainty to the present account of action. It is shown that this analysis gives rise to a normal deontic operator, and that the result is superior to an analysis that identifies what an agent ought to do with what it ought to be that the agent does.
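The dominance ordering borrowed from the decision-theoretic study of choice under uncertainty can be stated generically as follows (generic notation, not the paper's; in the stit setting the "states" are, roughly, the patterns of circumstance and other agents' choices that the agent cannot control). Where u(K, s) is the value of performing action K when state s obtains,

\[
K \preceq K' \quad\text{iff}\quad u(K, s) \le u(K', s) \ \text{ for every state } s,
\]

with K' strictly dominating K when the inequality is strict for at least one state. What an agent ought to do is then analysed, roughly, in terms of the actions that are not strictly dominated by any available alternative.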

Journal ArticleDOI
01 Sep 1996-Synthese
TL;DR: This paper expounds and then tries to refute Searle's abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room.
Abstract: Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest an alternative account which, however, cannot play a role in a Searle-type argument, and argue that Searle gives no good reason for favoring his account, which allows the abstract argument to work, over the alternative, which doesn't. This response to Searle's abstract argument also, incidentally, enables the Robot Reply to the Chinese Room to defend itself against objections Searle makes to it.

Journal ArticleDOI
01 Apr 1996-Synthese
TL;DR: The adequacy of the relational approach to tense is illustrated by a comparative study of the sense in which physical theories can lend expression to the metaphysics at issue, establishing that the same issues are at stake in the relational approach to value-definiteness and probability in quantum mechanics.
Abstract: The relational approach to tense holds that “the now”, “passage”, and “becoming” are to be understood in terms of relations between events. The debate over the adequacy of this framework is illustrated by a comparative study of the sense in which physical theories, (in)deterministic and (non)relativistic, can lend expression to the metaphysics at issue. The objective is not to settle the matter, but to clarify the nature of this metaphysics and to establish that the same issues are at stake in the relational approach to value-definiteness and probability in quantum mechanics. They concern the existence of a unique present, respectively actuality, and a notion of identity over time that cannot be paraphrased in terms of relations.

Journal ArticleDOI
Gary Gates
01 Jun 1996-Synthese
TL;DR: It is concluded that no “naturalization” of content of the sort currently popular can solve Quine's “gavagai” enigma, and it is shown how failure to solve the problem leads to absurd conclusions not about one's own mental life, but about the nonmental world.
Abstract: In this paper I apply an old problem of Quine's (the inscrutability of reference in translation) to a new style of theory about mental content (causal/nomological/informational accounts of meaning) and conclude that no “naturalization” of content of the sort currently popular can solve Quine's “gavagai” enigma. I show how failure to solve the problem leads to absurd conclusions not about one's own mental life, but about the nonmental world. I discuss various ways of attempting to remedy the accounts so as to avoid the problem and explain why each attempt at solving the problem would take the information theorists further from their self-assigned task of “naturalizing” semantics.

Journal ArticleDOI
01 Nov 1996-Synthese
TL;DR: In this paper, a new formulation of the Ramsey test is proposed, which is compatible with the so-called AGM theory and is used to study the conditionals epistemically validated by the AGM postulates.
Abstract: How to accept a conditional? F. P. Ramsey proposed the following test in (Ramsey 1990). (RT) ‘If A, then B’ must be accepted with respect to the current epistemic state iff the minimal hypothetical change of it needed to accept A also requires accepting B. In this article we propose a formulation of (RT), which unlike some of its predecessors, is compatible with our best theory of belief revision, the so-called AGM theory (see (Gardenfors 1988), chapters 1–5 for a survey). The new test, which, we claim, encodes some of the crucial insights defended by F. P. Ramsey in (Ramsey 1990), is used to study the conditionals epistemically validated by the AGM postulates. Our notion of validity (PV) is compared with the notion of negative validity (NV) used by Gardenfors in (Gardenfors 1988). It is observed that the notions of PV and NV will in general differ and that when these differences arise it is the notion of PV that is preferable. Finally we compare our formulation of the Ramsey test with a previous formulation offered by Gardenfors (GRT). We show that any attempt to interpret (GRT) as delivering acceptance conditions for Ramsey's conditionals is doomed to failure.
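In the usual AGM notation, with K the current belief set, K * A its revision by A, and A > B the conditional "if A, then B", Gardenfors' formulation (GRT) of the Ramsey test is:

\[
A > B \in K \quad\text{iff}\quad B \in K * A.
\]

Gardenfors' well-known impossibility result shows that (GRT), combined with the AGM postulates, leads to triviality; the reformulation proposed in this article is designed to preserve Ramsey's insight while remaining compatible with AGM.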

Journal ArticleDOI
Lyle Zynda
01 Nov 1996-Synthese
TL;DR: It is argued that idealized requirements can be normatively relevant even when the ideals are unattainable, so long as they define a structure that links imperfect and perfect rationality in a way that enables us to make sense of the notion of better approximations to the ideal.
Abstract: Probabilistic coherence is not an absolute requirement of rationality; nevertheless, it is an ideal of rationality with substantive normative import. An idealized rational agent who avoided making implicit logical errors in forming his preferences would be coherent. In response to the challenge, recently made by epistemologists such as Foley and Plantinga, that appeals to ideal rationality render probabilism either irrelevant or implausible, I argue that idealized requirements can be normatively relevant even when the ideals are unattainable, so long as they define a structure that links imperfect and perfect rationality in a way that enables us to make sense of the notion of better approximations to the ideal. I then analyze the notion of approximation to the ideal of coherence by developing a generalized theory of belief functions that allows for incoherence, and showing how such belief functions can be ordered with regard to greater or lesser coherence.
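One very simple way to picture "better approximation to the ideal of coherence", offered purely as an illustration and not as the paper's generalized belief-function construction, is to measure how far a degree-of-belief assignment over an exhaustive, mutually exclusive partition lies from the nearest probability function:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def incoherence(beliefs):
    """Distance from a belief assignment over a partition to the nearest
    coherent (probabilistic) assignment; 0 means perfectly coherent."""
    b = np.asarray(beliefs, dtype=float)
    return float(np.linalg.norm(b - project_to_simplex(b)))

print(incoherence([0.5, 0.3, 0.2]))   # 0.0  (already coherent)
print(incoherence([0.7, 0.6, 0.2]))   # > 0  (beliefs sum to 1.5)
print(incoherence([0.9, 0.8, 0.4]))   # larger still: a worse approximation
```

On such a measure, an agent whose beliefs over the partition sum to 1.5 is closer to the ideal than one whose beliefs sum to 2.1, which is the kind of ordering by greater or lesser coherence that the abstract describes.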

Journal ArticleDOI
Martin Bunzl
01 Feb 1996-Synthese
TL;DR: It is argued that (at least many) philosophical thought experiments are unreliable, that this unreliability must be understood relative to the goal of thought experiments as knowledge-producing, and that knowledge production is a goal only under quite limited circumstances.
Abstract: In this paper I argue that (at least many) philosophical thought experiments are unreliable. But I argue that this notion of unreliability has to be understood relative to the goal of thought experiments as knowledge producing. And relative to that goal many thought experiments in science are just as unreliable. But in fact thought experiments in science play a varied role and I will suggest that knowledge production is a goal only under quite limited circumstances. I defend the view that these circumstances can (sometimes) arise in philosophy as well.

Journal ArticleDOI
01 Apr 1996-Synthese
TL;DR: A review of D. Bohm and B. J. Hiley's "The Undivided Universe: An Ontological Interpretation of Quantum Theory" (1993) and of J. S. Bell's "Speakable and Unspeakable in Quantum Mechanics" (1987), which defend opposing positions concerning hidden variables and the classical interpretation of quantum mechanics.
Abstract: A review of D. Bohm and B. J. Hiley's book "The Undivided Universe: An Ontological Interpretation of Quantum Theory" (1993), on the one hand, and of J. S. Bell's "Speakable and Unspeakable in Quantum Mechanics" (1987), on the other, which defend opposing positions concerning hidden variables and the classical interpretation of quantum mechanics.

Journal ArticleDOI
01 Jan 1996-Synthese
TL;DR: Descartes's third primary notion is examined, and it is argued that the consequences of the distinctions Descartes is making with regard to our knowledge of the human mind and nature are rather different from those that have been attributed to Descartes due to the influential Rylean picture of Cartesian mind-body dualism.
Abstract: This paper examines Descartes's third primary notion and the distinction between different kinds of knowledge based on different and mutually irreducible primary notions. It discusses the application of the notions of clearness and distinctness to the domain of knowledge based on that of mind-body union. It argues that the consequences of the distinctions Descartes is making with regard to our knowledge of the human mind and nature are rather different from those that have been attributed to Descartes due to the influential Rylean picture of Cartesian mind-body dualism.

Journal ArticleDOI
01 Jul 1996-Synthese
TL;DR: Three issues surrounding “Humphreys's paradox” and the interpretation of conditional propensities are distinguished and examined, and it is argued that the dispositional character of the propensity interpretation provides a consistent and useful interpretation of the probability calculus.
Abstract: The aim of this paper is to distinguish between, and examine, three issues surrounding “Humphreys's paradox” and interpretation of conditional propensities. The first issue involves the controversy over the interpretation of inverse conditional propensities — conditional propensities in which the conditioned event occurs before the conditioning event. The second issue is the consistency of the dispositional nature of the propensity interpretation and the inversion theorems of the probability calculus, where an inversion theorem is any theorem of probability that makes explicit (or implicit) appeal to a conditional probability and its corresponding inverse conditional probability. The third issue concerns the relationship between the notion of stochastic independence which is supported by the propensity interpretation, and various notions of causal independence. In examining each of these issues, it is argued that the dispositional character of the propensity interpretation provides a consistent and useful interpretation of the probability calculus.
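The inversion theorems at issue include, paradigmatically, Bayes' theorem relating a conditional probability to its inverse (generic notation):

\[
P(A \mid B) \;=\; \frac{P(B \mid A)\,P(A)}{P(B)}.
\]

Read dispositionally, the left-hand side appears to require a propensity for an earlier event A conditional on a later event B, and it is this apparent demand for "backwards" propensities that generates Humphreys's paradox.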

Journal ArticleDOI
01 Aug 1996-Synthese
TL;DR: It is concluded that the notion of ‘emerging from imaginary time’ is incoherent and the whole class of cosmological models appealing to imaginary time is thereby refuted.
Abstract: Recent models in quantum cosmology make use of the concept of imaginary time. These models all conjecture a join between regions of imaginary time and regions of real time. We examine the model of James Hartle and Stephen Hawking to argue that the various ‘no-boundary’ attempts to interpret the transition from imaginary to real time in a logically consistent and physically significant way all fail. We believe this conclusion also applies to ‘quantum tunneling’ models, such as that proposed by Alexander Vilenkin. We conclude, therefore, that the notion of ‘emerging from imaginary time’ is incoherent. A consequence of this conclusion seems to be that the whole class of cosmological models appealing to imaginary time is thereby refuted.

Journal ArticleDOI
01 Mar 1996-Synthese
TL;DR: Comparing Peirce's theory of signs with Goodman's conception of resemblance, the author assesses the importance of the Peircean theory of iconicity for the philosophy of language and mind.
Abstract: A study of the function of resemblance in the phenomenon of linguistic and mental representation. Comparing Peirce's theory of signs with Goodman's conception of resemblance, the author assesses the importance of the Peircean theory of iconicity for the philosophy of language and mind.

Journal ArticleDOI
01 Aug 1996-Synthese
TL;DR: Kim maintains that “a physicalist has only two genuine options, eliminativism and reductionism”, but physicalists can reject both by using the Strict Implication thesis (SI).
Abstract: Kim maintains that “a physicalist has only two genuine options, eliminativism and reductionism”. But physicalists can reject both by using the Strict Implication thesis (SI). Discussing his arguments will help to show what useful work SI can do.

Journal ArticleDOI
01 Nov 1996-Synthese
TL;DR: The problem of pursuit and the problem of diversity can be solved by taking into account the cognitive risk involved in theory choice; this solution is compared to other proposals.
Abstract: How can it be rational to work on a new theory that does not yet meet the standards for good or acceptable theories? If diversity of approaches is a condition for scientific progress, how can a scientific community achieve such progress when each member does what it is rational to do, namely work on the best theory? These two methodological problems, the problem of pursuit and the problem of diversity, can be solved by taking into account the cognitive risk that is involved in theory choice. I compare this solution to other proposals, in particular T. S. Kuhn's and P. Kitcher's view that the two problems demonstrate the epistemic significance of the scientific community.

Journal ArticleDOI
01 Nov 1996-Synthese
TL;DR: It is argued that both the belief state and its input should be represented as epistemic entrenchment (EE) relations and a belief revision operation is constructed that updates a given EE relation to a new one in light of an evidential EE relation.
Abstract: In this paper, it is argued that both the belief state and its input should be represented as epistemic entrenchment (EE) relations. A belief revision operation is constructed that updates a given EE relation to a new one in light of an evidential EE relation, and an axiomatic characterization of this operation is given. Unlike most belief revision operations, the one developed here can handle both “multiple belief revision” and “iterated belief revision”.
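For reference, an epistemic entrenchment relation ≤ over sentences (relative to a belief set K) is standardly required, following Gardenfors and Makinson, to satisfy:

\[
\begin{aligned}
&\text{(EE1)}\ \ \text{if } A \le B \text{ and } B \le C, \text{ then } A \le C &&\text{(transitivity)}\\
&\text{(EE2)}\ \ \text{if } A \vdash B, \text{ then } A \le B &&\text{(dominance)}\\
&\text{(EE3)}\ \ A \le A \wedge B \ \text{ or } \ B \le A \wedge B &&\text{(conjunctiveness)}\\
&\text{(EE4)}\ \ \text{if } K \text{ is consistent: } A \notin K \ \text{ iff } \ A \le B \text{ for all } B &&\text{(minimality)}\\
&\text{(EE5)}\ \ \text{if } B \le A \text{ for all } B, \text{ then } \vdash A &&\text{(maximality)}
\end{aligned}
\]

The revision operation described in the abstract takes two such relations, the current one and an evidential one, and returns an updated entrenchment relation, which is what allows it to handle iterated revision.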

Journal ArticleDOI
01 May 1996-Synthese
TL;DR: Through discussing McTaggart's positive conception of time as well as his negative attack on its reality, this work hopes to clarify the dispute between those who believe in the existence of the transitory temporal properties of pastness, presentness and futurity, and those who deny their existence.
Abstract: Since McTaggart first proposed his paradox asserting the unreality of time, numerous philosophers have attempted to defend the tensed theory of time against it. Certainly, one of the most highly developed and original is that put forth by Quentin Smith. Through discussing McTaggart's positive conception of time as well as his negative attack on its reality, I hope to clarify the dispute between those who believe in the existence of the transitory temporal properties of pastness, presentness and futurity, and those who deny their existence. We shall see that the debate centers around the ontological status of succession and the B-relations of earlier and later. I shall argue that Smith's tensed theory fails because he cannot account for the sense in which events have their tensed properties successively, and he cannot account for the direction of time.

Journal ArticleDOI
01 Jan 1996-Synthese
TL;DR: This paper critically reviews the most recent proposals to specify the nature of interdiscourse relations, focusing on the concept of supervenience, and argues that the scientific categories referred to in interdiscourse relations are, ultimately, dependent on common sense categories and common sense normative criteria.
Abstract: Amidst the progress being made in the various (sub-)disciplines of the behavioural and brain sciences a somewhat neglected subject is the problem of how everything fits into one world and, derivatively, how the relation between different levels of discourse should be understood and to what extent different levels, domains, approaches, or disciplines are autonomous or dependent. In this paper I critically review the most recent proposals to specify the nature of interdiscourse relations, focusing on the concept of supervenience. Ideally supervenience is a relation between different discourses which has all the advantages of reduction, but without its disadvantages. I apply the more abstract considerations to two concrete cases: schizophrenia and colour. Usually an interlevel or interdiscourse relation is seen as asymmetrical: the overlaying discourse depends on the underlying discourse (and not vice versa), where the out- or un-spoken assumption is that the ultimate underlying discourse is physical. Instead I argue that scientific categories referred to in interdiscourse relations are, ultimately, dependent on common sense categories and common sense normative criteria. It is the manifest categories and common sense ideas about what is reasonable and what is right that determine the relevant categorisations at the deeper, underlying levels. I suggest that the implications of this are not merely methodological or epistemological.

Journal ArticleDOI
01 May 1996-Synthese
TL;DR: An analysis of the tu quoque argument developed by defenders of the B-theory of time against proponents of the new B-theory, in the context of theories of direct reference.
Abstract: An analysis of the tu quoque argument developed by defenders of the B-theory of time against proponents of the new B-theory. Examining the semantics of tense and indexicals in the context of theories of direct reference, the author shows that the B-theory rejoins the A-theory in postulating both a real time and a real subject (the I-now).

Journal ArticleDOI
01 Apr 1996-Synthese
TL;DR: This essay employs constructions of Takeuti's boolean-valued analysis to provide a metamathematical interpretation of ideas sometimes considered disparate, ‘heuristic’, or simply ill-defined: the ‘collapse of the wave function’; Everett's ‘many worlds’ construal of quantum measurement; and a ‘natural’ product space of contextual (nonlocal) ‘hidden variables’.
Abstract: The basic purpose of this essay, the first of an intended pair, is to interpret standard von Neumann quantum theory in a framework of iterated measure algebraic ‘truth’ for mathematical (and thus mathematical-physical) assertions — a framework, that is, in which the ‘truth-values’ for such assertions are elements of iterated boolean measure-algebras \(\mathbb{A}\) (cf. Sections 2.2.9, 5.2.1–5.2.6 and 5.3 below).

Journal ArticleDOI
Rom Harré
01 Aug 1996-Synthese
TL;DR: It is shown that by the use of these forms, a sequence of inductive arguments could be constructed, given suitable case histories to serve as evidence, and that the best inductive argument for the most daring realist claim is the weakest when compared with similarly structured arguments for less daring claims.
Abstract: In recent years there have been several attempts to construct inductive arguments for some version of scientific realism. Neither the characteristics of what would count as inductive evidence nor the conclusion to be inferred have been specified in ways that escape sceptical criticism. By introducing the pragmatic criterion of manipulative efficacy for a good theory and by sharpening the specification of the necessary inductive principle, the viability of a mutually supporting pair of argument forms are defended. It is shown that by the use of these forms, taken together, a sequence of inductive arguments could be constructed, given suitable cases histories to serve as evidence. It also shown that the best inductive argument for the most daring realist claim is the weakest when compared with similarly structured arguments for less daring claims.