
Showing papers in "Synthese in 2006"


Journal ArticleDOI
03 Nov 2006-Synthese
TL;DR: The Hodgkin and Huxley model of the action potential is used to illustrate the ways that models can be useful without explaining and what is required of an adequate mechanistic model.
Abstract: Not all models are explanatory. Some models are data summaries. Some models sketch explanations but leave crucial details unspecified or hidden behind filler terms. Some models are used to conjecture a how-possibly explanation without regard to whether it is a how-actually explanation. I use the Hodgkin and Huxley model of the action potential to illustrate these ways that models can be useful without explaining. I then use the subsequent development of the explanation of the action potential to show what is required of an adequate mechanistic model. Mechanistic models are explanatory.
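For reference, the Hodgkin and Huxley model in its standard textbook form (a summary of the usual presentation, not a quotation from the paper):

    C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na}\, m^3 h\, (V - E_{Na}) - \bar{g}_{K}\, n^4 (V - E_{K}) - \bar{g}_{L} (V - E_{L})

    \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\, x, \qquad x \in \{m, h, n\}

The rate functions \alpha_x and \beta_x were curve-fitted to voltage-clamp data, which is why the model can predict the action potential while leaving the underlying mechanism unspecified.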

409 citations


Journal ArticleDOI
Jaegwon Kim
09 Aug 2006-Synthese
TL;DR: The concept of reduction, which lies at the heart of the emergence idea, is explicated, and it is shown how the thesis that emergent properties are irreducible gives a unified account of emergence.
Abstract: This paper explores the fundamental ideas that have motivated the idea of emergence and the movement of emergentism. The concept of reduction, which lies at the heart of the emergence idea, is explicated, and it is shown how the thesis that emergent properties are irreducible gives a unified account of emergence. The paper goes on to discuss two fundamental unresolved issues for emergentism. The first is that of giving a "positive" characterization of emergence; the second is to give a coherent explanation of how "downward" causation, a central component of emergentism, is able to avoid the problem of overdetermination.

262 citations


Journal ArticleDOI
05 Sep 2006-Synthese
TL;DR: This paper introduces a new aggregation procedure inspired by an operator defined in artificial intelligence in order to merge belief bases and shows that paradoxical outcomes arise only when inconsistent collective judgments are not ruled out from the set of possible solutions.
Abstract: The aggregation of individual judgments on logically interconnected propositions into a collective decision on the same propositions is called judgment aggregation. Literature in social choice and political theory has claimed that judgment aggregation raises serious concerns. For example, consider a set of premises and a conclusion where the latter is logically equivalent to the former. When majority voting is applied to some propositions (the premises) it may give a different outcome than majority voting applied to another set of propositions (the conclusion). This problem is known as the discursive dilemma (or paradox). The discursive dilemma is a serious problem since it is not clear whether a collective outcome exists in these cases, and if it does, what it is like. Moreover, the two suggested escape-routes from the paradox—the so-called premise-based procedure and the conclusion-based procedure—are not, as I will show, satisfactory methods for group decision-making. In this paper I introduce a new aggregation procedure inspired by an operator defined in artificial intelligence in order to merge belief bases. The result is that we do not need to worry about paradoxical outcomes, since these arise only when inconsistent collective judgments are not ruled out from the set of possible solutions.
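A minimal worked instance of the dilemma, with hypothetical judges and Python used purely for illustration (the example is mine, not the paper's):

    # Toy discursive dilemma: conclusion r is true exactly when premises p and q both are.
    profiles = [
        {"p": True,  "q": True},   # judge 1 accepts both premises
        {"p": True,  "q": False},  # judge 2 accepts only p
        {"p": False, "q": True},   # judge 3 accepts only q
    ]

    def majority(votes):
        # True iff a strict majority of the votes is True
        return sum(votes) > len(votes) / 2

    # Premise-based procedure: take majorities on p and q, then derive the conclusion.
    pbp = majority([v["p"] for v in profiles]) and majority([v["q"] for v in profiles])
    # Conclusion-based procedure: each judge derives the conclusion, then take the majority.
    cbp = majority([v["p"] and v["q"] for v in profiles])

    print(pbp)  # True: each premise passes 2-1
    print(cbp)  # False: only judge 1 accepts the conclusion

The two procedures contradict each other on the same profile of individual judgments, which is exactly the kind of paradoxical outcome the proposed merging operator is designed to rule out.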

190 citations


Journal ArticleDOI
13 Sep 2006-Synthese
TL;DR: This paper proposes a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton, and articulates an interaction protocol, called PARMA, for dialogues over proposed actions based on this theory.
Abstract: In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi-agent system.
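The Walton-style justification scheme that the protocol builds on runs roughly as follows (paraphrased from the authors' related work on this scheme, so the wording is approximate):

    In the current circumstances R,
    we should perform action A,
    which will result in new circumstances S,
    which will realise goal G,
    which will promote value V.

The critical questions then attack the individual links: whether we really are in R, whether A really brings about S, whether G really promotes V, and so on.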

173 citations


Journal ArticleDOI
01 May 2006-Synthese
TL;DR: In this article, the Condorcet jury theorem was used to show that the pbp is universally superior to the cbp if the objective is to reach truth for the right reasons.
Abstract: This paper addresses a problem for theories of epistemic democracy. In a decision on a complex issue which can be decomposed into several parts, a collective can use different voting procedures: Either its members vote on each sub-question and the answers that gain majority support are used as premises for the conclusion on the main issue (premise-based procedure, pbp), or the vote is conducted on the main issue itself (conclusion-based procedure, cbp). The two procedures can lead to different results. We investigate which of these procedures is better as a truth-tracker, assuming that there exists a true answer to be reached. On the basis of the Condorcet jury theorem, we show that the pbp is universally superior if the objective is to reach truth for the right reasons. If one instead is after truth for whatever reasons, right or wrong, there will be cases in which the cbp is more reliable, even though, for the most part, the pbp is still to be preferred.
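A quick Monte Carlo sketch of the comparison (my own illustration with made-up competence values; the paper's treatment is analytical, based on the jury theorem):

    # n voters judge two premises; each judgment is correct with probability p.
    # The true answer to the main issue is the conjunction of the (true) premises.
    import random

    def trial(n=11, p=0.6):
        correct1 = [random.random() < p for _ in range(n)]  # votes on premise 1
        correct2 = [random.random() < p for _ in range(n)]  # votes on premise 2
        maj = lambda votes: sum(votes) > n / 2
        pbp = maj(correct1) and maj(correct2)                      # majority premise by premise
        cbp = maj([a and b for a, b in zip(correct1, correct2)])   # majority on the conclusion
        return pbp, cbp

    runs = 100_000
    outcomes = [trial() for _ in range(runs)]
    print("pbp reaches the truth:", sum(o[0] for o in outcomes) / runs)
    print("cbp reaches the truth:", sum(o[1] for o in outcomes) / runs)

With these parameters the pbp tracks the truth far more often: as n grows, each premise majority tends to correctness, while a voter's competence on the conjunction is only p^2 = 0.36, below one half, so the cbp majority tends to the wrong answer.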

138 citations


Journal ArticleDOI
25 Aug 2006-Synthese
TL;DR: Here it is illustrated that a “ruthless reductionism” is alive and thriving in “molecular and cellular cognition”—a field of research within cellular and molecular neuroscience, the current mainstream of the discipline.
Abstract: As opposed to the dismissive attitude toward reductionism that is popular in current philosophy of mind, a "ruthless reductionism" is alive and thriving in "molecular and cellular cognition"—a field of research within cellular and molecular neuroscience, the current mainstream of the discipline. Basic experimental practices and emerging results from this field imply that two common assertions by philosophers and cognitive scientists are false: (1) that we do not know much about how the brain works, and (2) that lower-level neuroscience cannot explain cognition and complex behavior directly. These experimental practices involve intervening directly with molecular components of sub-cellular and gene expression pathways in neurons and then measuring specific behaviors. These behaviors are tracked using tests that are widely accepted by experimental psychologists to study the psychological phenomenon at issue (e.g., memory, attention, and perception). Here I illustrate these practices and their importance for explanation and reduction in current mainstream neuroscience by describing recent work on social recognition memory in mammals.

135 citations


Journal ArticleDOI
01 Jul 2006-Synthese
TL;DR: The analysis covers renormalisation and infinities, inequivalent representations, and the concept of localised states; the conclusion is that Lagrangian QFT is a perfectly respectable physical theory, albeit somewhat different in certain respects from most of those studied in foundational work.
Abstract: I analyse the conceptual and mathematical foundations of Lagrangian quantum field theory (QFT), that is, the 'naive' QFT used in mainstream physics, as opposed to algebraic quantum field theory. The objective is to see whether Lagrangian QFT has a sufficiently firm conceptual and mathematical basis to be a legitimate object of foundational study, or whether it is too ill-defined. The analysis covers renormalisation and infinities, inequivalent representations, and the concept of localised states; the conclusion is that Lagrangian QFT (at least as described here) is a perfectly respectable physical theory, albeit somewhat different in certain respects from most of those studied in foundational work.

113 citations


Journal ArticleDOI
08 Aug 2006-Synthese
TL;DR: It is shown how traditional metaphysical approaches fail to engage how science is done, and the methods used support a pragmatic and non-eliminativist realism.
Abstract: Methodological reductionists practice ‘wannabe reductionism’. They claim that one should pursue reductionism, but never propose how. I integrate two strains in prior work to do so. Three kinds of activities are pursued as “reductionist”. “Successional reduction” and inter-level mechanistic explanation are legitimate and powerful strategies. Eliminativism is generally ill-conceived. Specific problem-solving heuristics for constructing inter-level mechanistic explanations show why and when they can provide powerful and fruitful tools and insights, but sometimes lead to erroneous results. I show how traditional metaphysical approaches fail to engage how science is done. The methods used do so, and support a pragmatic and non-eliminativist realism.

101 citations


Journal ArticleDOI
Dag Prawitz
01 Feb 2006-Synthese
TL;DR: According to a main idea of Gentzen, the meanings of the logical constants are reflected by the introduction rules in his system of natural deduction, which is understood as saying roughly that a closed argument ending with an introduction is valid provided that its immediate subarguments are valid, and that other closed arguments are justified to the extent that they can be brought to introduction form.
Abstract: According to a main idea of Gentzen, the meanings of the logical constants are reflected by the introduction rules in his system of natural deduction. This idea is here understood as saying roughly that a closed argument ending with an introduction is valid provided that its immediate subarguments are valid and that other closed arguments are justified to the extent that they can be brought to introduction form. One main part of the paper is devoted to the exact development of this notion. Another main part of the paper is concerned with a modification of this notion as it occurs in Michael Dummett’s book The Logical Basis of Metaphysics. The two notions are compared and there is a discussion of how they fare as a foundation for a theory of meaning. It is noted that Dummett’s notion has a simpler structure, but it is argued that it is less appropriate for the foundation of a theory of meaning, because the possession of a valid argument for a sentence in Dummett’s sense is not enough to be warranted to assert the sentence.
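As a reminder of what an introduction rule is, conjunction introduction in Gentzen's natural deduction (standard notation, not specific to the paper):

    \frac{A \qquad B}{A \wedge B} \; (\wedge\text{-I})

On the notion of validity at issue, a closed argument ending with this step is valid provided its two immediate subarguments, for A and for B, are valid.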

100 citations


Journal ArticleDOI
01 Nov 2006-Synthese
TL;DR: Analysis of information systems and 'philosophical puzzles' reveals a growing number of dynamic phenomena that can be described or explained by unsuccessful updates, and investigates the syntactic characterization of the successful formulas.
Abstract: In an information state where various agents have both factual knowledge and knowledge about each other, announcements can be made that change the state of information. Such informative announcements can have the curious property that they become false because they are announced. The most typical example of that is 'fact p is true and you don't know that', after which you know that p, which entails the negation of the announcement formula. The announcement of such a formula in a given information state is called an unsuccessful update. A successful formula is a formula that always becomes common knowledge after being announced. Analysis of information systems and 'philosophical puzzles' reveals a growing number of dynamic phenomena that can be described or explained by unsuccessful updates. This increases our understanding of such philosophical problems. We also investigate the syntactic characterization of the successful formulas.
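In standard public announcement logic notation (the usual formalism for such updates; the notation here is mine), the Moore-style example reads:

    [!(p \wedge \neg K_a p)]\, K_a p, \quad\text{and hence}\quad [!(p \wedge \neg K_a p)]\, \neg(p \wedge \neg K_a p)

After the truthful announcement "p is true and you don't know it", agent a comes to know p, so the announced formula itself has become false: an unsuccessful update.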

99 citations


Journal ArticleDOI
01 Sep 2006-Synthese
TL;DR: It is shown that the notion of a truth-maker is a close relative of a concept employed by van Inwagen in the formulation of his Consequence Argument, and an argument is developed to the effect that the objects usually regarded as truth-makers are not apt to play this role.
Abstract: The article is primarily concerned with the notion of a truth-maker. An explication for this notion is offered, which relates it to other notions of making something such-and-such. In particular, it is shown that the notion of a truth-maker is a close relative of a concept employed by van Inwagen in the formulation of his Consequence Argument. This circumstance helps in understanding the general mechanisms of the concepts involved. Thus, a schematic explication of a whole battery of related notions is offered. It is based on an explanatory notion, introduced by the sentential connector “because”, whose function is examined in some detail. Finally, on the basis of the explication proposed, an argument is developed to the effect that the objects usually regarded as truth-makers are not apt to play this role.

Journal ArticleDOI
01 Feb 2006-Synthese
TL;DR: Various notions of proof-theoretic validity are investigated in detail and particular emphasis is placed on the relationship between semantic validity concepts and validity concepts used in normalization theory.
Abstract: The standard approach to what I call “proof-theoretic semantics”, which is mainly due to Dummett and Prawitz, attempts to give a semantics of proofs by defining what counts as a valid proof. After a discussion of the general aims of proof-theoretic semantics, this paper investigates in detail various notions of proof-theoretic validity and offers certain improvements of the definitions given by Prawitz. Particular emphasis is placed on the relationship between semantic validity concepts and validity concepts used in normalization theory. It is argued that these two sorts of concepts must be kept strictly apart.

Journal ArticleDOI
12 Sep 2006-Synthese
TL;DR: This work examines in detail three classic reasoning fallacies, that is, supposedly "incorrect" forms of argument: the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument.
Abstract: We examine in detail three classic reasoning fallacies, that is, supposedly "incorrect" forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to match structurally arguments which are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of those examples with which they are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
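For the argument from ignorance, the Bayesian rendering is roughly as follows (my gloss in generic notation): let T be the hypothesis and e the positive evidence a test would yield if T were true. A negative result \neg e supports \neg T by Bayes' theorem:

    P(\neg T \mid \neg e) = \frac{P(\neg e \mid \neg T)\, P(\neg T)}{P(\neg e \mid \neg T)\, P(\neg T) + P(\neg e \mid T)\, P(T)}

When the test is sensitive, P(\neg e \mid T) is small and the posterior for \neg T is high, so "we found no evidence that T, therefore not-T" is a strong argument; with an insensitive test the same form is weak. The legitimacy of the argument thus turns on content, not form.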

Journal ArticleDOI
01 Jun 2006-Synthese
TL;DR: So-called ‘reified temporal logics’ were introduced by researchers in Artificial Intelligence in the early 1980s, and gave rise to a long-running series of debates concerning the proper way to represent states, events, causation, action, and other notions identified as crucial to the knowledge representation needs of AI.
Abstract: So-called 'reified temporal logics' were introduced by researchers in Artificial Intelligence (AI) in the early 1980s, and gave rise to a long-running series of debates concerning the proper way to represent states, events, causation, action, and other notions identified as crucial to the knowledge representation needs of AI. These debates never resulted in a definitive resolution of the issues under discussion, and indeed continue to produce aftershocks to the present day; none the less, we are now sufficiently far removed in time from their heyday for it to be a worthwhile exercise to stand back and review them as a connected piece of history.

Journal ArticleDOI
01 Apr 2006-Synthese
TL;DR: The main aim of this paper is the explicit articulation of the Ungrounded Argument, an argument against a thesis that might be called universal or global groundedness; namely, that every dispositional property is grounded in some property other than itself.
Abstract: There is an argument that has yet to be made wholly explicit, though it might be one of the most important in contemporary metaphysics. This paper is an attempt to rectify that omission. The argument is of such high importance because it involves a host of central concepts, concerning actuality, modality, groundedness and powers. If Ellis’s (2001) assessment is correct, the whole debate between Humean and anti-Humean metaphysics might rest on the viability of this argument. The argument, which I call the Ungrounded Argument (abbreviated to UA), has in various implicit forms been discussed or defended by Blackburn (1990), Molnar (1999, 2003, ch. 8) and Ellis (2001, 114 and 2002, 74–75). It concerns the alleged possibility of ungrounded dispositional properties or causal powers. It is an argument against a thesis that might be called universal or global groundedness; namely, that every dispositional property is grounded in some property other than itself. In Section 2 I formulate, for the first time, an explicit version of the Ungrounded Argument and present the evidence and reasons for its premises. Along the way, I will clarify some of the key concepts and issues. In Section 3 I consider the likely responses to UA and identify the main basis on which it might be challenged. In Section 4, I try to distil the issue down to its central core and show what must be overcome, and what must be acknowledged, if the argument is to be accepted. The main aim of this paper is the explicit articulation of the argument. Sections 3 and 4 are briefer, therefore, and give only an indication of the lines that may have to be developed for the argument’s ultimate acceptance.

Journal ArticleDOI
01 Mar 2006-Synthese
TL;DR: It is pointed out that an agent’s obligations are often dependent on what the agent knows; indeed, one cannot reasonably be expected to respond to a problem if one is not aware of its existence, so the case for combining Deontic Logic with the Logic of Knowledge is clear.
Abstract: Deontic Logic goes back to Ernst Mally’s 1926 work, Grundgesetze des Sollens: Elemente der Logik des Willens [Mally, E.: 1926, Grundgesetze des Sollens: Elemente der Logik des Willens, Leuschner & Lubensky, Graz], where he presented axioms for the notion ‘p ought to be the case’. Some difficulties were found in Mally’s axioms, and the field has developed considerably since. Logic of Knowledge goes back to Hintikka’s work Knowledge and Belief [Hintikka, J.: 1962, Knowledge and Belief: An Introduction to the Logic of the Two Notions, Cornell University Press] in which he proposed formal logics of knowledge and belief. This field has also developed a great deal and is now the subject of the TARK conferences. However, there has been relatively little work combining the two notions of knowledge (belief) with the notion of obligation. (See, however, [Lomuscio, A. and Sergot, M.: 2003, Studia Logica 75 63–92; Moore, R. C.: 1990, In J. F. Allen, J. Hendler and A. Tate (eds.), Readings in Planning, Morgan Kaufmann Publishers, San Mateo, CA]) In this paper we point out that an agent’s obligations are often dependent on what the agent knows, and indeed one cannot reasonably be expected to respond to a problem if one is not aware of its existence. For instance, a doctor cannot be expected to treat a patient unless she is aware of the fact that he is sick, and this creates a secondary obligation on the patient or someone else to inform the doctor of his situation. In other words, many obligations are situation dependent, and only apply in the presence of the relevant information. Thus a case for combining Deontic Logic with the Logic of Knowledge is clear. We introduce the notion of knowledge-based obligation and offer an S5, history-based Kripke semantics to express this notion, as this semantics enables us to represent how information is transmitted among agents and how knowledge changes over time as a result of communications. We consider both the case of an absolute obligation (although dependent on information) as well as the (defeasible) notion of an obligation which may be over-ridden by more relevant information. For instance a physician who is about to inject a patient with drug d may find out that the patient is allergic to d and that she should use d′ instead. Dealing with the second kind of case requires a resort to non-monotonic reasoning and the notion of justified belief which is stronger than plain belief, but weaker than absolute knowledge in that it can be over-ridden. This notion of justified belief also creates a derived notion of default obligation where an agent has, as far as the agent knows, an obligation to do some action a. A dramatic application of this notion is our analysis of the Kitty Genovese case where, in 1964, a young woman was stabbed to death while 38 neighbours watched from their windows but did nothing. The reason was not indifference, but none of the neighbours had even a default obligation to act, even though, as a group, they did have an obligation to take some action to protect Kitty.
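Schematically (my notation, not the authors'), the core idea is that the obligation is conditioned on knowledge:

    K_d\, \mathit{sick}(p) \to O_d\, \mathit{treat}(p)

The doctor d is obliged to treat patient p only once she knows that p is sick, and this in turn generates the secondary obligation on p or a bystander to bring that knowledge about.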

Journal ArticleDOI
20 Oct 2006-Synthese
TL;DR: This work suggests that neuroscientists invoke the computational outlook to explain regularities that are formulated in terms of the information content of electrical signals, and indicates why computational theories have explanatory force with respect to these regularities.
Abstract: The view that the brain is a sort of computer has functioned as a theoretical guideline both in cognitive science and, more recently, in neuroscience. But since we can view every physical system as a computer, it has been less than clear what this view amounts to. By considering in some detail a seminal study in computational neuroscience, I first suggest that neuroscientists invoke the computational outlook to explain regularities that are formulated in terms of the information content of electrical signals. I then indicate why computational theories have explanatory force with respect to these regularities: in a nutshell, they underscore correspondence relations between formal/mathematical properties of the electrical signals and formal/mathematical properties of the represented objects. I finally link my proposal to the philosophical thesis that content plays an essential role in computational taxonomy.

Journal ArticleDOI
01 Sep 2006-Synthese
TL;DR: It is argued that the credibility of a simulation model comes not only from the credentials supplied to it by the governing theory, but also from the antecedently established credentials of the model building techniques employed by the simulationists.
Abstract: In computer simulations of physical systems, the construction of models is guided, but not determined, by theory. At the same time, simulation models are often constructed precisely because data are sparse. They are meant to replace experiments and observations as sources of data about the world; hence they cannot be evaluated simply by being compared to the world. So what can be the source of credibility for simulation models? I argue that the credibility of a simulation model comes not only from the credentials supplied to it by the governing theory, but also from the antecedently established credentials of the model building techniques employed by the simulationists. In other words, there are certain sorts of model building techniques which are taken, in and of themselves, to be reliable. Some of these model building techniques, moreover, incorporate what are sometimes called “falsifications.” These are contrary-to-fact principles that are included in a simulation model and whose inclusion is taken to increase the reliability of the results. The example of a falsification that I consider, called artificial viscosity, is in widespread use in computational fluid dynamics. Artificial viscosity, I argue, is a principle that is successfully and reliably used across a wide domain of fluid dynamical applications, but it does not offer even an approximately “realistic” or true account of fluids. Artificial viscosity, therefore, is a counter-example to the principle that success implies truth – a principle at the foundation of scientific realism. It is an example of reliability without truth.
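One standard form of the technique, the von Neumann-Richtmyer artificial viscosity (a textbook presentation, offered here only to make the example concrete):

    q = \begin{cases} c_q\, \rho\, (\Delta x)^2 \left(\frac{\partial u}{\partial x}\right)^{2} & \text{if } \frac{\partial u}{\partial x} < 0,\\ 0 & \text{otherwise,} \end{cases}

where the term q is simply added to the physical pressure in the momentum and energy equations. It smears a shock over a few grid cells so the difference scheme remains stable, yet nothing in a real fluid corresponds to it; hence reliability without truth.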

Journal ArticleDOI
25 Aug 2006-Synthese
TL;DR: This paper characterizes scientific explanation as involving field elements (FE) and a preferred causal model system (PCMS), and locates an “explanation” in a typical scientific research article.
Abstract: In this paper, I propose two theses, and then examine what the consequences of those theses are for discussions of reduction and emergence. The first thesis is that what have traditionally been seen as robust reductions of one theory or one branch of science by another, more fundamental one are largely a myth. Although there are such reductions in the physical sciences, they are quite rare, and depend on special requirements. In the biological sciences, these prima facie sweeping reductions fade away, like the body of the famous Cheshire cat, leaving only a smile. ... The second thesis is that the “smiles” are fragmentary patchy explanations, and though patchy and fragmentary, they are very important, potentially Nobel-prize winning advances. To get the best grasp of these “smiles,” I want to argue that, we need to return to the roots of discussions and analyses of scientific explanation more generally, and not focus mainly on reduction models, though three conditions based on earlier reduction models are retained in the present analysis. I briefly review the scientific explanation literature as it relates to reduction, and then offer my account of explanation. The account of scientific explanation I present is one I have discussed before, but in this paper I try to simplify it, and characterize it as involving field elements (FE) and a preferred causal model system (PCMS). In an important sense, this FE and PCMS analysis locates an “explanation” in a typical scientific research article. This FE and PCMS account is illustrated using a recent set of neurogenetic papers on two kinds of worm foraging behaviors: solitary and social feeding. One of the preferred model systems from a 2002 Nature article in this set is used to exemplify the FE and PCMS analysis, which is shown to have both reductive and nonreductive aspects. The paper closes with a brief discussion of how this FE and PCMS approach differs from and is congruent with Bickle’s “ruthless reductionism” and the recently revived mechanistic philosophy of science of Machamer, Darden, and Craver.

Journal ArticleDOI
08 Aug 2006-Synthese
TL;DR: The concept of emergence is widely used in both the philosophy of mind and in cognitive science, and it is often used to refer to phenomena not explicitly programmed.
Abstract: The concept of emergence is widely used in both the philosophy of mind and in cognitive science. In the philosophy of mind it serves to refer to seemingly irreducible phenomena, in cognitive science it is often used to refer to phenomena not explicitly programmed. There is no unique concept of emergence available that serves both purposes.

Journal ArticleDOI
01 Mar 2006-Synthese
TL;DR: In order to be able to express common and interesting properties of action in general and of the interaction between action and knowledge in particular, a generalization of the coalition modalities of ATL is proposed.
Abstract: Alternating-time temporal logic (ATL) is a branching-time temporal logic in which statements about what coalitions of agents can achieve by strategic cooperation can be expressed. Alternating-time temporal epistemic logic (ATEL) extends ATL by adding knowledge modalities, with the usual possible worlds interpretation. This paper investigates how properties of agents’ actions can be expressed in ATL in general, and how properties of the interaction between action and knowledge can be expressed in ATEL in particular. One commonly discussed property is that an agent should know about all available actions, i.e., that the same actions should be available in indiscernible states. Van der Hoek and Wooldridge suggest a syntactic expression of this semantic property. This paper shows that this correspondence in fact does not hold. Furthermore, it is shown that the semantic property is not expressible in ATEL at all. In order to be able to express common and interesting properties of action in general and of the interaction between action and knowledge in particular, a generalization of the coalition modalities of ATL is proposed. The resulting logics, ATL-A and ATEL-A, have increased expressiveness without losing ATL’s and ATEL’s tractability of model checking.
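For orientation, the coalition modality and the semantic property at issue, in generic notation (my paraphrase): \langle\langle A \rangle\rangle \varphi says that coalition A has a joint strategy guaranteeing the temporal formula \varphi, as in

    \langle\langle A \rangle\rangle \Box \varphi \qquad \langle\langle A \rangle\rangle \bigcirc \varphi

and the property that agents know their available actions amounts to

    s \sim_a s' \;\Rightarrow\; \mathit{act}_a(s) = \mathit{act}_a(s')

i.e., the same actions are available to agent a in any two states a cannot distinguish. The paper's negative result is that this semantic condition corresponds to no ATEL formula.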

Journal ArticleDOI
01 Feb 2006-Synthese
TL;DR: The purpose here is to motivate a simple example of a model of deductions performed within an abstract context, where we do not have any particular logical constant, but something underlying all logical constants, and to motivate the notion of adjointness.
Abstract: In standard model theory, deductions are not the things one models. But in general proof theory, in particular in categorial proof theory, one finds models of deductions, and the purpose here is to motivate a simple example of such models. This will be a model of deductions performed within an abstract context, where we do not have any particular logical constant, but something underlying all logical constants. In this context, deductions are represented by arrows in categories involved in a general adjoint situation. To motivate the notion of adjointness, one of the central notions of category theory, and of mathematics in general, it is first considered how some features of it occur in set-theoretical axioms and in the axioms of the lambda calculus. Next, it is explained how this notion arises in the context of deduction, where it characterizes logical constants. It is shown also how the categorial point of view suggests an analysis of propositional identity. The problem of propositional identity, i.e., the problem of identity of meaning for propositions, is no doubt a philosophical problem, but the spirit of the analysis proposed here will be rather mathematical. Finally, it is considered whether models of deductions can pretend to be a semantics. This question, which as so many questions having to do with meaning brings us to that wall that blocked linguists and philosophers during the whole of the twentieth century, is merely posed. At the very end, there is the example of a geometrical model of adjunction. Without pretending that it is a semantics, it is hoped that this model may prove illuminating and useful.
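The shape of an adjunction, together with its most relevant logical instance (standard material, not specific to the paper): a functor F is left adjoint to a functor G when there is a natural bijection

    \mathrm{Hom}(F A, B) \cong \mathrm{Hom}(A, G B)

The deduction theorem fits this pattern exactly, with deductions as the arrows:

    \mathrm{Hom}(C \wedge A, B) \cong \mathrm{Hom}(C, A \to B)

so deducing B from C together with A corresponds one-to-one to deducing A \to B from C alone; conjunction and implication form an adjoint pair.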

Journal ArticleDOI
01 Apr 2006-Synthese
TL;DR: Analytic metaphysics is in resurgence; there is renewed and vigorous interest in topics such as time, causation, persistence, parthood and possible worlds.
Abstract: Analytic metaphysics is in resurgence; there is renewed and vigorous interest in topics such as time, causation, persistence, parthood and possible worlds. We who share this interest often pay lip-service to the idea that metaphysics should be informed by modern science; some take this duty very seriously.

Journal ArticleDOI
01 Nov 2006-Synthese
TL;DR: It is argued that at least some of these criteria depend on the methods of inference the proofs employ, and that standard models of formal deduction are not well-equipped to support such evaluations.
Abstract: On a traditional view, the primary role of a mathematical proof is to warrant the truth of the resulting theorem. This view fails to explain why it is very often the case that a new proof of a theorem is deemed important. Three case studies from elementary arithmetic show, informally, that there are many criteria by which ordinary proofs are valued. I argue that at least some of these criteria depend on the methods of inference the proofs employ, and that standard models of formal deduction are not well-equipped to support such evaluations. I discuss a model of proof that is used in the automated deduction community, and show that this model does better in that respect.

Journal ArticleDOI
01 Jun 2006-Synthese
TL;DR: This paper introduces hybrid logic from a contemporary perspective, and then examines the role it played in Prior’s work.
Abstract: Contemporary hybrid logic is based on the idea of using formulas as terms, an idea invented and explored by Arthur Prior in the mid 1960s. But Prior's own work on hybrid logic remains largely undiscussed. This is unfortunate, since hybridisation played a role that was both central to and problematic for his philosophical views on tense. In this paper I introduce hybrid logic from a contemporary perspective, and then examine the role it played in Prior's work.
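The basic hybrid devices, in modern notation (a standard presentation rather than Prior's own): nominals i, j, ... are formulas true at exactly one instant, and the satisfaction operator @_i evaluates a formula at the instant named by i:

    @_i \varphi \quad (\varphi \text{ holds at the time named } i) \qquad i \wedge F j \quad (\text{we are at } i \text{ and } j \text{ lies in the future})

Treating such names as formulas is precisely the 'formulas as terms' idea the paper traces back to Prior.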

Journal ArticleDOI
Kit Fine
01 Jun 2006-Synthese
TL;DR: I argue for a version of tense-logical realism that privileges tensed facts without privileging any particular temporal standpoint from which they obtain.
Abstract: I argue for a version of tense-logical realism that privileges tensed facts without privileging any particular temporal standpoint from which they obtain.

Journal ArticleDOI
01 Jan 2006-Synthese
TL;DR: It is argued that both challenges fail but, at the same time, that they help to clarify what is at stake in the seemingly never-ending dispute over the nature and status of general covariance.
Abstract: It is generally acknowledged that the requirement that the laws of a spacetime theory be covariant under a general coordinate transformation is a restriction on the form but not the content of the theory. The prevalent view in the physics community holds that the substantive version of general covariance – exhibited, for example, by Einstein’s general theory of relativity – consists in the requirement that diffeomorphism invariance is a gauge symmetry of the theory. This conception of general covariance is explained and confronted by two challenges. One challenge claims, in effect, that substantive general covariance is not deserving of the name since, just as it is possible to rewrite any spacetime theory so that it satisfies formal general covariance, so it is also possible to rewrite the theory so that it satisfies the proffered version of substantive general covariance. The other challenge claims that the proffered version of substantive general covariance is not strong enough to guarantee the intended meaning of general covariance. Both challenges are discussed in terms of concrete examples. It is argued that both challenges fail but, at the same time, that they help to clarify what is at stake in the seemingly never-ending dispute over the nature and status of general covariance.

Journal ArticleDOI
20 Oct 2006-Synthese
TL;DR: A number of recent attempts to bridge Husserlian phenomenology of time consciousness and contemporary tools and results from cognitive science or computational neuroscience are described and critiqued.
Abstract: A number of recent attempts to bridge Husserlian phenomenology of time consciousness and contemporary tools and results from cognitive science or computational neuroscience are described and critiqued. An alternate proposal is outlined that lacks the weaknesses of existing accounts.

Journal ArticleDOI
20 Oct 2006-Synthese
TL;DR: Some philosophical questions pertaining to computational explanation are raised and some promising answers that are being developed by a number of authors are outlined.
Abstract: According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.

Journal ArticleDOI
Sungho Choi
01 Jan 2006-Synthese
TL;DR: This paper argues that Lewis’s defense of the reformed analysis can be understood to invoke the concepts of disposition-specific stimulus and manifestation, and that advocates of the simple analysis, just like Lewis, can also defend their analysis from alleged counterexamples, including Martin’s cases, by invoking these concepts.
Abstract: Lewis claims that Martin’s cases indeed refute the simple conditional analysis of dispositions and proposes the reformed conditional analysis that is purported to overcome them. In this paper I will first argue that Lewis’s defense of the reformed analysis can be understood to invoke the concepts of disposition-specific stimulus and manifestation. I will go on to argue that advocates of the simple analysis, just like Lewis, can also defend their analysis from alleged counterexamples including Martin’s cases by invoking the concepts of disposition-specific stimulus and manifestation. This means that Lewis’s own necessary defense of the reformed analysis invalidates his motivation for it. Finally, I will argue that we have a good reason to favor the simple analysis over Lewis’s analysis.
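The analysis under discussion, stated schematically (a standard formulation; S and M are the disposition-specific stimulus and manifestation):

    x \text{ is disposed to } M \text{ when } S \iff (Sx \;\Box\!\!\to\; Mx)

where \Box\!\!\to is the counterfactual conditional. Martin's cases are ones in which the disposition would be gained or lost at the very moment the stimulus occurred, so that each side of the biconditional can hold without the other.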