
Showing papers in "Synthese in 1990"


Journal ArticleDOI
01 Feb 1990-Synthese
TL;DR: The author argues that questions concerning the nature of concepts that are central in cognitive psychology are also important to epistemology, and that there is more to conceptual change than mere belief revision.
Abstract: This paper argues that questions concerning the nature of concepts that are central in cognitive psychology are also important to epistemology and that there is more to conceptual change than mere belief revision. Understanding of epistemic change requires appreciation of the complex ways in which concepts are structured and organized and of how this organization can affect belief revision. Following a brief summary of the psychological functions of concepts and a discussion of some recent accounts of what concepts are, I propose a view of concepts as complex computational structures. This account suggests that conceptual change can come in varying degrees, with the most extreme consisting of fundamental conceptual reorganizations. These degrees of conceptual change are illustrated by the development of the concept of an acid.
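A minimal sketch (my illustration, not the author's own formalism; the slot names are hypothetical) of what "concepts as complex computational structures" could look like, and of why changing a concept's organization goes beyond revising the beliefs attached to it:

from dataclasses import dataclass, field

@dataclass
class Concept:
    # A frame-like structure: a concept is more than a bundle of beliefs.
    name: str
    kind_of: list = field(default_factory=list)   # links in a kind hierarchy
    parts: list = field(default_factory=list)     # links in a part hierarchy
    rules: list = field(default_factory=list)     # associated generalizations

acid = Concept(
    name="acid",
    kind_of=["chemical substance"],
    rules=["turns litmus red", "tastes sour"],
)

# Mild conceptual change, akin to belief revision: add or drop a rule.
acid.rules.append("donates protons")

# Radical conceptual change: reorganize the hierarchy itself,
# reclassifying acids rather than merely adding a belief about them.
acid.kind_of = ["proton donor"]

On this toy picture, the most extreme degrees of change the abstract mentions correspond to rewriting the hierarchy links, not just editing the rule list.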

65 citations


Journal ArticleDOI
01 Jun 1990-Synthese
TL;DR: The central role of a Duhemian holistic, underdeterminationist variety of conventionalism in Einstein's thought is examined in this paper, with special emphasis on Einstein's deployment of Duhem's arguments in his debates with neo-Kantian interpreters of relativity and in his critique of the empiricist doctrines of theory testing advanced by Schlick, Reichenbach, and Carnap.
Abstract: Pierre Duhem's often unrecognized influence on twentieth-century philosophy of science is illustrated by an analysis of his significant if also largely unrecognized influence on Albert Einstein. Einstein's first acquaintance with Duhem's La Théorie physique, son objet et sa structure around 1909 is strongly suggested by his close personal and professional relationship with Duhem's German translator, Friedrich Adler. The central role of a Duhemian holistic, underdeterminationist variety of conventionalism in Einstein's thought is examined at length, with special emphasis on Einstein's deployment of Duhemian arguments in his debates with neo-Kantian interpreters of relativity and in his critique of the empiricist doctrines of theory testing advanced by Schlick, Reichenbach, and Carnap. Most striking is Einstein's 1949 criticism of the verificationist conception of meaning from a holistic point of view, anticipating by two years the rather similar, but more famous criticism advanced independently by Quine in ‘Two Dogmas of Empiricism’.

60 citations


Journal ArticleDOI
01 Jul 1990-Synthese
TL;DR: The author argues that philosophers using game-theoretical models (Gauthier, Lewis, and others) have overestimated what sheer rationality can achieve: in coordination problems rational agents will not necessarily reach a unique outcome that is most preferred by all, nor a unique 'coordination equilibrium' (Lewis), nor a unique Nash equilibrium.
Abstract: Philosophers using game-theoretical models of human interactions have, I argue, often overestimated what sheer rationality can achieve. (References are made to David Gauthier, David Lewis, and others.) In particular I argue that in coordination problems rational agents will not necessarily reach a unique outcome that is most preferred by all, nor a unique 'coordination equilibrium' (Lewis), nor a unique Nash equilibrium. Nor are things helped by the addition of a successful precedent, or by common knowledge of generally accepted personal principles. Commitments like those generated by agreements may be necessary for rational expectations to arise. Social conventions, construed as group principles (following the analysis in my book On Social Facts), would suffice for this task.
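A minimal sketch (not from the paper; the payoff numbers are hypothetical) of the coordination-problem point: in a Hi-Lo game both players most prefer the outcome (Hi, Hi), yet (Lo, Lo) is also a Nash equilibrium, so sheer rationality does not single out the best outcome.

from itertools import product

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("Hi", "Hi"): (2, 2),   # the outcome most preferred by both players
    ("Hi", "Lo"): (0, 0),
    ("Lo", "Hi"): (0, 0),
    ("Lo", "Lo"): (1, 1),   # a second, Pareto-inferior equilibrium
}
strategies = ["Hi", "Lo"]

def is_nash(r, c):
    # Nash equilibrium: no player gains by unilaterally deviating.
    u_r, u_c = payoffs[(r, c)]
    row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
    col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
    return row_ok and col_ok

print([(r, c) for r, c in product(strategies, strategies) if is_nash(r, c)])
# [('Hi', 'Hi'), ('Lo', 'Lo')] -- two equilibria; rationality alone picks neither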

57 citations


Journal ArticleDOI
01 Dec 1990-Synthese
TL;DR: The best-known test for ambiguity, the test by contradiction, is set out, its limitations are discussed, and its connection with the definition of ambiguity is explained; the test is contrasted with a test for vagueness first proposed by Peirce and a test for generality propounded by Margalit.
Abstract: The problem addressed is that of finding a sound characterization of ambiguity. Two kinds of characterizations are distinguished: tests and definitions. Various definitions of ambiguity are critically examined and contrasted with definitions of generality and indeterminacy, concepts with which ambiguity is sometimes confused. One definition of ambiguity is defended as being more theoretically adequate than others which have been suggested by both philosophers and linguists. It is also shown how this definition of ambiguity obviates a problem thought to be posed by ambiguity for truth theoretical semantics. In addition, the best known test for ambiguity, namely the test by contradiction, is set out, its limitations discussed, and its connection with ambiguity's definition explained. The test is contrasted with a test for vagueness first proposed by Peirce and a test for generality propounded by Margalit.

55 citations


Journal ArticleDOI
01 Jan 1990-Synthese
TL;DR: The author discusses difficulties with Hacking's entity realism, on which experimenting on an entity does not commit one to believing that it exists, only manipulating an entity in order to experiment on something else does, and engineering, not theorizing, is therefore the best proof of scientific realism.
Abstract: Recent literature in philosophy of science (Cartwright 1983, Hacking 1983, and van Fraassen 1980) has focused on arguments against the notion that theories ought to be taken as providing true descriptions of reality. While countenancing this form of 'theory anti-realism' both Cartwright and Hacking, unlike van Fraassen, have retained what they refer to as 'entity realism'. This latter form of realism acknowledges the existence of particular theoretical entities despite the fact that we may have several competing, and perhaps contradictory, theories (models) of those entities. Although Hacking's and Cartwright's arguments address the role of theoretical entities in experiments, Hacking emphasizes the manipulation of entities as a subspecies of experimentation, claiming that "experimenting on an entity does not commit you to believing that it exists. Only manipulating an entity in order to experiment on something else need do that" (p. 263). Hence "engineering not theorizing is the best proof of scientific realism" (p. 263). Hacking contrasts the metaphysical questions concerning scientific realism with those that deal with rationality, the epistemological questions. The former raise issues such as, Are the entities postulated by physical theories real?, What is true of those entities?, What is truth?, and so on. The epistemological questions concern what we know, what we should believe, and what should be considered evidence. In arguing for entity realism Hacking takes himself to be addressing only the metaphysical questions. Hacking claims that certain entities can be characterized using low-level generalizations about their causal properties and the ways in which they interact with other parts of nature. These 'home truths', as he calls them (p. 265), are supposedly robust under theory change and do not constitute anything like the kinds of complex frameworks that are normally taken to be definitive of a theory. In what follows I want to discuss some difficulties that arise from this characterization. One has to do with the ways in which the primacy of these home truths is

42 citations


Journal ArticleDOI
01 Dec 1990-Synthese
TL;DR: It is argued that one can take possible world semantics seriously and yet remain in full compliance with actualist scruples.
Abstract: Actualism is the doctrine that the only things there are, that have being in any sense, are the things that actually exist. In particular, actualism eschews possibilism, the doctrine that there are merely possible objects. It is widely held that one cannot both be an actualist and at the same time take possible world semantics seriously — that is, take it as the basis for a genuine theory of truth for modal languages, or look to it for insight into the modal structure of reality. For possible world semantics, it is supposed, commits one to possibilism. In this paper I take issue with this view. To the contrary, I argue that one can take possible world semantics seriously and yet remain in full compliance with actualist scruples.
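For reference, the clause at the heart of the dispute, in its standard textbook form (this is the usual Kripke semantics, not a quotation from the paper): in a model M = (W, R, V), necessity is truth at all accessible worlds,

    M, w \models \Box\varphi \quad\Longleftrightarrow\quad M, v \models \varphi \ \text{for every } v \in W \text{ with } wRv.

The possibilist reads the quantification over W as ranging over merely possible worlds and their merely possible inhabitants; the paper's claim is that an actualist can endorse the clause without that reading.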

41 citations


Journal ArticleDOI
01 Aug 1990-Synthese
TL;DR: The notion of a "measured multitude" or a "multitude of measures" in Aristotle is argued to be just that of a (finite) set, whose elements are the units or measures.
Abstract: According to Aristotle, the objects studied by mathematics have no independent existence, but are separated in thought from the substrate in which they exist, and treated as separable, i.e., are "abstracted" by the mathematician. In particular, numerical attributives or predicates (which answer the question 'how many?') have for "substrate" multitudes with a designated unit. 'How many pairs of socks?' has a different answer from 'how many socks?'. (Cf. Metaph. XIV i 1088a5ff.: "One signifies that it is a measure of a multitude, and number that it is a measured multitude and a multitude of measures".) It is reasonable to see in this notion of a "measured multitude" or a "multitude of measures" just that of a (finite) set: the measures or units are what we should call the elements of the set; the requirement that such units be distinguished is precisely what differentiates a set from a mere accumulation or mass. There is perhaps some ambiguity in the quoted passage: the statement, "Number signifies that it is a measured multitude", might be taken either to identify numbers with finite sets, or to imply that the subjects numbers are predicated of are finite sets. Euclid's definition "a number is a multitude composed of units" points to the former reading (which implies, for example, that there are many two's, a particular knife and fork being one of them). Number-words, on this interpretation, would be strictly construed as denoting infimae species of numbers. It is clearly in accord with this conception that Aristotle says, for example (in illustrating the "discreteness", as opposed to continuity, of number): "The parts of a number have no common boundary at which they join together. For example, if five is

39 citations


Journal ArticleDOI
01 Jun 1990-Synthese
TL;DR: In this paper, the authors present an exposition of the views expressed by Pierre Duhem in his Aim and Structure of Physical Theory concerning the philosophy and historiography of mathematics and provide a critique of these views, pointing to the conclusion that they are in need of reformulation.
Abstract: The first part of this paper consists of an exposition of the views expressed by Pierre Duhem in his Aim and Structure of Physical Theory concerning the philosophy and historiography of mathematics. The second part provides a critique of these views, pointing to the conclusion that they are in need of reformulation. In the concluding third part, it is suggested that a number of the most important claims made by Duhem concerning physical theory, e.g., those relating to the ‘Newtonian method’, the limited falsifiability of theories, and the restricted role of logic, can be meaningfully applied to mathematics.

29 citations


Journal ArticleDOI
Dennis Dieks1
01 Jan 1990-Synthese
TL;DR: In this article, it is argued that the symmetry and anti-symmetry of the wave functions of systems consisting of identical particles have nothing to do with the observational indistinguishability of these particles.
Abstract: It is argued that the symmetry and anti-symmetry of the wave functions of systems consisting of ‘identical particles’ have nothing to do with the observational indistinguishability of these particles. Rather, a much stronger ‘conceptual indistinguishability’ is at the bottom of the symmetry requirements. This can be used to argue further, in analogy to old arguments of De Broglie and Schrödinger, that the reality described by quantum mechanics has a wave-like rather than particle-like structure. The question of whether quantum statistics alone can give rise to empirically observable correlations between results of distant measurements is also discussed.
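For concreteness, the symmetry requirements at issue, in standard textbook notation (not the paper's own): for two 'identical particles' occupying single-particle states \phi_a and \phi_b,

    \psi_{\pm}(x_1, x_2) = \frac{1}{\sqrt{2}}\,\bigl[\phi_a(x_1)\phi_b(x_2) \pm \phi_b(x_1)\phi_a(x_2)\bigr],

with the plus sign (symmetric) for bosons and the minus sign (anti-symmetric) for fermions. In either case |\psi|^2 is invariant under the exchange x_1 \leftrightarrow x_2, so exchanging the particle labels yields no physically distinct situation, a point in the vicinity of the 'conceptual indistinguishability' the paper appeals to.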

26 citations


Journal ArticleDOI
John Greco1
01 Nov 1990-Synthese
TL;DR: In this paper, the deontological (or responsibilist) conception of justification is discussed and explained, and arguments are put forward to derive the most plausible version of perspectival internalism, or the position that epistemic justification is a function of factors internal to the believer's cognitive perspective.
Abstract: In section one the deontological (or responsibilist) conception of justification is discussed and explained. In section two, arguments are put forward in order to derive the most plausible version of perspectival internalism, or the position that epistemic justification is a function of factors internal to the believer's cognitive perspective. The two most common considerations put forward in favor of perspectival internalism are discussed. These are the responsibilist conception of justification, and the intuition that two believers with like beliefs and experiences are equally justified in their like beliefs. In section three it is argued that perspectival internalism is false, and that in fact the position is not supported by a responsibilist conception of justification. Section four explicates two other forms of internalism, which are rejected for reasons similar to those presented against perspectival internalism. In section five, an internalist theory of justification is defended which is supported by a responsibilist conception of justification. Roughly stated, the position is that justified belief is belief which arises from the use of correct rules of reasoning. The idea of correctness is explicated, and the position is distinguished from others which are similar to it.

Journal ArticleDOI
01 Jun 1990-Synthese
TL;DR: The authors showed that Duhem's epistemological views were already formed before the crisis occured; that he consistently supported general thermodynamics against the new atomism; and that he rejected the epistemology views of the latter's philosophical supporters.
Abstract: I reject the widely held view that Duhem's 1906 book La Théorie physique is a statement of instrumentalistic conventionalism, motivated by the scientific crisis at the end of the nineteenth century. By considering Duhem's historical context I show that his epistemological views were already formed before the crisis occurred; that he consistently supported general thermodynamics against the new atomism; and that he rejected the epistemological views of the latter's philosophical supporters. In particular I show that Duhem rejected Poincaré's account of scientific language, Le Roy's view that laws are definitions, and the conventionalist's use of simplicity as the criterion of theory choice. Duhem regarded most theory choices as decidable on empirical grounds, but made historical context the main determining factor in scientific change.


Journal ArticleDOI
Lucia M. Vaina1
01 Apr 1990-Synthese
TL;DR: Clinical evidence for the existence of two visual systems in man, one specialized for spatial vision and the other for object vision, and the computational hypothesis that these two systems consist of several visual modules are presented.
Abstract: In this paper we focus on the modularity of visual functions in the human visual cortex, that is, the specific problems that the visual system must solve in order to achieve recognition of objects and visual space. The computational theory of early visual functions is briefly reviewed and is then used as a basis for suggesting computational constraints on the higher-level visual computations. The remainder of the paper presents neurological evidence for the existence of two visual systems in man, one specialized for spatial vision and the other for object vision. We show further clinical evidence for the computational hypothesis that these two systems consist of several visual modules, some of which can be isolated on the basis of specific visual deficits which occur after lesions to selected areas in the visually responsive brain. We will provide examples of visual modules which solve information processing tasks that are mediated by specific anatomic areas. We will show that the clinical data from behavioral studies of monkeys (Ungerleider and Mishkin 1984) supports the distinction between two visual systems in monkeys, the ‘what’ system, involved in object vision, and the ‘where’ system, involved in spatial vision.

Book ChapterDOI
01 Mar 1990-Synthese
TL;DR: The paper takes as a case study the various positions held by McDermott on these issues and concludes, reluctantly, that, although he has reversed himself on the issue, there was no time at which he was right.
Abstract: This paper continues a strain of intellectual complaint against the presumptions of certain kinds of formal semantics (the qualification is important) and their bad effects on those areas of artificial intelligence concerned with machine understanding of human language. After some discussion of the use of the term ‘epistemology’ in artificial intelligence, the paper takes as a case study the various positions held by McDermott on these issues and concludes, reluctantly, that, although he has reversed himself on the issue, there was no time at which he was right.


Journal ArticleDOI
Andrew Lugg1
01 Jun 1990-Synthese
TL;DR: Duhem's discussion of physical theories as natural classifications is argued to be neither antithetical nor incidental to the main thrust of his philosophy of science: he took the principle of the autonomy of physics to be of paramount importance, and he developed the conception of natural classification in opposition to accounts of physical theory that contravened it.
Abstract: Duhem's discussion of physical theories as natural classifications is neither antithetical nor incidental to the main thrust of his philosophy of science. Contrary to what is often supposed, Duhem does not argue that theories are better thought of as economically organizing empirical laws than as providing information concerning the nature of the world. What he is primarily concerned with is the character and justification of the scientific method, not the logical status of theoretical entities. The crucial point to notice is that he took the principle of the autonomy of physics to be of paramount importance and he developed the conception of natural classification in opposition to accounts of physical theories that contravened it.

Journal ArticleDOI
01 Feb 1990-Synthese
TL;DR: The authors argued that the relation between connectionism and Chomsky's views on innate knowledge is more complicated than many have assumed, and that even if these models enjoy considerable success the threat they pose for linguistic nativism is small.
Abstract: Along with the increasing popularity of connectionist language models has come a number of provocative suggestions about the challenge these models present to Chomsky’s arguments for nativism. The aim of this paper is to assess these claims. We begin by reconstructing Chomsky’s “argument from the poverty of the stimulus” and arguing that it is best understood as three related arguments, with increasingly strong conclusions. Next, we provide a brief introduction to connectionism and give a quick survey of recent efforts to develop networks that model various aspects of human linguistic behavior. Finally, we explore the implications of this research for Chomsky’s arguments. Our claim is that the relation between connectionism and Chomsky’s views on innate knowledge is more complicated than many have assumed, and that even if these models enjoy considerable success the threat they pose for linguistic nativism is small.

Journal ArticleDOI
01 Aug 1990-Synthese
TL;DR: It is argued that the reductive program pursued by modern proof theory, which establishes the consistency of classical theories relative to constructive ones, is best interpreted as a far-reaching generalization of Hilbert's program and provides a constructive foundation for mathematical analysis.
Abstract: The goal of Hilbert's program to give consistency proofs for analysis and set theory within finitist mathematics is unattainable; the program is dead. The mathematical instrument, however, that Hilbert invented for attaining his programmatic aim is remarkably well: proof theory has obtained important results and pursues fascinating logical questions; its concepts and techniques are fundamental for the mechanical search and transformation of proofs; and I believe that it will contribute to the solution of classical mathematical problems. Nevertheless, we may ask ourselves whether the results of proof theory are significant for the foundational concerns that motivated Hilbert's program and, more generally, for a reflective examination of the nature of mathematics. The results I alluded to establish the consistency of classical theories relative to constructive ones and give in particular a constructive foundation to mathematical analysis. They have been obtained in the pursuit of a reductive program that provides a coherent scheme for metamathematical work and is best interpreted as a far-reaching generalization of Hilbert's program. For philosophers these definite mathematical results (should) present a profound challenge. To take it on means to explicate the reductionist point of constructive relative consistency proofs; the latter are to secure, after all, classical theories on the basis of more elementary, more evident ones. I take steps towards analyzing the precise character of such implicitly epistemological reductions and thus towards answering the narrow part of the above question. But
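A standard example of such a constructive relative consistency proof (a textbook case, not necessarily the reduction the author analyzes): the Gödel–Gentzen negative translation \varphi \mapsto \varphi^N maps every theorem of classical Peano arithmetic PA to a theorem of intuitionistic Heyting arithmetic HA. Since \bot^N is \bot,

    \mathrm{PA} \vdash \bot \;\Rightarrow\; \mathrm{HA} \vdash \bot, \qquad\text{i.e.,}\qquad \mathrm{Con}(\mathrm{HA}) \Rightarrow \mathrm{Con}(\mathrm{PA}),

so the classical theory is secured on the basis of the constructive one.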

Journal Article
01 Jan 1990-Synthese
TL;DR: This Technical Note aims to give wide circulation to the most recent work on lateritic gravels, to the principal results from theses by young African engineers and established French engineers, and to modern pedological studies.
Abstract: The purpose of this Technical Note is to ensure wide dissemination of the most recent work devoted to lateritic gravels, of the principal results from the theses of young African engineers and of established French engineers, and of modern pedological studies whose data ought to be better exploited by engineers. This review of laterites begins by defining what is meant by the term and which sense pedologists and engineers adopt; the classifications adopted by each are then examined, after a presentation of the most recent conceptions of the genesis of these formations. An examination of the geotechnical classifications makes it possible to take stock of the usual geomechanical characteristics of these materials as they are taken into account in civil engineering. The recommended prospecting methods are described, as are the various techniques for using laterites, whether with mechanical treatment alone or with binders. Recommendations and specifications are proposed for their placement and for the design of the pavements built from them. The results of the tests carried out for this study are given, and the further research needed to perfect knowledge of these materials, so as to make the best use of them as their deposits become depleted, is indicated.

Journal ArticleDOI
01 Sep 1990-Synthese
TL;DR: The author classifies expressions of the first sort as singular terms, those of the second sort as predicate-phrases, and those of the third sort as first-order quantifier-phrases, a classification that supports a characterization of the interanimation of sentences, especially within chunks of discourse containing inferences.
Abstract: Throughout his philosophical career, Frege maintained that numbers were objects. In part, this thesis reflects facts about the syntactic form of sentences containing arithmetical expressions, the sorts of sentences uttered by infants learning to count things, children learning sums and simple algebra, and mathematicians teaching or advancing their science. With respect to their syntactic roles in the formation of sentences, expressions like '2', '2 + 3', 'the number of moons of Jupiter' and 'the least prime greater than 10' closely resemble paradigmatic singular terms, expressions like 'is prime' and 'is greater than' are much like paradigmatic predicate-phrases, and expressions like 'some number' and 'all numbers' resemble first-order quantifier-phrases from the other corners of our language. Why fight our inclination for generalization? Let's classify expressions of the first sort as singular terms, those of the second sort as predicate-phrases, and those of the third sort as first-order quantifier-phrases. This classification of lexical items doesn't merely help us understand the formation of individual sentences; it supports a characterization of what Quine called "the interanimation of sentences", especially within chunks of discourse containing inferences. This characterization permits our reasoning "about" numbers to be adequately regimented within any complete formalization of first-order logic. Mathematical predicate-phrases play a proof-theoretic role like that played by paradigmatic level-one predicates; mathematical quantifier-phrases behave proof-theoretically like paradigmatic first-order quantifier-expressions. In-
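By way of illustration (my examples, not the author's), this classification regiments arithmetical sentences in a first-order language in the familiar way:

    2 + 3 = 5 \quad\rightsquigarrow\quad +(2, 3) = 5, with '2', '3', '5' as singular terms and '+' a function symbol;
    \text{some number is prime} \quad\rightsquigarrow\quad \exists x\,(Nx \wedge Px);
    \text{all numbers are non-negative} \quad\rightsquigarrow\quad \forall x\,(Nx \rightarrow 0 \leq x);

after which inferences "about" numbers can be checked by ordinary first-order proof theory.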

Journal ArticleDOI
01 Feb 1990-Synthese
TL;DR: It is argued that in the absence of empirical results no a priori arguments against functionalism can be cogent, and that the cumulative empirical evidence from experiments on image inversion suggests that the results of actual spectrum inversion would confirm rather than refute functionalism.
Abstract: Functionalism, a philosophical theory, has empirical consequences. Functionalism predicts that where systematic transformations of sensory input occur and are followed by behavioral accommodation in which normal function of the organism is restored such that the causes and effects of the subject’s psychological states return to those of the period prior to the transformation, there will be a return of qualia or subjective experiences to those present prior to the transform. A transformation of this type that has long been of philosophical interest is the possibility of an inverted spectrum. Hilary Putnam argues that the physical possibility of acquired spectrum inversion refutes functionalism. I argue, however, that in the absence of empirical results no a priori arguments against functionalism, such as Putnam’s, can be cogent. I sketch an experimental situation which would produce acquired spectrum inversion. The mere existence of qualia inversion would constitute no refutation of functionalism; only its persistence after behavioral accommodation to the inversion would properly count against functionalism. The cumulative empirical evidence from experiments on image inversion suggests that the results of actual spectrum inversion would confirm rather than refute functionalism.

Journal ArticleDOI
01 Jan 1990-Synthese
TL;DR: The common cause principle states that common causes produce correlations amongst their effects, but that common effects do not produce correlations amongst their causes; the author claims that this principle, as explicated in terms of probabilistic relations, is false in classical statistical mechanics.
Abstract: The common cause principle states that common causes produce correlations amongst their effects, but that common effects do not produce correlations amongst their causes. I claim that this principle, as explicated in terms of probabilistic relations, is false in classical statistical mechanics. Indeterminism in the form of stationary Markov processes rather than quantum mechanics is found to be a possible saviour of the principle. In addition I argue that if causation is to be explicated in terms of probabilities, then it should be done in terms of probabilistic relations which are invariant under changes of initial distributions. Such relations can also give rise to an asymmetric cause-effect relationship which always runs forwards in time.
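The probabilistic explication at issue is standardly Reichenbach's (the usual textbook formulation; the paper's own explication may differ in detail): a common cause C of a correlation between events A and B satisfies

    P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C),
    P(A \wedge B \mid \neg C) = P(A \mid \neg C)\,P(B \mid \neg C),
    P(A \mid C) > P(A \mid \neg C), \qquad P(B \mid C) > P(B \mid \neg C),

and these conditions jointly entail the correlation P(A \wedge B) > P(A)\,P(B). The principle's asymmetry is then the claim that there is no mirror-image principle running from common effects to correlations amongst their causes.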

Journal ArticleDOI
01 Jun 1990-Synthese
TL;DR: Duhem attempted to find a middle way between two positions he regarded as extremes, the conventionalism of Poincaré and the scientific realism of the majority of his scientific colleagues.
Abstract: Duhem attempted to find a middle way between two positions he regarded as extremes, the conventionalism of Poincaré and the scientific realism of the majority of his scientific colleagues. He argued that conventionalism exaggerated the arbitrariness of scientific formulations, but that belief in atoms and electrons erred in the opposite direction by attributing too much logical force to explanatory theories. The instrumentalist sympathies so apparent in Duhem's writings on the history of astronomy are only partially counterbalanced by his view that science is progressing toward a ‘natural classification’ of the world.

Book ChapterDOI
01 Apr 1990-Synthese
TL;DR: One cannot understand Descartes without understanding his famous cogito insight, put forward for the first time publicly 350 years ago, and one cannot discuss contemporary philosophy of mind without the ghost of Descartes skulking around in the shadows.
Abstract: One cannot discuss contemporary philosophy of mind without the ghost of Descartes skulking around in the shadows. And one cannot understand Descartes without understanding his famous cogito insight, put forward for the first time publicly 350 years ago. Twenty-five years ago I showed what the nerve of the Cartesian insight is. Descartes is not inferring sum from cogito, but demonstrating to himself his own existence by performing an act of thinking. The expression cogito does not mark a premise from which sum is inferred, but a thought-act which reveals (as long as it goes on) to Descartes the entity that he is. Descartes’s little skit is analogous to someone’s, say Mark Twain’s, proving his existence to a skeptic by confronting the doubter and confirming his existence to him by saying: “I exist.” Of course any other thought-act (in Descartes’s case) or language act (in Mark Twain’s case) would have done the trick equally well. This opens the door to Descartes’s dramatic gambit of attempting to doubt, nay, to deny, everything. When he then tries to deny to himself his own existence, by so doing he on the contrary proves that he exists. In Mark Twain’s case an analogous purpose is served by the language act of declaring the rumors of his demise to be exaggerated.


Journal ArticleDOI
01 Jul 1990-Synthese
TL;DR: It is argued that axiom schemas tied to a preferred interpretation may provide a necessary intermediate stage of reflective abstraction en route to acquisition of the ability to use formal systems in abstracto.
Abstract: In this paper, we offer a Piagetian perspective on the construction of the logico-mathematical schemas which embody our knowledge of logic and mathematics. Logico-mathematical entities are tied to the subject's activities, yet are so constructed by reflective abstraction that they result from sensorimotor experience only via the construction of intermediate schemas of increasing abstraction. The ‘axiom set’ does not exhaust the cognitive structure (schema network) which the mathematician thus acquires. We thus view ‘truth’ not as something to be defined within the closed ‘world’ of a formal system but rather in terms of the schema network within which the formal system is embedded. We differ from Piaget in that we see mathematical knowledge as based on social processes of mutual verification which provide an external drive to any ‘necessary dynamic’ of reflective abstraction within the individual. From this perspective, we argue that axiom schemas tied to a preferred interpretation may provide a necessary intermediate stage of reflective abstraction en route to acquisition of the ability to use formal systems in abstracto.

Journal ArticleDOI
01 Oct 1990-Synthese
TL;DR: In this article, it is argued that the justification of a priori knowledge is consistent with Kant's general restriction that intuitions must play a role in justification of all nondegenerate knowledge.
Abstract: Kant's claim that the justification of transcendental philosophy is a priori is puzzling because it should be consistent with (1) his general restriction on the justification of knowledge, that intuitions must play a role in the justification of all nondegenerate knowledge, with (2) the implausibility of a priori intuitions being the only ones on which transcendental philosophy is founded, and with (3) his professed view that transcendental philosophy is not analytic. I argue that this puzzle can be solved, that according to Kant transcendental philosophy is justified a priori in the sense that the only empirical information required for its justification can be derived from any possible human experience. Transcendental justification does not rely on any more particular or special observations or experiments. Philip Kitcher's general account of apriority in Kant captures this aspect of a priori knowledge. Nevertheless, I argue that Kitcher's account goes wrong in the link it specifies between apriority and certainty.

Book ChapterDOI
01 Mar 1990-Synthese
TL;DR: This paper discusses three recent attempts to display the frame problem of AI: Dennett's problem of ignoring obviously irrelevant knowledge, Haugeland's problem of efficiently keeping track of salient side effects, and Fodor's problem of avoiding the use of 'kooky' concepts.
Abstract: The frame problem is widely reputed among philosophers to be one of the deepest and most difficult problems of cognitive science. This paper discusses three recent attempts to display this problem: Dennett’s problem of ignoring obviously irrelevant knowledge, Haugeland’s problem of efficiently keeping track of salient side effects, and Fodor’s problem of avoiding the use of ‘kooky’ concepts. In a negative vein, it is argued that these problems bear nothing but a superficial similarity to the frame problem of AI, so that they do not provide reasons to disparage standard attempts to solve it. More positively, it is argued that these problems are easily solved by slight variations on familiar AI themes. Finally, some discussion is devoted to more difficult problems confronting AI.

Journal ArticleDOI
01 May 1990-Synthese
TL;DR: In this paper, the authors reexamine Duhem's question of the continuity between medieval dynamics and early modern conservation theories and show how Descartes derives his law of conservation by extending Aristotelian celestial dynamics to the earth.
Abstract: Here I reexamine Duhem's question of the continuity between medieval dynamics and early modern conservation theories. I concentrate on the heavens. For Aristotle, the motions of the heavens are eternally constant (and thus mathematizable) because an eternally constant divine Reason is their mover. Duhem thought that impetus and conservation theories, by extending sublunar mechanics to the heavens, made a divine renewer of motion redundant. By contrast, I show how Descartes derives his law of conservation by extending Aristotelian celestial dynamics to the earth. Descartes argues that motion is intrinsically linear, not circular. But he agrees that motion is mathematically intelligible only where divine Reason moves bodies in a constant and eternal motion. Descartes strips bodies of active powers, leaving God as the only natural mover; thus both celestial and sublunar motions are constant, and uniformly mathematizable. The law of conservation of the total quantity of motion is an attempt to harmonize the constancy derived a priori with the phenomenal inconstancy of sublunar motions.