
Showing papers in "Synthese in 1988"



Journal ArticleDOI
01 Aug 1988-Synthese
TL;DR: In this article, a general-equilibrium steady-state model is proposed that encompasses the hostile and destructive interactions characterizing real-world social relations, in contrast with the harmonistic bias of orthodox economic theory.
Abstract: Individuals, groups, or nations — if rational and self-interested — will be balancing on the margin between two alternative ways of generating income: (1) “peaceful” production and exchange, versus (2) “appropriative” efforts designed to seize resources previously controlled by others (or to defend against such invasions). Both production and appropriation, on the assumption here, are entirely normal lines of activity engaged in to the extent that doing so seems profitable. The general-equilibrium steady-state model involves a resource partition function, a social production function, a combat power function, and an income distribution equation. Solutions were obtained under the symmetrical Cournot protocol and two alternative asymmetrical assumptions: the familiar Stackelberg condition and a more novel hierarchical protocol called Threat-and-Promise. The analysis demonstrates that, in contrast with the harmonistic bias of orthodox economic theory, a general-equilibrium model can also encompass the hostile and destructive interactions that characterize real-world social relations.
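A minimal numerical sketch of the production-versus-appropriation trade-off described above, in Python. The functional forms (linear production, a ratio contest-success function) and all parameter values are illustrative assumptions, not the paper's actual model; the best-response iteration stands in for the symmetrical Cournot protocol.

# Two parties split a resource endowment between production and fighting;
# total output is divided according to relative fighting effort.
def income(own_fight, other_fight, resources=100.0, productivity=1.0):
    own_output = productivity * (resources - own_fight)
    other_output = productivity * (resources - other_fight)
    total = own_output + other_output
    contested = own_fight + other_fight
    share = own_fight / contested if contested > 0 else 0.5   # ratio contest-success function
    return share * total

# Best response to the rival's fighting effort, by grid search over allocations.
def best_response(other_fight, resources=100.0):
    grid = [0.5 * i for i in range(int(2 * resources) + 1)]
    return max(grid, key=lambda f: income(f, other_fight))

f1 = f2 = 10.0
for _ in range(50):            # iterate simultaneous best responses toward a steady state
    f1, f2 = best_response(f2), best_response(f1)
print(f1, f2, income(f1, f2))

Under these assumptions both parties divert part of the resource from production to appropriation in the steady state, which is the margin-balancing behavior the abstract describes.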

295 citations


Journal ArticleDOI
01 Aug 1988-Synthese
TL;DR: In this paper, the authors apply formal modeling to study a terrorist group's choice of whether to attack or not, and, in the case of an attack, which of two potential targets to strike.
Abstract: This article applies formal modeling to study a terrorist group's choice of whether to attack or not, and, in the case of an attack, which of two potential targets to strike. Each potential target individually takes protective measures that influence the terrorists' perceived success and failure, and, hence, the likelihood of attack. For domestic terrorism, a tendency for potential targets to overdeter is indicated. For transnational terrorism, cases of overdeterrence and underdeterrence are identified. We demonstrate that increased information about terrorists' preferences, acquired by the targets, may exacerbate inefficiency when deterrence efforts are not coordinated. In some cases, perfect information may eliminate the existence of a noncooperative solution.
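A small numerical sketch of the deterrence externality at issue: when each target's protection mainly displaces the attack onto the other target, uncoordinated choices over-invest relative to a coordinated choice. The cost function, damage, and price of protection below are illustrative assumptions, not the paper's model.

# Probability this target is attacked: the relatively less protected target is likelier to be hit.
def attack_prob(own, other):
    return (1 - own) / ((1 - own) + (1 - other) + 1e-9)

def target_cost(own, other, damage=100.0, price=20.0):
    return attack_prob(own, other) * damage + price * own

levels = [i / 20 for i in range(20)]          # protection levels 0.00 .. 0.95

# Uncoordinated outcome: iterate unilateral best responses.
a = b = 0.0
for _ in range(100):
    a = min(levels, key=lambda x: target_cost(x, b))
    b = min(levels, key=lambda x: target_cost(x, a))

# Coordinated outcome: a common level chosen to minimize the two targets' total cost.
joint = min(levels, key=lambda x: 2 * target_cost(x, x))

print("uncoordinated:", a, b, " coordinated:", joint)

On these assumed numbers the uncoordinated targets drive each other to near-maximal protection even though the attack probability each faces ends up unchanged, which is the overdeterrence the abstract identifies for domestic terrorism.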

272 citations


Journal ArticleDOI
01 Mar 1988-Synthese
TL;DR: In this article, a blend of internalism and externalism in the view of epistemic justification is presented, and the general contours of the position, as a basis for specifying the points at which we have an internalism-externalism issue.
Abstract: In this paper I will explain, and at least begin to defend, the particular blend of internalism and externalism in my view of epistemic justification. So far as I know, this is my own private blend; many, I'm afraid, will not take that as a recommendation. Be that as it may, it's mine, and it's what I will set forth in this paper. I will first have to present the general contours of the position, as a basis for specifying the points at which we have an internalism-externalism issue. I won't have time to defend the general position, or even to present more than a sketch. Such defence as will be offered will be directed to the internalist and externalist features. In a word, my view is that to be justified in believing that p is for that belief to be based on an adequate ground. To explain what I mean by this I will have to say something about the correlative terms 'based on' and 'ground' and about the adequacy of grounds. The ground of a belief is what it is based on. The notion of based on is a difficult one. I am not aware that anyone has succeeded in giving an adequate and illuminating general explanation of it. It seems clear that some kind of causal dependence is involved, whether the belief is based on other beliefs or on experience. If my belief that it rained last night is based on my belief that the streets are wet, then I hold the former belief because I hold the latter belief; my holding the latter belief explains my holding the former. Similarly, if my belief that the streets are wet is based on their looking wet, I believe that they are wet because of the way they look, and their looking that way explains my believing that they are wet. And presumably these are relations of causal dependence. But, equally clearly, not just any kind of causal dependence will do. My belief that p is causally dependent on a certain physiological state of my brain, but the former is not based on the latter. How is being based on distinguished from other sorts of causal dependence? We have a clear answer to this question for cases of maximally explicit inference, where I come to believe that p

187 citations



Journal ArticleDOI
Isaac Levi1
01 Jul 1988-Synthese
TL;DR: In this paper, the author sets out differing interpretations of the Ramsey test for conditionals (notably Gardenfors's, based on the principle of minimal change of the belief system) and the divergences that follow from them for the general interpretation of models of belief revision.

Abstract: An exposition of the differing interpretations of the Ramsey test for conditionals (notably Gardenfors's, based on the principle of minimal change of the belief system) and of the divergences that follow from them for the general interpretation of models of belief revision. The author sets out his own theory of the contraction and expansion of beliefs, and treats a belief set as a basis for deliberation.

95 citations


Book ChapterDOI
01 Oct 1988-Synthese
TL;DR: The most important background factor in the development of twentieth-century logic, a largely tacit contrast between ways of looking at the relation of language and its logic to reality, has received insufficient attention in the literature, as this paper argues.
Abstract: The most important background factor in the development of twentieth-century logic has received insufficient attention in the literature. This factor is a largely tacit contrast between ways of looking at the relation of language and its logic to reality. I have called them the idea of language as the universal medium and the idea of language as calculus.1 I shall also refer to the two traditions representing these two respective ideas as the universalist tradition and as the model-theoretical tradition.

70 citations


Journal ArticleDOI
01 Feb 1988-Synthese
TL;DR: The purpose of this paper is to expound one of the most important insights yielded by the interrogative model of inquiry, prominently including scientific inquiry.
Abstract: The purpose of this paper is to expound one of the most important insights yielded by the interrogative model of inquiry, prominently including scientific inquiry. I have outlined this model elsewhere.1 The basic idea on which this model is based is simplicity itself. It can be expressed most easily in the jargon of game theory.2 A player, called the Inquirer, is trying to prove a predetermined conclusion C from a given theoretical premise T. (In a variant form, the Inquirer is trying to prove either C or ~C, i.e., to answer the initial question “C or not-C?”.) Over and above deductive moves, i.e., over and above drawing logical inferences, beginning with T, the Inquirer may address questions to a source of information and use the answers (when available) as additional premises, in short, may carry out interrogative moves. The answerer is called Nature. As a bookkeeping technique, a Beth-like semantical tableau can be assumed to be used.3 Each move is relative to the stage of a subtableau reached in the game at the time. The questions must of course pertain to a given model M of the language of T. Before a question is asked, its presupposition must have been established by the Inquirer, that is, occur in the left column of the subtableau in question.4
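Stripped of the tableau bookkeeping, the basic interrogative game loop can be sketched as follows. The toy rule format, the forward-chaining stand-in for deductive moves, and the oracle answering for Nature are illustrative assumptions, not Hintikka's formalism.

# The Inquirer alternates deductive moves (forward chaining over simple rules)
# with interrogative moves (questions put to Nature, answered from a model M).
theory = [({"metal", "heated"}, "expands")]     # rules: premises -> conclusion
established = {"metal"}                         # premises established so far (from T)
nature = {"heated": True}                       # the model M that Nature answers from
goal = "expands"                                # the predetermined conclusion C

def deductive_moves():
    changed = True
    while changed:
        changed = False
        for premises, conclusion in theory:
            if premises <= established and conclusion not in established:
                established.add(conclusion)
                changed = True

def interrogative_move(question):
    if nature.get(question):                    # Nature's answer becomes an extra premise
        established.add(question)

deductive_moves()
if goal not in established:
    interrogative_move("heated")
    deductive_moves()
print(goal in established)                      # True: C has been proved from T plus an answer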

65 citations


Journal ArticleDOI
01 Jul 1988-Synthese
TL;DR: In this paper, the authors consider the case of multiple equilibria, where no player can ever be certain that the others believe he has certain beliefs, and model the process of belief formation by attributing to the players a theory of counterfactuals.
Abstract: The difficulty of defining rational behavior in game situations is that the players' strategies will depend on their expectations about other players' strategies. These expectations are beliefs the players come to the game with. Game theorists assume these beliefs to be rational in the very special sense of being objectively correct, but no explanation is offered of the mechanism generating this property of the belief system. In many interesting cases, however, such a rationality requirement is not enough to guarantee that an equilibrium will be attained. In particular, I analyze the case of multiple equilibria, since in this case there exists a whole set of rational beliefs, so that no player can ever be certain that the others believe he has certain beliefs. In this case it becomes necessary to explicitly model the process of belief formation. This model attributes to the players a theory of counterfactuals which they use in restricting the set of possible equilibria. If it were possible to attribute to the players the same theory of counterfactuals, then the players' beliefs would eventually converge.
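The multiplicity problem the abstract turns on is easy to exhibit with a generic two-player coordination game; the payoff numbers are illustrative assumptions, not an example from the paper.

# A coordination game with two pure-strategy Nash equilibria, so correct beliefs
# alone cannot single out a unique outcome. Payoffs: (row player, column player).
payoffs = {
    ("A", "A"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),
}
strategies = ["A", "B"]

def is_nash(r, c):
    ur, uc = payoffs[(r, c)]
    row_best = all(payoffs[(r2, c)][0] <= ur for r2 in strategies)
    col_best = all(payoffs[(r, c2)][1] <= uc for c2 in strategies)
    return row_best and col_best

print([cell for cell in payoffs if is_nash(*cell)])   # [('A', 'A'), ('B', 'B')]

With both (A, A) and (B, B) stable, rationality alone does not tell a player which equilibrium the other expects, which is where the counterfactual theory of belief formation enters.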

49 citations


Journal ArticleDOI
Aarne Ranta1
01 Sep 1988-Synthese
TL;DR: Without violating the spirit of Game-Theoretical Semantics, its results can be re-worked in Martin-Löf's Constructive Type Theory by interpreting games as types of Myself's winning strategies, creating a direct connection between linguistic semantics and computer programming.

Abstract: Without violating the spirit of Game-Theoretical Semantics, its results can be re-worked in Martin-Löf's Constructive Type Theory by interpreting games as types of Myself's winning strategies. The philosophical ideas behind Game-Theoretical Semantics in fact highly recommend restricting strategies to effective ones, which is the only controversial step in our interpretation. What is gained, then, is a direct connection between linguistic semantics and computer programming.
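The idea that a game can be read as the type of Myself's winning strategies can be mimicked in a small untyped sketch: a winning strategy for a universally-then-existentially quantified sentence is just a function answering Nature's move (a Skolem function). The domain and sentence are illustrative assumptions, not the paper's type-theoretic construction.

# Semantical game for "for every x in D there is a y in D with x + y = 10":
# Nature picks x, Myself answers with y. A winning strategy is a function from
# Nature's moves to Myself's moves that wins every play.
D = range(11)

def strategy(x):                  # Myself's effective strategy
    return 10 - x

print(all(strategy(x) in D and x + strategy(x) == 10 for x in D))   # True: the sentence holds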

40 citations


Journal ArticleDOI
01 Mar 1988-Synthese
TL;DR: The author defends, against recent theories of the "reliability" of true beliefs that define knowledge by processes external to the knowing subject, his own version of "internalism", which is a version of foundationalism in the theory of knowledge.

Abstract: The author defends, against recent theories of the "reliability" of true beliefs that define knowledge by processes external to the knowing subject, his own version of "internalism", which is a version of foundationalism in the theory of knowledge.

Journal ArticleDOI
01 Aug 1988-Synthese
TL;DR: In this article, a dynamic model is proposed to analyze modern guerrilla warfare; the model suggests specific paths for the evolution of such wars and how they might be fought or combatted.
Abstract: Guerrilla warfare has emerged as one of the principal forms of modern warfare. Mutual deterrence, as achieved by the arms race and arms control, has made nuclear warfare not impossible but at least improbable, the major potential initiator of a nuclear war probably being miscalculation, accident, or escalation rather than premeditation or preemption.1 Traditional large-scale warfare, as in the case of both World Wars, is also improbable in view of potential superpower involvement and fear of escalation. What remains are three possibilities. First, there are wars confined to a particular region, and without direct superpower involvement, such as the Arab-Israeli wars, the India-Pakistan wars, the Iran-Iraq war, or the Falklands War. Second, there are civil wars such as those in Nigeria and Chad. Third, there are guerrilla wars, such as those in Malaysia, Vietnam, the Sudan, Nicaragua, El Salvador, and Afghanistan. The purpose of this paper is to analyze guerrilla warfare by means of a dynamic model.2 The model suggests specific paths for the evolution of such wars and how such wars might be fought or combatted.
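The kind of dynamic model referred to can be illustrated with a Lanchester-style sketch: government attrition proportional to guerrilla strength (aimed fire), guerrilla attrition proportional to the product of both strengths (area fire against a concealed force). The equations and coefficients are illustrative assumptions, not the model developed in the paper.

# Simple discrete-time simulation of asymmetric attrition dynamics.
def simulate(government, guerrillas, a=0.02, b=0.0005, dt=0.1, steps=2000):
    for _ in range(steps):
        d_gov = -a * guerrillas * dt
        d_guer = -b * government * guerrillas * dt
        government = max(government + d_gov, 0.0)
        guerrillas = max(guerrillas + d_guer, 0.0)
        if government == 0.0 or guerrillas == 0.0:
            break
    return government, guerrillas

print(simulate(government=1000.0, guerrillas=300.0))

Varying the coefficients and initial strengths traces out different evolutions of such a war, which is the sort of question a dynamic model of this type is used to explore.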

Journal ArticleDOI
01 Nov 1988-Synthese
TL;DR: In this paper, the explanatory adequacy of lower-level theories when their higher-level counterparts are irreducible is explored, and a number of empirical conditions are sketched.
Abstract: This paper explores the explanatory adequacy of lower-level theories when their higher-level counterparts are irreducible. If some state or entity described by a high-level theory supervenes upon and is realized in events, entities, etc. described by the relevant lower-level theory, does the latter fully explain the higher-level event even if the higher-level theory is irreducible? While the autonomy of the special sciences and the success of various eliminativist programs depend in large part on how we answer this question, neither the affirmative nor the negative answer has been defended in detail. I argue, contra Putnam and others, that certain facts about causation and explanation show that such lower-level theories do explain. I also argue, however, that there may be important questions about counterfactuals and laws that such explanations cannot answer, thereby showing their partial inadequacy. I defend the latter claim against criticisms based on eliminativism about higher-level explanations and sketch a number of empirical conditions that lower-level explanations would have to meet to fully explain higher-level events.

Journal ArticleDOI
01 Dec 1988-Synthese
TL;DR: In this article, the authors construe Kant's theory of mental representations as a theory of intentionality, and discover a striking contrast between Kant's views and those of his predecessors.
Abstract: If we construe Kant's theory of mental representations as a theory of intentionality, we will discover a striking contrast between Kant's views and those of his predecessors. Whereas there is an important sense in which Hume and much of the tradition preceding him extensionalizes intentional relations, Kant does not. Reflection on how Kant manages to avoid extensionalism, and on his theory of intentionality in general, provides us with an unusual and illuminating perspective on Kant's metaphysical and epistemological project.

Journal ArticleDOI
01 Jan 1988-Synthese
TL;DR: The traditional distinction that places studies of scientific discovery outside the philosophy of science, in psychology, sociology, or history, is no longer valid in view of the existence of computer systems of discovery.
Abstract: New computer systems of discovery create a research program for logic and philosophy of science. These systems consist of inference rules and control knowledge that guide the discovery process. Their paths of discovery are influenced by the available data and the discovery steps coincide with the justification of results. The discovery process can be described in terms of fundamental concepts of artificial intelligence such as heuristic search, and can also be interpreted in terms of logic. The traditional distinction that places studies of scientific discovery outside the philosophy of science, in psychology, sociology, or history, is no longer valid in view of the existence of computer systems of discovery. It becomes both reasonable and attractive to study the schemes of discovery in the same way as the criteria of justification were studied: empirically as facts, and logically as norms.
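A miniature example in the spirit of the BACON-like discovery systems the abstract alludes to: a heuristic search over simple combinations of two observed variables for one that is invariant across the data. The data and candidate terms are illustrative assumptions.

# Search candidate terms for one that is (nearly) constant over the observations
# and propose it as a law, e.g., x * y = const for pressure-volume data.
data = [(1.0, 2.0), (2.0, 1.0), (4.0, 0.5), (8.0, 0.25)]

candidates = {
    "x + y": lambda x, y: x + y,
    "x - y": lambda x, y: x - y,
    "x * y": lambda x, y: x * y,
    "x / y": lambda x, y: x / y,
}

def nearly_constant(values, tol=1e-6):
    return max(values) - min(values) < tol

for name, f in candidates.items():
    values = [f(x, y) for x, y in data]
    if nearly_constant(values):
        print("discovered law:", name, "=", values[0])   # x * y = 2.0

Here the path of discovery is fixed by the available data and the heuristic, and the constancy check that yields the law is at the same time its justification, illustrating the coincidence of discovery steps and justification mentioned above.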

Journal ArticleDOI
Antti Koura1
01 Feb 1988-Synthese
TL;DR: In this paper, a semantical analysis of why-questions is given, with a focus on the conditions for conclusive answerhood in the case of whyquestions, and some topics in the theory of explanation are discussed.
Abstract: The purpose of this paper is to give a semantical analysis of why-questions. Why-questions will be construed as requests for knowledge. Special attention will be paid to considering what the conditions for conclusive answerhood are in the case of why-questions. Since explanations can often be thought of as answers to why-questions, we also discuss some topics in the theory of explanation.

Journal ArticleDOI
01 Dec 1988-Synthese
TL;DR: In this article, the authors argue that Kant's use of causal locutions to describe things in themselves is simply his attempt to capture the fact that as the objects that we are related to in experience, the existence of things in their own ontology is presupposed by any account of the nature of our experience of them.
Abstract: In this paper I examine Kant's use of causal language to characterize things in themselves. Following Nicholas Rescher, I contend that Kant's use of such causal language can only be understood by first coming to grips with the relation of things in themselves to appearances. Unlike Rescher, however, I argue that things in themselves and appearances are not numerically distinct entities. Rather, I claim that it is things in themselves that we are intentionally related to in veridical experience, though of course we know them only as they appear to us via our subjective experiential faculties. In light of this account of the role of things in themselves in Kant's account of experience, I argue that his use of causal locutions to describe things in themselves is simply his attempt to capture the fact that as the objects that we are related to in experience, the existence of things in themselves is presupposed by any account of the nature of our experience of them.

Journal ArticleDOI
01 Mar 1988-Synthese
TL;DR: The authors show that it follows from both externalist and internalist theories that stupid people may be in a better position to know than smart ones, and that knowledge, as contemporary theories construe it, is not a particularly valuable cognitive achievement.
Abstract: I show that it follows from both externalist and internalist theories that stupid people may be in a better position to know than smart ones. This untoward consequence results from taking our epistemic goal to be accepting as many truths as possible and rejecting as many falsehoods as possible, combined with a recognition that the standard for acceptability cannot be set too high, else scepticism will prevail. After showing how causal, reliabilist, and coherentist theories devalue intelligence, I suggest that knowledge, as contemporary theories construe it, is not a particularly valuable cognitive achievement, and that we would do well to reopen epistemology to the study of cognitive excellences of all sorts.

Journal ArticleDOI
01 Mar 1988-Synthese
TL;DR: In this paper, it is proposed that knowledge is equivalent to undefeated justification, which is justification on the basis of every system that eliminates or corrects any error in what a person accepts.
Abstract: Internalism and externalism are both false. What is needed to convert true belief into knowledge is the appropriate blend of subjective and objective factors to yield the appropriate sort of connection between mind and the world. The sort of knowledge explicated is called metaknowledge and is knowledge that involves the evaluation of incoming information in terms of a background system. It is proposed that knowledge is equivalent to undefeated justification, which is justification on the basis of every system that eliminates or corrects any error in what a person accepts. The system of all such systems is called the ultrasystem of the person. This account appeals both to internal factors and external factors and involves appeal to both normative requirements and empirical constraints. Justification is defined in terms of a comparative notion of rationality adapted from Chisholm.

Journal ArticleDOI
01 Nov 1988-Synthese
TL;DR: In this paper, the authors discuss some philosophical problems concerning the geometrization of physics and propose that geometrization and unification are strongly combined.
Abstract: In this paper I discuss some philosophical problems concerning the geometrization of physics, and propose that geometrization and unification are strongly combined.

Journal ArticleDOI
01 Jul 1988-Synthese
TL;DR: The prisoner's dilemma is an exploitable Newcomb problem only if there are particular sorts of similarity from which decision-makers can argue that cooperation may maximise conditional expected utility as discussed by the authors.
Abstract: David Lewis has shown that if a certain similarity between participants can be assumed, as it would seem it can, then the two-party prisoner's dilemma is a Newcomb problem.1 This result suggests that if it is rational to maximise conditional expected utility then it will sometimes be rational to cooperate in a prisoner's dilemma.2 But one of the firmest intuitions around is that cooperating in a one-shot prisoner's dilemma is irrational. And so the suggestion must count against the evidential decision theory which supports such maximisation. By the same token it will count in favour of the causal decision theory which Lewis himself supports.3 I wish to argue, however, that except under one marginal sort of circumstance, the Lewis result will not support a policy of cooperation for decision-makers who seek to maximise conditional expected utility. From an outside point of view there may be grounds for thinking of a prisoner's dilemma as a Newcomb problem, but those grounds fail to provide a basis on which participants might think of cooperating. The prisoner's dilemma, at best, is an unexploitable Newcomb problem. In the next section I provide essential background to this argument, giving an account of the Newcomb problem, the prisoner's dilemma and the Lewis result. The prisoner's dilemma is an exploitable Newcomb problem only if there are particular sorts of similarity from which participants can argue that cooperation may maximise conditional expected utility. In the third section I consider how they might try to reason to this conclusion from their common rationality; in the fourth how they might try to do so, more generally, from any 'option-based' similarity; and in the fifth how they might try to run an argument from an 'agent-based' similarity. I propound three theses, defended respectively in these last three sections. Thesis 1 is that the argument from rationality fails. Thesis 2 is that any argument from such an option-based similarity also fails. And Thesis 3 is that while an argument from an agent-based similarity
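The conditional-expected-utility reasoning at issue can be made concrete with standard prisoner's dilemma payoffs and an assumed degree of similarity s between the players (the probability that the other's choice matches mine). The payoff numbers and values of s are illustrative assumptions, not the paper's.

# Conditional expected utility of cooperating vs. defecting in a one-shot
# prisoner's dilemma, given similarity s between the two players.
payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def conditional_eu(my_choice, s):
    other_same = payoff[(my_choice, my_choice)]
    other_diff = payoff[(my_choice, "D" if my_choice == "C" else "C")]
    return s * other_same + (1 - s) * other_diff

for s in (0.5, 0.9):
    print(s, conditional_eu("C", s), conditional_eu("D", s))
# At s = 0.9 cooperation has the higher conditional expected utility (2.7 vs 1.4),
# even though defection is dominant from the causal point of view.

The question pressed in the paper is whether a participant inside the game could ever be entitled to such a value of s, rather than whether the arithmetic favours cooperation once s is granted.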

Journal ArticleDOI
01 Apr 1988-Synthese
TL;DR: In this paper, a measure is constructed on the set of all codes for computable functions in such a way that the measure of every effectively accountable subset is bounded by a number β < 1.
Abstract: A case is made for supposing that the total probability accounted for in a decision analysis is less than unity. This is done by constructing a measure on the set of all codes for computable functions in such a way that the measure of every effectively accountable subset is bounded by a number β<1. The consistency of these measures with the Savage axioms for rational preference is established. Implications for applied decision theory are outlined.

Journal ArticleDOI
01 Mar 1988-Synthese
TL;DR: In this paper, it is argued that BonJour's Doxastic Presumption cannot play the role which is required of it to make his internalism work, and that all other internalisms are motivated by a Cartesian view of an agent's access to her own mental states.
Abstract: This paper examines Laurence BonJour's defense of internalism in The Structure of Empirical Knowledge with an eye toward better understanding the issues which separate internalists from externalists. It is argued that BonJour's Doxastic Presumption cannot play the role which is required of it to make his internalism work. It is further argued that BonJour's internalism, and, indeed, all other internalisms, are motivated by a Cartesian view of an agent's access to her own mental states. This Cartesian view is argued to be untenable, and, accordingly, so is internalism.

Journal ArticleDOI
01 Aug 1988-Synthese
TL;DR: Issues that arise in using game theory to model national security problems are discussed, including positing nation-states as players, assuming that their decision makers act rationally and possess complete information, and modeling certain conflicts as two-person games.
Abstract: Issues that arise in using game theory to model national security problems are discussed, including positing nation-states as players, assuming that their decision makers act rationally and possess complete information, and modeling certain conflicts as two-person games. A generic two-person game called the Conflict Game, which captures strategic features of such variable-sum games as Chicken and Prisoners' Dilemma, is then analyzed. Unlike these classical games, however, the Conflict Game is a two-stage game in which each player can threaten to retaliate — and carry out this threat in the second stage — if its opponent chose noncooperation in the first stage. Conditions for the existence of different pure-strategy Nash equilibria, or stable outcomes, are found, and these results are extended to situations in which the players can select mixed strategies (i.e., make probabilistic threats or choices). Although the Conflict Game sheds light on the rational foundations underlying arms races, nuclear deterrence, and other strategic situations, more detailed assumptions are required to tie this generic game to specific conflicts.
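The two-stage threat structure described above can be sketched by checking how retaliation changes stage-one incentives; the payoff numbers, the retaliation costs, and the simple "retaliate if exploited" rule are illustrative assumptions, not the paper's specification of the Conflict Game.

# Stage-1 payoffs of a PD-like game, plus a stage-2 option for an exploited
# player to retaliate at a cost to itself and a larger cost to the exploiter.
stage1 = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
RETALIATION = (-2, -4)          # (cost to the retaliator, damage to the exploiter)

def outcome(row, col, row_threatens, col_threatens):
    r, c = stage1[(row, col)]
    if row == "C" and col == "D" and row_threatens:   # row was exploited and carries out its threat
        r, c = r + RETALIATION[0], c + RETALIATION[1]
    if col == "C" and row == "D" and col_threatens:   # col was exploited and carries out its threat
        r, c = r + RETALIATION[1], c + RETALIATION[0]
    return r, c

for row in "CD":
    for col in "CD":
        print(row, col, outcome(row, col, True, True))

With both threats in place, unilateral defection yields the defector only 5 - 4 = 1 instead of 5, so mutual cooperation at (3, 3) can be sustained as a stable outcome on these assumed numbers.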

Journal ArticleDOI
Roy Sorensen1
01 Apr 1988-Synthese
TL;DR: The authors show that the sorites paradox cannot be solved through restrictions, revisions, or rejection of either classical logic or common sense, and conclude that vague predicates are dispensable or they are identical to blurry predicates.
Abstract: My thesis is that the sorites paradox can be resolved by viewing vagueness as a type of irremediable ignorance.1 I begin by showing that the paradox cannot be solved through restrictions, revisions, or rejection of either classical logic or common sense. I take the key issue raised by the sorites to be "limited sensitivity": are there changes too small to ever affect the applicability of a vague predicate? I argue that the only consistent answer is negative, and blame our tendency to think otherwise on a fallacious proportionality principle and a background of anti-realist theories of meaning. These theories of meaning encourage the view that perceptual, pedagogical, and memory limits would preclude unlimited sensitivity. Refutation of this view comes in the form of a reduction of vague predicates to "blurry" predicates. Since blurry predicates have unlimited sensitivity and are indistinguishable from their vague counterparts, I conclude that either vague predicates are dispensable or they are identical to blurry predicates. The sorites appears to have originated with Eubulides. One well known version is the paradox of the heap which takes the form of a mathematical induction. The base step of the induction claims that a collection of sand containing, say, one million grains of sand, is a heap. The induction step claims that any heap remains a heap if only one grain of sand is removed from it. Classical logic allows us to validly infer from these two propositions that a collection of sand containing one grain of sand is a heap. One has resolved the paradox of the heap iff one has shown how Eubulides' argument (and its variations) is unsound. Thus one can classify resolutions of the paradox in accordance with the manner in which they constitute objections to the soundness of Eubulides' argument. There are two basic kinds of objections to the soundness of an argument: a challenge to the truth of its premises and a challenge to its validity.
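The inductive structure of Eubulides' argument is easy to mechanize, which makes vivid how directly the base step and the tolerance principle collide with common sense; the starting size below simply restates the abstract's setup.

# Run the heap induction downward: start from a clear heap and apply the
# tolerance principle ("removing one grain leaves a heap") all the way down.
grains = 1_000_000
is_heap = True                  # base step: a million grains is a heap

while grains > 1:
    grains -= 1                 # induction step: one grain removed, heap status preserved
print(grains, is_heap)          # prints "1 True": a single grain counts as a heap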

Journal ArticleDOI
Brent Mundy1
01 Apr 1988-Synthese
TL;DR: In this paper, it was shown that weak extensive measurement is a more natural model of actual physical extensive scales than is the standard model using strong extensive measurement, and the present apparatus is applied to slightly simplify the existing necessary and sufficient conditions for strong extensive measurements.
Abstract: Extensive measurement theory is developed in terms of the ratio of two elements of an arbitrary (not necessarily Archimedean) extensive structure; this extensive ratio space is a special case of a more general structure called a ratio space. Ratio spaces possess a natural family of numerical scales (r-scales) which are definable in non-representational terms; the r-scales for an extensive ratio space thus constitute a family of numerical scales (extensive r-scales) for extensive structures which are defined in a non-representational manner. This is interpreted as involving a relational theory of quantity which contrasts in certain respects with the qualitative theory of quantity implicit in standard representational extensive measurement theory. The representational properties of extensive r-scales are investigated, and found to coincide with weak extensive measurement in the sense of Holman. This provides support for the thesis (developed in a separate paper) that weak extensive measurement is a more natural model of actual physical extensive scales than is the standard model using strong extensive measurement. Finally, the present apparatus is applied to slightly simplify the existing necessary and sufficient conditions for strong extensive measurement.

Journal ArticleDOI
01 Jun 1988-Synthese
TL;DR: In this paper several views on the fuzziness of concepts are pointed out to have stemmed from dubious concepts of fuzziness and an alternative definition based on classes (in the sense of axiomatic set theory) is proposed.
Abstract: It has been a vexing question in recent years whether concepts are fuzzy. In this paper several views on the fuzziness of concepts are pointed out to have stemmed from dubious concepts of fuzziness. The underlying notions of the roles feasibly played by prototype, set, and probability in modeling concepts strongly suggest that the controversy originates from a vague relation between intuitive and mathematical ideas in the cognitive sciences. It is argued that the application of fuzzy sets cannot resolve this vagueness since they are one-sided, viz., defined on sets. An alternative definition based on classes (in the sense of axiomatic set theory) is proposed.

Journal ArticleDOI
01 Feb 1988-Synthese
TL;DR: In this paper, a formal treatment of the system of quantified epistemic logic sketched in Appendix II of Carlson (1983) is presented; a possible worlds semantics for the system is defined, and the Appendix system is shown to be complete with respect to this semantics.
Abstract: This paper contains a formal treatment of the system of quantified epistemic logic sketched in Appendix II of Carlson (1983). Section 1 defines the syntax and recapitulates the model set rules and principles of the Appendix system. Section 2 defines a possible worlds semantics for this system, and shows that the Appendix system is complete with respect to this semantics. Section 3 extends the system by an explicit truth operator T “it is true that” and considers quantification over nonexistent individuals. Section 4 formalizes the idea of variable identity criteria typical of Hintikkian epistemic logic.
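The possible-worlds semantics mentioned in Section 2 can be illustrated with a minimal propositional Kripke-model evaluator for a knowledge operator; the worlds, accessibility relation, and formula encoding below are illustrative assumptions and do not reproduce the Appendix system's quantified machinery.

# K(p) holds at a world iff p holds at every epistemically accessible world.
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w3", "w1"}}   # epistemic alternatives
valuation = {"w1": {"p"}, "w2": {"p"}, "w3": set()}

def holds(formula, world):
    kind = formula[0]
    if kind == "atom":
        return formula[1] in valuation[world]
    if kind == "not":
        return not holds(formula[1], world)
    if kind == "K":
        return all(holds(formula[1], w) for w in access[world])
    raise ValueError(kind)

print(holds(("K", ("atom", "p")), "w1"))   # True: p holds at every alternative to w1
print(holds(("K", ("atom", "p")), "w3"))   # False: p fails at w3 itself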

Journal ArticleDOI
01 Oct 1988-Synthese
TL;DR: The author sets out the methods followed by Husserl and Frege in their logical discoveries, and insists above all on Husserl's position with respect to traditional logic, notably Aristotelian logic.

Abstract: The author sets out the methods followed by Husserl and Frege in their logical discoveries. He insists above all on Husserl's position with respect to traditional logic, notably Aristotelian logic.