
Showing papers in "Synthese in 1977"


Journal ArticleDOI
01 Oct 1977-Synthese
TL;DR: A geometrical interpretation of independence and exchangeability leads to understanding the failure of de Finetti's theorem for finite exchangeable sequences: an exchangeable sequence of length r that can be extended to an exchangeable sequence of length k is almost a mixture of independent experiments, with the error going to zero like 1/k.
Abstract: A geometrical interpretation of independence and exchangeability leads to understanding the failure of de Finetti's theorem for a finite exchangeable sequence. In particular an exchangeable sequence of length r which can be extended to an exchangeable sequence of length k is almost a mixture of independent experiments, the error going to zero like 1/k.
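A schematic rendering of the bound described above may help (a sketch only; the constant C(r) stands in for whatever the paper's exact constant is):

```latex
% If P_r is the law of an exchangeable sequence of length r that can be
% extended to an exchangeable sequence of length k >= r, then for some
% mixing measure \mu over i.i.d. laws \theta,
\left\| P_r \;-\; \int \theta^{\otimes r}\, d\mu(\theta) \right\|_{\mathrm{TV}}
  \;\le\; \frac{C(r)}{k}
% so the distance to a mixture of independent experiments is O(1/k).
```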

180 citations


Journal ArticleDOI
Isaac Levi1
01 Apr 1977-Synthese
TL;DR: X, who says 'It is probable that h', and Y, who says 'It is improbable that h', are shown to disagree in the way they evaluate h with respect to the credal probability to be used in practical deliberation and scientific inquiry in computing expectations.
Abstract: X says ‘It is probable that h’ and Y says ‘It is improbable that h’. No doubt X and Y disagree in some ways. In particular, they disagree in the way they evaluate h with respect to credal (or personal) probability to be used in practical deliberation and scientific inquiry in computing expectations.

170 citations


Journal ArticleDOI
01 Sep 1977-Synthese
TL;DR: The stimulus is multiple: letters from friends calling my attention to a dispute in journal articles, in letters to editors, and in books, about what is described as 'the Neyman-Pearson school' and particularly what is described as Neyman's 'radical' objectivism.
Abstract: journal especially given to foundations of probability and statistics. The other stimulus is multiple: letters from friends calling my attention to a dispute in journal articles, in letters to editors, and in books, about what is described as 'the Neyman-Pearson school' and particularly what is described as Neyman's 'radical' objectivism. While being grateful to my friends for their effort to keep me informed, I have to admit that, owing to a variety of present research preoccupations, I have not read the whole of the literature mentioned to me. However, I glanced at the published exchange of letters and at the book by de Finetti [1]. My reactions are somewhat mixed. First, I feel honored by the attention given to my writings, primarily those published more than a quarter of a century ago (see [2]). Next, I feel a degree of amusement when reading an exchange between an authority in 'subjectivistic statistics' and a practicing statistician, more or less to this effect:

144 citations


Journal ArticleDOI
Robert Nozick1
01 Nov 1977-Synthese
TL;DR: The major figures of the Austrian tradition in economic theory are Carl Menger and Friedrich von Wieser, originators of marginal utility theory, Eugen von Böhm-Bawerk, and in this century Ludwig von Mises and the co-winner of the 1974 Nobel Prize in Economics, Friedrich Hayek.
Abstract: The major figures of the Austrian tradition in economic theory are Carl Menger and Friedrich von Wieser, originators of marginal utility theory, Eugen von Böhm-Bawerk, and in this century Ludwig von Mises and the co-winner of the 1974 Nobel Prize in Economics, Friedrich Hayek.1 A framework of methodological principles and assumptions, which economists in other traditions either do not accept or do not adhere to, shapes and informs the substantive theory of Austrian economics. I shall focus on the most fundamental features of this framework, the principle of methodological individualism and the claim that economics is an a priori science of human action, and upon two issues at the foundation of Austrian theory within this framework: the nature of preference and its relationship to action, and the basis of time-preference. I shall be forced to neglect the farthest reaches of the theory, for example, the Austrian theory of the business cycle, where still the fundamental methodological theses intertwine. I also shall leave untouched other illuminating distinctive emphases and approaches of Austrian theory, e.g. the constant awareness of and attention to processes occurring in and through time, the study of the coordination of actions and projects when information is decentralized, the realistic theory of competitive processes. Nor shall I be able to detail the intricate interconnections of the different Austrian themes.

115 citations


Journal ArticleDOI
01 Sep 1977-Synthese
TL;DR: The Lindley-Savage argument for Bayesian theory is shown to have no direct cogency as a criticism of typical standard practice, since it is based on a behavioral, not an evidential, interpretation of decisions.
Abstract: The concept of a decision, which is basic in the theories of Neyman-Pearson, Wald, and Savage, has been judged obscure or inappropriate when applied to interpretations of data in scientific research, by Fisher, Cox, Tukey, and other writers. This point is basic for most statistical practice, which is based on applications of methods derived in the Neyman-Pearson theory or analogous applications of such methods as least squares and maximum likelihood. Two contrasting interpretations of the decision concept are formulated: behavioral, applicable to 'decisions' in a concrete literal sense as in acceptance sampling; and evidential, applicable to 'decisions' such as 'reject H₁' in a research context, where the pattern and strength of statistical evidence concerning statistical hypotheses is of central interest. Typical standard practice is characterized as based on the confidence concept of statistical evidence, which is defined in terms of evidential interpretations of the 'decisions' of decision theory. These concepts are illustrated by simple formal examples with interpretations in genetic research, and are traced in the writings of Neyman, Pearson, and other writers. The Lindley-Savage argument for Bayesian theory is shown to have no direct cogency as a criticism of typical standard practice, since it is based on a behavioral, not an evidential, interpretation of decisions.

73 citations


Journal ArticleDOI
01 Dec 1977-Synthese
TL;DR: The concept of indeterminacy is a generalization that goes strictly beyond ordinary probability theory, and thus provides a means of expressing the intuitions of those philosophers who are not satisfied with a purely probabilistic notion of uncertainty.
Abstract: For a variety of reasons there has been considerable interest in upper and lower probabilities as a generalization of ordinary probability. Perhaps the most evident way to motivate this generalization is to think of the upper and lower probabilities of an event as expressing bounds on the probability of the event. The most interesting case conceptually is the assignment of a lower probability of zero and an upper probability of one to express maximum ignorance. Simplification of standard probability spaces is given by random variables that map one space into another and usually simpler space. For example, if we flip a coin a hundred times, the sample space describing the possible outcome of each flip consists of 2¹⁰⁰ points, but by using the random variable that simply counts the number of heads in each sequence of a hundred flips we can construct a new space that contains only 101 points. Moreover, the random variable generates in a direct fashion the appropriate probability measure on the new space. What we set forth in this paper is a similar method for generating upper and lower probabilities by means of random relations. The generalization is a natural one; we simply pass from functions to relations, and the multivalued character of the relations leads in an obvious way to upper and lower probabilities. The generalization from random variables to random relations also provides a method for introducing a distinction between indeterminacy and uncertainty that we believe is new in the literature.
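The multivalued-mapping construction the abstract sketches can be made concrete in a few lines; the following is an illustrative sketch only (the function and variable names are hypothetical, not the paper's):

```python
# Sketch: upper and lower probabilities generated by a random relation.
# A random variable is a function from the base space; a random relation
# is a multivalued map, and the multivaluedness yields the two measures.

def lower_upper(prob, relation, event):
    """prob: dict point -> probability on the base space.
    relation: dict point -> set of possible images (a multivalued map).
    event: set of points in the target space.
    Returns (lower, upper) probabilities of `event`."""
    lower = sum(p for x, p in prob.items() if relation[x] <= event)  # R(x) wholly inside event
    upper = sum(p for x, p in prob.items() if relation[x] & event)   # R(x) meets event
    return lower, upper

# Maximum ignorance: every point may map anywhere in {0, 1}, so a
# nontrivial event gets lower probability 0 and upper probability 1.
prob = {"a": 0.5, "b": 0.5}
relation = {"a": {0, 1}, "b": {0, 1}}
print(lower_upper(prob, relation, {1}))  # (0, 1.0)
```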

69 citations


Journal ArticleDOI
01 Sep 1977-Synthese
TL;DR: Having provided in Part I an overall simple theoretical account of the structure of perceptual experience, the next task is to find the proper logical machinery to formulate those accounts rigorously.
Abstract: We have now provided an overall simple theoretical account of the structure of perceptual experience proto-philosophically examined in Part I. The next task is to find the proper logical machinery to formulate those accounts rigorously.

62 citations


Book ChapterDOI
01 Dec 1977-Synthese
TL;DR: Philosophers of past times have claimed that the question, Is visual space Euclidean?, can be answered by a priori or purely philosophical methods; today it would be generally agreed that the answer is surely empirical, though perhaps for indirect reasons.
Abstract: Philosophers of past times have claimed that the answer to the question, Is visual space Euclidean?, can be answered by a priori or purely philosophical methods. Today such a view is presumably held only in remote philosophical backwaters. It would be generally agreed that one way or another the answer is surely empirical, but the answer might be empirical for indirect reasons. It could be decided by physical arguments that physical space is Euclidean and then by conceptual arguments about perception that necessarily the visual space must be Euclidean. To some extent this must be the view of many laymen who accept that to a high degree of approximation physical space is Euclidean, and therefore automatically hold the view that visual space is Euclidean.

48 citations


Journal ArticleDOI
01 Apr 1977-Synthese
TL;DR: The author restates, more bluntly, the account of dispositional properties or powers given in Truth, Probability, and Paradox (TPP), and relates it to the account of causation offered in The Cement of the Universe (CU) and to the views about primary qualities and real essence developed in Problems from Locke (PL).
Abstract: In Chapter 4 of my Truth, Probability, and Paradox (hereafter TPP) I tried to give an account of dispositional properties or powers. However, I failed to make my views clear to some readers, partly because I developed my own position only gradually and by way of criticism of others, and some who have understood it have found my key arguments unconvincing. Consequently, though my views have not substantially changed, it may be worth while to present my positive conclusions more bluntly and so, I hope, more clearly, and to reinforce the arguments that support my chief claims. At the same time I shall be able to relate what I have to say about dispositions both to the account of causation which I have since offered in The Cement of the Universe (hereafter ‘CU’) and to views about primary qualities and real essence developed in Problems from Locke (hereafter ‘PL’). This account will not replace the more discursive treatment in TPP, but it may clarify and strengthen it.

43 citations


Journal ArticleDOI
01 Dec 1977-Synthese

37 citations


Journal ArticleDOI
01 Jan 1977-Synthese
TL;DR: Among the greatest philosophers of science of all times one would surely have to include Aristotle, Descartes, Leibniz, Hume, and Kant; in an important sense Kant represents a culmination of this tradition, on account of his strenuous attempts to provide an epistemological and metaphysical analysis appropriate to mature Newtonian science.
Abstract: Among the greatest philosophers of science of all times one would surely have to include Aristotle, Descartes, Leibniz, Hume, and Kant. In an important sense Kant represents a culmination of this tradition on account of his strenuous attempts to provide an epistemological and metaphysical analysis appropriate to mature Newtonian science. There were, to be sure, significant developments in classical physics after Kant’s death — e.g. Maxwell’s electromagnetic theory — but these seemed more like completions of the Newtonian system than revolutionary subversions which would demand profound conceptual and philosophical revisions. It was only in 1905 that the first signs of fundamental downfall of classical physics began to be discernible; even though Planck’s quantum hypothesis and the Michelson-Morley experiment had occurred earlier, their crucial importance was only later recognized.

Book ChapterDOI
01 Apr 1977-Synthese
TL;DR: The fundamental concepts of physical ontology are those of objects and events; it is widely assumed that the world itself is amenable to being characterized by means of an event ontology or an object ontology, where the outstanding difficulty is simply one of finding the right sort of fit.
Abstract: Perhaps the fundamental concepts of physical ontology are those of objects and events; for it is widely assumed that the world itself is amenable to being characterized successfully by means of an event ontology or an object ontology, where the outstanding difficulty is simply one of finding the right sort of fit. Although these pathways have seemed promising, they have not been without their own distinctive difficulties, for despite an area of agreement concerning suitable criteria for the individuation of objects, substantial disagreement abounds regarding appropriate standards for the differentiation of events.1 This matter is consequential for both perspectives, moreover, since whether objects are to be constructed from events or events from objects, neither view presumes either category alone provides a sufficient foundation for an adequate ontology.2 The problems which they share have resisted successful explication, nevertheless.

Journal ArticleDOI
01 Dec 1977-Synthese
TL;DR: The theory offered rests on a natural extension of the standard techniques used in modal logic for the analysis of concepts of necessity and possibility, combined with standard techniques of measure theory.
Abstract: A measure of degrees of similarity between possible worlds can be used to generate measures over propositions, or sets of possible worlds. These measures over propositions will count as 'probability measures', at least in the sense that they satisfy the axioms of the probability calculus. In a previous article (Bigelow, 1976), I have outlined one way in which such probability measures can be generated. In the present article I will present a considerably less devious way of generating probability measures. I will draw on two resources. My first resource will be standard techniques of measure theory. I will borrow lavishly from an excellent mathematics textbook by Friedman (1970). My second resource will be provided by concepts originating in modal logic, and also in the analysis of counterfactuals given by David Lewis (1973). The theory I offer rests on a natural extension of standard techniques used in modal logic for the analysis of concepts of necessity and possibility. The semantics of modal logic rest on a relation, called a (strict) accessibility relation on possible worlds. Different accessibility relations provide us with analyses of different concepts of necessity and possibility (see Hughes and Cresswell, 1968). A strict accessibility relation is used to give an analysis of the relationship of necessitation which may hold between propositions. We say it is true, in a given possible world, that one proposition necessitates another, when all the accessible worlds in which the first is true are worlds in which the second is true as well. But there are important relations among propositions which a strict accessibility relation does not illuminate. It may be, in particular, that though one proposition does not strictly necessitate another, yet it does nevertheless provide good inductive support for it. If the first is true, it may be extremely probable that the second will be true as well. This will be so, I will maintain, when most of the worlds in which the first is true are worlds in which the second is true.
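The two relationships contrasted in the abstract can be set side by side schematically (a paraphrase in standard Kripke-style notation, not the paper's own symbols):

```latex
% Strict necessitation via an accessibility relation R:
% p necessitates q at w iff every R-accessible p-world is a q-world.
w \models (p \Rightarrow q) \iff
  \forall w' \,\bigl( wRw' \wedge w' \models p \;\rightarrow\; w' \models q \bigr)

% Inductive support via a measure \mu_w over accessible worlds:
% p makes q probable at w iff most accessible p-worlds are q-worlds.
P_w(q \mid p) \;=\;
  \frac{\mu_w(\{\, w' : w' \models p \wedge q \,\})}{\mu_w(\{\, w' : w' \models p \,\})}
```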

Book ChapterDOI
01 Feb 1977-Synthese
TL;DR: Perhaps the most difficult problem confronted by Reichenbach's explication of physical probabilities as limiting frequencies is that of providing decision procedures for assigning singular occurrences to appropriate reference classes, i.e., the problem of the single case.
Abstract: Perhaps the most difficult problem confronted by Reichenbach’s explication of physical probabilities as limiting frequencies is that of providing decision procedures for assigning singular occurrences to appropriate reference classes, i.e., the problem of the single case.1 Presuming the symmetry of explanations and predictions is not taken for granted, this difficulty would appear to have two (possibly non-distinct) dimensions, namely: the problem of selecting appropriate reference classes for predicting singular occurrences, i.e., the problem of (single case) prediction, and the problem of selecting appropriate reference classes for explaining singular occurrences, i.e., the problem of (single case) explanation. If the symmetry thesis is theoretically sound, then these aspects of the problem of the single case are actually non-distinct, since any singular occurrence should be assigned to one and the same reference class for purposes of either kind; but if it is not the case that singular occurrences should be assigned to one and the same reference class for purposes of either kind, then these aspects are distinct and the symmetry thesis is not sound.2

Journal ArticleDOI
01 Sep 1977-Synthese
TL;DR: The author reluctantly adds to the extensive literature on foundations; a further reason for the essay is the fortuitous circumstance that he had the benefit of five years of spirited discussions with Etienne Halphen, whose views were not entirely out of line with the present neo-Bayesian philosophies.
Abstract: Discussions about foundations are typically accompanied by much unnecessary proselytism, name calling and personal animosities. Since they rarely contribute to the advancement of the debated discipline one may be strongly tempted to brush them aside in the direction of the appropriate philosophers. However, there is always a ghost of a chance that some new development might be spurred by the arguments. Also the possibly desirable side effects of the squabbles on the teaching and on the standing of the debated disciplines cannot be entirely ignored. This partly explains why the present author reluctantly agreed to add to the extensive literature on the subject. Another reason for the present essay is the fortuitous circumstance that this author had the benefit of five years of spirited discussions with Etienne Halphen whose views were not entirely out of line with the present neo-Bayesian philosophies. By virtue of these special circumstances, it happened that, contrary to what seems to be the case for most American statisticians, we learned a form of the neo-Bayesian creed before being exposed to the classical theory of statistics. From these long discussions, in which the works of de Morgan, Venn, Keynes, Jeffreys and many others were constantly quoted, we have retained a certain awe for the Bayesian approach itself, but above all for the fascination its attractive simplicity seems to have for the sharpest minds. However, we could not defend ourselves at that time, and cannot defend ourselves now from the sentiment that this age and times should see an elaboration of a formalized mathematical theory in which the subject could not only be debated but also studied.

Journal ArticleDOI
01 Nov 1977-Synthese
TL;DR: The world I grew up in believed that change and development in life are part of a continuous process of cause and effect, minutely and patiently sustained throughout the millenniums.
Abstract: The world I grew up in believed that change and development in life are part of a continuous process of cause and effect, minutely and patiently sustained throughout the millenniums. With the exception of the initial act of creation ..., the evolution of life on earth was considered to be a slow, steady and ultimately demonstrable process. No sooner did I begin to read history, however, than I began to have my doubts. Human society and living beings, it seemed to me, ought to be excluded from so calm and rational a view. The whole of human development, far from having been a product of steady evolution, seemed subject to only partially explicable and almost invariably violent mutations. Entire cultures and groups of individuals appeared imprisoned for centuries in a static shape which they endured with long-suffering indifference, and then suddenly, for no demonstrable cause, became susceptible to drastic changes and wild surges of development. It was as if the movement of life throughout the ages was not a Darwinian caterpillar but a startled kangaroo, going out towards the future in a series of unpredictable hops, stops, skips and bounds. Indeed, when I came to study physics I had a feeling that the modern concept of energy could perhaps throw more light on the process than any of the more conventional approaches to the subject. It seemed that species, society and individuals behaved more like thunder-clouds than scrubbed, neatly clothed and well-behaved children of reason. Throughout the ages life appeared to build up great invisible charges, like clouds and earth of electricity, until suddenly in a sultry hour the spirit moved, the wind rose, a drop of rain fell acid in the dust, fire flared in the nerve, and drums rolled to produce what we call thunder and lightning in the heavens and chance and change in human society and personality. LAURENS VAN DER POST, The Lost World of the Kalahari

Journal ArticleDOI
J. Kiefer1
01 Sep 1977-Synthese

Journal ArticleDOI
John W. Pratt1
01 Sep 1977-Synthese
TL;DR: To whatever extent the use of a behavioral, not an evidential, interpretation of decisions in the Lindley-Savage argument for Bayesian theory undermines its cogency as a criticism of typical standard practice, it also undermines the Neyman-Pearson theory as a support for typical standard practice.
Abstract: To whatever extent the use of a behavioral, not an evidential, interpretation of decisions in the Lindley-Savage argument for Bayesian theory undermines its cogency as a criticism of typical standard practice, it also undermines the Neyman-Pearson theory as a support for typical standard practice. This leaves standard practice with far less theoretical support than Bayesian methods. It does nothing to resolve the anomalies and paradoxes of standard methods. (Similar statements apply to the common protestation that the models are not real anyway.) The appropriate interpretation of tests as evidence, if possible at all, is difficult and counterintuitive. Any attempt to support tests as more than rules of thumb is doomed to failure.

Journal ArticleDOI
01 Dec 1977-Synthese
TL;DR: In the statistical-relevance model, a reference class A is homogeneous with respect to an attribute B provided there is no set of properties Cᵢ in terms of which A can be relevantly partitioned.
Abstract: The statistical-relevance (S-R) model of scientific explanation involves homogeneous reference classes.1 A reference class A is homogeneous with respect to an attribute B provided there is no set of properties Cᵢ in terms of which A can be relevantly partitioned. A partition of A by means of Cᵢ is relevant with respect to B if, for some value of i, P(A.Cᵢ, B) ≠ P(A, B). To say that a reference class is homogeneous with respect to an attribute does not mean merely that we do not know how to effect a relevant partition, or that there are practical obstacles to carrying out the partition. To say that a reference class is homogeneous (objectively homogeneous, for emphasis) means that there is no way, even in principle, to effect the relevant partition. There are two cases in which homogeneity obtains trivially, namely, if all A are B or if no A are B. This follows from an obvious logical truism. We shall not be interested in trivial homogeneity. In the non-trivial cases, some restrictions must be imposed upon the types of partitions that are to be admitted; otherwise, the concept of homogeneity becomes vacuous in all but the trivial cases. Suppose that P(A, B) = 1/2. Let C₁ = B and C₂ = B̄. Then P(A.C₁, B) = 1 and P(A.C₂, B) = 0; thereby a relevant partition has been achieved. The problem of ruling out unsuitable partitions is precisely the problem Richard von Mises faced in attempting to characterize his 'collectives.' ([15], chap. I; [16], chap. I) A collective, it will be recalled, is an infinite sequence x₁, x₂, x₃, ... in which some attribute B occurs with a relative frequency which converges to a limiting value p. Furthermore, the sequence must be random in the sense that the limiting frequency of B in any subsequence selected from the main sequence by means of a 'place selection' must have the same value p. This is the principle of insensitivity of the probability to place selections; it is also the principle of the impossibility of a gambling system. Roughly speaking, a place selection must determine whether a member of the main sequence belongs to the subsequence without reference to whether the element in question has or lacks the attribute B. There are two types of place selections: (1) selections which determine membership in the
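The relevance condition has a direct finite-frequency analogue that can be checked mechanically; the sketch below is illustrative only (names hypothetical), using relative frequencies in place of the probabilities P(A, B) and P(A.Cᵢ, B):

```python
# Sketch: is a partition {C_i} of reference class A statistically
# relevant to attribute B? Relevant iff, for some i, the frequency of B
# within A & C_i differs from the frequency of B within A overall.

def is_relevant_partition(members, has_B, cells):
    """members: items of reference class A.
    has_B: predicate for attribute B.
    cells: list of predicates C_i partitioning A."""
    base = sum(map(has_B, members)) / len(members)              # P(A, B)
    for C in cells:
        cell = [m for m in members if C(m)]
        if cell and sum(map(has_B, cell)) / len(cell) != base:  # P(A.C_i, B)
            return True   # a relevant partition exists
    return False          # homogeneous relative to this family of partitions
```

Note that the trivializing partition in the abstract (C₁ = B, C₂ = B̄) would pass this test, which is exactly why admissible partitions must be restricted, as with von Mises' place selections.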

Journal ArticleDOI
01 Oct 1977-Synthese
TL;DR: This paper proposes a uniform method for constructing tests, confidence regions, and point estimates, called exact since it reduces to Fisher's so-called exact test in the case of the hypothesis of independence in a 2 × 2 contingency table.
Abstract: This paper proposes a uniform method for constructing tests, confidence regions and point estimates which is called exact since it reduces to Fisher's so-called exact test in the case of the hypothesis of independence in a 2 × 2 contingency table. All the well-known standard tests based on exact sampling distributions are instances of the exact test in its general form. The likelihood ratio and χ² tests as well as the maximum likelihood estimate appear as asymptotic approximations to the corresponding exact procedures.
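For the 2 × 2 case to which the general method reduces, Fisher's exact test is available in standard libraries; a minimal illustration (the data here are made up):

```python
# Fisher's exact test of independence in a 2x2 contingency table,
# the special case to which the paper's general exact method reduces.
from scipy.stats import fisher_exact

table = [[8, 2],   # row 1: e.g. treated, success / failure
         [1, 5]]   # row 2: e.g. control, success / failure
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)
```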

Journal ArticleDOI
01 Nov 1977-Synthese
TL;DR: In this paper, the authors provide an overview of models of individual preference and choice, and direct the reader towards some of the vast literature on the topics that we shall consider. But they do not discuss this process explicitly.
Abstract: The purpose of this paper is to provide an overview of models of individual preference and choice, and to direct the reader towards some of the vast literature on the topics that we shall consider. Our discussion will center around three basic approaches to the structure of preferences and choices, namely binary preferences, choice functions, and choice probability functions. These three approaches will be explored both separately and in terms of interconnections between them. Although an important aspect of decision making is the development and discovery of feasible decision alternatives, I shall not discuss this process explicitly. It will be presumed throughout that the individual is confronted with a non-empty set X of potential decision alternatives which will be denoted as x, y, a, b, .... Generally speaking, the alternatives in X are mutually incompatible in the sense that no more than one alternative can ultimately be chosen and implemented. Moreover, when discussing choices from specified subsets of X, it will generally be assumed that some alternative in each subset must be chosen when that subset is the feasible subset. This latter point deserves elaboration. To provide flexibility in discussing choices and relationships between choices from different 'environments' or subsets of X, we imagine that any one of a number of non-empty subsets of X might emerge as the set of alternatives actually available for implementation in the situation at hand. Notationally, M will denote this set of potentially feasible subsets of X, with members A, B, .... Then we can speak about choices or potential choices from A, from B, and so forth, and consider relationships between these choices in view of relationships between the potential feasible subsets. For example, if A is a subset of B, or A ⊆ B, and if x ∈ A is a 'choice' from B, then we
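The truncated final sentence appears to be introducing a standard coherence condition linking choices across nested feasible sets; a sketch of one such condition (contraction consistency, often called Sen's alpha) follows as an illustration, not as the paper's own formulation:

```python
# Sketch: contraction consistency for a choice function. If x is chosen
# from B and x belongs to a feasible subset A of B, x should also be
# chosen from A. Names here are hypothetical.

def satisfies_alpha(choice, feasible_sets):
    """choice: dict frozenset -> set of chosen alternatives.
    feasible_sets: the family M of potentially feasible subsets."""
    for B in feasible_sets:
        for A in feasible_sets:
            if A <= B:
                for x in choice[B]:
                    if x in A and x not in choice[A]:
                        return False
    return True

M = [frozenset({"x", "y"}), frozenset({"x", "y", "z"})]
choice = {M[0]: {"y"}, M[1]: {"x"}}
print(satisfies_alpha(choice, M))  # False: x chosen from B but not from A
```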

Journal ArticleDOI
01 Sep 1977-Synthese
TL;DR: The inference and decision processes are different and are solved by different methods, Bayes' theorem and maximization of expected utility respectively; it is useful to separate inference from decision because inference can be carried out without considering decisions.
Abstract: The inference and decision processes are therefore different and solved by different methods: Bayes' theorem and maximization of expected utility respectively. The former, in addition to the known p(x | θ), requires only p(θ) and the observed data values. The latter has additional ingredients in the class of available decisions and the utility function. Whilst it is possible to make inferences without considering decisions, the implementation of decision-making requires an earlier calculation of the appropriate inference, p(θ | x). Notice, however, the important point, that every decision problem which depends on θ requires the same inferential statement in order to evaluate the expected value. This is why it is useful to separate inference from decision because inference can be carried out
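The separation described here can be written as two schematic formulas (standard Bayesian notation, not quoted from the paper):

```latex
% 1. Inference, by Bayes' theorem:
p(\theta \mid x) \;=\;
  \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, d\theta'}

% 2. Decision, by maximization of expected utility against that posterior:
d^{*} \;=\; \arg\max_{d \in D} \int U(d, \theta)\, p(\theta \mid x)\, d\theta
```

Every decision problem depending on θ plugs the same posterior p(θ | x) into step 2, which is the abstract's reason for separating inference from decision.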

Book ChapterDOI
01 Mar 1977-Synthese
TL;DR: In this article, the status of a two-dimensional spatial geometry on a disk rotating with respect to an inertial frame in the Minkowski spacetime of the special theory of relativity is discussed.
Abstract: In his (1924; 1969), Hans Reichenbach discussed the status of a two-dimensional spatial geometry on a disk rotating with respect to an inertial frame in the Minkowski spacetime of the special theory of relativity ('STR'). Since then, the literature on this topic has grown very considerably.1

Journal ArticleDOI
01 Jan 1977-Synthese
TL;DR: Gaps remain in Reichenbach's justifying argument, for we cannot hope to prove that induction will succeed if success is possible; but there are good prospects for completing a justification of essentially the kind he sought by showing that while induction may succeed, no alternative is a rational way of trying.
Abstract: Reichenbach held that all scientific inference reduces, via probability calculus, to induction, and he held that induction can be justified. He sees scientific knowledge in a practical context and insists that any rational assessment of actions requires a justification of induction. Gaps remain in his justifying argument; for we can not hope to prove that induction will succeed if success is possible. However, there are good prospects for completing a justification of essentially the kind he sought by showing that while induction may succeed, no alternative is a rational way of trying.

Journal ArticleDOI
01 Oct 1977-Synthese
TL;DR: Systems of statistical inference which utilize personal probabilities are either dualistic or monistic, and monistic systems may be either pure or sham; sham monists pay lip service to monism but covertly introduce the objective or physical probabilities that the central thesis of monism disavows.
Abstract: Current systems of statistical inference which utilize personal probabilities (in short, personalistic systems) are either dualistic or monistic. Dualistic systems postulate the existence of objective or physical probabilities (which are usually identified with limiting relative frequencies of observable events), whereas monistic systems do not countenance such probabilities. The central thesis of monism is that statistics can get along quite well without physical probability and the related concepts of objective randomness and random process. Monistic systems may be either pure or sham. Sham monists pay lip service to monism but covertly introduce physical probabilities, and thus trivialize the central thesis. They accomplish this by introducing the same sort of probability models that dualist statisticians do, under the guise of 'personal' probability distributions of observable random variables conditional on the unknown value of a physical parameter. For example, a sham monist will treat problems that a dualist would describe as involving the unknown mean of a normally distributed population in the same way the dualist would with conditionally independent trials governed by a normal law except that he refuses to call the probabilities determined by the law 'physical probabilities'. He insists that they are merely special kinds of personal probabilities. The same sort of approach is used to treat all the standard problems of statistics, i.e., the probability models which govern the sham monist's observable random variables are going to be the same as the ones used by dualists and objectivists, except that they will be labelled differently. A notable adherent of sham monism was the late L. J. Savage, who advocated pure monism when he was theorizing on a foundational level, but who shifted ground while he tried to incorporate the standard problems of statistics into his theoretical framework (cf. [3], [13]). The difference between sham monists and dualists is that the latter overtly postulate the existence of physical probabilities, whereas the former covertly postulate


Journal ArticleDOI
01 Jun 1977-Synthese
TL;DR: A causal theory of spacetime topology is the claim that, according to some scientific theory, some causal relationship among events is coextensive with some relationship defined by the concepts of the topology of the spacetime.
Abstract: A causal theory of time, or, more properly, a causal theory of spacetime topology, might merely be the claim that according to some scientific theory (the true theory? our best confirmed theory to date?) some causal relationship among events is, as a matter of law or merely as a matter of physically contingent fact, coextensive with some relationship defined by the concepts of the topology of the spacetime. A strongest version of such a causal theory would be one which demonstrated such a coextensiveness between some causally definable notion and some concept of topology (such as ‘open set’) sufficient to fully define all other topological notions. Given such a result, one would have demonstrated that for each topological aspect of the spacetime, a causal relationship among events could be found such that that causal relationship held when and only when the appropriate topological relationship held.
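One standard concrete instance of the coextensiveness the abstract envisages (a textbook result, stated here for illustration and not attributed to this paper) is the Alexandrov topology of a strongly causal spacetime:

```latex
% Basic open sets are chronological "diamonds" defined purely causally:
I(p, q) \;=\; I^{+}(p) \cap I^{-}(q)
% where I^{+}(p) is the chronological future of p and I^{-}(q) the
% chronological past of q. In strongly causal spacetimes these diamonds
% generate exactly the manifold topology, so 'open set' is causally
% definable in the sense the abstract describes.
```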

Journal ArticleDOI
01 Sep 1977-Synthese
TL;DR: An outline of the development of Allan Birnbaum's views on statistical inference, offered to help those unfamiliar with the earlier papers better understand his last paper on the Neyman-Pearson theory, and to motivate looking at the earlier papers as well.
Abstract: Allan Birnbaum died in London in the early summer of 1976. He was 53. Although he was by training and profession a statistician, his intellectual interests were more philosophical than mathematical. During the past fifteen years most of his efforts were devoted to the foundations of statistics. His paper on the Neyman-Pearson Theory (Birnbaum [41]) is only the latest, though now unfortunately the last, of several papers developing a point of view that Birnbaum thought to represent both the theory and practice of the majority of reflective theoretical statisticians. In the following pages I will outline the development of Birnbaum's views on statistical inference. I hope this will help those unfamiliar with the earlier papers better to understand this last paper and also provide additional motivation for looking at the earlier papers as well. Birnbaum was unusual among statisticians in that he actively sought intellectual contact with philosophers as well as with methodologists in various sciences. Thus several of his papers, especially the later ones, were written so as to require relatively little technical expertise in statistics. They must be read very carefully, however, for they are written in a style that is sometimes complex and usually understates the significance of the point being made. The style is an accurate reflection of the man.

Journal ArticleDOI
01 Dec 1977-Synthese
TL;DR: The need to answer the fundamental epistemological question about Luneburg's theory of binocular visual perception becomes even greater if our ideas concerning the geometry of our immediate physical environment are formed not by the physical geometry of yardsticks or by the formal study of Euclidean geometry but rather by the psychometry of our visual sense data.

Journal ArticleDOI
Lars Löfgren1
01 Dec 1977-Synthese
TL;DR: Ideas about existential perceptions are developed by viewing 'reality' as the result of a cerebral description (learning) process working to describe the enormous data flow that stimulates our receptors; to make that description simple enough to be easily comprehensible, a denoting process is likely to occur within the learning process.
Abstract: When we see the table before us, we experience its existence as an unmistakable existential perception. When a mathematician is talking about the existence of a certain number, he does not have the same existential acquaintance with the number even though he might experience a subsistential cognition. Subsistence, or logical or mathematical existence, is a weaker form of existence that is intimately connected with consistence. Concrete existence, on the other hand, will be developed as subsistence with existential perceptions as complementum possibilitatis. Ideas about existential perceptions are developed as follows. We look upon 'reality' as the result of our cerebral description (learning) process when working on trying to describe the enormous data flow that stimulates our receptors. In order to make that description simple enough to be easily comprehensible, a denoting (naming) process is likely to occur within the learning process. In the first place, the denoting process will come into play for frequently occurring collections of, otherwise undescribably large, combinations of properties of the data flow. Such frequent occurrences will strongly confirm the cerebral hypothesis that the combination is unique. We argue that this type of confirmation may cause perceptions, and when it does, perceptions of something unique and concrete (concrete objects have a maximal number of properties), i.e., existential perceptions. Such primitive perceptions are thus looked upon as the result of a first internal description process, working in terms of a first internal cerebral language. Strongly developed existential perceptions may be further analyzed by a second description (learning) process, working in terms of a second internal cerebral language. The domain of interpretation of that language contains the existential perceptions generated by the first learning process. Our conscious thinking as well as our nonconscious 'reasoning' which is responsible for our intuitions is likely to occur in such a second cerebral language.