Topic

Rationality

About: Rationality is a research topic. Over the lifetime, 20,459 publications have been published within this topic, receiving 617,787 citations.


Papers
Book
01 Jan 1996
TL;DR: Modern Theories of Justice, as reviewed in this paper, presents Kolm's general theory of justice together with a comprehensive survey of the main ethical theories influencing the development of normative economics, with a long discussion of utilitarianism and social choice theory.
Abstract: Modern Theories of Justice. By Serge-Christophe Kolm. Cambridge, MA: The MIT Press, 1998. Pp. ix, 525. $42.00. Professor Kolm's books and articles form one of the most important contributions to contemporary social ethics. Even though the main parts of his works were originally published in or have been translated into English, they have not received the attention they deserve. This is sufficient reason to highly recommend his Modern Theories of Justice, which contains a detailed introduction to his other books. Since the late fifties, Professor Kolm has developed a general theory of justice. He has also contributed extensively to the critical evaluation of the various theories, principles, or criteria of justice that influence economists. Professor Kolm presents his theory of justice as resulting from an application of rationality (in the sense of rational justification) to the question of global justice (what should be done in society?), that is, to the definition of the social optimum and of what is right or good in society. Briefly, this social optimum consists of first satisfying basic needs and guaranteeing basic rights, and second allocating society's resources (including human resources) in an equitable way. The latter principle requires mixing the (somewhat competing) moral criteria of equal process freedom (freedom to benefit from the results of one's acts), equal consumption, and equal satisfaction. Process freedom alone justifies free markets and no resource redistribution. Equal consumption, when combined with efficiency, requires superequity (that is, no agent would strictly prefer any convex combination of the allocations received by others to his own allocation). Equal satisfaction requires applying the leximin criterion to welfare levels corresponding to fundamental preferences, a (difficult) concept that allows the social observer, in particularly unjust or unequal situations, to unambiguously identify the worst-off agents, that is, the agents who should be allocated more resources. Among the different ways of mixing process freedom and equal consumption, Professor Kolm elaborated a particularly interesting intermediary case in recent contributions (recall that the whole structure of his theory, including the ideas of the maximin in fundamental preferences, income justice and superequity, unjust inequality measurement, etc., was first developed and presented in the late sixties). The purpose of this intermediary case is to share equally the benefits of possibly unequal productive capacities while letting agents individually benefit from their own consumptive capacities. The solution consists of a fixed-duration income equalization. This criterion is met when all agents in a society face a budget set having the property that, by choosing a prespecified labor time (the so-called fixed duration), any agent would earn the same labor income. Professor Kolm's theory of justice also considers several reasons why the first-best social optimum might not be reached (for example, market failures) and proposes solutions to these problems (for example, social contracts, which give foundations to a theory of the state). Finally, a large part of the book is devoted to a critical appraisal of the main ethical theories influencing the development of normative economics (which justifies the title of the book), with a long discussion of utilitarianism and social choice theory. The general picture is quite impressive.
Professor Kolm is not only able to discuss a long and diversified series of topics, such as the economics of poverty, the no-self in Buddhism, or the ideology of the French Revolution, but he also succeeds in building links among all these topics and integrating them into a unified theory. …
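The superequity criterion described above lends itself to a direct computational check. Below is a minimal sketch, assuming hypothetical Cobb-Douglas utilities and random sampling of convex weights; the book states the criterion abstractly, so the utility form, function names, and sampling heuristic here are illustrative assumptions, not Kolm's formulation.

```python
# Heuristic check of Kolm's superequity: no agent strictly prefers any
# convex combination of the other agents' bundles to her own bundle.
import numpy as np

rng = np.random.default_rng(0)

def utility(bundle, alphas):
    """Cobb-Douglas utility (an illustrative assumption, not Kolm's)."""
    return float(np.prod(np.asarray(bundle) ** np.asarray(alphas)))

def is_superequitable(allocations, alphas, n_samples=10_000):
    """Sample convex mixtures of the others' bundles for each agent."""
    allocations = np.asarray(allocations, dtype=float)
    n = len(allocations)
    for i in range(n):
        others = np.delete(allocations, i, axis=0)
        own_u = utility(allocations[i], alphas[i])
        # Random convex weights over the other agents' bundles; a full
        # check would optimize over the whole simplex instead of sampling.
        weights = rng.dirichlet(np.ones(n - 1), size=n_samples)
        for mixture in weights @ others:
            if utility(mixture, alphas[i]) > own_u:
                return False  # agent i envies a mixture of others' bundles
    return True

# Equal bundles with identical tastes are trivially superequitable.
print(is_superequitable([[1.0, 2.0], [1.0, 2.0]],
                        alphas=[[0.5, 0.5], [0.5, 0.5]]))  # True
# A starkly unequal allocation fails the criterion.
print(is_superequitable([[0.1, 0.1], [2.0, 4.0]],
                        alphas=[[0.5, 0.5], [0.5, 0.5]]))  # False
```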

201 citations

Journal ArticleDOI
TL;DR: Participants operated within the limits of bounded rationality and exhibited two major satisficing behaviors, reduction and termination; personal preference played a major role in Web site evaluation in the areas of graphics/multimedia and subject content.
Abstract: This study investigated Simon's behavioral decision-making theories of bounded rationality and satisficing in relation to young people's decision making on the World Wide Web, and considered the role of personal preferences in Web-based decisions. It employed a qualitative research methodology involving group interviews with 22 adolescent females. Data analysis took the form of iterative pattern coding using QSR NUD*IST Vivo qualitative data analysis software. Data analysis revealed that the study participants did operate within the limits of bounded rationality. These limits took the form of time constraints, information overload, and physical constraints. Data analysis also uncovered two major satisficing behaviors: reduction and termination. Personal preference was found to play a major role in Web site evaluation, in the areas of graphic/multimedia and subject content preferences. This study has implications for Web site designers and for adult intermediaries who work with young people and the Web.
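Satisficing has a simple algorithmic reading that may help clarify the termination behavior the study reports. The sketch below is an illustration of Simon's idea under assumed inputs (the scoring function, threshold, and time budget are hypothetical), not the study's qualitative method.

```python
# A minimal sketch of Simon-style satisficing under bounded rationality:
# examine at most `time_budget` options (time constraint) and stop at the
# first one that meets the aspiration level (termination), rather than
# ranking everything to find the optimum.
def satisfice(options, score, threshold, time_budget):
    """Return the first option whose score meets the aspiration level."""
    for examined, item in enumerate(options, start=1):
        if examined > time_budget:      # bounded: out of time, give up
            break
        if score(item) >= threshold:    # good enough: terminate search
            return item
    return None                         # no satisfactory option found

# Usage: pick the first site whose relevance is 'good enough'.
sites = [{"url": "a.example", "relevance": 0.3},
         {"url": "b.example", "relevance": 0.8},
         {"url": "c.example", "relevance": 0.9}]
choice = satisfice(sites, score=lambda s: s["relevance"],
                   threshold=0.7, time_budget=10)
print(choice)  # {'url': 'b.example', ...} -- good enough, not the maximum
```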

201 citations

Journal ArticleDOI
TL;DR: The methodological and procedural issues raised by Meehl (1967, 1978) that seem to question the rationality of psychological inquiry are reexamined, and the authors argue that the methodological paradox can be ameliorated with the help of a proposed good-enough principle, so that hypothesis testing in psychology is not rationally disadvantaged when compared to physics.
Abstract: " This article reexamines a number of methodological and procedural issues raised by Meehl (1967, 1978) that seem to question the rationality of psychological inquiry. The first issue concerns the asymmetry in theory testing between psychology and physics and the resulting paradox that, because the psychological null hypothesis is always false, increases in precision in psychology always lead to weaker tests of a theory, whereas the converse is true in physics. The second issue, related to the first, regards the slow progress observed in psychological research and the seeming unwillingness of social scientists to take seriously the Popperian requirements for intellectual honesty. We propose a good-enough principle to resolve Meehl's methodological paradox and appeal to a more powerful reconstruction of science developed by Lakatos (1978a, 1978b) to account for the actual practice of psychological researchers. From time to time every research discipline must reevaluate its method for generating and certifying knowledge. The actual practice of working scientists in a discipline must continually be subjected to severe criticism and be held accountable to standards of intellectual honesty, standards that are themselves revised in light of critical appraisal (Lakatos, 1978a). If, on a metatheoretical level, scientific methodology cannot be defended on rational grounds, then metatheory must be reconstructed so as to make science rationally justifiable. The history of science is replete with numerous such reconstructions, from the portrayal of science as being inductive and justificationist, to the more recent reconstructions favored by (naive and sophisticated) methodological falsificationists, such as Popper (1959), Lakatos (1978a), and Zahar (1973). In the last two decades psychology, too, has been subjected to criticism for its research methodology. Of increasing concern is empirical psychology's use of inferential hypothesis-testing techniques and the way in which the information derived from these procedures is used to help us make decisions about the theories under test (e.g., Bakan, 1966; Lykken, 1968; Rozeboom, 1960). In two penetrating essays, Meehl (1967, 1978) has cogently and effectively faulted the use of the traditional null-hypothesis significance test in psychological research. According to Meehl (1978, p. 817), "the almost universal reliance on merely refuting the null hypothesis as the standard method for corroborating substantive theories [in psychology] is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology." He maintained that it leads to a methodological paradox when compared to theory testing in physics. In addition, Meehl (1978) pointed to the apparently slow progress in psychological research and the deleterious effect that null-hypothesis testing has had on the detection of progress in the accumulation of psychological knowledge. The cumulative effect of this criticism is to do nothing less than call into question the rational character of our empirical inquiries. As yet there has been no attempt to deal with the problems raised by Meehl by reconstructing the actual practice of psychologists into a logically defensible form. This is the purpose of the present article. The two articles by Meehl seem to deal with two disparate issues--null-hypothesis testing and slow progress. 
Both issues, however, are linked in the methodological falsificationist reconstruction of science to the necessity for scientists to agree on what experimental outcomes are to be considered as disconfirming instances. We will argue that the methodological paradox can be ameliorated with the help of a "good-enough" principle, to be proposed here, so that hypothesis testing in psychology is not rationally disadvantaged when compared to physics. We will also account for the apparent slow progress in psychological research, and we will take issue with certain (though not all) claims made by Meehl (1978) in this regard. Both the methodological and the progress issues will be resolved by an appeal to the (sophisticated) methodological falsificationist reconstruction of science developed by Lakatos (1978a), an approach with which Meehl is familiar but one he did not apply to psychology in his articles.

Meehl's Asymmetry Argument

Let us develop Meehl's argument. It is his contention that improved measurement precision has widely different effects in psychology and physics on the success of a theory in overcoming an "observational hurdle." Perfect precision in the behavioral sciences provides an easier hurdle for theories, whereas such accuracy in physics makes it much more difficult for a theory to survive. According to the Popperian reconstruction of science (Popper, 1959), scientific theories must be continually subjected to severe tests. But if the social sciences are immanently incapable of generating such tests, if they cannot expose their theories to the strongest possible threat of refutation, even with ever-increasing measurement precision, then their claim to scientific status might reasonably be questioned. Further, according to this view of research in the social sciences, there can be no question of scientific progress based on the rational consideration of experimental outcomes. Instead, progress is more a matter of psychological conversion (Kuhn, 1962). Let us look more closely at the standard practice in psychology. On the basis of some theory T we derive the conclusion that a parameter δ will differ for two populations. In order to examine this conclusion, we can set up a point-null hypothesis, H₀: δ = 0, and test this hypothesis against the predicted outcome, H₁: δ ≠ 0. However, it has also been recognized (Kaiser, 1960; Kimmel, 1957) that another question of interest is whether the difference is in a certain direction, and so we could instead test the directional null hypothesis, H₀*: δ ≤ 0, against the directional alternative, H₁*: δ > 0. In such tests, we can make two types of errors. The Type I error would lead to rejecting H₀ or H₀* when they are indeed true, whereas the Type II error involves not rejecting H₀ or H₀* when they are false. The conventional methodology sets the Type I (or alpha) error rate at 5% and seeks to reduce the frequency of Type II errors. Such a reduction in the Type II error rate can be achieved by improving the logical structure of the experiment, reducing measurement errors, or increasing sample size. Meehl pointed out that in the behavioral sciences, because of the large number of factors affecting variables, we would never expect two populations to have literally equal means.
Hence, he concluded that the point-null hypothesis is always false. With infinite precision, we would always reject H₀. This is perhaps one reason to prefer the directional null hypothesis H₀*. But Meehl then conducted a thought experiment in which the direction predicted by T was assigned at random. In such an experiment, T provides no logical connection to the predicted direction and so is totally without merit. Because H₀ is always false, the two populations will always differ, but because the direction in H₀* is assigned at random, with infinite precision we will reject H₀* half of the time. Hence, Meehl concluded "that the effect of increased precision ... is to yield a probability approaching 1/2 of corroborating our substantive theory by a significance test, even if the theory is totally without merit" (Meehl, 1967, p. 111, emphasis in original). Meehl contrasted this state of affairs with that in physics, wherein the usual situation involves the prediction of a point value. That which corresponds to the point-null hypothesis is the value flowing as a consequence of a substantive theory T. An increase in statistical power in physics has the effect of stiffening the experimental hurdle by "decreasing the prior probability of a successful experimental outcome if the theory lacks verisimilitude, that is, precisely the reverse of the situation obtaining in the social sciences" (Meehl, 1967, p. 113). With infinite precision, and if the theory has no merit, the logical probability of it surviving such a test in physics is negligible; in the social sciences, the corresponding logical probability is one half. Perhaps another way of describing the asymmetry in hypothesis testing between psychology and physics is to note that, in psychology, the point-null hypothesis is not what is derived from a substantive theory. Rather, it is a "straw-man" competitor whose rejection we interpret as increasing the plausibility of T. In physics, on the other hand, theories that entail point-null statistical hypotheses are the very ones physicists take seriously and hope to confirm. If O is a predicted outcome of interest, and Ō is its logical complement, then the depiction of null and alternative statistical hypotheses in the two disciplines can be written as follows:

    Psychology: H₀: Ō versus H₁: O
    Physics:    H₀: O versus H₁: Ō
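Meehl's asymmetry claim can be illustrated with a small simulation. The sketch below assumes normal populations with a small true mean difference (so the point null is false, as Meehl argues it always is) and a "theory" whose predicted direction is assigned at random; the sample sizes, effect size, and use of SciPy's t test are illustrative choices, not from the article.

```python
# As precision (sample size) grows, a one-tailed significance test
# 'corroborates' the meritless theory about half the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_delta = 0.05   # the point null H0: delta = 0 is (slightly) false

def corroboration_rate(n, trials=2000, alpha=0.05):
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_delta, 1.0, n)
        sign = rng.choice([-1, 1])          # direction assigned at random
        t, p_two = stats.ttest_ind(b, a)
        # One-tailed p value in the randomly 'predicted' direction.
        p_one = p_two / 2 if np.sign(t) == sign else 1 - p_two / 2
        hits += p_one < alpha
    return hits / trials

for n in (20, 200, 2000, 20000):
    print(n, corroboration_rate(n))  # climbs toward ~0.5 as n grows
```

At small n the random-direction theory is rarely corroborated; at very large n the rate approaches one half, which is exactly the hurdle-lowering effect Meehl describes.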

201 citations

Journal ArticleDOI
TL;DR: It is argued that evaluating the rationality of an action requires less experience than anticipating its goal, suggesting a dual-process account of preverbal infants' everyday action understanding.

200 citations


Network Information
Related Topics (5)
Ideology: 54.2K papers, 1.1M citations, 85% related
Empirical research: 51.3K papers, 1.9M citations, 81% related
Politics: 263.7K papers, 5.3M citations, 80% related
Incentive: 41.5K papers, 1M citations, 79% related
Democracy: 108.6K papers, 2.3M citations, 79% related
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2023    921
2022    1,963
2021    645
2020    689
2019    682
2018    753