
Showing papers in "Economics and Philosophy" (2004)


Journal ArticleDOI
TL;DR: In this article, the author questions the alleged dichotomy between legitimization in the market and in the state, argues that it results from a conflation of choice and consent in economics, and shows how an independent concept of consent makes the need for legitimization of market transactions visible.
Abstract: According to an often repeated definition, economics is the science of individual choices and their consequences. The emphasis on choice is often used – implicitly or explicitly – to mark a contrast between markets and the state: While the price mechanism in well-functioning markets preserves freedom of choice and still efficiently coordinates individual actions, the state has to rely to some degree on coercion to coordinate individual actions. Since coercion should not be used arbitrarily, coordination by the state needs to be legitimized by the consent of its citizens. The emphasis in economic theory on freedom of choice in the market sphere suggests that legitimization in the market sphere is “automatic” and that markets can thus avoid the typical legitimization problem of the state. In this paper, I shall question the alleged dichotomy between legitimization in the market and in the state. I shall argue that it is the result of a conflation of choice and consent in economics and show how an independent concept of consent makes the need for legitimization of market transactions visible.

85 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose to separate opportunity principles concerning unjustified inequalities from distributive principles concerning justifiable inequalities, and examine this idea by introducing the principle of "opportunity dominance".
Abstract: All conceptions of equal opportunity draw on some distinction between morally justified and unjustified inequalities. We discuss how this distinction varies across a range of philosophical positions. We find that these positions often advance equality of opportunity in tandem with distributive principles based on merit, desert, consequentialist criteria or individuals’ responsibility for outcomes. The result of this amalgam of principles is a festering controversy that unnecessarily diminishes the widespread acceptability of opportunity concerns. We therefore propose to restore the conceptual separation of opportunity principles concerning unjustified inequalities from distributive principles concerning justifiable inequalities. On this view, equal opportunity implies that morally irrelevant factors should engender no differences in individuals’ attainment, while remaining silent on inequalities due to morally relevant factors. We examine this idea by introducing the principle of ‘opportunity dominance’ and explore in a simple application to what extent this principle may help us arbitrate between opposing distributive principles. We also compare this principle to the selection rules developed by John Roemer and Dirk Van de Gaer.

46 citations


Journal ArticleDOI
TL;DR: In this paper, the authors take up Hume's "other rational species" problem and show that Adam Smith's wider sympathetic principle would alter Hume's conclusion that "superior" beings will enslave "inferior" ones, since Smith's notion of "generosity" functions like Hume's justice even when there is no possibility of contract.
Abstract: David Hume's sympathetic principle applies to physical equals. In his account, we sympathize with those like us. By contrast, Adam Smith's sympathetic principle induces equality. We consider Hume's “other rational species” problem to see whether Smith's wider sympathetic principle would alter Hume's conclusion that “superior” beings will enslave “inferior” beings. We show that Smith introduces the notion of “generosity,” which functions as if it were Hume's justice even when there is no possibility of contract.

45 citations


Journal ArticleDOI
TL;DR: In this article, the authors address the question of how finitely additive moral value theories (such as utilitarianism) should rank worlds when there are an infinite number of locations of value (people, times, etc.).
Abstract: We address the question of how finitely additive moral value theories (such as utilitarianism) should rank worlds when there are an infinite number of locations of value (people, times, etc.). In the finite case, finitely additive theories satisfy both Weak Pareto and a strong anonymity condition. In the infinite case, however, these two conditions are incompatible, and thus a question arises as to which of these two conditions should be rejected. In a recent contribution, Hamkins and Montero (2000) have argued in favor of an anonymity-like isomorphism principle and against Weak Pareto. After casting doubt on their criticism of Weak Pareto, we show how it, in combination with certain other plausible principles, generates a plausible and fairly strong principle for the infinite case. We further show that where locations are the same in all worlds, but have no natural order, this principle turns out to be equivalent to a strengthening of a principle defended by Vallentyne and Kagan (1997), and also to a weakened version of the catching-up criterion developed by Atsumi (1965) and by von Weizsäcker (1965).
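For orientation, the catching-up criterion mentioned at the end of the abstract is usually stated as follows (my paraphrase of the standard formulation, not a quotation from the paper): a utility stream $u = (u_1, u_2, \dots)$ is ranked at least as good as a stream $v$ defined over the same locations whenever

\[
\liminf_{T \to \infty} \sum_{t=1}^{T} (u_t - v_t) \ge 0,
\]

i.e. from some point onward the cumulative total of $u$ falls short of that of $v$ by at most an arbitrarily small amount, so that $u$ eventually “catches up” with $v$.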

27 citations


Journal ArticleDOI
TL;DR: In this article, a framing effect occurs when an agent's choices are not invariant under changes in the way a decision problem is presented, e.g. changes in how options are described or preferences elicited.
Abstract: A framing effect occurs when an agent's choices are not invariant under changes in the way a decision problem is presented, e.g. changes in the way options are described (violation of description invariance) or preferences are elicited (violation of procedure invariance). Here we identify those rationality violations that underlie framing effects. Applying a model by List (2004), we attribute to the agent a sequential decision process in which a "target" proposition and several "background" propositions are considered. We suggest that the agent exhibits a framing effect if and only if two conditions are met. First, different presentations of the decision problem lead the agent to consider the propositions in a different order (the empirical condition). Second, different such "decision paths" lead to different decisions on the target proposition (the logical condition). The second condition holds when the agent's initial dispositions on the propositions are "implicitly inconsistent", which may be caused by violations of "deductive closure". Our account is consistent with some observations made by psychologists and provides a unified framework for explaining violations of description and procedure invariance.
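To make the sequential process concrete, here is a minimal sketch in the spirit of the model described above; the propositions (p, “if p then q”, q) and the initial dispositions are my own toy example, not the formalism of List (2004) or of the paper. The agent keeps each initial disposition only if it is consistent with what she has already accepted, so an implicitly inconsistent set of dispositions yields different verdicts on the target under different consideration orders.

from itertools import product

ATOMS = ("p", "q")

# Propositions represented as truth functions over assignments to the atoms.
p_prop = lambda v: v["p"]
if_p_q = lambda v: (not v["p"]) or v["q"]   # "if p then q"
q_prop = lambda v: v["q"]

def consistent(judgments):
    """True if some truth assignment satisfies every (proposition, verdict) pair."""
    return any(
        all(verdict == prop(dict(zip(ATOMS, values))) for prop, verdict in judgments)
        for values in product([True, False], repeat=len(ATOMS))
    )

# Initial dispositions: accept p, accept "if p then q", reject q.
# Jointly these violate deductive closure, i.e. they are implicitly inconsistent.
dispositions = {
    "p": (p_prop, True),
    "if p then q": (if_p_q, True),
    "q": (q_prop, False),
}

def decide(order):
    """Consider the propositions in the given order, keeping each initial
    disposition if it is consistent with the judgments accepted so far and
    otherwise accepting the opposite verdict."""
    accepted, verdicts = [], {}
    for name in order:
        prop, verdict = dispositions[name]
        if not consistent(accepted + [(prop, verdict)]):
            verdict = not verdict
        accepted.append((prop, verdict))
        verdicts[name] = verdict
    return verdicts

# Two presentations of the same problem induce two consideration orders,
# and the verdict on the target proposition q differs between them.
print(decide(["p", "if p then q", "q"])["q"])   # True: q ends up accepted
print(decide(["q", "p", "if p then q"])["q"])   # False: q ends up rejected

If the initial dispositions were implicitly consistent, every order would return the same verdicts in this sketch, which is the sense in which both the empirical condition (presentation affects the order) and the logical condition (the order affects the outcome) are needed for a framing effect.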

24 citations


Journal ArticleDOI
TL;DR: A syntactic formalism for the modeling of belief revision in perfect information games is presented that allows one to define the rationality of a player's choice of moves relative to the beliefs he holds once his respective decision nodes have been reached.
Abstract: A syntactic formalism for the modeling of belief revision in perfect information games is presented that allows one to define the rationality of a player's choice of moves relative to the beliefs he holds once his respective decision nodes have been reached. In this setting, true common belief in the structure of the game and rationality held before the start of the game does not imply that backward induction will be played. To derive backward induction, a “forward belief” condition is formulated in terms of revised rather than initial beliefs. Alternative notions of rationality, as well as the use of knowledge instead of belief, are also studied within this framework.
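As a benchmark for what the paper's “forward belief” condition is meant to deliver, here is a minimal backward-induction computation on a toy two-stage perfect-information game; the centipede-style tree and its payoffs are my own illustration, not an example from the paper, which works with a syntactic belief-revision formalism rather than with code.

# Each node is either a terminal payoff pair or (player, {action: subtree}).
game = (1, {
    "take": (1, 0),                      # player 1 ends the game at once
    "pass": (2, {
        "take": (0, 2),                  # player 2 ends it at the second node
        "pass": (3, 1),                  # both players pass
    }),
})

def backward_induction(node):
    """Return the (payoff pair, action path) selected by backward induction:
    at every decision node the mover picks the action maximizing her own
    backed-up payoff."""
    if not isinstance(node[1], dict):    # terminal node: a payoff pair
        return node, []
    player, actions = node
    best = None
    for action, subtree in actions.items():
        payoffs, path = backward_induction(subtree)
        if best is None or payoffs[player - 1] > best[0][player - 1]:
            best = (payoffs, [action] + path)
    return best

print(backward_induction(game))          # ((1, 0), ['take'])

Backward induction tells player 1 to take immediately even though mutual passing yields (3, 1); the abstract's point is that common belief in the game's structure and in rationality, if held only before play begins, does not by itself guarantee this outcome, whereas the “forward belief” condition on revised beliefs does.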

17 citations


Journal ArticleDOI
TL;DR: The author argues that the republican account of freedom is vulnerable to a version of Sen's liberal paradox, an inconsistency between universal domain, freedom, and the weak Pareto principle, and that some standard escape-routes from the liberal paradox are not easily available to the republican.
Abstract: Philip Pettit (2001) has suggested that there are parallels between his republican account of freedom and Amartya Sen’s (1970) account of freedom as decisive preference. In this paper, I discuss these parallels from a social-choice-theoretic perspective. I sketch a formalization of republican freedom, and argue that republican freedom is formally very similar to freedom as defined in Sen’s “minimal liberalism” condition. In consequence, the republican account of freedom is vulnerable to a version of Sen's liberal paradox, an inconsistency between universal domain, freedom, and the weak Pareto principle. I argue that some standard escape-routes from the liberal paradox – those via domain restriction – are not easily available to the republican. I suggest that republicans need to take seriously the challenge of the impossibility of a Paretian republican.
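The inconsistency invoked here can be seen in Sen's original two-person case, and the following sketch simply mechanizes that standard “Lady Chatterley's Lover” illustration; the example and the code are mine, not the paper's formalization of republican freedom. Giving each person a decisive pair (minimal liberalism) and imposing weak Pareto produces a cycle in the social preference, so no alternative is socially best.

from itertools import permutations

# Alternatives: a = Prude reads the book, b = Lewd reads it, c = nobody does.
alternatives = ["a", "b", "c"]

# Individual strict preference orderings, best alternative first.
prude = ["c", "a", "b"]
lewd  = ["a", "b", "c"]

def prefers(order, x, y):
    """True if x is ranked strictly above y in the given ordering."""
    return order.index(x) < order.index(y)

social = set()   # social strict preference built from the two conditions

# Minimal liberalism: each person is decisive over "their own" pair.
for person, (x, y) in [(prude, ("a", "c")), (lewd, ("b", "c"))]:
    social.add((x, y) if prefers(person, x, y) else (y, x))

# Weak Pareto: a unanimous strict preference becomes a social strict preference.
for x, y in permutations(alternatives, 2):
    if prefers(prude, x, y) and prefers(lewd, x, y):
        social.add((x, y))

undominated = [x for x in alternatives
               if not any((y, x) in social for y in alternatives)]
print(sorted(social))   # [('a', 'b'), ('b', 'c'), ('c', 'a')]: a cycle
print(undominated)      # []: no alternative is socially best

Universal domain enters because the impossibility must hold for every admissible preference profile, which is why the escape-routes via domain restriction discussed in the paper matter.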

17 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that there is a general incompatibility between the equal value of life principle and the weak Pareto principle and provide proof of this under mild structural assumptions.
Abstract: A principle claiming equal entitlement to continued life has been strongly defended in the literature as a fundamental social value. We refer to this principle as ‘equal value of life’. In this paper we argue that there is a general incompatibility between the equal value of life principle and the weak Pareto principle and provide proof of this under mild structural assumptions. Moreover we demonstrate that a weaker, age-dependent version of the equal value of life principle is also incompatible with the weak Pareto principle. However, both principles can be satisfied if transitivity of social preference is relaxed to quasi-transitivity.

15 citations


Journal ArticleDOI
TL;DR: In this paper, a normative evaluation of the minimum wage in the light of recent evidence and theory about its effects is presented, and it is argued that the minimum-wage should be evaluated using a consequentialist criterion that gives priority to the jobs and incomes of the worst off.
Abstract: This paper develops a normative evaluation of the minimum wage in the light of recent evidence and theory about its effects. It argues that the minimum wage should be evaluated using a consequentialist criterion that gives priority to the jobs and incomes of the worst off. This criterion would be accepted by many different types of consequentialism, especially given the two major views about what the minimum wage does. One is that the minimum wage harms the jobs and incomes of the worst off and the other is that it does neither much harm nor much good. The paper then argues at length that there are no important considerations besides jobs and incomes relevant to the assessment of the minimum wage. It criticizes exploitation arguments for the minimum wage. It is not clear that the minimum wage would reduce exploitation and the paper doubts that, if it did, it would do so in a morally significant way. The paper then criticizes freedom arguments against the minimum wage by rejecting appeals to self-ownership and freedom of contract and by arguing that no freedom of significance is lost by the minimum wage that is not already taken account of in the main consequentialist criterion. The conclusion is that, at worst, the minimum wage is a mistake and, at best, something to be half-hearted about.

11 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare a good-specific and an all-things-considered perspective on the seniority-based allocation of benefits from employment, and find that, under certain circumstances, a maximin egalitarian case for seniority privileges could be made.
Abstract: What should maximin egalitarians think about seniority privileges? We contrast a good-specific and an all-things-considered perspective. As to the former, inertia and erasing effects of a seniority-based allocation of benefits from employment are identified, allowing us to spot the categories of workers and job-seekers made involuntarily worse off by such a practice. What matters, however, is to find out whether abolishing seniority privileges will bring about a society in which the all-things-considered worst off people are better off than in the seniority rule’s presence. An assessment of the latter’s cost-reduction potential is thus needed, enabling us to bridge a practice taking place within a firm with its impact on who the least well off members of society are likely to be. Three accounts of the profitability of seniority privileges are discussed: the “(firm-specific) human capital”, the “deferred compensation” and the “knowledge transfer” ones. The respective relevance of “good-specific” and “all-things-considered” analysis is discussed. It turns out that under certain circumstances, a maximin egalitarian case for seniority privileges could be made.
Senior: Do you know that they are planning layoffs? Of course, it is only fair that they lay off the newcomers first! After all, I have been loyal to the company for many years.
Junior: Did I choose to be a newcomer?

11 citations


Journal ArticleDOI
TL;DR: In this article, it is shown that most of these criteria will be compatible with, or actually select, the zero basic income policy and reject the basic income maximizing one, and the force of this main conclusion is discussed both in relation to Van Parijs' argument for basic income in Real Freedom for All (1995) and to some key empirical conditions in the real world.
Abstract: This article challenges the general thesis that an unconditional basic income, set at the highest sustainable level, is required for maximizing the income-leisure opportunities of the least advantaged, when income varies according to the responsible factor of labor input. In a linear optimal taxation model (of a type suggested by Vandenbroucke 2001) in which opportunities depend only on individual productivity, adding the instrument of a uniform wage subsidy generates an array of undominated policies besides the basic income maximizing policy, including a “zero basic income” policy which equalizes the post-tax wage rate. The choice among such undominated policies may be guided by distinct normative criteria which supplement the maximin objective in various ways. It is shown that most of these criteria will be compatible with, or actually select, the zero basic income policy and reject the basic income maximizing one. In view of the model's limited realism, the force of this main conclusion is discussed both in relation to Van Parijs' argument for basic income in Real Freedom for All (1995) and to some key empirical conditions in the real world.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the reasons in favor of viewing principles of prudence as the outcome of a choice weigh equally in favour of viewing principle of justice as the end of a bargain.
Abstract: Whereas principles of justice adjudicate interpersonal conflicts, principles of prudence adjudicate intrapersonal conflicts – i.e., conflicts between the preferences an individual has now and the preferences he will have later. On a contractarian approach, principles of justice can be theoretically grounded in a hypothetical agreement in an appropriately specified pre-moral situation in which those persons with conflicting claims have representatives pushing for their claims. Similarly, I claim, principles of prudence can be grounded in a hypothetical agreement in an appropriately specified pre-prudential situation in which those temporal parts of a person with conflicting claims have representatives as advocates of their claims. During the course of developing the prudential contractarian methodology, I consider a dispute between those who would see principles of justice as the outcome of a choice (e.g., Rawls) and others (e.g., Gauthier) who argue for viewing principles of justice as the outcome of a bargain. I contend that the reasons I adduce in favor of viewing principles of prudence as the outcome of a choice weigh equally in favor of viewing principles of justice as the outcome of a bargain.

Journal ArticleDOI
TL;DR: The economic analysis of law has gone through a remarkable change in the past decade and a half, the author observes; its founding articles argued that the law of torts should be understood as a set of liability rules selected for their incentive effects rather than as substantive rights and remedies for their violation.
Abstract: The economic analysis of law has gone through a remarkable change in the past decade and a half. The founding articles of the discipline – such classic pieces as Ronald Coase's “The problem of social cost” (1960), Richard Posner's “A theory of negligence” (1972) and Guido Calabresi and Douglas Melamed's “Property rules, liability rules, and inalienability: One view of the cathedral” (1972) – offered economic analyses of familiar aspects of the common law, seeking to explain, in particular, fundamental features of the law of tort in terms of such economic ideas as transaction costs (Coase), Kaldor-Hicks efficiency (Posner), or minimizing the sum of the accident costs and avoidance costs (Calabresi and Melamed). In each case, they argued that the law of torts should be understood as a set of liability rules selected for their incentive effects, rather than as a set of substantive rights and remedies for their violation. These authors claimed to be able to explain most of the features of tort law and, where features were found that did not fit with their preferred explanations, recommended modification. Although they disagreed on important questions, each of the pieces seems to work a manageable structure into what strikes first-year law students as an otherwise random morass of common-law judgments. Generations of legal academics were introduced to these works, and drawn into their way of looking at things. As a student studying first-year torts with Calabresi at Yale, I had the sense that I was in the presence of greatness.



Journal ArticleDOI
TL;DR: In Fairness versus Welfare, the authors argue that no independent weight should be accorded to notions of fairness such as corrective or retributive justice or other deontological principles.
Abstract: In Fairness versus Welfare (FVW), we advance the thesis that social policies should be assessed entirely with regard to their effects on individuals' well-being. That is, no independent weight should be accorded to notions of fairness such as corrective or retributive justice or other deontological principles. Our claim is based on the demonstration that pursuit of notions of fairness has perverse effects on welfare, on other problematic aspects of the notions, and on a reconciliation of our thesis with the evident appeal of moral intuitions. Here we summarize our three arguments and explain that Professor Ripstein's commentary largely fails to respond to them. (We will pass over some of what he says because it has little to do with our book, and we will not address his rather surprising attacks on our scholarship because the reader can readily verify their inaccuracy.)



Journal ArticleDOI
TL;DR: The most successful grouping of papers is found in Part III of the volume, devoted to economic modelling, where Sugden illustrates and discusses two well-known microeconomic models, George Akerlof's "market for lemons" and Thomas Schelling's checkerboard model of segregation.
Abstract: In 1997 the conference ‘Fact or Fiction?’ celebrated Uskali Mäki’s inauguration as Chair of Philosophy at Erasmus University, Rotterdam. The papers presented at the conference have now been published in the volume under review. Some of them appear in print for the first time, others have already been published in journals, but the volume presents the added value of having them organised in coherent groups – recreating at least in some cases the dialectical atmosphere of the conference from which they originate. The most successful example of this can be found in Part III of the volume, devoted to economic modelling. Modelling has been one of the hottest topics in the methodology of economics during the last decade, and here we have five top-rate contributions by some of the leading scholars in the field. Bob Sugden’s ‘Credible worlds: The status of theoretical models in economics’ is one of my favourite papers, as well as a pedagogic masterpiece. Sugden illustrates and discusses two well-known microeconomic models, George Akerlof’s ‘market for lemons’ and Thomas Schelling’s checkerboard model of segregation. Using them as reference points, he reviews critically some of the major views on the connection between theoretical models and the real world, including the views that models are tools for conceptual exploration, that they are instruments for prediction, that they are metaphors, and that they are caricatures; he finishes with a critical discussion of the method of isolation and of Mill’s inexact deductive method. Sugden argues that none of these provides the right characterization of the link between models and reality, a gap that is bridged by a process of inductive inference. A model is a special entity somehow representative of a wider class of entities, a class that is supposed to include real economies among its elements. When engaged in theoretical modelling, economists ask their audience to believe that some

Journal ArticleDOI
TL;DR: In the book under review, Steuer argues that social science's goals are similar in kind to those of natural science and that its relatively bad empirical record can be explained by a number of practical disadvantages it faces.
Abstract: Max Steuer’s readable book offers both an introduction to contemporary work in social science and a defense of some general views about the nature of this kind of inquiry. Practicing social scientists will likely warm to its instinctive sympathy for their work. What of philosophers? Although both the author and Ken Binmore in the foreword are eager to deny that this book is an exercise in philosophy, its central claims – that a scientific study of society is possible and that its method is distinct from other ways of producing social knowledge – express meta-propositions about social science. What is distinct about Steuer’s approach is his conviction that these questions are best addressed not through abstract argument but rather by carefully examining what social scientists actually do. In this spirit, while chapters in the beginning and at the end of the book contain his general, or philosophical, discussion, at the heart of Steuer’s inquiry are six central chapters comprising long and painstaking reports of actual research. By the author’s own admission, the philosophical discussions at either end of the book are of a rather informal nature and do not seek to engage explicitly with the philosophical literature. Rather, the rhetorical strategy is one of argument by illustration. Does it succeed? The arguments presented in the early chapters are typical of a broadly naturalistic view of social science. Thus social science’s goals are taken to be similar in kind to those of natural science, and its relatively bad empirical record to be explained by a number of practical disadvantages it faces. One is that the phenomena studied by social science are subject to change at a much greater rate. Physical and biological phenomena also change, but many of their underlying principles are both quite stable and directly relevant to explanation and intervention. In the social world, the underlying principles (for example, self-interested behavior in the