
Showing papers in "Theory and Decision in 2002"


Journal ArticleDOI
TL;DR: An overview of recent results on lexicographic, linear, and Bayesian models for paired comparison is provided from a cognitive psychology perspective, and the optimal model in each class is identified, where optimality is defined with respect to performance when fitting known data.
Abstract: This article provides an overview of recent results on lexicographic, linear, and Bayesian models for paired comparison from a cognitive psychology perspective. Within each class, we distinguish subclasses according to the computational complexity required for parameter setting. We identify the optimal model in each class, where optimality is defined with respect to performance when fitting known data. Although not optimal when fitting data, simple models can be astonishingly accurate when generalizing to new data. A simple heuristic belonging to the class of lexicographic models is Take The Best (Gigerenzer & Goldstein (1996) Psychol. Rev. 102: 684). It is more robust than other lexicographic strategies which use complex procedures to establish a cue hierarchy. In fact, it is robust due to its simplicity, not despite it. Similarly, Take The Best looks up only a fraction of the information that linear and Bayesian models require; yet it achieves performance comparable to that of models which integrate information. Due to its simplicity, frugality, and accuracy, Take The Best is a plausible candidate for a psychological model in the tradition of bounded rationality. We review empirical evidence showing the descriptive validity of fast and frugal heuristics.

296 citations
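The lexicographic decision process behind Take The Best can be sketched in a few lines: cues are ordered by validity, and the first cue that discriminates between the two objects decides. The cue values and their ordering below are hypothetical illustrations, not data from the paper.

```python
def take_the_best(cues_a, cues_b):
    """Compare two objects on binary cues ordered from most to least valid.

    Returns 'A' or 'B' for the object favored by the first discriminating
    cue, or None if no cue discriminates (the heuristic would then guess).
    """
    for a, b in zip(cues_a, cues_b):
        if a != b:                 # first discriminating cue stops the search
            return 'A' if a > b else 'B'
    return None

# Hypothetical example: which of two cities is larger?
# Cues (ordered by assumed validity): has_team, is_capital, has_university
print(take_the_best([1, 0, 1], [0, 1, 1]))  # 'A' — decided by the first cue
```

Note how frugal the lookup is: once a cue discriminates, all remaining cues are ignored, which is the source of both the speed and the robustness discussed above.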


Journal ArticleDOI
TL;DR: In this paper, the authors present an axiomatization of the rule which requires updating of all the priors by Bayes rule, and show that when all priors give positive probability to an event E, a certain coherence property between conditional and unconditional preferences is satisfied if and only if the set of subjective probability measures considered by the agent given E is obtained by updating all subjective prior probability measures using Bayes rule.
Abstract: When preferences are such that there is no unique additive prior, the issue of which updating rule to use is of extreme importance. This paper presents an axiomatization of the rule which requires updating of all the priors by Bayes rule. The decision maker has conditional preferences over acts. It is assumed that preferences over acts conditional on event E happening, do not depend on lotteries received on E c, obey axioms which lead to maxmin expected utility representation with multiple priors, and have common induced preferences over lotteries. The paper shows that when all priors give positive probability to an event E, a certain coherence property between conditional and unconditional preferences is satisfied if and only if the set of subjective probability measures considered by the agent given E is obtained by updating all subjective prior probability measures using Bayes rule.

163 citations
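Prior-by-prior Bayes updating, the rule axiomatized here, is mechanically simple: condition every prior in the set on the event E. A minimal sketch with illustrative states and priors (not from the paper):

```python
def bayes_update(prior, event):
    """Condition a prior (dict state -> probability) on an event (set of states)."""
    p_event = sum(p for s, p in prior.items() if s in event)
    return {s: p / p_event for s, p in prior.items() if s in event}

def update_all(priors, event):
    """Update every prior by Bayes rule, keeping those giving E positive probability."""
    return [bayes_update(p, event) for p in priors
            if sum(q for s, q in p.items() if s in event) > 0]

priors = [{'s1': 0.5, 's2': 0.3, 's3': 0.2},
          {'s1': 0.2, 's2': 0.2, 's3': 0.6}]
posteriors = update_all(priors, {'s1', 's2'})
print(posteriors)  # each prior rescaled on {s1, s2}
```

The resulting set of posteriors is exactly the object the coherence axiom characterizes: the conditional maxmin preferences are represented by this updated set.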


Journal ArticleDOI
TL;DR: It is proposed that missing information influences the attractiveness of a bet contingent upon an uncertain event, especially when the information is available to someone else.
Abstract: In normative decision theory, the weight of an uncertain event in a decision is governed solely by the probability of the event. A large body of empirical research suggests that a single notion of probability does not accurately capture people's reactions to uncertainty. As early as the 1920s, Knight made the distinction between cases where probabilities are known and where probabilities are unknown. We distinguish another case, the unknowable uncertainty, where the missing information is unavailable to all. We propose that missing information influences the attractiveness of a bet contingent upon an uncertain event, especially when the information is available to someone else. We demonstrate that the unknowable uncertainty falls in preference somewhere between the known and the unknown uncertainty.

162 citations


Journal ArticleDOI
TL;DR: In this article, the authors summarized work that has been done in this area with the assumptions of Impartial Culture, Impartial Anonymous Culture, Maximal Culture, Dual Culture and Uniform Culture, and concluded that while examples of Condorcet's Paradox are observed, one should not expect to observe them with great frequency in three candidate elections.
Abstract: Many studies have considered the probability that a pairwise majority rule (PMR) winner exists for three candidate elections. The absence of a PMR winner indicates an occurrence of Condorcet's Paradox for three candidate elections. This paper summarizes work that has been done in this area with the assumptions of: Impartial Culture, Impartial Anonymous Culture, Maximal Culture, Dual Culture and Uniform Culture. Results are included for the likelihood that there is a strong winner by PMR, a weak winner by PMR, and the probability that a specific candidate is among the winners by PMR. Closed form representations are developed for some of these probabilities for Impartial Anonymous Culture and for Maximal Culture. Consistent results are obtained for all cultures. In particular, very different behaviors are observed for odd and even numbers of voters. The limiting probabilities as the number of voters increases are reached very quickly for odd numbers of voters, and quite slowly for even numbers of voters. The greatest likelihood of observing Condorcet's Paradox typically occurs for small numbers of voters. Results suggest that while examples of Condorcet's Paradox are observed, one should not expect to observe them with great frequency in three candidate elections.

83 citations
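The pairwise majority rule check behind these probabilities is easy to simulate. A sketch in the spirit of the paper, estimating how often no PMR winner exists for three candidates under the Impartial Culture assumption (each of the six strict rankings equally likely); the setup is illustrative, not the paper's closed-form analysis:

```python
import itertools
import random

CANDS = 'ABC'
RANKINGS = list(itertools.permutations(CANDS))  # the 6 strict rankings

def pmr_winner(profile):
    """Return the candidate beating every other by strict majority, or None."""
    def beats(x, y):
        wins = sum(1 for r in profile if r.index(x) < r.index(y))
        return wins > len(profile) - wins
    for c in CANDS:
        if all(beats(c, d) for d in CANDS if d != c):
            return c
    return None  # no PMR winner: an occurrence of Condorcet's Paradox

random.seed(0)
n_voters, trials = 3, 10_000
paradoxes = sum(
    pmr_winner([random.choice(RANKINGS) for _ in range(n_voters)]) is None
    for _ in range(trials))
print(paradoxes / trials)  # close to the theoretical 1/18 ≈ 0.0556 for n=3
```

Varying `n_voters` between odd and even values reproduces the qualitative contrast noted above: even numbers of voters admit pairwise ties, which changes the paradox frequencies markedly.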


Journal ArticleDOI
TL;DR: In this article, the authors present the results of a within-subject experiment testing whether an increase in the monetary stakes by a factor of 50 -which had never been done before - influences individual behavior in a simple ultimatum bargaining game.
Abstract: This paper presents the results of a within-subject experiment testing whether an increase in the monetary stakes by a factor of 50 – which had never been done before – influences individual behavior in a simple ultimatum bargaining game. Contrary to current wisdom, we found that lowest acceptable offers stated by the responder are proportionally lower in the high-stake condition than in the low-stake condition. This result may be interpreted in terms of the type of utility functions which characterize the subjects. However, in line with prior results, we find that an important increase of the monetary stakes in the ultimatum game has no effect on the offers made by the proposer. Yet, the present research suggests that the reasons underlying these offers are quite different when the stakes are high.

68 citations


Journal ArticleDOI
TL;DR: In this paper, an extension of Nash's axioms is used to define a solution for bargaining problems with exogenous reference points; the reference points are then endogenized into the model, yielding a unique solution giving reference points and outcomes that satisfy two reasonable properties, which the authors predict would be observed in a steady state.
Abstract: We consider bargaining situations where two players evaluate outcomes with reference-dependent utility functions, analyzing the effect of differing levels of loss aversion on bargaining outcomes. We find that as with risk aversion, increasing loss aversion for a player leads to worse outcomes for that player in bargaining situations. An extension of Nash's axioms is used to define a solution for bargaining problems with exogenous reference points. Using this solution concept we endogenize the reference points into the model and find a unique solution giving reference points and outcomes that satisfy two reasonable properties, which we predict would be observed in a steady state. The resulting solution also emerges in two other approaches, a strategic (non-cooperative) approach using Rubinstein's (1982) alternating offers model and a dynamic approach in which we find that even under weak assumptions, outcomes and reference points converge to the steady state solution from any non-equilibrium state.

59 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the canonical case where the sample statistic is itself also majority rule and the samples are drawn from real world distributions gathered from national election surveys in Germany, France, and the United States.
Abstract: The Condorcet efficiency of a social choice procedure is usually defined as the probability that this procedure coincides with the majority winner (or majority ordering) in random samples, given a majority winner exists (or given the majority ordering is transitive). Consequently, it is in effect a conditional probability that two sample statistics coincide, given certain side conditions. We raise a different issue of Condorcet efficiencies: What is the probability that a social choice procedure applied to a sample matches the majority preferences of the population from which the sample was drawn? We investigate the canonical case where the sample statistic is itself also majority rule and the samples are drawn from real world distributions gathered from national election surveys in Germany, France, and the United States. We relate the results to the existing literature on majority cycles and social homogeneity. We find that these samples rarely display majority cycles, whereas the probability that a sample misrepresents the majority preferences of the underlying population varies dramatically and always exceeds the probability that the sample displays cyclic majority preferences. Social homogeneity plays a fundamental role in the type of Condorcet efficiency investigated here.

46 citations


Journal ArticleDOI
TL;DR: A new hierarchical model is presented which is mathematically equivalent to some of the earlier, possibilistic models and also has a simple behavioural interpretation, in terms of betting rates concerning whether or not a decision maker will agree to buy or sell a risky investment for a specified price.
Abstract: Hierarchical models are commonly used for modelling uncertainty. They arise whenever there is a 'correct' or 'ideal' uncertainty model but the modeller is uncertain about what it is. Hierarchical models which involve probability distributions are widely used in Bayesian inference. Alternative models which involve possibility distributions have been proposed by several authors, but these models do not have a clear operational meaning. This paper describes a new hierarchical model which is mathematically equivalent to some of the earlier, possibilistic models and also has a simple behavioural interpretation, in terms of betting rates concerning whether or not a decision maker will agree to buy or sell a risky investment for a specified price. We give a representation theorem which shows that any consistent model of this kind can be interpreted as a model for uncertainty about the behaviour of a Bayesian decision maker. We describe how the model can be used to generate buying and selling prices and to make decisions.

45 citations


Journal ArticleDOI
TL;DR: In this paper, a generalization of Tomiyama's 1987 result on ordinal power equivalence in simple games is presented, and the desirability relation is shown to be preserved in all veto-holder extensions of any simple game.
Abstract: In this paper, we are concerned with the preorderings (SS) and (BC) induced in the set of players of a simple game by the Shapley–Shubik and the Banzhaf–Coleman's indices, respectively. Our main result is a generalization of Tomiyama's 1987 result on ordinal power equivalence in simple games; more precisely, we obtain a characterization of the simple games for which the (SS) and the (BC) preorderings coincide with the desirability preordering (T), a concept introduced by Isbell (1958), and recently reconsidered by Taylor (1995): this happens if and only if the game is swap robust, a concept introduced by Taylor and Zwicker (1993). Since any weighted majority game is swap robust, our result is therefore a generalization of Tomiyama's. Other results obtained in this paper say that the desirability relation is preserved in all the veto-holder extensions of any simple game, and so is the (SS) preordering in all the veto-holder extensions of any swap robust simple game.

45 citations


Journal ArticleDOI
TL;DR: In this paper, the Shapley value, core, marginal vectors and selectope vectors of digraph games are characterized in terms of so-called simple score vectors, and applications to the ranking of teams in sports competitions and of alternatives in social choice theory are discussed.
Abstract: Digraph games are cooperative TU-games associated to domination structures which can be modeled by directed graphs. Examples come from sports competitions or from simple majority win digraphs corresponding to preference profiles in social choice theory. The Shapley value, core, marginal vectors and selectope vectors of digraph games are characterized in terms of so-called simple score vectors. A general characterization of the class of (almost positive) TU-games where each selectope vector is a marginal vector is provided in terms of game semi-circuits. Finally, applications to the ranking of teams in sports competitions and of alternatives in social choice theory are discussed.

37 citations


Journal ArticleDOI
TL;DR: The authors showed that all classical social choice rules are asymptotically strategy-proof with the proportion of manipulable profiles being of order O(1/√n).
Abstract: We show that, when the number of participating agents n tends to infinity, all classical social choice rules are asymptotically strategy-proof with the proportion of manipulable profiles being of order O(1/√n).

Journal ArticleDOI
TL;DR: In this article, the authors consider an alternative dynamic formulation in which the common goal (dispersion) can only be achieved by agents taking distinct actions, and show how this goal can be achieved gradually, by indistinguishable non-communicating agents, in a dynamic setting.
Abstract: Following Schelling (1960), coordination problems have mainly been considered in a context where agents can achieve a common goal (e.g., rendezvous) only by taking common actions. Dynamic versions of this problem have been studied by Crawford and Haller (1990), Ponssard (1994), and Kramarz (1996). This paper considers an alternative dynamic formulation in which the common goal (dispersion) can only be achieved by agents taking distinct actions. The goal of spatial dispersion has been studied in static models of habitat selection, location or congestion games, and network analysis. Our results show how this goal can be achieved gradually, by indistinguishable non-communicating agents, in a dynamic setting.

Journal ArticleDOI
TL;DR: In this paper, the authors present a simple way of interpreting the Gini coefficient of inequality, in ''equivalent'' welfare terms, as the proportion of a cake of given size going to the poorer of two individuals in a two-person cake-sharing problem.
Abstract: This note presents a very simple way of interpreting the Gini coefficient of inequality, in 'equivalent' welfare terms, as the proportion of a cake of given size going to the poorer of two individuals in a two-person cake-sharing problem.
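The interpretation amounts to a one-line formula: for a two-person distribution, a Gini coefficient of G corresponds to the poorer person receiving the share (1 - G)/2 of the cake. A minimal sketch with an illustrative two-person check (the incomes are made up):

```python
def poorer_share(gini):
    """Share of the cake going to the poorer of two people, given Gini coefficient G."""
    return (1 - gini) / 2

def two_person_gini(poor, rich):
    """Gini coefficient of a two-person distribution (poor <= rich)."""
    total = poor + rich
    return (rich - poor) / total

g = two_person_gini(0.3, 0.7)   # G = 0.4 for shares (0.3, 0.7)
print(poorer_share(g))          # ≈ 0.3 — recovers the poorer person's share
```

The round trip makes the equivalence concrete: any measured Gini coefficient can be read as "the poorer of two people gets this fraction of the cake."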

Journal ArticleDOI
TL;DR: The justification of Bayes' rule by cognitive rationality principles is undertaken by extending the propositional axiom systems usually proposed in two contexts of belief change: revising and updating as discussed by the authors.
Abstract: The justification of Bayes' rule by cognitive rationality principles is undertaken by extending the propositional axiom systems usually proposed in two contexts of belief change: revising and updating. Probabilistic belief change axioms are introduced, either by direct transcription of the set-theoretic ones, or in a stronger way but nevertheless in the spirit of the underlying propositional principles. Weak revising axioms are shown to be satisfied by a General Conditioning rule, extending Bayes' rule but also compatible with others, and weak updating axioms by a General Imaging rule, extending Lewis' rule. Strong axioms (equivalent to the Miller–Popper axiom system) are necessary to justify Bayes' rule in a revising context, and in fact justify an extended Bayes' rule which applies even if the message has zero probability.

Journal ArticleDOI
Fany Yuval
TL;DR: The research reported here was the first empirical examination of strategic voting under the Sequential Voting by Veto (SVV) voting procedure, proposed by Mueller (1978), and suggests that while voters under SVV use all three models, their choice is conditioned by group size.
Abstract: The research reported here was the first empirical examination of strategic voting under the Sequential Voting by Veto (SVV) voting procedure, proposed by Mueller (1978). According to this procedure, a sequence of n voters must select s out of s+m alternatives (m≥n≥2; s>0). Hence, with s=1 and m=n, the number of alternatives exceeds the number of participants by one (n+1 alternatives). When the ith voter casts her vote, she vetoes one alternative against which a veto has not yet been cast, and the s remaining non-vetoed alternatives are elected. The SVV procedure invokes the minority principle, and it has advantages over all majoritarian procedures; this makes SVV a very desirable means for relatively small groups to make collective decisions. Felsenthal and Machover (1992) pointed out three models of voting under SVV: sincere, optimal, and canonical. The current research investigated, through laboratory experiments, which cognitive model better accounts for the voters' observed behavior and the likelihood of obtaining the optimal outcome as a function of the size of n (when s=1). The findings suggest that while voters under SVV use all three models, their choice is conditioned by group size. In the small groups (n=3), the canonical model was a better predictor than the sincere model. In the larger groups (n=5), the sincere model was a better predictor than the canonical model. There is also evidence of players' learning during the experiment.
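The sincere model of SVV, in which each voter in turn vetoes her worst-ranked surviving alternative, can be sketched directly. The preference profile below is illustrative (n = 3 voters, s = 1, so four alternatives):

```python
def svv_sincere(preferences, alternatives):
    """Run SVV with sincere vetoes; return the non-vetoed alternatives.

    preferences: one ranking per voter, listing alternatives best-to-worst;
    voters act in the given sequence.
    """
    remaining = list(alternatives)
    for ranking in preferences:
        worst = max(remaining, key=ranking.index)  # least preferred survivor
        remaining.remove(worst)                    # that one is vetoed
    return remaining

# Illustrative profile: n = 3 voters over 4 alternatives (s = 1).
prefs = [['a', 'b', 'c', 'd'],
         ['b', 'c', 'd', 'a'],
         ['c', 'd', 'a', 'b']]
print(svv_sincere(prefs, ['a', 'b', 'c', 'd']))  # ['c'] survives the vetoes
```

The optimal and canonical models differ only in how each voter chooses which surviving alternative to veto; the sequential veto mechanics above stay the same.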

Journal ArticleDOI
TL;DR: In this paper, the authors considered two case examples where decision makers dissatisfied with the results of their investments, and having lost money, appear to compound their losses by selling out at prices less than their own estimates of the remaining financial worth of the failed assets.
Abstract: The all too common sunk cost effect is apparent when an investor influenced by what has been spent already persists in a venture, committing further resources or foregoing more profitable opportunities, when the economically rational action is to quit. Less common, but arguably just as much a sunk cost effect, is the mistake of giving up on a failed or failing venture too readily, sometimes out of nothing but pique at what has been lost, or perhaps through the more subtle psychological forces posited by Kahneman, Tversky, Thaler and others within prospect theory and related work on "mental budgeting". Two case examples are considered, wherein decision makers dissatisfied with the results of their investments, and having lost money, appear to compound their losses by selling out at prices less than their own estimates of the remaining financial worth of the failed assets. These decisions are evaluated from the perspectives of both behavioral and prescriptive economics, and are found to have possible explanations in both. Their prescriptive rationale assumes a portfolio theory of investment decisions, and is demonstrated within both expected utility (economics) and mean-variance (finance) frameworks.

Journal ArticleDOI
TL;DR: In this paper, the Shapley value has been generalized by Owen to TU-games in coalition structure and this solution satisfies the multiplication property that the share of a player in some coalition is equal to the product of Shapley share of the coalition in a game between the coalitions.
Abstract: A cooperative game with transferable utility, or simply a TU-game, describes a situation in which players can obtain certain payoffs by cooperation. A value function for these games assigns to every TU-game a distribution of payoffs over the players. Well-known solutions for TU-games are the Shapley and the Banzhaf value. An alternative type of solution is the concept of share function, which assigns to every player in a TU-game its share in the worth of the grand coalition. In this paper we consider TU-games in which the players are organized into a coalition structure being a finite partition of the set of players. The Shapley value has been generalized by Owen to TU-games in coalition structure. We redefine this value function as a share function and show that this solution satisfies the multiplication property that the share of a player in some coalition is equal to the product of the Shapley share of the coalition in a game between the coalitions and the Shapley share of the player in a game between the players within the coalition. Analogously we introduce a Banzhaf coalition structure share function. Application of these share functions to simple majority games shows some appealing properties.
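The ordinary Shapley share underlying the multiplication property can be sketched straight from the definition: average each player's marginal contributions over all orderings, then normalize by the worth of the grand coalition. The 3-player game below is illustrative, not from the paper:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value as the average marginal contribution over all player orders."""
    value = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: x / len(orders) for p, x in value.items()}

# Illustrative symmetric game: any 2-player coalition is worth 60, all 3 worth 120.
v = lambda S: 120 if len(S) == 3 else (60 if len(S) == 2 else 0)
phi = shapley(['1', '2', '3'], v)
shares = {p: x / v(frozenset({'1', '2', '3'})) for p, x in phi.items()}
print(shares)  # symmetric game, so each player's share is 1/3
```

In the coalition-structure setting, the paper's multiplication property says a player's overall share factors into exactly such a share computed between coalitions times a share computed within her coalition.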

Journal ArticleDOI
TL;DR: In this article, the authors apply the Hurwicz decision rule to an adjustment problem concerning the decision whether a given action should be improved in the light of some knowledge on the states of nature or on other actors' behaviour.
Abstract: In this paper the Hurwicz decision rule is applied to an adjustment problem concerning the decision whether a given action should be improved in the light of some knowledge on the states of nature or on other actors' behaviour. In comparison with the minimax and the minimin adjustment principles the general Hurwicz rule reduces to these specific classes whenever the underlying loss function is quadratic and knowledge is given by an ellipsoidal set. In the framework of the adjustment model discussed in this paper Hurwicz's optimism index can be interpreted as a mobility index representing the actor's attitude towards new external information. Examples are given that serve to illustrate the theoretical findings.
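The general Hurwicz criterion can be sketched for a simple payoff table: each action is scored as alpha times its best payoff plus (1 - alpha) times its worst, where alpha is the optimism index. The actions and payoffs below are illustrative:

```python
def hurwicz(payoffs, alpha):
    """Pick the action maximizing the Hurwicz criterion.

    payoffs: dict mapping each action to its payoffs over the states of nature.
    alpha = 1 recovers maximax (pure optimism); alpha = 0 recovers maximin.
    """
    score = lambda outcomes: alpha * max(outcomes) + (1 - alpha) * min(outcomes)
    return max(payoffs, key=lambda a: score(payoffs[a]))

payoffs = {'cautious': [4, 5, 6], 'bold': [0, 5, 10]}
print(hurwicz(payoffs, alpha=0.2))  # 'cautious' — a pessimist avoids the 0
print(hurwicz(payoffs, alpha=0.8))  # 'bold' — an optimist chases the 10
```

This is the sense in which the paper reads alpha as a mobility index: a higher optimism index makes the actor more willing to move toward actions whose best case under the new information is attractive.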

Journal ArticleDOI
TL;DR: In this article, the authors follow the decision theory approach and show that if positivity of the bid-ask spread is identified with strong risk aversion for an expected utility market-maker, this is no longer true for a rank-dependent expected utility one.
Abstract: A usual argument in finance refers to no arbitrage opportunities for the positivity of the bid-ask spread. Here we follow the decision theory approach and show that if positivity of the bid-ask spread is identified with strong risk aversion for an expected utility market-maker, this is no longer true for a rank-dependent expected utility one. For such a decision-maker only a very weak form of risk aversion is required, a result which seems more in accordance with his actual behavior. We conclude by showing that the no-trade interval result of Dow and Werlang (1992a) remains valid for a rank-dependent expected utility market-maker merely exhibiting this weak form of risk aversion.

Journal ArticleDOI
TL;DR: This paper analyzes the intertwined dynamics of beliefs and decision, in order to determine conditions on the agents that allow them to reach agreements, based on defeasible evaluation of possibilities.
Abstract: In economically meaningful interactions negotiations are particularly important because they allow agents to improve their information about the environment and even to change accordingly their own characteristics. In each step of a negotiation an agent has to emit a message. This message conveys information about her preferences and endowments. Given that the information she uses to decide which message to emit comes from beliefs generated in previous stages of the negotiation, she has to cope with the uncertainty associated with them. The assessment of the states of the world also evolves during the negotiation. In this paper we analyze the intertwined dynamics of beliefs and decision, in order to determine conditions on the agents that allow them to reach agreements. The framework for decision making we consider here is based on defeasible evaluation of possibilities: an argument for a choice defeats another one if it is based on a computation that better uses all the available information.

Journal ArticleDOI
TL;DR: In this paper, a class of multi-person bargaining mechanisms based on games in coalitional form is studied and a non-cooperative axiomatization of the core of the game is presented.
Abstract: We treat a class of multi-person bargaining mechanisms based on games in coalitional form. For this class of games we identify properties of non-cooperative solution concepts, which are necessary and sufficient for the equilibrium outcomes to coincide with the core of the underlying coalitional form game. We view this result as a non-cooperative axiomatization of the core. In contrast to most of the literature on multi-person bargaining we avoid a precise specification of the rules of the game. Alternatively, we impose properties of such games, which give rise to a large class of mechanisms, all of which are relevant for our axiomatization.

Journal ArticleDOI
TL;DR: Usual arguments linking violations of DC to departures from IND are shown to be based on specific assumptions which may rightfully be released, so that it is possible for a non-EU maximizer to be dynamically consistent and thus avoid normative difficulties.
Abstract: In a dynamic (sequential) framework, departures from the independence axiom (IND) are reputed to induce violations of dynamic consistency (DC), which may in turn have undesirable normative consequences. This result thus questions the normative acceptability of non expected-utility (non-EU) models, which precisely relax IND. This paper pursues a twofold objective. The main one is to discuss the normative conclusion: usual arguments linking violations of DC to departures from IND are shown to be based on specific (but usually implicit) assumptions which may rightfully be released, so that it is possible for a non-EU maximizer to be dynamically consistent and thus avoid normative difficulties. The second objective is to introduce a kind of 'reality principle' (through two other evaluation criteria) in order to mitigate the normative requirement when examining adequate modes for non-EU decision making.

Journal ArticleDOI
TL;DR: In this article, it was shown that even a risk averter might optimally choose a riskier coin when learning is allowed, and the authors express most of their results in the language of stochastic orderings, allowing comparisons that are valid for large classes of utility functions.
Abstract: A decision maker bets on the outcomes of a sequence of coin-tossings. At the beginning of the game the decision maker can choose one of two coins to play the game. This initial choice is irreversible. The coins can be biased and the player is uncertain about the nature of one (or possibly both) coin(s). If the player is an expected-utility maximizer, her choice of the coin will depend on different elements: the nature of the game (namely, whether she can observe the outcomes of the previous tosses before making her next decision), her utility function, the prior distribution on the bias of the coin. We will show that even a risk averter might optimally choose a riskier coin when learning is allowed. We will express most of our results in the language of stochastic orderings, allowing comparisons that are valid for large classes of utility functions.

Journal ArticleDOI
TL;DR: In this paper, the cognitive effort required to decide in multiattribute binary choice using a variation of the additive difference strategy was investigated, and it was shown that there is a positive relationship between the effort needed to decide and the mean of the differences between the dimensions of the choice alternatives.
Abstract: This article reports an empirical investigation of the cognitive effort required to decide in multiattribute binary choice using a variation of the Additive Difference strategy. In contrast with other studies, this paper focuses on the effect of various context variables (rather than task variables) on cognitive effort. In order to select the context variables to be manipulated, we used the model proposed by Shugan (1980; J. Consumer Res. 75 (1980) 99). Our results indicate that there is a positive relationship between the cognitive effort required to decide and the mean of the differences between the dimensions of the choice alternatives. We have also found an inverse relationship between cognitive effort and the variance of the differences between the dimensions of the choice alternatives. Finally, we have found that in negative correlation contexts the effort needed to decide is greater than in positive and null correlation contexts.

Journal ArticleDOI
Werner Güth
TL;DR: In this paper, the authors generalize the concept of strict equilibrium and question the practical relevance of the existence requirement for refinements of the equilibrium concept, and suggest more complex reduced games which capture the inclinations of ''players who already left''.
Abstract: Consistency and optimality together with converse consistency provide an illuminating and novel characterization of the equilibrium concept (Peleg and Tijs, 1996). But (together with non-emptiness) they preclude refinements of the equilibrium notion and selection of a unique equilibrium (Norde et al., 1996). We suggest two escape routes: By generalizing the concept of strict equilibrium we question the practical relevance of the existence requirement for refinements. To allow for equilibrium selection we suggest more complex reduced games which capture the inclinations of "players who already left".

Journal ArticleDOI
TL;DR: In this paper, it is shown that a complete and transitive preference relation over the commodity bundles is equivalent to regular tastes where regularity means that tastes can be derived from a pure qualitative relation between the different commodities.
Abstract: An individual is said to have a taste for a particular menu, i.e. a subset of available commodities, if he is indifferent between all commodity bundles that contain the same quantity for each commodity which actually is in the menu, whatever the rest of the bundle. A taste is thus directly defined as a deep property of preferences. As a first result, it is shown that a complete and transitive preference relation over the commodity bundles is equivalent to regular tastes, where regularity means that tastes can be derived from a pure qualitative relation between the different commodities. Besides, a preference family based on preference relations corresponding to each particular commodity is said to be rationalizable if there exists a metapreference over commodity bundles which consistently summarizes the preference family and then makes it possible to decide. As a second result, it is shown that if a preference family is rationalizable, then the tastes are organized by a reflexive and transitive qualitative relation between the different commodities.

Journal ArticleDOI
Christian List
TL;DR: In this paper, a multidimensional generalization of Arrow's original framework is presented, which highlights not only the threat of dictatorship of a single individual, but also the threat of dominance of a single dimension.
Abstract: Arrow's account (1951/1963) of the problem of social choice is based upon the assumption that the preferences of each individual in the relevant group are expressible by a single ordering. This paper lifts that assumption and develops a multidimensional generalization of Arrow's framework. I show that, like Arrow's original framework, the multidimensional generalization is affected by an impossibility theorem, highlighting not only the threat of dictatorship of a single individual, but also the threat of dominance of a single dimension. In particular, even if preferences are single-peaked across individuals within each dimension – a situation called intradimensional single-peakedness – any aggregation procedure satisfying Arrow-type conditions will make one dimension dominant. I introduce lexicographic hierarchies of dimensions as a class of possible aggregation procedures under intradimensional single-peakedness. The interpretation of the results is discussed.

Journal ArticleDOI
TL;DR: In this article, the existence of a pair of nonnegative, positively homogeneous and semicontinuous real-valued functionals representing an interval order on a real cone in a topological vector space is discussed.
Abstract: It is well known that interval orders are particularly interesting in decision theory, since they are reflexive, complete and nontransitive binary relations which may be fully represented by means of two real-valued functions. In this paper, we discuss the existence of a pair of nonnegative, positively homogeneous and semicontinuous real-valued functionals representing an interval order on a real cone in a topological vector space. We recover as a particular case a result concerning the existence of a nonnegative, positively homogeneous and continuous utility functional for a complete preorder on a real cone in a topological vector space.

Journal ArticleDOI
TL;DR: In this paper, the authors study the framework of additive utility theory, obtaining new results derived from a concurrence of algebraic and topological techniques such as the concept of a connected topological totally ordered semigroup.
Abstract: In the present paper we study the framework of additive utility theory, obtaining new results derived from a concurrence of algebraic and topological techniques. Such techniques lean on the concept of a connected topological totally ordered semigroup. We achieve a general result concerning the existence of continuous and additive utility functions on completely preordered sets endowed with a binary operation "+", not necessarily commutative or associative. In the final part of the paper we get some applications to expected utility theory, and a representation theorem for a class of complete preorders on a quite general family of real mixture spaces.

Journal ArticleDOI
TL;DR: Pulier (2000) and Machina (2000) seek to dissolve the Barrett–Arntzenius infinite decision puzzle; this paper describes the puzzle in a simplified form, addresses the recent misunderstandings, and describes possible morals for decision theory.
Abstract: Pulier (2000, Theory and Decision 49: 291) and Machina (2000, Theory and Decision 49: 293) seek to dissolve the Barrett–Arntzenius infinite decision puzzle (1999, Theory and Decision 46: 101). The proposed dissolutions, however, are based on misunderstandings concerning how the puzzle works and the nature of supertasks more generally. We will describe the puzzle in a simplified form, address the recent misunderstandings, and describe possible morals for decision theory.