Showing papers in "Journal of Behavioral Decision Making in 1989"


Journal ArticleDOI
TL;DR: The authors found that repeated statements are rated as more valid than non-repeated statements, that familiarity is a basis for the judged validity of statements, and that the validity-enhancing effect of repetition does not occur in subject domains about which a person claims not to be knowledgeable.
Abstract: Using factual information of uncertain truth value as the stimulus material, previous investigators have found that repeated statements are rated more valid than non-repeated statements. Experiments 1 and 1A were designed to determine whether this effect would also occur for opinion statements and for statements initially rated either true or false. Subjects were exposed to a 108-statement list one week and a second list of the same length a week later. This second list comprised some of the statements seen earlier plus some statements seen for the first time. Results suggested that all types of repeated statements are rated as more valid than their non-repeated counterparts. Experiment 2 demonstrated that the validity-enhancing effect of repetition does not occur in subject domains about which a person claims not to be knowledgeable. From the results of both studies we concluded that familiarity is a basis for the judged validity of statements. The relation between this phenomenon and the judged validity of decisions and predictions was also discussed.

169 citations


Journal ArticleDOI
TL;DR: In this paper, a descriptive model of choice between simple lotteries is proposed, where people combine 'absolute' and 'comparative' strategies in choice situations, and three experimental tests of the model are reported.
Abstract: A descriptive model of choice between simple lotteries is proposed. According to the model, people combine 'absolute' and 'comparative' strategies in choice situations. Three experimental tests of the model are reported. In its limited domain, the model appears superior to Prospect Theory on qualitative grounds, and to 1-parameter versions of Expected Utility Theory on both qualitative and quantitative grounds.

126 citations
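The abstract does not spell out the model's functional form, but the idea of mixing an 'absolute' with a 'comparative' strategy can be sketched. Below, each lottery is scored by a weighted blend of its own expected utility and its probability of beating the rival on an independent draw; the utility function, the mixing weight alpha, and the scoring rule are illustrative assumptions, not the authors' model.

```python
import itertools

def expected_utility(lottery, u=lambda x: x ** 0.5):
    # 'Absolute' strategy: evaluate the lottery on its own terms.
    return sum(p * u(x) for x, p in lottery)

def prob_beats(a, b):
    # 'Comparative' strategy: chance that an independent draw from
    # lottery a exceeds a draw from lottery b.
    return sum(pa * pb for (xa, pa), (xb, pb) in itertools.product(a, b) if xa > xb)

def hybrid_score(a, b, alpha=0.5):
    # Hypothetical blend of the two strategies; alpha is a free parameter.
    return alpha * expected_utility(a) + (1 - alpha) * prob_beats(a, b)

A = [(100, 0.5), (0, 0.5)]   # lotteries as (outcome, probability) pairs
B = [(40, 1.0)]
print("choose A" if hybrid_score(A, B) > hybrid_score(B, A) else "choose B")
```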


Journal ArticleDOI
TL;DR: This paper examines how accuracy feedback, effort feedback, and emphasis on a goal of either maximizing accuracy relative to effort or minimizing effort relative to accuracy affect decision processes; the impact of the goal manipulation was consistent with the shift in strategies predicted by an effort/accuracy model of strategy selection.
Abstract: This paper examines the impact of accuracy feedback, effort feedback, and emphasis on either a goal of maximizing accuracy relative to effort or minimizing effort relative to accuracy on decision processes. Feedback on the accuracy of decisions leads to more normative-like processing of information and improved performance only in the most difficult problems, i.e., decisions with low dispersion in attribute weights. Explicit effort feedback has almost no impact on processing or performance. The impact of the goal manipulation on decision processes was found to be consistent with the shift in strategies predicted by an effort/accuracy model of strategy selection. In particular, a goal of emphasizing accuracy led to more normative-like processing, while emphasis on effort led to less extensive, more selective, and more attribute-based processing and poorer performance. These results provide perhaps the clearest evidence to date of the effect of goals on processing differences. Complex interactive relationships between types of feedback and goal structures suggest the need for additional study of feedback and goals on adaptive decision behavior.

123 citations
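The finding that accuracy feedback helps mainly on low-dispersion problems is easy to motivate: when attribute weights are highly dispersed, a cheap lexicographic heuristic almost always agrees with the full weighted-additive rule, leaving feedback little room to improve performance. A minimal simulation under assumed weights and uniformly random attribute values:

```python
import random

def heuristic_agreement(weights, n_options=5, n_trials=2000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Options are vectors of attribute values in [0, 1].
        opts = [[rng.random() for _ in weights] for _ in range(n_options)]
        # Normative weighted-additive (WADD) choice.
        wadd = max(opts, key=lambda o: sum(w * v for w, v in zip(weights, o)))
        # Cheap lexicographic (LEX) choice: best on the top-weighted attribute.
        top = weights.index(max(weights))
        lex = max(opts, key=lambda o: o[top])
        hits += (lex is wadd)
    return hits / n_trials

# Hypothetical weight sets: high dispersion first, low dispersion second.
print("high dispersion:", heuristic_agreement([0.70, 0.10, 0.10, 0.10]))
print("low dispersion: ", heuristic_agreement([0.28, 0.26, 0.24, 0.22]))
```

With dispersed weights, LEX matches WADD on most trials; with near-equal weights it does not, which is where more normative processing (and hence accuracy feedback) can pay off.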


Journal ArticleDOI
TL;DR: This paper proposed to use observable antecedents and consequences to code incidents in a group's decision process, thereby providing the basis for more rigorous, quantitative analyses of historical or current decision processes.
Abstract: Research on historical cases of policy decisions thought to involve groupthink nearly always has been qualitative, rather than quantitative. We propose that observable antecedents and consequences can be used to code incidents in a group's decision process, thereby providing the basis for more rigorous, quantitative analyses. As a first step toward such a quantitative case analysis, we coded statements from the investigative report on the space shuttle Challenger accident as positive or negative instances of the observable antecedents and consequences of groupthink. Positive instances of groupthink were twice as frequent as negative instances. More importantly, during the 24 hours prior to launch the ratio of positive to negative instances increased, then remained high. These results are consistent with the notion that the decision to launch the Challenger involved groupthink and provide a first step toward more rigorous quantitative analysis of historical or current decision processes.

116 citations


Journal ArticleDOI
TL;DR: The history of axiomatic measurement of perceived risk of unidimensional risky choice alternatives is briefly reviewed in this article, where the most viable risk model on the basis of these and other results is described.
Abstract: The history of axiomatic measurement of perceived risk of unidimensional risky choice alternatives is briefly reviewed. Experiments 1 and 2 present data that distinguish between two general classes of risk functions (those that assume that gain and loss components of an alternative combine additively versus multiplicatively) on empirical grounds. The most viable risk model on the basis of these and other results is described. Experiment 3 presents data that call into question the descriptive adequacy of some of this risk model’s assumptions, in particular the expectation principle. Suggestions for possible modifications are made.

93 citations
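As a rough illustration of the two classes being contrasted, the sketch below computes a perceived-risk index under a generic additive rule (gain and loss components enter as a weighted sum) and a generic multiplicative rule (the gain component rescales the loss component). Both functional forms and all parameters are stand-ins for exposition, not the axiomatized models tested in the paper.

```python
def components(lottery):
    # Expected gain and expected loss magnitude of an (outcome, probability) list.
    gain = sum(p * x for x, p in lottery if x > 0)
    loss = sum(p * -x for x, p in lottery if x < 0)
    return gain, loss

def risk_additive(lottery, a=1.0, b=2.0):
    # Additive class: components combine as a weighted sum.
    gain, loss = components(lottery)
    return b * loss - a * gain

def risk_multiplicative(lottery):
    # Multiplicative class: the gain component scales the loss component down.
    gain, loss = components(lottery)
    return loss / (1.0 + gain)

g = [(50, 0.5), (-20, 0.5)]   # win $50 or lose $20, equal odds
print(risk_additive(g), risk_multiplicative(g))
```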


Journal ArticleDOI
TL;DR: This paper examined how shifts in perspective might influence people's perceptions of events, and investigated two possible factors: temporal perspective (whether an event is set in the future or past) and uncertainty (whether the event's occurrence is certain or uncertain).
Abstract: Prospective hindsight involves generating an explanation for a future event as if it had already happened; i.e., one goes forward in time, and then looks back. In order to examine how shifts in perspective might influence people's perceptions of events, we investigated two possible factors: temporal perspective (whether an event is set in the future or past) and uncertainty (whether the event's occurrence is certain or uncertain). In the first experiment, temporal perspective showed little influence while outcome uncertainty strongly affected the nature of explanations for events. Explanations for sure events tended to be longer, to contain a higher proportion of episodic reasons, and to be expressed in past tense. Evidence from the second experiment supports the view that uncertainty mediates not the amount of time spent explaining, but rather subjects' choice of explanation type. The implications of these findings for the use of temporal perspective in decision aiding are discussed.

80 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a critique of the concept of randomness as it occurs in the psychological literature and argue that observed biases may be an artifact of the experimental situation and that even if such biases do generalise they may not have pejorative implications for induction in the real world.
Abstract: This article presents a critique of the concept of randomness as it occurs in the psychological literature. The first section of our article outlines the significance of a concept of randomness to the process of induction; we need to distinguish random and non-random events in order to perceive lawful regularities and formulate theories concerning events in the world. Next we evaluate the psychological research that has suggested that human concepts of randomness are not normative. We argue that, because the tasks set for experimental subjects are logically problematic, observed biases may be an artifact of the experimental situation and that, even if such biases do generalise, they may not have pejorative implications for induction in the real world. Thirdly we investigate the statistical methodology utilised in tests for randomness and find it riddled with paradox. In a fourth section we describe various branches of scientific endeavour that are stymied by the problems posed by randomness. Finally we briefly mention the social significance of randomness and conclude by arguing that such a fundamental concept merits and requires more serious consideration.

71 citations
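For a flavor of the statistical machinery criticized in the third section, the classic Wald-Wolfowitz runs test flags a binary sequence as 'non-random' when its number of runs deviates too far from expectation, even though every particular sequence is equally probable under a fair process. A minimal sketch:

```python
import math

def runs_test_z(bits):
    # z-score for the number of runs in a 0/1 sequence under the null
    # hypothesis of a random ordering (Wald-Wolfowitz runs test).
    n1, n0 = bits.count(1), bits.count(0)
    runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
    mu = 1 + 2 * n1 * n0 / (n1 + n0)
    var = (2 * n1 * n0 * (2 * n1 * n0 - n1 - n0)) / ((n1 + n0) ** 2 * (n1 + n0 - 1))
    return (runs - mu) / math.sqrt(var)

print(runs_test_z([0, 1] * 10))         # strictly alternating: z ~ +4.1, "too many" runs
print(runs_test_z([0] * 10 + [1] * 10)) # one switch: z ~ -4.1, "too few" runs
```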


Journal ArticleDOI
TL;DR: In this article, the credibility of human sources of evidence and its relation to the inferential value of testimony they provide is investigated. But the credibility assessment can be construed as a cascaded inference in which attributes of human source credibility are identified.
Abstract: This paper concerns study of the credibility of human sources of evidence and its relation to the inferential value of testimony they provide. From a certain view of 'knowledge' in epistemology comes the suggestion that credibility assessment can be construed as a cascaded inference in which attributes of human source credibility are identified. Scholarship from evidence law in jurisprudence suggests an evidential basis for credibility assessment in terms of these attributes. Applying Bayes' rule to this cascaded inference offers a way of expressing and combining credibility-related beliefs in the process of assessing the inferential value of evidence.

37 citations
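The cascaded inference described here can be written as two Bayesian stages: the testimony bears on whether the reported event occurred, and the event in turn bears on the hypothesis. The sketch below characterizes the source by a hypothetical hit rate and false-alarm rate; all numbers and names are illustrative, not drawn from the paper.

```python
def posterior_given_testimony(prior_h, p_e_given_h, p_e_given_not_h, hit, false_alarm):
    """Cascaded Bayes: testimony -> event E -> hypothesis H.
    hit = P(source reports E | E occurred);
    false_alarm = P(source reports E | E did not occur)."""
    # Stage 1: likelihood of the testimony under each hypothesis,
    # marginalizing over whether E actually occurred.
    like_h = hit * p_e_given_h + false_alarm * (1 - p_e_given_h)
    like_not_h = hit * p_e_given_not_h + false_alarm * (1 - p_e_given_not_h)
    # Stage 2: ordinary Bayes' rule on the hypothesis.
    num = like_h * prior_h
    return num / (num + like_not_h * (1 - prior_h))

# A perfectly credible source moves belief the full distance (~0.80);
# a shakier source yields a posterior much closer to the 0.5 prior (~0.62).
print(posterior_given_testimony(0.5, 0.8, 0.2, hit=1.0, false_alarm=0.0))
print(posterior_given_testimony(0.5, 0.8, 0.2, hit=0.7, false_alarm=0.3))
```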


Journal ArticleDOI
TL;DR: The authors investigated two possible explanations for this finding: people's belief-value structures may change in the choice task as they try to find the best alternative, and a difficult choice task may cause the decision maker to use simplifying heuristics.
Abstract: A combined multi-attribute utility and expectancy-value model has repeatedly been found to yield a worse fit to choices than to preference ratings. The present study investigated two possible explanations for this finding. First, people's belief-value structures may change in the choice task as they try to find the best alternative. Second, a difficult choice task may cause the decision maker to use simplifying heuristics. In the first of two experiments, subjective belief-value structures were measured on two occasions separated by about one week. Immediately before the second measurement, different groups of subjects performed a choice task, gave preference ratings, or performed a control task. The results did not support an interpretation of the greater difficulty of predicting choices in terms of changes in belief-value structures. However, the notion of simplifying heuristics received support by the finding that adopting simpler versions of the original model improved the predictions of the choices. In the second experiment, beliefs were measured immediately before or after each of a series of choices or preference ratings. The results indicated that although temporary changes in beliefs may occur, they can hardly provide a full account of the differential predictability of preferences and choices.

31 citations
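The 'simpler versions of the original model' finding is easy to picture: the full expectancy-value score sums belief × value over all consequences, while a simplified variant keeps only the most highly valued ones. A sketch with made-up numbers (the pruning rule is one hypothetical simplification, not the specific variants fitted in the study):

```python
def ev_score(beliefs, values):
    # Full expectancy-value / multi-attribute utility score.
    return sum(b * v for b, v in zip(beliefs, values))

def ev_score_simplified(beliefs, values, k=2):
    # Simplified variant: only the k consequences with the largest |value|
    # count, mimicking a decision maker who prunes in a hard choice task.
    top = sorted(range(len(values)), key=lambda i: abs(values[i]), reverse=True)[:k]
    return sum(beliefs[i] * values[i] for i in top)

beliefs = [0.9, 0.2, 0.7, 0.5]   # subjective probabilities of each consequence
values = [3.0, -2.0, 0.5, 0.4]   # subjective values of each consequence
print(ev_score(beliefs, values), ev_score_simplified(beliefs, values))
```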


Journal ArticleDOI
TL;DR: The authors argue that justification for within-persons tests of expectancy-value models must be made on theoretical rather than empirical grounds.
Abstract: It has been asserted that (1) tests of expectancy-value models require within-persons analyses and that (2) within-persons analyses yield better predictions of behavioral tendencies than do across-persons analyses. The first assertion is correct; the second is not. Justification for within-persons tests of expectancy-value models must be made on theoretical rather than empirical grounds.

25 citations
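The distinction at issue can be made concrete: a within-persons test correlates predictions with behavioral tendencies across options separately for each person, while an across-persons test correlates them across people for a fixed option. In the made-up data below, every within-person correlation is perfect even though the across-persons correlations vary, showing that the two analyses answer different questions:

```python
import math

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Rows = persons, columns = behavioral options (hypothetical data).
ev_pred = [[5, 3, 1], [2, 4, 6], [7, 5, 3]]    # expectancy-value predictions
tendency = [[6, 4, 2], [1, 3, 5], [6, 4, 2]]   # observed behavioral tendencies

within = [pearson(p, t) for p, t in zip(ev_pred, tendency)]     # one r per person
across = [pearson([p[j] for p in ev_pred],
                  [t[j] for t in tendency]) for j in range(3)]  # one r per option
print(within)   # all 1.0
print(across)   # mixed values
```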


Journal ArticleDOI
TL;DR: The authors found that as the size of the purchase increased, subjects would be more willing to buy additional smaller items and responded not only to actual changes in purchase size but also to changes in the presentation or framing of a purchase.
Abstract: Several theories of decision making are based, in part, upon the principles of psychophysics (Kahneman and Tversky, 1979; Thaler, 1985). For example, due to the psychophysics of quantity, the difference between $10 and $20 seems greater than the difference between $110 and $120. To determine whether psychophysics influences consumers' decisions, subjects in four studies made real or hypothetical purchases. It was predicted that as the size of the purchase increased, subjects would be more willing to buy additional smaller items. Small extra purchases should seem like minor expenditures when they follow larger purchases. Results of the four studies supported our hypothesis. In addition, it was found that subjects responded not only to actual changes in purchase size but also to changes in the presentation or framing of a purchase.
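The $10 versus $110 intuition follows from any concave value function. A minimal sketch using the power form v(x) = x^0.88 often cited in the prospect theory literature (the exponent is an assumption here, not a quantity estimated in these studies):

```python
def v(x, alpha=0.88):
    # Concave value function in the spirit of prospect theory.
    return x ** alpha

# The same $10 objective difference shrinks subjectively as the base grows,
# so a small add-on purchase "feels" cheaper after a large one.
print(v(20) - v(10))     # ~6.4 units of subjective value
print(v(120) - v(110))   # ~5.0 units
```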

Journal ArticleDOI
TL;DR: In this paper, the authors adopt a more direct approach by estimating subjects' indifference maps in the Marschak-Machina Triangle, and find that most subjects have maps which are nearly consistent with those implied by Subjective Expected Utility theory.
Abstract: The ongoing debate on the correct modelling of economic behaviour under risk makes heavy expositional use of the 'Marschak-Machina Triangle'. This has also been used when constructing tests of the various competing theories. In this paper, we adopt a more direct approach - by estimating subjects' indifference maps in the Triangle. Employing an interview technique, we find that most subjects have maps which are nearly consistent with those implied by Subjective Expected Utility theory.
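For context, the Marschak-Machina Triangle plots lotteries over fixed outcomes x1 < x2 < x3 as points (p1, p3). Expected utility implies indifference curves that are parallel straight lines, which is what 'consistent with Subjective Expected Utility theory' amounts to geometrically. A sketch (the utility values are an illustrative assumption):

```python
def eu_indifference_slope(u1, u2, u3):
    # With EU = u2 + p1*(u1 - u2) + p3*(u3 - u2), holding EU fixed gives
    # dp3/dp1 = (u2 - u1) / (u3 - u2): the same slope everywhere in the
    # Triangle, i.e. parallel straight indifference lines.
    return (u2 - u1) / (u3 - u2)

# Hypothetical utilities for outcomes x1 < x2 < x3 (say $0, $50, $100).
slope = eu_indifference_slope(0.0, 0.7, 1.0)
print(f"EU indifference lines: p3 = c + {slope:.2f} * p1, for any constant c")
```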

Journal ArticleDOI
TL;DR: It appears that no one solution is yet rich enough to resolve the St. Petersburg paradox; generalizations of the expectation heuristic and of the paradox to z-sided 'coins' (where z is any integer greater than or equal to 2) are presented.
Abstract: Although the controversy over the correct solution to the St. Petersburg paradox continues in the decision making literature, few of the solutions have been empirically evaluated. Via the development of alternative versions of the St. Petersburg game, we were able to empirically test some of these solutions. Experts and novices behaved in accordance with Treisman's expectation heuristic when bidding for the right to play the various versions of the St. Petersburg game. When subjects were asked their preferences among the game versions, novices continued to behave in accordance with the expectation heuristic, but a plurality of experts seemed to follow another strategy. This preference reversal, its implications, and its possible causes are thoroughly discussed. An alternative theory which mimics the expectation heuristic is considered, and generalizations of the expectation heuristic and the St. Petersburg paradox for z-sided 'coins' (where z is any integer greater than or equal to 2) are presented. It appears that no one solution yet is rich enough for the St. Petersburg paradox.
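For reference, the standard game pays 2^k when the first head occurs on toss k, so every term of the expected-value sum equals 1 and the sum diverges. One natural reading of the z-sided generalization, assumed in the sketch below (the paper's exact payoff scheme is not given in this abstract), ends the game when a designated face first appears and pays z^k on toss k; each term is then (z - 1)^(k-1) >= 1, so the expected value still diverges for every integer z >= 2.

```python
def truncated_ev(z=2, max_tosses=30):
    # Expected value of the z-sided St. Petersburg game, truncated at
    # max_tosses. Assumed scheme: the game ends the first time a designated
    # face appears (probability 1/z per toss) and pays z**k on toss k.
    ev = 0.0
    for k in range(1, max_tosses + 1):
        p_end_at_k = (1 / z) * ((z - 1) / z) ** (k - 1)
        ev += p_end_at_k * z ** k   # each term equals (z - 1) ** (k - 1)
    return ev

for z in (2, 3, 6):
    print(z, truncated_ev(z))   # grows without bound as max_tosses increases
```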

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a Robust Interactive Decision Analysis (RID) approach, which allows a decision maker to voluntarily and interactively express strong (viz., sure) binary preferences for actions, partial decision functions, and full decision functions, together with only imprecise probability and utility function assessments.
Abstract: We have proposed a novel interactive procedure for performing decision analysis, called Robust Interactive Decision Analysis (RID), which permits a decision maker (DM) to voluntarily and interactively express strong (viz., sure) binary preferences for actions, partial decision functions, and full decision functions, and only imprecise probability and utility function assessments. These serve as inputs to operators that prune the state probability space and decision space until an optimal choice strategy is obtained. The viability of the RID approach depends on a DM's ability to provide such information consistently and meaningfully. On a limited scale we experimentally investigate the behavioral implications of the RID method in order to ascertain its potential operational feasibility and viability. More specifically, we examine whether a DM can (1) express strong preferences between pairs of vectors of unconditional and conditional payoffs or utilities consistently; and (2) provide imprecise (ordinal and interval) state probabilities that are individually as well as mutually consistent with the state probabilities imputed from the expressed strong preferences. The results show that a DM can provide strong individually and mutually consistent preference and ordinal probability information. Moreover, most individuals also appear to be able to provide interval probabilities that are individually and mutually consistent with their strong preference inputs. However, the several violations observed, our small sample size, and the limited scope of our investigation suggest that further experimentation is needed to determine whether and/or how such inputs should be elicited. Overall, the results indicate that the RID method is behaviorally viable.
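The pruning step can be illustrated in the two-state case: an action can be eliminated when its expected payoff is below a rival's for every state probability consistent with the decision maker's interval bounds. Because expected payoff is linear in the probability, checking the interval endpoints suffices. All payoffs and bounds below are hypothetical, and this is a simplification of RID's operators:

```python
def prunable(pay_a, pay_b, p1_bounds):
    # True if action A's expected payoff is below B's for every P(state 1)
    # in the interval; linearity in p means endpoint checks are enough.
    def diff(p):
        eu_a = p * pay_a[0] + (1 - p) * pay_a[1]
        eu_b = p * pay_b[0] + (1 - p) * pay_b[1]
        return eu_a - eu_b
    lo, hi = p1_bounds
    return diff(lo) < 0 and diff(hi) < 0

# Hypothetical payoffs (state 1, state 2); imprecise P(state 1) in [0.3, 0.6].
print(prunable(pay_a=(4, 1), pay_b=(6, 2), p1_bounds=(0.3, 0.6)))  # True: drop A
print(prunable(pay_a=(9, 0), pay_b=(6, 2), p1_bounds=(0.3, 0.6)))  # False: keep A
```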

Journal ArticleDOI
TL;DR: In this article, psychological guidelines for simplifying medical information are described to aid the perception and recognition of abnormalities in medical test reports, which resemble natural human editing strategies outlined in prospect theory.
Abstract: In this paper, I describe psychological guidelines for simplifying medical information: methods to aid the perception and recognition of abnormalities in medical test reports. These techniques resemble natural human editing strategies outlined in prospect theory. The methods pre-edit data as humans do, but do so to reduce human effort. I review empirical studies assessing these techniques and discuss needs for further research.
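One concrete form such pre-editing can take (my illustration, not a method reported in the paper) is recoding each raw result as its deviation from the reference range, so abnormalities stand out without mental arithmetic, much as prospect theory's editing phase recodes outcomes relative to a reference point:

```python
def pre_edit(results, reference_ranges):
    # Recode raw test values as signed deviations from the reference range.
    # Test names and ranges here are hypothetical.
    report = {}
    for test, value in results.items():
        lo, hi = reference_ranges[test]
        if value < lo:
            report[test] = f"LOW by {lo - value:g}"
        elif value > hi:
            report[test] = f"HIGH by {value - hi:g}"
        else:
            report[test] = "normal"
    return report

ranges = {"glucose": (70, 110), "potassium": (3.5, 5.0)}
results = {"glucose": 142, "potassium": 4.2}
print(pre_edit(results, ranges))  # {'glucose': 'HIGH by 32', 'potassium': 'normal'}
```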

Journal ArticleDOI
TL;DR: Five models of how user proficiency and system power jointly determine performance are proposed, including a matching model in which optimal performance is achieved when the power of the system is judged compatible with the proficiency of the user, an averaging model in which expected performance is the average of user proficiency and system power, and a multiplying model.
Abstract: How do managers expect the proficiency of users and the power of the computer to determine overall performance? Five different models are proposed: (a) a matching model in which optimal performance is achieved when the power of the system is judged to be compatible with the proficiency of the user, (b) an averaging model in which expected performance is the average of the values of user proficiency and system power, (c) a multiplying model in which performance is the product of the values of user proficiency and system power, (d) a human/computer ratio model in which performance is determined by the ratio of system power over total effort, and (e) a computer/human ratio model in which performance is determined by the ratio of user proficiency over total effort. The applicability of these models was assessed by having managers and students of management predict performance in human/computer systems from information about the user's proficiency with computers and the power of the system. Participants rated 16 combinations of user proficiency and system power from a 4 × 4 factorial design. The pattern of ratings indicated that 51 percent used a multiplying model and 25 percent used an averaging model, whereas only 6 percent used the matching model and 4 percent used a ratio model. The remaining 14 percent did not follow any model clearly. Implications of these results were discussed for the design of the human/computer interface, training and selection of users, and the cost-benefit tradeoffs for investment in user training versus equipment acquisition.
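The five candidate models translate directly into formulas. The sketch below treats 'total effort' as the sum of the user and system components, which the abstract leaves implicit, and uses the factorial levels as illustrative scale values:

```python
def matching(u, s):
    # Best when user proficiency and system power are compatible (close).
    return -abs(u - s)

def averaging(u, s):
    return (u + s) / 2

def multiplying(u, s):
    return u * s

def human_computer_ratio(u, s):
    # "System power over total effort"; total effort assumed to be u + s.
    return s / (u + s)

def computer_human_ratio(u, s):
    # "User proficiency over total effort", under the same assumption.
    return u / (u + s)

levels = [1, 2, 3, 4]   # the 4 x 4 factorial levels as illustrative values
models = (matching, averaging, multiplying, human_computer_ratio, computer_human_ratio)
for model in models:
    table = [[model(u, s) for s in levels] for u in levels]
    print(model.__name__, table)
```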