
Showing papers in "Journal of Behavioral Decision Making in 2000"


Journal ArticleDOI
TL;DR: In this paper, the inverse relationship between perceived risk and perceived benefit was examined and it was shown that people rely on affect when judging the risk and benefit of specific hazards, such as nuclear power.
Abstract: This paper re-examines the commonly observed inverse relationship between perceived risk and perceived benefit. We propose that this relationship occurs because people rely on affect when judging the risk and benefit of specific hazards. Evidence supporting this proposal is obtained in two experimental studies. Study 1 investigated the inverse relationship between risk and benefit judgments under a time-pressure condition designed to limit the use of analytic thought and enhance the reliance on affect. As expected, the inverse relationship was strengthened when time pressure was introduced. Study 2 tested and confirmed the hypothesis that providing information designed to alter the favorability of one's overall affective evaluation of an item (say nuclear power) would systematically change the risk and benefit judgments for that item. Both studies suggest that people seem prone to using an ‘affect heuristic’ which improves judgmental efficiency by deriving both risk and benefit evaluations from a common source—affective reactions to the stimulus item. Copyright © 2000 John Wiley & Sons, Ltd.

2,525 citations


Journal ArticleDOI
TL;DR: In this article, a simple formal model of self-control problems is proposed and applied to some specific economic applications, and discusses some general lessons and open questions in the economic analysis of immediate gratification.
Abstract: People have self-control problems: We pursue immediate gratification in a way that we ourselves do not appreciate in the long run. Only recently have economists considered the behavioral and welfare implications of such time-inconsistent preferences. This paper outlines a simple formal model of self-control problems, applies this model to some specific economic applications, and discusses some general lessons and open questions in the economic analysis of immediate gratification. We emphasize the importance of the timing of the rewards and costs of an activity, as well as a person's awareness of future self-control problems. We identify situations where knowing about self-control problems helps a person and situations where it hurts her, and also identify situations where even mild self-control problems can severely damage a person. In the process, we describe specific implications of self-control problems for addiction, incentive theory, and consumer choice and marketing. Copyright © 2000 John Wiley & Sons, Ltd.

362 citations
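As an illustration of the kind of model the abstract above describes, here is a minimal sketch of present-biased (beta, delta) discounting, a standard formalization of self-control problems. The function name, parameter values, and rewards are our own illustrative assumptions, not taken from the paper.

```python
# Present-biased (beta, delta) discounting: beta < 1 applies an extra penalty
# to ALL future periods; beta = 1 recovers standard exponential discounting.
# Parameter values and rewards below are illustrative assumptions.

def discounted_utility(utils, beta=0.8, delta=0.95):
    """Value of a utility stream (u_0, u_1, ...) from period 0's perspective."""
    u_now, *u_later = utils
    return u_now + sum(beta * delta ** (t + 1) * u for t, u in enumerate(u_later))

# Choosing today, a smaller immediate reward beats a larger delayed one...
print(discounted_utility([10, 0]))     # 10.0   (take 10 now)
print(discounted_utility([0, 12]))     # 9.12   (wait for 12)
# ...yet planning one period ahead, the same person prefers to wait:
print(discounted_utility([0, 10, 0]))  # 7.6    (plan: take 10 tomorrow)
print(discounted_utility([0, 0, 12]))  # ~8.66  (plan: wait for 12)
```

The reversal between the two pairs of evaluations is the immediate-gratification pattern the paper analyzes.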


Journal ArticleDOI
TL;DR: In this paper, the authors take stock of recent research on how people summarize and evaluate extended experiences, focusing on a few features (gestalt characteristics), such as the rate at which the transient state components of the experience become more or less pleasant over its duration, and the intensity of the state at key instances, in particular the most intense (peak) and the final (end) moments.
Abstract: In this paper we take stock of recent research on how people summarize and evaluate extended experiences. Summary assessments do not simply integrate all the components of the evaluated events, but tend to focus on only a few features (gestalt characteristics). Examples of these defining features include the rate at which the transient state components of the experience become more or less pleasant over its duration, and the intensity of the state at key instances, in particular the most intense (peak) and the final (end) moments. It is not yet sufficiently clear which specific gestalt characteristics dominate summary assessments of experiences, nor how this differs across types of experiences or measurement approaches. To address some of these issues, we describe new research in this area, discuss potential methodological difficulties, and suggest directions for future research. Copyright © 2000 John Wiley & Sons, Ltd.

311 citations
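The 'peak' and 'end' gestalt characteristics mentioned above are often combined as a simple average. The toy sketch below contrasts that summary with full integration (the mean of all moments); the equal weighting of peak and end and the ratings themselves are illustrative assumptions.

```python
# Contrast between integrating every moment of an experience (its mean) and a
# gestalt summary based only on the most intense and final moments
# ('peak-end'). Equal weighting is an assumed, commonly used simplification.

def mean_evaluation(moments):
    return sum(moments) / len(moments)

def peak_end_evaluation(moments):
    return (max(moments, key=abs) + moments[-1]) / 2

episode = [-2, -4, -8, -6, -3]    # discomfort ratings over time (invented)
extended = episode + [-2, -1]     # same episode plus a milder ending

print(mean_evaluation(episode), mean_evaluation(extended))          # -4.6, ~-3.71
print(peak_end_evaluation(episode), peak_end_evaluation(extended))  # -5.5, -4.5
# Adding extra (still unpleasant) moments improves the peak-end summary,
# the pattern behind duration-neglect findings in this literature.
```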


Journal ArticleDOI
TL;DR: This paper showed that a high percentage of adolescent smokers see no health risk from smoking the next cigarette or even from smoking regularly for the first few years, indicating that many young people do not really understand the risks from smoking cigarettes.
Abstract: A particularly important aspect of risk is its cumulative nature, when exposure to a hazard occurs repeatedly over time. The degree to which people understand cumulative risk has important theoretical and social implications. The latter play a role in disputes about whether those who smoke cigarettes know the risks of that activity. Proponents of the view that cigarette smoking reflects rational choices made by people well informed about the risks assume that knowledge of smoking risks is adequately assessed in terms of perceptions of the long-term risks. However, there is reason to question this assumption. The risks of smoking cumulate, one cigarette at a time. The present study demonstrates that a high percentage of adolescent smokers see no health risk from smoking the next cigarette or even from smoking regularly for the ‘first few years’. This denial of ‘short-term’ risks, coupled with a tendency observed in other studies for young smokers to underestimate the addictive properties of tobacco, indicates that many young people do not really understand the risks from smoking cigarettes. Copyright © 2000 John Wiley & Sons, Ltd.

193 citations
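The cumulative nature of risk discussed above follows from elementary probability, assuming independent exposures. A one-screen sketch (the per-exposure probability is invented for illustration, not an estimate of any real smoking risk):

```python
# Even a tiny per-exposure harm probability p compounds over many independent
# exposures: P(at least one harm in n exposures) = 1 - (1 - p)^n.
# The value of p is purely illustrative.

p = 1e-5
for n in (1, 1_000, 100_000):
    print(f"{n:>7} exposures -> P(at least one harm) = {1 - (1 - p) ** n:.6f}")
# 1 exposure:       0.000010
# 1,000 exposures:  ~0.009950
# 100,000 exposures: ~0.632121
```

Seeing "no health risk from the next cigarette" is consistent with the first line; the misunderstanding the study documents concerns the last.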


Journal ArticleDOI
TL;DR: In this paper, the authors examined the effect of multiple reference points on ratings of salary satisfaction and fairness in a series of scenarios that described a salary offer made to a hypothetical MBA graduate and provided information about the salary offers made to either one or two other similar graduates.
Abstract: Most studies of reference point effects have used a single referent, such as a price, a salary, or a target. There is considerable evidence that the judged fairness of, or satisfaction with, an outcome is significantly influenced by discrepancies from such single referents. In many settings, however, more than one reference point may be available, so the subject may be confronted simultaneously with some referents above, some at, and some below the focal outcome. Little is known about the simultaneous impact of such multiple reference points. We examine here the effects of two referents on ratings of salary satisfaction and fairness. Subjects were presented with a series of scenarios that described a salary offer made to a hypothetical MBA graduate and provided information about the salary offers made to either one or two other similar graduates. For each scenario, subjects judged how fair the focal graduate would feel the offer to be, and how satisfied he or she would be with it. Satisfaction ratings displayed asymmetric effects of comparisons: the pain associated with receiving a salary lower than another MBA is greater than the pleasure associated with a salary higher than the other student by the same amount. Fairness ratings showed a different pattern of asymmetric effects of discrepancies from the reference salaries: the focal graduate's salary was judged somewhat less fair when others received lower offers, and much less fair when others received higher offers. The asymmetric effects occurred for both reference points, suggesting that the focal salary was compared separately to each of the referents rather than to a single reference point formed by prior integration of the referents. Copyright © 2000 John Wiley & Sons, Ltd.

185 citations
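A minimal sketch of the separate-comparison account suggested by these results: satisfaction responds to the gap between the focal salary and each referent, with shortfalls weighted more heavily than surpluses. The piecewise-linear form and the loss weight are illustrative assumptions, not estimates from the paper.

```python
# Separate, loss-averse comparison to each referent. The 2.0 weight on
# shortfalls and the linear form are illustrative choices.

def comparison_utility(salary, referents, loss_weight=2.0):
    total = 0.0
    for ref in referents:
        gap = salary - ref
        total += gap if gap >= 0 else loss_weight * gap
    return total

offer = 55_000
print(comparison_utility(offer, [50_000, 60_000]))  # mixed referents: -5000.0
print(comparison_utility(offer, [55_000, 55_000]))  # equal referents: 0.0
# With one referent below and one above, the shortfall dominates, so the mixed
# case feels worse than matching both referents exactly: the asymmetry the
# satisfaction ratings displayed.
```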


Journal ArticleDOI
TL;DR: In this article, the authors examine the relationship between the patterns of experiences over time and their overall evaluations, and show that the rules for combining such experiences depend on whether the experiences are perceived to be composed of single or multiple parts (i.e. continuous or discrete).
Abstract: How do people create overall evaluations for experiences that change in intensity over time? What ‘rules’ do they use for combining such different intensities into single overall evaluations? And what factors influence these integration rules? This paper starts by examining the relationship between the patterns of experiences over time and their overall evaluations. Within this framework, we propose and test the idea that the rules for combining such experiences depend on whether the experiences are perceived to be composed of single or multiple parts (i.e. continuous or discrete). In two experiments we demonstrate that an experience’s level of cohesiveness moderates the relationship between its pattern and overall evaluation. The results show that breaking up experiences substantially reduces the impact of patterns on overall evaluations. In addition, we demonstrate that continuously measuring momentary intensities produces a similar effect on this relationship, causing us to speculate that providing continuous intensity responses causes subjects to self-segment the experience. Copyright © 2000 John Wiley & Sons, Ltd.

159 citations


Journal ArticleDOI
TL;DR: In this paper, participants read scenarios in which a sunk cost was or was not present; the sunk cost effect persisted even when inflation of the estimated probability of success was discouraged, and estimates were higher when given after the investment decision, suggesting that the inflated estimate is a consequence of the decision to invest.
Abstract: The sunk cost effect is manifested in a tendency to continue an endeavor once an investment has been made. Arkes and Blumer (1985) showed that a sunk cost increases one's estimated probability that the endeavor will succeed [p(s)]. Is this p(s) increase a cause of the sunk cost effect, a consequence of the effect, or both? In Experiment 1 participants read a scenario in which a sunk cost was or was not present. Half of each group read what the precise p(s) of the project would be, thereby discouraging p(s) inflation. Nevertheless these participants manifested the sunk cost effect, suggesting p(s) inflation is not necessary for the effect to occur. In Experiment 2 participants gave p(s) estimates before or after the investment decision. The latter group manifested higher p(s), suggesting that the inflated estimate is a consequence of the decision to invest. Copyright © 2000 John Wiley & Sons, Ltd.

122 citations


Journal ArticleDOI
TL;DR: The role of computer-based decision aids in reducing cognitive effort and therefore influencing strategy selection is examined and it is shown that decision aids can induce the use of normatively oriented strategies.
Abstract: This paper examines the role of computer-based decision aids in reducing cognitive effort and therefore influencing strategy selection. It extends and complements the work reported in the behavioral decision theory literature on the role of effort and accuracy in choice tasks. The central proposition of the research is that if a decision aid makes a strategy that should lead to a more accurate outcome at least as easy to employ as a simpler, but less accurate, heuristic, then the use of a decision aid should induce that more accurate strategy and as a consequence improve decision quality. Otherwise, a decision aid may only influence decision-making efficiency. This occurs because decision makers use a decision aid in such a way as to minimize their overall level of effort expenditure. Results from a laboratory experiment support this proposition. When a more accurate normative strategy is made less effortful to use, it is used. This result is consistent with the findings of our prior studies, but more clearly demonstrates that decision aids can induce the use of normatively oriented strategies. The key to inducing these strategies is to make the normative strategy easier to execute than competing alternative strategies. Copyright © 2000 John Wiley & Sons, Ltd.

118 citations


Journal ArticleDOI
TL;DR: The authors explored how this preference for improving sequences is moderated by expectations about how sequences are usually experienced, and found that the length of the sequence and the particular health attribute described influenced both preferences and expectations such that preferences tracked expectations of how the sequences would realistically occur.
Abstract: Whereas choices among individual outcomes at different points in time generally show a positive time preference, choices between sequences of outcomes usually show a negative time preference, that is, a preference for improvement. The present studies explored how this preference for improving sequences is moderated by expectations about how sequences are usually experienced. Subjects in three experiments evaluated four types of health sequences with multiple sequence lengths. The length of the sequence and the particular health attribute described influenced both preferences and expectations such that preferences tracked expectations about how the sequences would realistically occur. Several mechanisms by which expectations could influence preferences are discussed. Copyright © 2000 John Wiley & Sons, Ltd.

115 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the pattern of intransitive responses is inconsistent with alternative-based choice, and that intransitivity in intertemporal choice can best be explained by a version of Tversky's (1969) lexicographic-semiorder rule, in which choice is based on the amount of money when that amount exceeds a threshold, but on delay otherwise.
Abstract: Multiattribute choice rules can be classified as being either alternative-based or attribute-based. Conventional accounts of intertemporal choice, hyperbolic and exponential discounting, assume alternative-based rules. One consequence of using these rules is that choices will be transitive, meaning that if a is preferred to b, and b is preferred to c, then a will be preferred to c. There have been many demonstrations of intransitivity in domains other than intertemporal choice, and in this paper we undertake to establish whether intransitive intertemporal choice can be explained by a stochastic specification of exponential discounting, or if we need to invoke an attribute-based choice process. In an experiment, we demonstrate that the pattern of intransitive responses is inconsistent with alternative-based choice. We argue that intransitive choices can best be explained by a version of Tversky's (1969) lexicographic-semiorder rule, in which choice is based on the amount of money when that amount exceeds a threshold, but on delay otherwise. Transitive choices, on the other hand, seem to be based on the rule that ‘earlier is better’ or else on a consistent rate of discount. Copyright © 2000 John Wiley & Sons, Ltd.

110 citations
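A sketch of a lexicographic semiorder of the kind invoked above, in the difference-threshold form associated with Tversky (1969): amount decides the choice only when the amount difference exceeds a threshold, otherwise the earlier option wins. The threshold and option values are assumptions chosen to produce an intransitive cycle.

```python
# Lexicographic-semiorder choice between intertemporal options: compare
# amounts first, but act on that dimension only when the difference exceeds a
# just-noticeable threshold; otherwise fall back on 'earlier is better'.
# The threshold value is an assumption.

def choose(option_a, option_b, amount_threshold=50):
    """Each option is an (amount, delay_in_weeks) pair; returns the chosen one."""
    (amt_a, del_a), (amt_b, del_b) = option_a, option_b
    if abs(amt_a - amt_b) > amount_threshold:        # amounts discriminable
        return option_a if amt_a > amt_b else option_b
    return option_a if del_a <= del_b else option_b  # else: earlier wins

a, b, c = (1000, 2), (1040, 6), (1080, 10)
print(choose(a, b), choose(b, c), choose(a, c))
# a beats b and b beats c (small amount gaps, so the earlier option wins), yet
# c beats a (the 80-unit gap clears the threshold): an intransitive cycle.
```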


Journal ArticleDOI
TL;DR: In this paper, the authors extended Loewenstein's notions of savoring and dread to the domain of uncertainty and found that positively skewed gambles were most attractive and associated with the highest tolerance for delayed resolution.
Abstract: This study extends Loewenstein's (1987) notions of savoring and dread to the domain of uncertainty. Measures of attractiveness and of willingness to delay the resolution of uncertainty were obtained for 16 two-outcome gambles with expected value of $1000 and for reflected versions of the same gambles. In both sets, the correlation between the means of the two measures was almost perfect. Positively skewed gambles were most attractive, and associated with the highest tolerance for delayed resolution. A measure of willingness to pay for early resolution showed a similar pattern. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors examine multi-period observation and selection problems with an unknown number of applicants, in which applicants are interviewed one at a time in each period, recall of applicants who were interviewed and rejected is not possible, the decision to accept or reject is based on relative ranks, and the objective is to maximize the probability of accepting the top-ranked applicant.
Abstract: We examine multi-period observation and selection problems with an unknown number of applicants in which applicants are interviewed one at a time on each period, recall of applicants that were interviewed and rejected is not possible, the decision on each period to reject or accept an applicant is based on relative ranks, and the objective is to maximize the probability of accepting the top-ranked applicant. We propose and then assess the efficiency of three descriptive models by simulation, and then test them competitively in a computer-controlled experiment. A cutoff decision model, in which the first r−1 applicants are rejected and then the first applicant who is ranked higher than all previously observed applicants is accepted, outperformed the other two models. Compared with the optimal policy, subjects stop the search too early. Their behavior is accounted for by a cutoff model that postulates an endogenous cost of search. Copyright © 2000 John Wiley & Sons, Ltd.
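A simulation sketch of the cutoff decision model described above. For simplicity the number of applicants n is fixed and known here (the paper's setting has an unknown number); all names and values are illustrative.

```python
# Cutoff rule: reject the first r - 1 applicants, then accept the first
# applicant who beats everyone seen so far.

import random

def cutoff_search(ranks, r):
    """ranks: applicants in interview order (higher = better); returns pick."""
    best_seen = max(ranks[: r - 1], default=float("-inf"))
    for applicant in ranks[r - 1:]:
        if applicant > best_seen:
            return applicant
    return ranks[-1]  # no one beat the learning sample: stuck with the last

n, trials = 20, 20_000
for r in (2, 8, 15):
    wins = sum(
        cutoff_search(random.sample(range(n), n), r) == n - 1
        for _ in range(trials)
    )
    print(f"r = {r:>2}: P(accept top applicant) ~ {wins / trials:.3f}")
# The success rate peaks near r - 1 = n/e (r = 8 here); cutting off earlier,
# as subjects tended to do, lowers the hit rate.
```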

Journal ArticleDOI
TL;DR: In this article, the authors derived the delay and probability discount functions for different amounts of money, available immediately and certainly, relative to a standard amount ranging across groups from $1,000,000 to $10.
Abstract: Subjects drew lines proportional in length to their subjective valuation of various amounts of money, available immediately and certainly, relative to a standard amount ranging across groups from $1,000,000 to $10. They also drew lines proportional to their subjective valuation of standard amounts with delays ranging from 1 day to 50 years and with probabilities ranging from 1/10 to 1/10,000,000. Amounts of certain-immediate money equivalent (in terms of drawn line length) to delayed or probabilistic money were determined. The delay and probability discount functions thereby obtained were hyperbolic in form, rather than exponential, consistent with previous findings. Large money amounts were valued higher when they were delayed by a day than when available immediately. Steepness of delay discounting was not systematically related to standard money amount but probabilistic discounting was steeper for higher standard amounts than for lower amounts. Some of these results differ from those obtained with choice procedures. Possible reasons for the differences are discussed. Copyright © 2000 John Wiley & Sons, Ltd.
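For reference, the two functional forms at issue, in the shapes commonly fitted to such data: exponential discounting V = A·e^(−kD) versus hyperbolic discounting V = A/(1 + kD). The parameter value is illustrative only.

```python
# Exponential versus hyperbolic discount functions; k is illustrative.

import math

def exponential(amount, delay, k=0.05):
    return amount * math.exp(-k * delay)

def hyperbolic(amount, delay, k=0.05):
    return amount / (1 + k * delay)

for delay in (1, 10, 100, 1000):   # e.g. days
    print(f"D={delay:>5}: exp={exponential(100, delay):7.2f}  "
          f"hyp={hyperbolic(100, delay):7.2f}")
# The hyperbola falls steeply at short delays but retains value at long ones,
# the shape the line-drawing data supported.
```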

Journal ArticleDOI
TL;DR: In this article, the desired wealth principle and two mental accounting rules (for a mixed gain, e.g. a $100 gain and a $50 loss, and a mixed loss, e.g. a $100 loss and a $50 gain) were found to vary across types of decision maker and frame, depending on the thoughtfulness of the decision maker.
Abstract: Perhaps the most fundamental principle of decision theory is that more money is preferred to less: the principle of desired wealth. Based on this and other principles such as reference dependence and loss aversion, researchers have derived and demonstrated mental accounting (MA) rules for multiple outcome situations. Experiment 1 tested the invariance of the desired wealth principle and two mental accounting rules (mixed gain, e.g. $100 gain and a $50 loss; mixed loss, e.g. $100 loss and a $50 gain) across types of decision maker and frame. The desired wealth principle and the MA rule for mixed gains were found to vary depending upon (1) the thoughtfulness of the decision maker (need for cognition, NC), and (2) the frame used to describe gains and losses (e.g. a gain of $x versus a gain of y%). The MA rule for mixed losses, however, was found to be immune to framing effects, even among people who are generally less thoughtful. The differential processing of gains and losses across frames (dollar versus percentage) and individuals (less versus more thoughtful) was tested further in Experiment 2 where evaluations of mixed losses were made at the level of the gestalt as well as the constituent (the gain and the loss being evaluated separately). Framing effects were evidenced only among subjects lower in NC and only when the constituent gain was evaluated. Evaluations of the overall mixed loss and the constituent loss were comparable across situation and individual, suggesting that losses motivate greater processing among people otherwise inclined toward cognitive miserliness. Copyright © 2000 John Wiley & Sons, Ltd.
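A sketch of how mental accounting predictions for mixed outcomes can be derived from a reference-dependent, loss-averse value function. The power form and coefficients are standard illustrative choices from the prospect-theory literature, not the paper's own estimates.

```python
# Reference-dependent, loss-averse value function and the integrate-versus-
# segregate question for a mixed gain. Coefficients are illustrative.

def value(x, loss_aversion=2.25, curvature=0.88):
    """Prospect-theory-style value of a gain/loss x relative to the reference."""
    if x >= 0:
        return x ** curvature
    return -loss_aversion * ((-x) ** curvature)

# Mixed gain ($100 gain, $50 loss): one net +$50 account, or two accounts?
print(value(100 - 50))           # integrated:  v(+50)  ~  31.3
print(value(100) + value(-50))   # segregated:  v(+100) + v(-50)  ~ -12.8
# Integration yields the higher value, so the mental accounting rule for mixed
# gains says to evaluate them as a single net gain.
```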

Journal ArticleDOI
TL;DR: In this paper, the authors manipulated both the surface and depth task characteristics that are likely to induce cognitive modes at different points along the cognitive continuum and found that the surface manipulation influenced cognitive mode in the predicted direction with an iconic information display inducing a more intuitive mode than a numeric information display.
Abstract: Cognitive Continuum Theory (CCT) is an adaptive theory of human judgement and posits a continuum of cognitive modes anchored by intuition and analysis. The theory specifies surface and depth task characteristics that are likely to induce cognitive modes at different points along the cognitive continuum. The current study manipulated both the surface (information representation) and depth (task structure) characteristics of a multiple-cue integration threat assessment task. The surface manipulation influenced cognitive mode in the predicted direction, with an iconic information display inducing a more intuitive mode than a numeric information display. The depth manipulation influenced cognitive mode in a pattern not predicted by CCT. Results indicate this difference was due to a combination of task complexity and participant satisficing. As predicted, analysis produced a more leptokurtic error distribution than intuition. Task achievement was a function of the extent to which participants demonstrated an analytic cognitive mode index, and not a function of correspondence, as predicted. This difference was likely due to the quantitative nature of the task manipulations. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The authors used an original sample of 146 business managers to examine the rationality of choices with respect to deferred lotteries and used a new empirical methodology to explicitly estimate implicit rates of time preference.
Abstract: This paper uses an original sample of 146 business managers to examine the rationality of choices with respect to deferred lotteries. Using a new empirical methodology, it explicitly estimates implicit rates of time preference with respect to these deferred gambles. The estimated discount rate decreases with the time horizon of the gamble, which is consistent with violations observed in discounted utility contexts. Cigarette smokers exhibit lower estimated discount rates in this context, which is contrary to many popular hypotheses about the economic causes of smoking behavior. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: The authors found that positive reasons are given more often than negative ones, even for p values below 0.5, especially when the probability is higher than expected, and the target outcome is non-normal, undesirable, and phrased as a negation.
Abstract: Verbal phrases denoting uncertainty are usually held to be more vague than numerical probability statements. They are, however, directionally more precise, in the sense that they are either positive, suggesting the occurrence of a target outcome, or negative, drawing attention to its non-occurrence. A numerical probability will, in contrast, sometimes be perceived as positive and sometimes as negative. When asked to complete sentences such as ‘The operation has a 30% chance of success, because’ some people will give reasons for success (‘the doctors are expert surgeons’), whereas others will give reasons for failure (‘it is a difficult operation’). It is shown in two experiments that positive reasons are given more often than negative ones, even for p values below 0.5, especially when the probability is higher than expected, and the target outcome is non-normal, undesirable, and phrased as a negation. We conclude that the directionality of numerical probabilities (as opposed to verbal phrases) is context-dependent, but biased towards a positive interpretation. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed a large sample of recent UK horseraces on decision-making behavior within the parimutuel and the parallel bookmaker-based betting markets.
Abstract: This paper offers new insights into the behavioural origins of the favourite-longshot bias, an established feature of betting markets whereby longshots win less often than the subjective probabilities imply and favourites more often. A number of alternative explanations have been offered for this phenomenon, but the main debate focuses on whether it is caused by the behaviour of those supplying betting markets (bookmakers) or of the demand-side agents in these markets (bettors). This study analyses a new data source which offers detailed information for a large sample of recent UK horseraces on decision-making behaviour within the parimutuel and the parallel bookmaker-based betting markets. The results offer strong evidence for the existence of the favourite-longshot bias in bookmaker-based markets, with a corresponding absence of such an effect in the parimutuel case. These results offer support for the view that the origins of the favourite-longshot bias lie principally in the decisions of bookmakers rather than in the decisions of bettors.
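A sketch of how the favourite-longshot bias is typically measured: odds are converted to implied win probabilities and compared with observed win rates. All numbers below are invented, and the bookmaker's overround is ignored for simplicity.

```python
# Implied probabilities from fractional odds versus (invented) win rates.

def implied_probability(odds_against):
    """Fractional odds of 'x to 1 against' imply a win probability 1 / (x + 1)."""
    return 1 / (odds_against + 1)

# (odds against, hypothetical observed win rate)
for odds, win_rate in [(0.5, 0.70), (5, 0.15), (50, 0.008)]:
    p = implied_probability(odds)
    print(f"{odds:>4}/1: implied p = {p:.3f}, observed = {win_rate:.3f}")
# Favourites (short odds) winning MORE often than the implied probability and
# longshots LESS often is the favourite-longshot bias pattern.
```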

Journal ArticleDOI
TL;DR: Kip Viscusi rejects the author's critique of his work and restates his view that individuals greatly overestimate the risks from lung cancer and other diseases caused by smoking, but his methods are deeply flawed and his analyses, arguments, and conclusions are incorrect.
Abstract: Kip Viscusi rejects my critique of his work and restates his view that individuals greatly overestimate the risks from lung cancer and other diseases caused by smoking. But Viscusi's methods are deeply flawed and his analyses, arguments, and conclusions are incorrect. First, he neglects to take into account optimism bias, which leads smokers to believe that they personally are at less risk than other smokers. Second, he fails to demonstrate that smokers appreciate the cumulative nature of smoking risks and the power of addiction that makes it extraordinarily difficult for them to stop smoking when their preferences change and they desire to quit. Third, the quantitative judgments of risk that Viscusi relies upon are so highly determined by methodological biases as to be completely unreliable. A substantial body of evidence supports the conclusion, contrary to Viscusi's, that many young people do not adequately appreciate the risks of smoking. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this article, a model of social utility function is presented, which predicts that social actors should reciprocate costs and benefits they receive, even when there are costs in conforming to the norm.
Abstract: In this contribution the norm of reciprocity is defined as a basic internal motivation. Using formal tools of game theory, a model of social utility function is presented. The reciprocity model predicts that social actors should reciprocate costs and benefits they receive, even when there are costs in conforming to the norm. Hypotheses about actors' behavior, expectations and evaluations are derived from the model. The hypotheses were tested in an experimental situation, the reciprocity game, consisting of a prisoner's dilemma game (PD) followed by a dictator game (DG). The sample was composed of 74 Italian undergraduate students. In line with the model's predictions, the experimental results showed that participants reciprocate the behavior of the opponent in the PD. In the DG, if the opponent cooperated, participants gave back an almost equal share, whereas if the opponent defected, participants gave a minimal amount. These reciprocity effects are modulated by individual differences in the concern for reciprocity. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
Robert H. Ashton
TL;DR: This article analyzed existing research on the test-retest reliability of human judgment, i.e., the extent to which a judge makes identical judgments when presented with identical stimuli on two occasions.
Abstract: This paper analyzes existing research on the test–retest reliability of human judgment, i.e. the extent to which a judge makes identical judgments when presented with identical stimuli on two occasions. Only research involving professional judges who make experimental judgments in a reasonable analog of their everyday experience is included. Studies of both internal consistency reliability and temporal stability reliability are analyzed (where the former refers to the inclusion of repeat stimuli in the same experimental session, and the latter refers to the repeating of the experimental task from a few days to several months later). It is found that (1) the test–retest reliability literature is concentrated in four substantive judgment areas (medicine/psychology, meteorology, human resources management, and business), (2) the literature is extremely variable in terms of research approach/design, the determinants or correlates of test–retest reliability that have been studied, and the quality of the execution and analysis, and (3) mean test–retest reliability differs across both substantive judgment areas and the internal consistency versus temporal stability distinction. An inescapable conclusion from the analysis is that our knowledge of this fundamental property of human judgment is quite meager. Therefore, the paper concludes with suggestions about future research that would address test–retest reliability more systematically. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: This paper found that five- to 10-year-old children showed a disordinal violation of utility because they averaged the part worths of duplex gambles rather than adding them, as adults do and as normatively prescribed.
Abstract: Violations of utility are often attributed to people's differential reactions to risk versus certainty or uncertainty, or more generally to the way that people perceive outcomes and consequences. However, a core feature of utility is additivity, and violations may also occur because of averaging effects. Averaging is pervasive in intuitive riskless judgement throughout many domains, as shown with Anderson's Information Integration approach. The present study extends these findings to judgement under risk. Five- to 10-year-old children showed a disordinal violation of utility because they averaged the part worths of duplex gambles rather than adding them, as adults do, and as normatively prescribed. Thus adults realized that two prizes are better than one, but children preferred a high chance to win one prize to the same gamble plus an additional small chance to win a second prize. This result suggests that an additive operator may not be a natural component of the intuitive psychological concept of expected value that emerges in childhood. The implications of a developmental perspective for the study of judgement and decision are discussed. Copyright © 2000 John Wiley & Sons, Ltd.
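A toy contrast between the adding and averaging integration rules applied to a duplex gamble's part worths; the part-worth numbers are invented.

```python
# Adding versus averaging the part worths of a duplex gamble.

def adding(part_worths):
    return sum(part_worths)              # normative: a second prize adds value

def averaging(part_worths):
    return sum(part_worths) / len(part_worths)

one_prize = [8.0]            # high chance to win one prize
two_prizes = [8.0, 2.0]      # same gamble plus a small chance at a second prize

print(adding(one_prize), adding(two_prizes))        # 8.0 vs 10.0: add the prize
print(averaging(one_prize), averaging(two_prizes))  # 8.0 vs  5.0: 'worse' deal
# Averaging predicts the children's disordinal preference for the single-prize
# gamble; adding predicts the adult (and normative) preference.
```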

Journal ArticleDOI
TL;DR: The discounted utility (DU) model discussed in this paper assumes that individuals discount future events at a constant rate, so that the value of an experience extended over time is obtained by discounting each of its momentary utilities back to the present at that rate.
Abstract: This special issue showcases cutting-edge contributions to the field of intertemporal choice, which refers to tradeoffs between costs and benefits occurring at different points in time. Although intertemporal choice is a central aspect of decision theory and decision research (it is difficult to think of any important decision that lacks an intertemporal component), research on it emerged and flowered only in the last two decades. Until the 1980s, the discounted utility (DU) model, which was first proposed by Samuelson in 1937 (see Loewenstein, 1992, for a historical discussion), was adopted uncritically by economists and decision theorists. Discounted Utility theory assumes that individuals discount future events at a constant rate, so that the value of an experience extended over time is given by $U = \sum_{t=0}^{T} \delta^{t} u(c_t)$, where $u(c_t)$ is the utility of the outcome in period $t$ and $\delta$ is the constant per-period discount factor.

Journal ArticleDOI
TL;DR: The authors reply that (1) the normative argument is irrelevant to their descriptive hypothesis and, as a normative claim, is valid only for a specific situation, (2) the descriptive argument is correct but consistent with their review, and (3) the third claim is incorrect.
Abstract: Recently we proposed an explanation for the apparently inconsistent result that people sometimes take account of sample size and sometimes do not: Human intuitions conform to the ‘empirical law of large numbers,’ which helps to solve what we called ‘frequency distribution tasks’ but not ‘sampling distribution tasks’ (Sedlmeier and Gigerenzer, 1997). Keren and Lewis (2000) do not provide an alternative explanation but present a three-pronged criticism of ours: (1) the normative argument that a larger sample size will not invariably provide more reliable estimates, (2) the descriptive argument that under certain circumstances, people are insensitive to sample size, and (3) the claim that sampling distributions are essential for solving both frequency and sampling distribution tasks. We argue that (1) the normative argument is irrelevant for our descriptive hypothesis and, as a normative claim, only valid for a specific situation, (2) the descriptive argument is correct but consistent with our review, and (3) is incorrect. Bernoulli’s assertion that the intuitions of ‘even the stupidest man’ follow the empirical law of large numbers may have been rather on the optimistic side, but in general the intuitions of the vast majority of people do. Copyright © 2000 John Wiley & Sons, Ltd.
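A simulation sketch of the 'empirical law of large numbers' at issue: proportions from larger samples cluster more tightly around the population value. All numbers are illustrative.

```python
# Spread of sample proportions as a function of sample size.

import random

def sample_proportion(p, n):
    return sum(random.random() < p for _ in range(n)) / n

p, trials = 0.6, 2_000
for n in (10, 100, 1_000):
    props = [sample_proportion(p, n) for _ in range(trials)]
    sd = (sum((x - p) ** 2 for x in props) / trials) ** 0.5
    print(f"n = {n:>5}: SD of sample proportion ~ {sd:.3f}")
# The SD shrinks roughly as 1/sqrt(n): the regularity a frequency distribution
# task taps. A sampling distribution task additionally requires reasoning about
# this whole distribution of proportions, which is where intuitions fail.
```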

Journal ArticleDOI
TL;DR: In this paper, the author responds to Paul Slovic's critique of existing evidence on smoking risks and to Slovic's own evidence regarding risk perceptions among high school students, and presents new smoking risk perception results.
Abstract: Quantitative measures of smoking risks indicate substantial overassessment of these hazards. The qualitative risk measures developed by Slovic have no implications for either the direction or degree of perceptional bias. Qualitative risk questions also suffer from the problem that respondents differ in terms of their reference point for what is risky and what agreement with qualitative risk statements implies about their objective risk beliefs. Meaningful objective risk measures imply that people overassess smoking risks. Copyright © 2000 John Wiley & Sons, Ltd. Smoking risks are among the largest health risks one might potentially face. An increasingly pertinent question for policy is whether people understand these risks. In a quest to further such understanding, for over three decades the US government has required hazard warning labels on cigarettes, has required the inclusion of these warnings on cigarette advertising, and has restricted cigarette advertising in certain media. In addition, the government has issued annual reports of the US Surgeon General to highlight different hazards of smoking, and there has been widespread attention to smoking and its associated health consequences surrounding the entire public debate over smoking. A reasonable hypothesis that one might develop based on the literature dealing with risk perception is that the overwhelming amount of information pertaining to smoking hazards would lead people to overestimate the risk. Indeed, evidence suggests that the mortality risks that have received the greatest media coverage tend to be the most overestimated (see for example Combs and Slovic, 1979). Rather than extrapolate to the smoking context from the literature, with its potentially diverse implications, a more reliable approach to assessing the hazards of smoking is to focus directly on how people perceive these risks. Assessing these risk perceptions is an effort in which I have been engaged for the past decade. Here I will focus on Paul Slovic's critique of existing evidence on smoking risks as well as his own evidence regarding perceptions of risks by high school students. Slovic's paper consists of both a critique of my work as well as a presentation of new smoking risk perception results. His criticisms are that my risk measures do not convey the severity of the outcome, do not consider optimism bias, fail to consider the repetitive nature of cigarette smoking, and fail to consider the risk of addiction. In the discussion below I will show that these criticisms are not well founded. Moreover, they are even more pertinent to his risk measures than to those that I have developed. Perhaps most fundamentally, I will also show that the qualitative risk measures used by Slovic have no empirical content. These measures are not comparable

Journal ArticleDOI
TL;DR: In this article, the authors compare hypothesis-testing behavior with risk-taking behavior, and propose a framework to determine whether a suitable test for a given hypothesis requires making a preposterior analysis of two aspects of such a test: the probability of obtaining supporting evidence and the evidential value of this evidence.
Abstract: In this paper hypothesis-testing behaviour is compared to risk-taking behaviour. It is proposed that choosing a suitable test for a given hypothesis requires making a preposterior analysis of two aspects of such a test: the probability of obtaining supporting evidence and the evidential value of this evidence. This consideration resembles the one a gambler makes when choosing among bets, each having a probability of winning and an amount to be won. A confirmatory testing strategy can be defined within this framework as a strategy directed at maximizing either the probability or the value of a confirming outcome. Previous theories on testing behaviour have focused on the human tendency to maximize the probability of a confirming outcome. In this paper, two experiments are presented in which participants tend to maximize the confirming value of the test outcome. Motivational factors enhance this tendency dependent on the context of the testing situation. Both this result and the framework are discussed in relation to other studies in the field of testing behaviour. Copyright © 2000 John Wiley & Sons, Ltd.
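A sketch of the proposed preposterior analysis, treating a test like a bet: each test combines a probability of yielding a confirming outcome with an evidential value for that outcome, measured here (as one possible formalization) by the log likelihood ratio. All numbers are illustrative.

```python
# Preposterior analysis of a test: probability of a confirming outcome and
# the Bayesian evidential value of that outcome.

import math

def preposterior(prior, p_pos_given_h, p_pos_given_not_h):
    p_confirm = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h
    value = math.log(p_pos_given_h / p_pos_given_not_h)  # log likelihood ratio
    return p_confirm, value

# Two tests of the same hypothesis (prior = 0.5):
easy = preposterior(0.5, 0.90, 0.70)   # confirmation likely, weak evidence
hard = preposterior(0.5, 0.40, 0.05)   # confirmation unlikely, strong evidence
print(f"easy test: P(confirm) = {easy[0]:.2f}, value = {easy[1]:.2f}")
print(f"hard test: P(confirm) = {hard[0]:.2f}, value = {hard[1]:.2f}")
# Maximizing P(confirm) favours the first test; maximizing the confirming
# VALUE, the tendency found in these experiments, favours the second.
```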

Journal ArticleDOI
TL;DR: In this study, students judged the likelihood that an event was produced by various causes, in terms of probability, relative frequency or absolute frequency, on a full or a pruned list of causes; when they had little personal experience of the event (causes of death), the pruning bias was smaller with relative frequencies than with absolute frequencies or probabilities.
Abstract: Biases in probabilistic reasoning are affected by alterations in the presentation of judgment tasks. In our experiments, students made likelihood judgments that an event was produced by various causes. These judgments were made in terms of probability, relative frequency or absolute frequency on a full or a pruned list of causes. When they had little personal experience of the event (causes of death), the pruning bias was smaller with relative frequencies than with absolute frequencies or probabilities. When they had more personal experience of the event (missing a lecture), the bias was less with both types of frequency than with probability but still lowest with relative frequency. We suggest that likelihood information is usually stored as relative frequencies when it has been obtained from public sources but that it is based on event counts when it is derived from personal experience. Copyright © 2000 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that investors frequently utilize existing knowledge as a basis for generating predictions about a company's future, using two different forms of analogical reasoning: structural correspondence between a novel company and an existing schema and literal similarity reasoning.
Abstract: Previous work on investor decision making has focused almost exclusively on information specific to the company being judged. Consequently, every decision is viewed as a novel event, disconnected from the investor's existing knowledge. In this study, the analogical reasoning literature provides the theoretical support for arguing that investors frequently utilize existing knowledge as a basis for generating predictions about a company's future. The specific proposal is that investors transfer their existing knowledge via two different forms of analogical reasoning. The first, relational reasoning, is based primarily on structural correspondence between a novel company and an existing schema. The second, literal similarity reasoning, is based primarily on surface correspondence of a novel company and a previously encountered company. Our theoretical framework is tested in a study in which experienced investors predict the outcome of a novel company's strategy after reading about the experiences of other companies who implemented a similar strategy. The results are consistent with the occurrence of both relational and literal similarity reasoning, with relational reasoning emerging as the dominant approach to generating investors' predictions. Copyright © 2000 John Wiley & Sons, Ltd.