
Psychological Review, 1996, Vol. 103, No. 3, 582-591
Copyright 1996 by the American Psychological Association, Inc. 0033-295X/96/$3.00

THEORETICAL NOTES

On the Reality of Cognitive Illusions

Daniel Kahneman, Princeton University
Amos Tversky, Stanford University
The study of heuristics and biases in judgment has been criticized in several publications by G. Gigerenzer, who argues that "biases are not biases" and "heuristics are meant to explain what does not exist" (1991, p. 102). This article responds to Gigerenzer's critique and shows that it misrepresents the authors' theoretical position and ignores critical evidence. Contrary to Gigerenzer's central empirical claim, judgments of frequency—not only subjective probabilities—are susceptible to large and systematic biases. A postscript responds to Gigerenzer's (1996) reply.
Author Note: Daniel Kahneman, Department of Psychology and Woodrow Wilson School of Public and International Affairs, Princeton University; Amos Tversky, Department of Psychology, Stanford University. Amos Tversky died on June 2, 1996. This work was supported by National Science Foundation Grants SBR-9496347 and SBR-940684 and by National Institute of Mental Health Grant MH53046. Correspondence concerning this article should be addressed to Daniel Kahneman, Woodrow Wilson School of Public and International Affairs, Prospect Street, Princeton University, Princeton, New Jersey 08544-1013. Electronic mail may be sent via Internet to Kahneman@pucc.princeton.edu.

Some time ago we introduced a program of research on judgment under uncertainty, which has come to be known as the heuristics and biases approach (Kahneman, Slovic, & Tversky, 1982; Tversky & Kahneman, 1974). We suggested that intuitive predictions and judgments are often mediated by a small number of distinctive mental operations, which we called judgmental heuristics. For example, a judgment of the prevalence of suicide in a community is likely to be mediated by the ease with which instances come to mind; this is an example of the availability heuristic. And a politician of erect bearing walking briskly to the podium is likely to be seen as strong and decisive; this is an example of judgment by representativeness.

These heuristics, we argued, are often useful but they sometimes lead to characteristic errors or biases, which we and others have studied in some detail. There are several reasons for studying judgmental or perceptual biases. First, they are of interest in their own right. Second, they can have practical implications (e.g., to clinical judgment or intuitive forecasting). Third, the study of systematic error can illuminate the psychological processes that underlie perception and judgment. Indeed, a common method to demonstrate that a particular variable affects a judgment is to establish a correlation between that variable and the judgment, holding the objective criterion constant. For example, the effect of aerial perspective on apparent distance is confirmed by the observation that the same mountain appears closer on a clear than on a hazy day. Similarly, the role of availability in frequency judgments can be demonstrated by comparing two classes that are equal in objective frequency but differ in the memorability of their instances.
The main goal of this research was to understand the cognitive processes that produce both valid and invalid judgments. However, it soon became apparent that "although errors of judgments are but a method by which some cognitive processes are studied, the method has become a significant part of the message" (Kahneman & Tversky, 1982a, p. 124). The methodological focus on errors and the role of judgmental biases in discussions of human rationality have evoked the criticism that our research portrays the human mind in an overly negative light (see, e.g., Cohen, 1981; Einhorn & Hogarth, 1981; Lopes, 1991). The present article is a response to the latest round in this controversy.

In a series of articles and chapters Gigerenzer (1991, 1993, 1994; Gigerenzer, Hell, & Blank, 1988; Gigerenzer & Murray, 1987, chap. 5) has vigorously attacked the heuristics and biases approach to judgment under uncertainty. Gigerenzer's critique consists of a conceptual argument against our use of the term "bias," and an empirical claim about the "disappearance" of the patterns of judgment that we had documented.

The conceptual argument against the notion of judgmental bias is that there is a disagreement among statisticians and philosophers about the interpretation of probability. Proponents of the Bayesian school interpret probability as a subjective measure of belief. They allow the assignment of probabilities to unique events (e.g., the result of the next Super Bowl, or the outcome of a single toss of a coin) and require these assignments to obey the probability axioms. Frequentists, on the other hand, interpret probability as long-run relative frequency and refuse to assign probability to unique events. Gigerenzer argues that because the concept of subjective probability is controversial in statistics, there is no normative basis for diagnosing such judgments as wrong or biased. Consequently, "biases are not biases" (1991, p. 86), and "heuristics are meant to explain what does not exist" (1991, p. 102).

On the empirical side, Gigerenzer argues that "allegedly stable" errors of judgments can be "made to disappear" by two simple manipulations: asking questions in terms of frequencies rather than in terms of probabilities and emphasizing the role of random sampling. He illustrates these claims by a critical discussion of three judgmental biases: base-rate neglect, conjunction errors, and overconfidence. He suggests that the same methods can be used to make other cognitive illusions disappear (p. 300). Gigerenzer concludes that the heuristics and biases approach is a "conceptual dead end" that "has not given us much purchase in understanding judgment under uncertainty" (1991, p. 103).
This article examines the validity of Gigerenzer's critique of heuristics and biases research, which has focused primarily on our work. We make no attempt here to evaluate the achievements and the limitations of several decades of research on heuristics and biases, by ourselves and by others. The next section assesses the accuracy of Gigerenzer's presentation. The following three sections address, in turn, the three phenomena targeted in his critique. The final section provides a summary and discusses the relation between degree of belief and assessments of frequency.
Scope and Accuracy
It is not uncommon in academic debates that a critic's description of the opponent's ideas and findings involves some loss of fidelity. This is a fact of life that targets of criticism should learn to expect, even if they do not enjoy it. In some exceptional cases, however, the fidelity of the presentation is so low that readers may be misled about the real issues under discussion. In our view, Gigerenzer's critique of the heuristics and biases program is one of these cases. The main goal of the present reply is to correct his misleading description of our work and his tendentious presentation of the evidence. The correction is needed to distinguish genuine disagreements from objections to positions we do not hold. In this section we identify some of the major misrepresentations in Gigerenzer's critique.
The scope of the research program is a case in point. The reader of Gigerenzer's critique is invited to believe that the heuristics and biases approach was exclusively concerned with biases in assessments of subjective probability,¹ to which Gigerenzer has had a philosophical objection. However, much of our research has been concerned with tasks to which his objection does not apply. Our 1974 (Tversky & Kahneman) Science article, for example, discussed twelve biases. Only two (insensitivity to prior probability of outcomes and overconfidence in subjective probability distributions) involve subjective probability; the other ten biases do not. These include the effect of arbitrary anchors on estimates of quantities, availability biases in judgment of frequency, illusory correlation, nonregressive prediction, and misconceptions of randomness. These findings are not mentioned in Gigerenzer's account of heuristics and biases. Inexplicably, he dismisses the entire body of research because of a debatable philosophical objection to two of twelve phenomena.
The failure to address most of our research has allowed Gigerenzer to offer an erroneous characterization of our normative position as "narrowly Bayesian." Contrary to this description, the normative standards to which we have compared intuitive judgments have been eclectic and often objective. Thus, we showed that judgments of frequency and estimates of numerical quantities deviate systematically from measured objective values, that estimates of sampling outcomes depart from the values obtained by elementary combinatorial analysis and sampling theory, and that intuitive numerical predictions violate the principle of regression.
Perhaps the most serious misrepresentation of our position concerns the characterization of judgmental heuristics as "independent of context and content" (Gigerenzer et al., 1988) and insensitive to problem representation (Gigerenzer, 1993). Gigerenzer also charges that our research "has consistently neglected Feynman's (1967) insight that mathematically equivalent information formats need not be psychologically equivalent" (Gigerenzer & Hoffrage, 1995, p. 697). Nothing could be further from the truth: The recognition that different framings of the same problem of decision or judgment can give rise to different mental processes has been a hallmark of our approach in both domains.
The peculiar notion of heuristics as insensitive to problem representation was presumably introduced by Gigerenzer because it could be discredited, for example, by demonstrations that some problems are difficult in one representation (probability), but easier in another (frequency). However, the assumption that heuristics are independent of content, task, and representation is alien to our position, as is the idea that different representations of a problem will be approached in the same way. In discussing this point we wrote,
Many adults do not have generally valid intuitions corresponding to the law of large numbers, the role of base rates in Bayesian inference, or the principle of regressive prediction. But it is simply not the case that every problem to which these rules are relevant will be answered incorrectly, or that the rules cannot appear compelling in particular contexts. The properties that make formally equivalent problems easy or hard to solve appear to be related to the mental models, or schemas, that the problems evoke (Kahneman & Tversky, 1982a, pp. 129-130).
We believe that Gigerenzer agrees with our position, and we wonder why it is misrepresented in his writings.
Although we were not able to offer a comprehensive treatment of the process by which different representations and different tasks evoke different heuristics, we investigated this question in several studies. For example, we showed that graphic and verbal representations of a binomial process yield qualitatively different patterns in judgments of frequency (Tversky & Kahneman, 1973), we argued that the use of base-rate data is enhanced when a problem is framed as repetitive rather than unique (Kahneman & Tversky, 1979), and we observed that the impact of base-rate data is increased when these data are given a causal interpretation (Tversky & Kahneman, 1980; see also Ajzen, 1977). We also demonstrated that a representation in terms of absolute frequencies largely eliminated conjunction errors (Tversky & Kahneman, 1983)—a finding that Gigerenzer appears to have appropriated.
¹ For the purposes of the present discussion, we use "subjective probabilities" to refer to probability judgments about unique events.

The major empirical claim in Gigerenzer's critique, that cognitive illusions "disappear" when people assess frequencies rather than subjective probabilities, also rests on a surprisingly selective reading of the evidence. Most of our early work on availability biases was concerned with judgments of frequency (Tversky & Kahneman, 1973), and we illustrated anchoring by inducing errors in judgments of the frequency of African nations in the United Nations (Tversky & Kahneman, 1974). Systematic biases in judgments of frequency have been observed in numerous other studies (e.g., Slovic, Fischhoff, & Lichtenstein, 1982).
These examples should suffice to demonstrate why, in our view, Gigerenzer's reports on our work and on the evidence cannot be taken at face value. Further examples can be found by comparing Gigerenzer's writings (e.g., 1991, 1993, 1994) with our own (in particular, Kahneman & Tversky, 1982a, 1982b; Tversky & Kahneman, 1974, 1983). The position described by Gigerenzer is indeed easy to refute, but it bears little resemblance to ours. It is useful to remember that the refutation of a caricature can be no more than a caricature of refutation.
In the next sections we discuss the three phenomena that Gigerenzer used to illustrate the disappearance of cognitive illusions. In each case we briefly review the original work, then examine his critique in light of the experimental evidence.
Base-Rate Neglect
Intuitive predictions and judgments of probability, we proposed, are often based on the relation of similarity or representativeness between the evidence and possible outcomes. This concept was characterized as follows:
Representativeness is an assessment of the degree of correspondence between a sample and a population, an instance and a category, an act and an actor, or more generally between an outcome and a model. The model may refer to a person, a coin, or the world economy, and the respective outcomes could be marital status, a sequence of heads and tails, or the current price of gold. Representativeness can be investigated empirically by asking people, for example, which of two sequences of heads and tails is more representative of a fair coin or which of two professions is more representative of a given personality (Tversky & Kahneman, 1983, pp. 295-296).
The relation of correspondence or similarity between events, we reasoned, is largely independent of their frequency. Consequently, the base rates of outcomes are likely to have little impact on predictions that are based primarily on similarity or representativeness. We have used the term base-rate neglect to describe situations in which a base rate that is known to the subject, at least approximately, is ignored or significantly underweighted. We tested this hypothesis in several experimental paradigms. Gigerenzer's critique of base-rate neglect focuses on a particular design, in which base-rate information is explicitly provided and experimentally manipulated.
In our original experiment, participants read brief descriptions of different individuals, allegedly sampled at random from a group consisting of 30 engineers and 70 lawyers (or 30 lawyers and 70 engineers). Participants assessed the probability that each description referred to an engineer rather than to a lawyer. The effect of the manipulation of base rate in this experiment was statistically significant, but small. Subsequent studies have identified several factors that enhance the use of base-rate information in this paradigm: presenting the base-rate data after the personality description (Krosnick, Li, & Lehman, 1990), varying base rate across trials (Bar-Hillel & Fischhoff, 1981), and encouraging participants to think as statisticians (Schwarz, Strack, Hilton, & Naderer, 1991). In the same vein, Gigerenzer, Hell, and Blank (1988) reported that repeated random sampling of descriptions increased the use of base rates. The impact of base-rate data was larger in these experiments than in our original study, but less than expected according to Bayes' rule. A fair summary of the evidence is that explicitly stated base rates are generally underweighted but not ignored (see, e.g., Bar-Hillel, 1983).
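To make the Bayesian benchmark concrete, here is a purely illustrative calculation; the likelihood ratio is assumed for the example and is not taken from the original study. In odds form, Bayes' rule states that posterior odds equal prior odds times the likelihood ratio: P(engineer | description) / P(lawyer | description) = [P(engineer) / P(lawyer)] × [P(description | engineer) / P(description | lawyer)]. If a given description is twice as likely to describe an engineer as a lawyer (likelihood ratio = 2), the 30/70 base rate yields posterior odds of (30/70) × 2 = 6/7, a probability of about .46 that the person is an engineer, whereas the 70/30 base rate yields (70/30) × 2 = 14/3, or about .82. A Bayesian judge should therefore give markedly different answers under the two base rates; the small effect actually observed is what is meant by underweighting.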
Gigerenzer, however, reaches a different conclusion. He claims that "If one lets the subjects do the random drawing base-rate neglect disappears" (1991, p. 100). This claim is inconsistent with the data: Underweighting of base-rate was demonstrated in several studies in which participants actually drew random samples from a specified population, such as numbered balls from a bingo cage (Camerer, 1990; Grether, 1980, 1992; Griffin & Dukeshire, 1993). Even in Gigerenzer's own study, all six informative descriptions deviated from the Bayesian solution in the direction predicted by representativeness; the deviations ranged from 6.6% to 15.5% (see Gigerenzer et al., 1988, Table 1, p. 516). Griffin and Dukeshire (1993) observed substantially larger deviations in the same design. To paraphrase Mark Twain, it appears that Gigerenzer's announcement about the disappearance of base-rate neglect is premature.
Gigerenzer notes that "In many natural environments . . . frequencies must be sequentially learned through experience" (1994, p. 149) and suggests that this process allows people to adopt a more effective algorithm for assessing posterior probability. He offers a hypothetical example in which a physician in a nonliterate society learns quickly and accurately the posterior probability of a disease given the presence or absence of a symptom. Indeed, there is evidence that people and other animals often register environmental frequencies with impressive accuracy. However, Gigerenzer's speculation about what a nonliterate physician might learn from experience is not supported by existing evidence. Subjects in an experiment reported by Gluck and Bower (1988) learned to diagnose whether a patient has a rare (25%) or a common (75%) disease. For 250 trials the subjects guessed the patient's disease on the basis of a pattern of four binary symptoms, with immediate feedback. Following this learning phase, the subjects estimated the relative frequency of the rare disease, given each of the four symptoms separately. If the mind is "a frequency monitoring device," as argued by Gigerenzer (1993, p. 300), we should expect subjects to be reasonably accurate in their assessments of the relative frequencies of the diseases, given each symptom. Contrary to this naive frequentist prediction, subjects' judgments of the relative frequency of the two diseases were determined entirely by the diagnosticity of the symptom, with no regard for the base-rate frequencies of the diseases. Although the participants in this experiment encountered the common disease three times more frequently than the rare disease, they estimated the frequency of disease given symptom as if the two diseases were equally likely. Additional evidence for base-rate neglect in this paradigm has been reported by Estes, Campbell, Hatsopoulos, and Hurwitz (1989) and by Nosofsky, Kruschke, and McKinley (1992). Contrary to Gigerenzer's unqualified claim, the replacement of subjective probability judgments by estimates of relative frequency and the introduction of sequential random sampling do not provide a panacea against base-rate neglect.
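A hypothetical calculation, with assumed numbers rather than the actual parameters of the Gluck and Bower experiment, shows what frequency estimates that respected the base rate would look like. Suppose a particular symptom occurs in 60% of patients with the rare disease and in 20% of patients with the common disease. Out of 100 patients, 25 have the rare disease, of whom .60 × 25 = 15 show the symptom; 75 have the common disease, of whom .20 × 75 = 15 show the symptom. The relative frequency of the rare disease among symptomatic patients is therefore 15/(15 + 15) = .50, whereas attending only to the 60% versus 20% diagnosticity contrast would suggest a figure near 60/(60 + 20) = .75. Estimates that track the second number and ignore the 25/75 base rate are the signature of base-rate neglect in this paradigm.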
Most of the research on the use or neglect of base-rate information has focused on situations in which that information is explicitly provided or made observable to the subject. However, the most direct evidence for the role of representativeness in prediction comes from a different experimental situation, which we label the outcome-ranking paradigm. In this paradigm, subjects are given case data about a person (e.g., a personality description) and are asked to rank a set of outcomes (e.g., occupations or fields of study) by different criteria. Subjects in one condition rank the outcomes by representativeness: the degree to which the person resembles the stereotype associated with each outcome. Subjects in the second condition rank the same outcomes by the probability that they apply to the person in question. Subjects in a third group are not given case data; they rank the outcomes by their base rate in the population from which the case is said to be drawn.
The results of several experiments showed that the rankings of outcomes by representativeness and by probability were nearly identical (Kahneman & Tversky, 1973; Tversky & Kahneman, 1982). The probability ranking of outcomes did not regress toward the base-rate ranking, even when the subjects were told that the predictive validity of the personality descriptions was low. However, when subjects were asked to make predictions about an individual for whom no personality sketch was given, the probability ranking was highly correlated with the base-rate ranking. Subjects evidently consulted their knowledge of base rates in the absence of case data, but not when a personality description was provided (Kahneman & Tversky, 1973).
Gigerenzer's discussion of representativeness and base-rate neglect has largely ignored the findings obtained in the outcome-ranking paradigm. He dismisses the results of one study involving a particular case (Tom W.) on the grounds that our subjects were not given reason to believe that the target vignette had been randomly sampled (Gigerenzer, 1991, p. 96). Unaccountably, he fails to mention that identical results were obtained in a more extensive study, reported in the same article, in which the instructions explicitly referred to random sampling (Kahneman & Tversky, 1973, Table 2, p. 240).
The outcome-ranking paradigm is especially relevant to Gigerenzer's complaint that we have not provided formal definitions of representativeness or availability and that these heuristics are "largely undefined concepts and can post hoc be used to explain almost everything" (1991, p. 102). This objection misses the point that representativeness (like similarity) can be assessed experimentally; hence it need not be defined a priori. Testing the hypothesis that probability judgments are mediated by representativeness does not require a theoretical model of either concept. The heuristic analysis only assumes that the latter is used to assess the former and not vice versa. In the outcome-ranking paradigm, representativeness is defined operationally by the subjects' ranking, which is compared to an independent ranking of the same outcomes by their probability. These rankings of the outcomes rely, of course, on subjects' understanding of the terms probability, similarity, or representativeness. This is a general characteristic of research in perception and judgment: Studies of loudness, fairness, or confidence all rest on the meaning that subjects attach to these attributes, not on the experimenter's theoretical model.
What does all this say about the base-rate controversy and about prediction by representativeness? First, it is evident that subjects sometimes use explicitly mentioned base-rate information to a much greater extent than they did in our original engineer-lawyer study, though generally less than required by Bayes' rule. Second, the use of repeated random sampling is not sufficient to eliminate base-rate neglect, contrary to Gigerenzer's claim. Finally, the most direct evidence for the role of representativeness in intuitive prediction, obtained in the outcome-ranking paradigm, has not been challenged.
Conjunction Errors
Perhaps the simplest and most fundamental principle of probability is the inclusion rule: If A includes B then the probability of B cannot exceed the probability of A; that is, A ⊇ B implies P(A) ≥ P(B). This principle can also be expressed by the conjunction rule, P(A & B) ≤ P(A), since A & B is a subset of A. Because representativeness and availability are not constrained by this rule, violations are expected in situations where a conjunction is more representative or more available than one of its components. An extensive series of studies (Tversky & Kahneman, 1983) demonstrated such violations of the conjunction rule in both probability and frequency judgments.
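A minimal arithmetic illustration of the rule, with arbitrary numbers chosen only to exhibit the logic: if P(A) = .05 and P(B | A) = .10, then P(A & B) = .05 × .10 = .005, which cannot exceed P(A) = .05. The same constraint holds for frequencies, because every case counted toward the conjunction A & B is also counted toward A alone.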
The Normative Issue
Imagine a young woman, named Linda, who resembles a feminist, but not a bank teller. You are asked to consider which of two hypotheses is more likely: (a) Linda is a bank teller or (b) Linda is a bank teller who is active in the feminist movement.

Gigerenzer insists that there is nothing wrong with the statement that (b) is more probable than (a). He defends this view on the ground that for a frequentist this proposition is meaningless and argues that "it would be foolish to label these judgments 'fallacies'" (1991, p. 95). The refusal to apply the concept of probability to unique events is a philosophical position that has some following among statisticians, but it is not generally shared by the public. Some weather forecasters, for instance, make probabilistic predictions (e.g., there is a 50% chance of rain on Sunday), and the sports pages commonly discuss the chances of competitors in a variety of unique contests. Although lay people are often reluctant to express their degree of belief by a number, they readily make comparative statements (e.g., Brown is more likely than Green to win the party's nomination), which refer to unique events and are therefore meaningless to a radical frequentist.
Although Gigerenzer invokes the meaninglessness argument with great conviction, his position on the issue is problematic. On the one hand, he surely does not regard statements of subjective probability as meaningless; he has even collected such judgments from subjects. On the other hand, he invokes the argument that subjective probabilities are meaningless to deny that these judgments are subject to any normative standards. This position, which may be described as normative agnosticism, is unreasonably permissive. Is it not a mistake for a speaker to assign probabilities of .99 both to an event and to its complement? We think that such judgments should be treated as mistaken; they violate accepted constraints on the use of probability statements in everyday discourse.
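To spell out the arithmetic behind this example: on the standard reading of complementary events, their probabilities must sum to 1, so assigning .99 to both commits the speaker to a total of .99 + .99 = 1.98, which no coherent assignment of degrees of belief can satisfy.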
Normative agnosticism is particularly inappropriate in the case of the conjunction rule. First, the application of this rule does not require numerical estimates, only an ordinal judgment of which of two events is more probable. Second, the normative basis for the conjunction rule is essentially logical: If the conjunction A & B is true then A must also be true, but the converse does not hold.
In support of his agnostic position, Gigerenzer cites von Mises's (1928/1957) statement that

We can say nothing about the probability of death of an individual even if we know his condition of life and health in detail. The phrase "probability of death," when it refers to a single person, has no meaning at all for us (p. 11).
Whether or not it is meaningful to assign a definite numerical value to the probability of survival of a specific individual, we submit (a) that this individual is less likely to die within a week than to die within a year and (b) that most people regard the preceding statement as true—not as meaningless—and treat its negation as an error or a fallacy.
Normative agnosticism is even harder to justify when violations of the conjunction rule lead to a preference for a dominated course of action. Several such cases have been documented. For example, we found that most subjects chose to bet on the proposition that Linda is a feminist bank teller rather than on the proposition that she is a bank teller. We also found that most subjects violated the conjunction rule in betting on the outcomes of a dice game involving real payoffs (Tversky & Kahneman, 1983). Further evidence for conjunction errors in choice between bets has been presented by Bar-Hillel and Neter (1993) and by Johnson, Hershey, Meszaros, and Kunreuther (1993). Would Gigerenzer's agnosticism extend to the choice of a dominated option? Or would he agree that there are, after all, some biases that need to be explained?
The Descriptive Issue
Gigerenzer's major empirical claim is that violations of the conjunction rule are confined to subjective probabilities and that they do not arise in judgments of frequencies. This claim is puzzling because the first demonstration in our conjunction paper involves judgments of frequency. Subjects were asked to estimate the number of "seven-letter words of the form '_ _ _ _ _ n _' in 4 pages of text." Later in the same questionnaire, these subjects estimated the number of "seven-letter words of the form '_ _ _ _ i n g' in 4 pages of text." Because it is easier to think of words ending with "ing" than to think of words with "n" in the next-to-last position, availability suggests that the former will be judged more numerous than the latter, in violation of the conjunction rule. Indeed, the median estimate for words ending with "ing" was nearly three times higher than for words with "n" in the next-to-the-last position. This finding is a counterexample to Gigerenzer's often repeated claim that conjunction errors disappear in judgments of frequency, but we have found no mention of it in his writings.
Early in our investigation of the conjunction problem, we believed that violations of the conjunction rule only occur when the critical events are evaluated independently, either by different subjects or by the same subject on different occasions. We expected that subjects would conform to the inclusion rule when asked to judge the probability or frequency of a set and of one of its subsets in immediate succession. To our surprise, violations of the conjunction rule turned out to be common even in this case; the detection of inclusion and the appreciation of its significance were evidently more difficult than we had thought. We therefore turned to the study of cues that may encourage extensional reasoning and developed the hypothesis that the detection of inclusion could be facilitated by asking subjects to estimate frequencies. To test this hypothesis, we described a health survey of 100 adult men and asked subjects, "How many of the 100 participants have had one or more heart attacks?" and "How many of the 100 participants both are over 55 years old and have had one or more heart attacks?" The incidence of conjunction errors in this problem was only 25%, compared to 65% when the subjects were asked to estimate percentages rather than frequencies. Reversing the order of the questions further reduced the incidence to 11%. We reasoned that the frequency formulation may lend itself to a spatial representation, in terms of tokens or areas, which makes the relation of set inclusion particularly salient. This representation seems less natural for percentages, which require normalization.²
Gigerenzer has essentially ignored our discovery of the effect of frequency and our analysis of extensional cues. As primary evidence for the "disappearance" of the conjunction fallacy in judgments of frequency, he prefers to cite a subsequent study by Fiedler (1988), who replicated both our procedure and our findings, using the bank-teller problem. There were relatively few conjunction errors when subjects estimated in immediate succession the number of bank tellers and of feminist bank tellers, among 100 women who fit Linda's description. Gigerenzer concludes that "the conceptual distinction between single events and frequency representations is sufficiently powerful to make this allegedly-stable cognitive illusion disappear" (1993, p. 294).
In view of our prior experimental results and theoretical discussion, we wonder who alleged that the conjunction fallacy is stable under this particular manipulation. It is in the nature of both visual and cognitive illusions that there are conditions under which the correct answer is made transparent. The Müller-Lyer illusion, for example, "disappears" when the two figures are embedded in a rectangular frame, but this observation does not make the illusion less interesting. The hypothesis that people use a heuristic to answer a
² Cosmides and Tooby (1996) have shown that a frequentistic formulation also helps subjects solve a base-rate problem that is quite difficult when framed in terms of percentages or probabilities. Their result is readily explained in terms of extensional cues to set inclusion. These authors, however, prefer the speculative interpretation that evolution has favored reasoning with frequencies but not with percentages.

References

Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293-315.