Reasoning the fast and frugal way: models of bounded rationality.

Gerd Gigerenzer and Daniel G. Goldstein
01 Jan 1996 - Psychological Review, Vol. 103, Iss. 4, pp. 650-669

Psychological Review
1996, Vol. 103, No. 4, 650-669
Copyright 1996 by the American Psychological Association, Inc. 0033-295X/96/$3.00
Reasoning the Fast and Frugal Way: Models of Bounded Rationality

Gerd Gigerenzer and Daniel G. Goldstein
Max Planck Institute for Psychological Research and University of Chicago
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might. Following H. Simon's notion of satisficing, the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing "Take The Best" algorithm and various "rational" inference procedures (e.g., multiple regression). The Take The Best algorithm matched or outperformed all competitors in inferential speed and accuracy. This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
Organisms make inductive inferences. Darwin (1872/1965) observed that people use facial cues, such as eyes that waver and lids that hang low, to infer a person's guilt. Male toads, roaming through swamps at night, use the pitch of a rival's croak to infer its size when deciding whether to fight (Krebs & Davies, 1987). Stock brokers must make fast decisions about which of several stocks to trade or invest when only limited information is available. The list goes on. Inductive inferences are typically based on uncertain cues: The eyes can deceive, and so can a tiny toad with a deep croak in the darkness.

How does an organism make inferences about unknown aspects of the environment? There are three directions in which to look for an answer. From Pierre Laplace to George Boole to Jean Piaget, many scholars have defended the now classical view that the laws of human inference are the laws of probability and statistics (and to a lesser degree logic, which does not deal as easily with uncertainty). Indeed, the Enlightenment probabilists derived the laws of probability from what they believed to be the laws of human reasoning (Daston, 1988). Following this time-honored tradition, much contemporary research in psychology, behavioral ecology, and economics assumes standard
Gerd Gigerenzer and Daniel G. Goldstein, Center for Adaptive Behavior and Cognition, Max Planck Institute for Psychological Research, Munich, Germany, and Department of Psychology, University of Chicago.

This research was funded by National Science Foundation Grant SBR-9320797/GG.

We are deeply grateful to the many people who have contributed to this article, including Hal Arkes, Leda Cosmides, Jean Czerlinski, Lorraine Daston, Ken Hammond, Reid Hastie, Wolfgang Hell, Ralph Hertwig, Ulrich Hoffrage, Albert Madansky, Laura Martignon, Geoffrey Miller, Silvia Papai, John Payne, Terry Regier, Werner Schubo, Peter Sedlmeier, Herbert Simon, Stephen Stigler, Gerhard Strube, Zeno Swijtink, John Tooby, William Wimsatt, and Werner Wittmann.

Correspondence concerning this article should be addressed to Gerd Gigerenzer or Daniel G. Goldstein, Center for Adaptive Behavior and Cognition, Max Planck Institute for Psychological Research, Leopoldstrasse 24, 80802 Munich, Germany. Electronic mail may be sent via Internet to giger@mpipf-muenchen.mpg.de.
statistical tools to be the normative and descriptive models of inference and decision making. Multiple regression, for instance, is both the economist's universal tool (McCloskey, 1985) and a model of inductive inference in multiple-cue learning (Hammond, 1990) and clinical judgment (B. Brehmer, 1994); Bayes's theorem is a model of how animals infer the presence of predators or prey (Stephens & Krebs, 1986) as well as of human reasoning and memory (Anderson, 1990). This Enlightenment view that probability theory and human reasoning are two sides of the same coin crumbled in the early nineteenth century but has remained strong in psychology and economics.

In the past 25 years, this stronghold came under attack by proponents of the heuristics and biases program, who concluded that human inference is systematically biased and error prone, suggesting that the laws of inference are quick-and-dirty heuristics and not the laws of probability (Kahneman, Slovic, & Tversky, 1982). This second perspective appears diametrically opposed to the classical rationality of the Enlightenment, but this appearance is misleading. It has retained the normative kernel of the classical view. For example, a discrepancy between the dictates of classical rationality and actual reasoning is what defines a reasoning error in this program. Both views accept the laws of probability and statistics as normative, but they disagree about whether humans can stand up to these norms.
Many experiments have been conducted to test the validity of these two views, identifying a host of conditions under which the human mind appears more rational or irrational. But most of this work has dealt with simple situations, such as Bayesian inference with binary hypotheses, one single piece of binary data, and all the necessary information conveniently laid out for the participant (Gigerenzer & Hoffrage, 1995). In many real-world situations, however, there are multiple pieces of information, which are not independent, but redundant. Here, Bayes's theorem and other "rational" algorithms quickly become mathematically complex and computationally intractable, at least for ordinary human minds. These situations make neither of the two views look promising. If one were to apply the classical view to such complex real-world environments, this would suggest that the mind is a supercalculator like a Laplacean Demon (Wimsatt, 1976)—carrying around the collected works of Kolmogoroff, Fisher, or Neyman—and simply needs a memory jog, like the slave in Plato's Meno. On the other hand, the heuristics-and-biases view of human irrationality would lead us to believe that humans are hopelessly lost in the face of real-world complexity, given their supposed inability to reason according to the canon of classical rationality, even in simple laboratory experiments.
There is a third way to look at inference, focusing on the psychological and ecological rather than on logic and probability theory. This view questions classical rationality as a universal norm and thereby questions the very definition of "good" reasoning on which both the Enlightenment and the heuristics-and-biases views were built. Herbert Simon, possibly the best-known proponent of this third view, proposed looking for models of bounded rationality instead of classical rationality. Simon (1956, 1982) argued that information-processing systems typically need to satisfice rather than optimize. Satisficing, a blend of sufficing and satisfying, is a word of Scottish origin, which Simon uses to characterize algorithms that successfully deal with conditions of limited time, knowledge, or computational capacities. His concept of satisficing postulates, for instance, that an organism would choose the first object (a mate, perhaps) that satisfies its aspiration level—instead of the intractable sequence of taking the time to survey all possible alternatives, estimating probabilities and utilities for the possible outcomes associated with each alternative, calculating expected utilities, and choosing the alternative that scores highest.
Let us stress that Simon's notion of bounded rationality has two sides, one cognitive and one ecological. As early as in Administrative Behavior (1945), he emphasized the cognitive limitations of real minds as opposed to the omniscient Laplacean Demons of classical rationality. As early as in his Psychological Review article titled "Rational Choice and the Structure of the Environment" (1956), Simon emphasized that minds are adapted to real-world environments. The two go in tandem: "Human rational behavior is shaped by a scissors whose two blades are the structure of task environments and the computational capabilities of the actor" (Simon, 1990, p. 7). For the most part, however, theories of human inference have focused exclusively on the cognitive side, equating the notion of bounded rationality with the statement that humans are limited information processors, period. In a Procrustean-bed fashion, bounded rationality became almost synonymous with heuristics and biases, thus paradoxically reassuring classical rationality as the normative standard for both biases and bounded rationality (for a discussion of this confusion see Lopes, 1992).
Simon's insight that the minds of living systems should be understood relative to the environment in which they evolved, rather than to the tenets of classical rationality, has had little impact so far in research on human inference. Simple psychological algorithms that were observed in human inference, reasoning, or decision making were often discredited without a fair trial, because they looked so stupid by the norms of classical rationality. For instance, when Keeney and Raiffa (1993) discussed the lexicographic ordering procedure they had observed in practice—a procedure related to the class of satisficing algorithms we propose in this article—they concluded that this procedure "is naively simple" and "will rarely pass a test of 'reasonableness'" (p. 78). They did not report such a test. We shall.
Initially, the concept of bounded rationality was only vaguely defined, often as that which is not classical economics, and one could "fit a lot of things into it by foresight and hindsight," as Simon (1992, p. 18) himself put it. We wish to do more than oppose the Laplacean Demon view. We strive to come up with something positive that could replace this unrealistic view of mind. What are these simple, intelligent algorithms capable of making near-optimal inferences? How fast and how accurate are they?

In this article, we propose a class of models that exhibit bounded rationality in both of Simon's senses. These satisficing algorithms operate with simple psychological principles that satisfy the constraints of limited time, knowledge, and computational might, rather than those of classical rationality. At the same time, they are designed to be fast and frugal without a significant loss of inferential accuracy, because the algorithms can exploit the structure of environments.
The article is organized as follows. We begin by describing the task the cognitive algorithms are designed to address, the basic algorithm itself, and the real-world environment on which the performance of the algorithm will be tested. Next, we report on a competition in which a satisficing algorithm competes with "rational" algorithms in making inferences about a real-world environment. The "rational" algorithms start with an advantage: They use more time, information, and computational might to make inferences. Finally, we study variants of the satisficing algorithm that make faster inferences and get by with even less knowledge.
The Task

We deal with inferential tasks in which a choice must be made between two alternatives on a quantitative dimension. Consider the following example: Which city has a larger population? (a) Hamburg (b) Cologne.
Two-alternative-choice tasks occur in various contexts in which inferences need to be made with limited time and knowledge, such as in decision making and risk assessment during driving (e.g., exit the highway now or stay on); treatment-allocation decisions (e.g., who to treat first in the emergency room: the 80-year-old heart attack victim or the 16-year-old car accident victim); and financial decisions (e.g., whether to buy or sell in the trading pit). Inference concerning population demographics, such as city populations of the past, present, and future (e.g., Brown & Siegler, 1993), is of importance to people working in urban planning, industrial development, and marketing. Population demographics, which is better understood than, say, the stock market, will serve us later as a "drosophila" environment that allows us to analyze the behavior of satisficing algorithms.
We study two-alternative-choice tasks in situations where a person has to make an inference based solely on knowledge retrieved from memory. We refer to this as inference from memory, as opposed to inference from givens. Inference from memory involves search in declarative knowledge and has been investigated in studies of, inter alia, confidence in general knowledge (e.g., Juslin, 1994; Sniezek & Buckley, 1993); the effect of repetition on belief (e.g., Hertwig, Gigerenzer, & Hoffrage, in press); hindsight bias (e.g., Fischhoff, 1977); quantitative estimates of area and population of nations (Brown & Siegler, 1993); and autobiographic memory of time (Huttenlocher, Hedges, & Prohaska, 1988). Studies of inference from givens, on the other hand, involve making inferences from information presented by an experimenter (e.g., Hammond, Hursch, & Todd, 1964). In the tradition of Ebbinghaus's nonsense syllables, attempts are often made here to prevent individual knowledge from impacting on the results by using problems about hypothetical referents instead of actual ones. For instance, in celebrated judgment and decision-making tasks, such as the "cab" problem and the "Linda" problem, all the relevant information is provided by the experimenter, and individual knowledge about cabs and hit-and-run accidents, or feminist bank tellers, is considered of no relevance (Gigerenzer & Murray, 1987). As a consequence, limited knowledge or individual differences in knowledge play a small role in inference from givens. In contrast, the satisficing algorithms proposed in this article perform inference from memory, they use limited knowledge as input, and as we will show, they can actually profit from a lack of knowledge.
Assume that a person does not know or cannot deduce the answer to the Hamburg-Cologne question but needs to make an inductive inference from related real-world knowledge. How is this inference derived? How can we predict choice (Hamburg or Cologne) from a person's state of knowledge?
Theory

The cognitive algorithms we propose are realizations of a framework for modeling inferences from memory, the theory of probabilistic mental models (PMM theory; see Gigerenzer, 1993; Gigerenzer, Hoffrage, & Kleinbolting, 1991). The theory of probabilistic mental models assumes that inferences about unknown states of the world are based on probability cues (Brunswik, 1955). The theory relates three visions: (a) Inductive inference needs to be studied with respect to natural environments, as emphasized by Brunswik and Simon; (b) inductive inference is carried out by satisficing algorithms, as emphasized by Simon; and (c) inductive inferences are based on frequencies of events in a reference class, as proposed by Reichenbach and other frequentist statisticians. The theory of probabilistic mental models accounts for choice and confidence, but only choice is addressed in this article.
The major thrust of the theory is that it replaces the canon of classical rationality with simple, plausible psychological mechanisms of inference—mechanisms that a mind can actually carry out under limited time and knowledge and that could have possibly arisen through evolution. Most traditional models of inference, from linear multiple regression models to Bayesian models to neural networks, try to find some optimal integration of all information available: Every bit of information is taken into account, weighted, and combined in a computationally expensive way. The family of algorithms in PMM theory does not implement this classical ideal.
Search in memory for relevant information is reduced to a minimum, and there is no integration (but rather a substitution) of pieces of information. These satisficing algorithms dispense with the fiction of the omniscient Laplacean Demon, who has all the time and knowledge to search for all relevant information, to compute the weights and covariances, and then to integrate all this information into an inference.

Figure 1. Illustration of bounded search through limited knowledge. Objects a, b, and c are recognized; object d is not. Cue values are positive (+) or negative (-); missing knowledge is shown by question marks. Cues are ordered according to their validities. To infer whether a > b, the Take The Best algorithm looks up only the cue values in the shaded space; to infer whether b > c, search is bounded to the dotted space. The other cue values are not looked up.
Limited Knowledge

A PMM is an inductive device that uses limited knowledge to make fast inferences. Different from mental models of syllogisms and deductive inference (Johnson-Laird, 1983), which focus on the logical task of truth preservation and where knowledge is irrelevant (except for the meaning of connectives and other logical terms), PMMs perform intelligent guesses about unknown features of the world, based on uncertain indicators.
To make an inference about which of two objects, a or b, has a higher value, knowledge about a reference class R is searched, with a, b ∈ R. In our example, knowledge about the reference class "cities in Germany" could be searched. The knowledge consists of probability cues C_i (i = 1, ..., n) and the cue values a_i and b_i of the objects for the ith cue. For instance, when making inferences about populations of German cities, the fact that a city has a professional soccer team in the major league (Bundesliga) may come to a person's mind as a potential cue. That is, when considering pairs of German cities, if one city has a soccer team in the major league and the other does not, then the city with the team is likely, but not certain, to have the larger population.
Limited knowledge means that the matrix of objects by cues has missing entries (i.e., objects, cues, or cue values may be unknown).
Figure 1 models the limited knowledge of a person. She has heard of three German cities, a, b, and c, but not of d (represented by three positive and one negative recognition values). She knows some facts (cue values) about these cities with respect to five binary cues. For a binary cue, there are two cue values, positive (e.g., the city has a soccer team) or negative (it does not). Positive refers to a cue value that signals a higher value on the target variable (e.g., having a soccer team is correlated with high population). Unknown cue values are shown by a question mark. Because she has never heard of d, all cue values for object d are, by definition, unknown.
People
rarely know
all
information
on
which
an
inference

REASONING
THE
FAST
AND
FRUGAL
WAY
653
could
be
based, that
is,
knowledge
is
limited.
We
model limited
knowledge
in two
respects:
A
person
can
have
(a)
incomplete
knowledge
of the
objects
in the
reference class
(e.g.,
she
recog-
nizes
only
some
of the
cities),
(b)
limited knowledge
of the cue
values
(facts
about
cities),
or (c)
both.
For
instance,
a
person
who
does
not
know
all of the
cities with
soccer
teams
may
know
some
cities
with positive
cue
values (e.g., Munich
and
Hamburg
certainly have
teams),
many with negative
cue
values
(e.g.,
Hei-
delberg
and
Potsdam
certainly
do not
have
teams),
and
several
cities
for
which
cue
values will
not be
known.
The Take The Best Algorithm

The first satisficing algorithm presented is called the Take The Best algorithm, because its policy is "take the best, ignore the rest." It is the basic algorithm in the PMM framework. Variants that work faster or with less knowledge are described later. We explain the steps of the Take The Best algorithm for binary cues (the algorithm can be easily generalized to many-valued cues), using Figure 1 for illustration.

The Take The Best algorithm assumes a subjective rank order of cues according to their validities (as in Figure 1). We call the highest ranking cue (that discriminates between the two alternatives) the best cue. The algorithm is shown in the form of a flow diagram in Figure 2.
Step 1: Recognition Principle

The recognition principle is invoked when the mere recognition of an object is a predictor of the target variable (e.g., population). The recognition principle states the following: If only one of the two objects is recognized, then choose the recognized object. If neither of the two objects is recognized, then choose randomly between them. If both of the objects are recognized, then proceed to Step 2.
Example: If a person in the knowledge state shown in Figure 1 is asked to infer which of city a and city d has more inhabitants, the inference will be city a, because the person has never heard of city d before.

Figure 2. Flow diagram of the Take The Best algorithm.

Figure 3. Discrimination rule. A cue discriminates between two alternatives if one has a positive cue value and the other does not. The four discriminating cases are shaded.
Step 2: Search for Cue Values

For the two objects, retrieve the cue values of the highest ranking cue from memory.
Step 3: Discrimination Rule

Decide whether the cue discriminates. The cue is said to discriminate between two objects if one has a positive cue value and the other does not. The four shaded knowledge states in Figure 3 are those in which a cue discriminates.
Step 4: Cue-Substitution Principle

If the cue discriminates, then stop searching for cue values. If the cue does not discriminate, go back to Step 2 and continue with the next cue until a cue that discriminates is found.
Step 5: Maximizing Rule for Choice

Choose the object with the positive cue value. If no cue discriminates, then choose randomly.
Examples: Suppose the task is judging which of city a or b is larger (Figure 1). Both cities are recognized (Step 1), and search for the best cue results in a positive and a negative cue value for Cue 1 (Step 2). The cue discriminates (Step 3), and search is terminated (Step 4). The person makes the inference that city a is larger (Step 5).

Suppose now the task is judging which of city b or c is larger. Both cities are recognized (Step 1), and search for the cue values results in a negative cue value on object b for Cue 1, but the corresponding cue value for object c is unknown (Step 2). The cue does not discriminate (Step 3), so search is continued (Step 4). Search for the next cue results in a positive and a negative cue value for Cue 2 (Step 2). This cue discriminates (Step 3), and search is terminated (Step 4). The person makes the inference that city b is larger (Step 5).
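The five steps above can be sketched in a few lines of code. The following is a minimal, illustrative Python version (not the authors' simulation code); the recognition flags and cue values are stand-ins for the knowledge state of Figure 1, with True for positive, False for negative, and None for unknown, and cues listed in decreasing order of validity:

```python
import random

# Hypothetical knowledge state modeled on Figure 1 (illustrative values).
recognized = {"a": True, "b": True, "c": True, "d": False}
cue_values = {
    "a": [True, None, False, True, None],
    "b": [False, True, None, None, False],
    "c": [None, False, None, True, None],
    "d": [None, None, None, None, None],  # unrecognized: all unknown
}

def take_the_best(x, y):
    """Infer which of two objects has the larger criterion value."""
    # Step 1: recognition principle.
    rx, ry = recognized[x], recognized[y]
    if rx != ry:
        return x if rx else y
    if not rx:
        return random.choice([x, y])  # neither recognized: guess
    # Steps 2-4: retrieve cue values in order of validity and stop at
    # the first cue that discriminates (one positive, the other not).
    for vx, vy in zip(cue_values[x], cue_values[y]):
        if vx is True and vy is not True:
            return x  # Step 5: choose the object with the positive value
        if vy is True and vx is not True:
            return y
    return random.choice([x, y])  # no cue discriminates: guess

print(take_the_best("a", "d"))  # recognition decides -> a
print(take_the_best("a", "b"))  # Cue 1 discriminates -> a
print(take_the_best("b", "c"))  # Cue 1 fails, Cue 2 discriminates -> b
```

Note how, as in the worked examples, the b-versus-c comparison passes over Cue 1 (negative versus unknown does not discriminate) and stops at Cue 2.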
The features of this algorithm are (a) search extends through only a portion of the total knowledge in memory (as shown by the shaded and dotted parts of Figure 1) and is stopped immediately when the first discriminating cue is found, (b) the algorithm does not attempt to integrate information but uses cue substitution instead, and (c) the total amount of information processed is contingent on each task (pair of objects) and varies in a predictable way among individuals with different knowledge. This fast and computationally simple algorithm is a model of bounded rationality rather than of classical rationality. There is a close parallel with Simon's concept of "satisficing": The Take The Best algorithm stops search after the first discriminating cue is found, just as Simon's satisficing algorithm stops search after the first option that meets an aspiration level.
The algorithm is hardly a standard statistical tool for inductive inference: It does not use all available information, it is noncompensatory and nonlinear, and variants of it can violate transitivity. Thus, it differs from standard linear tools for inference such as multiple regression, as well as from nonlinear neural networks that are compensatory in nature. The Take The Best algorithm is noncompensatory because only the best discriminating cue determines the inference or decision; no combination of other cue values can override this decision. In this way, the algorithm does not conform to the classical economic view of human behavior (e.g., Becker, 1976), where, under the assumption that all aspects can be reduced to one dimension (e.g., money), there exists always a trade-off between commodities or pieces of information. That is, the algorithm violates the Archimedean axiom, which implies that for any multidimensional object a = (a_1, a_2, ..., a_n) preferred to b = (b_1, b_2, ..., b_n), where a_1 dominates b_1, this preference can be reversed by taking multiples of any one or a combination of b_2, b_3, ..., b_n. As we discuss, variants of this algorithm also violate transitivity, one of the cornerstones of classical rationality (McClennen, 1990).
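The noncompensatory claim can be checked mechanically. The toy sketch below (an assumed lexicographic, one-reason decision rule over binary cues ordered by validity, not the article's own code) verifies that once the best cue favors object a, no assignment of the remaining cue values to b can reverse the choice:

```python
# Toy demonstration: a one-reason, lexicographic rule is noncompensatory,
# so lower-ranked cues cannot override the best discriminating cue.
def lexicographic_choice(cues_a, cues_b):
    # Cues are ordered by validity; stop at the first that discriminates.
    for va, vb in zip(cues_a, cues_b):
        if va and not vb:
            return "a"
        if vb and not va:
            return "b"
    return "tie"  # no cue discriminates

cues_a = [True, False, False, False, False]  # a wins on the best cue
# Enumerate every combination of the four remaining cue values for b.
for mask in range(16):
    rest = [bool((mask >> j) & 1) for j in range(4)]
    assert lexicographic_choice(cues_a, [False] + rest) == "a"
print("no combination of lower-ranked cues overrides the best cue")
```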
Empirical Evidence

Despite their flagrant violation of the traditional standards of rationality, the Take The Best algorithm and other models from the framework of PMM theory have been successful in integrating various striking phenomena in inference from memory and predicting novel phenomena, such as the confidence-frequency effect (Gigerenzer et al., 1991) and the less-is-more effect (Goldstein, 1994; Goldstein & Gigerenzer, 1996). The theory of probabilistic mental models seems to be the only existing process theory of the overconfidence bias that successfully predicts conditions under which overestimation occurs, disappears, and inverts to underestimation (Gigerenzer, 1993; Gigerenzer et al., 1991; Juslin, 1993, 1994; Juslin, Winman, & Persson, 1995; but see Griffin & Tversky, 1992). Similarly, the theory predicts when the hard-easy effect occurs, disappears, and inverts—predictions that have been experimentally confirmed by Hoffrage (1994) and by Juslin (1993). The Take The Best algorithm also explains why the popular confirmation-bias explanation of the overconfidence bias (Koriat, Lichtenstein, & Fischhoff, 1980) is not supported by experimental data (Gigerenzer et al., 1991, pp. 521-522).
Unlike earlier accounts of these striking phenomena in confidence and choice, the algorithms in the PMM framework allow for predictions of choice based on each individual's knowledge. Goldstein and Gigerenzer (1996) showed that the recognition principle predicted individual participants' choices in about 90% to 100% of all cases, even when participants were taught information that suggested doing otherwise (negative cue values for the recognized objects). Among the evidence for the empirical validity of the Take The Best algorithm are the tests of a bold prediction, the less-is-more effect, which postulates conditions under which people with little knowledge make better inferences than those who know more. This surprising prediction has been experimentally confirmed. For instance, U.S. students make slightly more correct inferences about German city populations (about which they know little) than about U.S. cities, and vice versa for German students (Gigerenzer, 1993; Goldstein, 1994; Goldstein & Gigerenzer, 1995; Hoffrage, 1994).
The theory of probabilistic mental models has been applied to other situations in which inferences have to be made under limited time and knowledge, such as rumor-based stock market trading (DiFonzo, 1994). A general review of the theory and its evidence is presented in McClelland and Bolger (1994).
The reader familiar with the original algorithm presented in Gigerenzer et al. (1991) will have noticed that we simplified the discrimination rule.¹ In the present version, search is already terminated if one object has a positive cue value and the other does not, whereas in the earlier version, search was terminated only when one object had a positive value and the other a negative one (cf. Figure 3 in Gigerenzer et al. with Figure 3 in this article). This change follows empirical evidence that participants tend to use this faster, simpler discrimination rule (Hoffrage, 1994).
This article does not attempt to provide further empirical evidence. For the moment, we assume that the model is descriptively valid and investigate how accurate this satisficing algorithm is in drawing inferences about unknown aspects of a real-world environment. Can an algorithm based on simple psychological principles that violate the norms of classical rationality make a fair number of accurate inferences?
The Environment

We tested the performance of the Take The Best algorithm on how accurately it made inferences about a real-world environment. The environment was the set of all cities in Germany with more than 100,000 inhabitants (83 cities after German reunification), with population as the target variable. The model of the environment consisted of 9 binary ecological cues and the actual 9 × 83 cue values. The full model of the environment is shown in the Appendix.
Each cue has an associated validity, which is indicative of its predictive power. The ecological validity of a cue is the relative frequency with which the cue correctly predicts the target, defined with respect to the reference class (e.g., all German cities with more than 100,000 inhabitants). For instance, if one checks all pairs in which one city has a soccer team but the other city does not, one finds that in 87% of these cases, the city with the team also has the higher population. This value is the ecological validity of the soccer team cue.
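This relative-frequency computation can be sketched as follows. The (population, has_soccer_team) tuples below are invented for illustration; the actual analysis used the 83 German cities and 9 cues from the Appendix:

```python
from itertools import combinations

# Sketch of the ecological-validity computation: among pairs where the
# cue discriminates, the fraction in which the positive-cue object has
# the larger target value.
def ecological_validity(objects):
    correct = discriminating = 0
    for (ta, ca), (tb, cb) in combinations(objects, 2):
        if ca == cb:
            continue  # cue does not discriminate this pair
        discriminating += 1
        if (ca and ta > tb) or (cb and tb > ta):
            correct += 1  # positive-cue city is indeed larger
    return correct / discriminating

# Hypothetical (population, has_soccer_team) data.
data = [(1_700_000, True), (960_000, True), (600_000, True),
        (300_000, True), (500_000, False), (140_000, False),
        (120_000, False)]
print(ecological_validity(data))  # 11 of 12 discriminating pairs -> ~0.92
```

Only the single 300,000-versus-500,000 pair goes against the cue here, so the toy validity is 11/12.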
The validity v_i of the ith cue is

v_i = p[t(a) > t(b) | a_i is positive and b_i is negative],
¹ Also, we now use the term discrimination rule instead of activation rule.
