
The Rule of Probabilities: A Practical Approach for Applying Bayes' Rule to the Analysis of DNA Evidence

TL;DR: This Article shows that a correct application of Bayes’ rule should lead fact-finders and litigants to focus on the size of two variables that influence the source probability: the probability that a non-source in the DNA database would have an alibi, and the likelihood that the source of the DNA is included in the database.
Abstract: Bayes’ rule is not being used to guide jury decision making in the vast majority of criminal cases introducing evidence of DNA testing. Instead of telling juries the “source probability,” the probability that the individual whose DNA matches was the source of the forensic evidence found at the crime scene, experts only present pieces of the puzzle. They provide the probability that a randomly selected innocent person would have a match or the expected number of innocent matches in the database. In some cases, the random match probability will be so low (one in a quadrillion) that the intuitive source probability is practically one hundred percent. But, in other cases, with large database trawls and random match probability at 1 in a million, jurors will have no ability to convert the random match probability or the likelihood ratio based on expected number of matches into relevant data that will help them address the question of guilt. This Article shows that a correct application of Bayes’ rule should lead fact-finders and litigants to focus on the size of two variables that influence the source probability: the probability that a non-source in the DNA database would have an alibi, and the probability that the source of the DNA is included in the database. This Article suggests practical means of estimating these two variables and argues that as a legal matter these parameters as well as the Bayesian posterior source probability are admissible in court. In particular, focusing on the prior probability that the “database is guilty,” i.e. the probability that someone in the database is the source of the forensic evidence, is not just analytically and empirically tractable, but avoids the evidentiary limitations concerning a particular defendant’s prior bad acts. 
Appropriate application of Bayes’ rule, far from preempting the fact-finding and adversarial process, can guide advocates to engage the important aspects of the evidence that are still likely to be open to contestation. Perhaps most important, appropriate application of Bayes’ rule will also allow jurors to reach verdicts via a coherent path that employs sound logic and reasoning.

Summary

INTRODUCTION

  • With the recent Supreme Court decision allowing the collection of DNA samples from any person arrested and detained for a serious offense, it seems inevitable that the justice system will collect and use large DNA databases.
  • There is concern that as database size increases, so too will the rate of false positives, and thus innocent people will be convicted when their DNA matches evidence left at a crime scene.
  • The authors will show how estimation of both the prior probability and relevant database size can be assessed under alternative assumptions that are appropriately open to literal and figurative cross-examination to assure the robustness of the bottom-line conclusion: the defendant was or was not the true source of the crime scene evidence.

I. THE ISLAND OF EDEN

  • Imagine that a singular crime has been committed on the otherwise idyllic island of Eden.
  • All of the other people in the population have been ruled out by the lack of a DNA match. Prior to the test, both Mr. Baker and Mr. Fisher were equally likely to have been the criminal.
  • Had the population been above 51,294, a test with a one-in-a-million chance of a false positive would lead to more than a five percent chance that at least one person would match even when everyone is innocent.
  • What matters is the probability that the guilty party is in the database.
  • The defendant might have been convicted even without the confirmation of DNA evidence.
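The 51,294-person threshold in the bullets above can be checked numerically. The sketch below (illustrative, not from the Article) solves 1 − (1 − r)^N > 0.05 for a false positive rate r of one in a million:

```python
import math

r = 1e-6  # one-in-a-million false positive (random match) probability

# P(at least one innocent match among N innocent people) = 1 - (1 - r)^N.
# The population size at which this exceeds 5% solves (1 - r)^N = 0.95:
threshold = math.log(0.95) / math.log(1 - r)
print(threshold)  # just over 51,293, matching the Article's 51,294 figure

# The probability on either side of the threshold:
below = 1 - (1 - r) ** 51292  # under 5%
above = 1 - (1 - r) ** 51295  # over 5%
```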

II. BAYES FOR AN ERA OF BIG DATA

  • Court cases introducing DNA evidence have traditionally focused on three different numbers: 1. The random match probability: the probability that a randomly selected person will be a DNA match.
  • Indeed, if the expected number of innocent matches in the database were two, the authors would not say that the chance of a database match is 200% and thereby violate a fundamental tenet of probability that all probabilities must be at or below one.
  • This Article describes how, in practical terms, to convert the inputs into the number the trier of fact should care about.
  • It will be of enormous help to introduce some notation: S will stand for the result that the defendant is the source of DNA found at the crime scene.
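The point that an expected number of matches is not a probability can be made concrete. In this sketch the database size and match probability are illustrative assumptions, not figures from the Article; the expected number of innocent matches is two, yet the probability of at least one match stays below one:

```python
r = 1e-6        # random match probability (assumed)
D = 2_000_000   # database size (assumed)

expected_matches = r * D             # 2.0 -- an expectation, not a probability
p_at_least_one = 1 - (1 - r) ** D    # roughly 0.865, properly below one
```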


  • The authors start with the aggregate probability that someone in the database is the source.
  • This probability will tell us a great deal about the posterior source probability with regard to every individual in the database.
  • Because the authors are assuming no false negatives, the posterior source probability of all unmatched individuals is zero.
  • For large databases, this makes little difference in that this changes the size of the database by one.
  • If there is a single unalibied match, then the posterior database source probability will be entirely focused on that matching individual (and the remaining unmatching individuals in the database will have a zero source probability).


  • The rate of false negatives varies depending on the width of the DNA band, or "match window," used to match the suspect to the source.
  • While this does not rule out the possibility of a false negative, for that to have happened, the authors would have to have experienced both a false positive and a false negative at the same time.
  • The intuition for this formula follows from a Venn diagram:

P(S|M) / P(¬S|M) = [P(S) / P(¬S)] × [P(M|S) / P(M|¬S)]

  • The odds of observing M unalibied matches are the ratio of the probability this would happen when the source is in the database versus the probability this would happen when the source is not in the database (and the M unalibied matches occur by chance).
  • The binomial distribution provides the probability for each possible number of heads, from 0 to N.
  • If it turns out that these two numbers perfectly coincide, then the authors do not update the prior probabilities.

M : rD(1 − a)    (5)

  • This likelihood ratio can be restated simply as the ratio of the actual number of unalibied matches relative to the expected number of unalibied, nonsource matches: M : E[M]    (6). This likelihood ratio indicates how strong the new information is in terms of changing the prior opinion.
  • If the likelihood ratio in equation (6) is 10:1, then it is ten times more likely that the M matches observed are the result of the true match being in the database than all being there by luck.
  • If their initial view was that it was twice as likely that the database did not contain the true match (prior odds are 1:2), then Bayes' rule tells us (via equation (4)) that putting these together means the new odds are 5:1 in favor of the database containing the true match.
  • Bayes' rule says that to derive the updated, posterior odds of the source being in the dataset, all the authors need to do is simply multiply the prior odds by the likelihood ratio of equation (6).
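The update described in the last two bullets can be sketched directly in odds form. The numbers reproduce the worked example above: prior odds of 1:2 that the database contains the true match, combined with a 10:1 likelihood ratio:

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = Fraction(1, 2)   # 1:2 that the database contains the source
lr = Fraction(10, 1)     # likelihood ratio, as in equation (6)

post = posterior_odds(prior, lr)       # 5:1, as in the text
post_probability = post / (post + 1)   # odds of 5:1 correspond to 5/6
```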

III. COMPARATIVE STATICS

  • This Part explores how the source probability of equation (8) changes as the authors change the four underlying variables (a, r, D, and p) while holding M constant.
  • The authors also speculate on how these variables are likely to change over time.
  • Increasing the random match probability, r, while holding everything else equal decreases their confidence that the matching individual was the source of the forensic DNA.

A. Trawling a Larger Database

  • The question that often arises with a database trawl is how to adjust for the size of the database.
  • To answer this question, the authors first assume that the two databases are each comprised of individuals who, from a Bayesian perspective, have identical prior likelihoods of being the source of the forensic DNA.
  • In other words, the larger database has the same average quality as the smaller one in terms of finding matches.
  • 58 As it turns out, there are two forces that almost exactly cancel each other out.

IV. APPLICATION TO PEOPLE V. COLLINS

  • The authors' analysis of DNA evidence can be usefully compared to the use of eyewitness evidence in the famous People v. Collins case.
  • Thus, the relevant population should be the number of couples in greater Los Angeles, where the crime was committed.
  • If the couple in court is guilty, then the chance some other innocent couple will match is 1 − (1 − r)^T, where T is the number of couples not yet examined by the police.
  • If the police had searched the entire population of possible couples and found that the defendants were the only match, then the authors would know that the couple is guilty.
  • The fact that they were dead broke just prior to the robbery and yet had unexplained spending right after the robbery should factor into the equation.

V. APPLICATION TO PEOPLE V. PUCKETT

  • On February 21, 2008, John Puckett was found guilty of first-degree murder for the 1972 death of Diana Sylvester.
  • Smith describes the ambiguous evidentiary record: "[Lead homicide investigator] . . . ."
  • The jury also heard of Puckett's three prior rape and assault convictions.
  • But that is not his burden (and it would be difficult for most people who had lived in San Francisco to explain in 2008 where they were on a particular night in 1972).

p : 1 − p

  • With the new data, the updated (or posterior Bayesian) odds become: 125 × p : 1 × (1 − p), or 125p : (1 − p)    (12). Associated with these odds is a probability that Puckett is the guilty party.
  • If the authors imagine that proof beyond a reasonable doubt requires establishing a source probability at or above 99%, then they can work backward to derive a minimum prior that would produce that posterior probability range: 125p / (1 + 124p) ≥ 0.99.
  • If the authors believe there is a 44% or higher chance that the guilty party is in the 338,711-felon database, they can conclude that Puckett has a 99% or higher chance of being the person whose DNA was left at the crime scene.
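The working-backward step can be verified numerically. This sketch uses the 125:1 likelihood ratio and the 99% posterior target from the text:

```python
lr = 125       # likelihood ratio from the Puckett match evidence
target = 0.99  # required posterior source probability

# posterior = 125p / (1 + 124p); setting this equal to 0.99 and solving for p:
p_min = target / (lr - target * (lr - 1))
print(p_min)  # about 0.442, i.e., the 44% minimum prior quoted in the text
```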

B. A Model for Calculating Priors

  • Certainly a random person on the street would not have a 44% chance of being the guilty party.
  • One approach to estimating the prior would be to compare the size of the database to the size of the population without alibis.
  • The authors assume that criminals behave in the following manner.
  • Again, fraction f are caught and (1 − f) are not.
  • Those caught are entered into the database, and those who have escaped conviction twice are still not in the database.
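One reading of the catch-and-enter model described above (this interpretation is mine; the summary does not spell it out): a criminal who has committed two crimes appears in the database unless caught neither time:

```python
f = 0.5  # fraction of crimes for which the criminal is caught (assumed 50%)

# Caught after the first crime, or escaped once and then caught after the
# second; only a criminal who escaped both times stays out of the database:
p_in_database = f + (1 - f) * f  # equivalently 1 - (1 - f)**2
print(p_in_database)  # 0.75
```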


  • Imagine that f, the chance of getting caught, is 50%.
  • In addition, if a random criminal retires with a 39% chance, this says that the average criminal would commit 2.6 crimes before "retiring." This seems like a small number.
  • This modeling approach to estimating the prior probability of database guilt-more precisely, the prior probability that someone in the database is the source of the crime scene DNA-has to their knowledge never before been used.
  • That fraction and the database size are determined by the probability of being caught.
  • Thus, if the database contains felons from all of California rather than just from San Francisco, then moving to Los Angeles is not enough to retire.
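The 2.6-crimes figure follows from a geometric model: if after each crime a criminal "retires" with probability d, the expected number of crimes committed is 1/d. A quick check of the arithmetic in the bullet above:

```python
d = 0.39  # per-crime retirement probability from the text

# Expected number of crimes under a geometric model: 1/d
expected_crimes = 1 / d
print(round(expected_crimes, 1))  # 2.6, the figure quoted in the text
```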

VI. EMPIRICIZING THE ALIBI AND PRIOR PROBABILITIES

  • The application to People v. Puckett motivates a broader discussion of how to empirically assess the underlying parameters that influence the estimation of the posterior source probability.
  • At the other extreme, the authors would assume that Baker comprises 30% of the 40% other category, and thus make no change to the 60% priors.

2. Empiricizing the prior probability

  • Of course, there will be and should be reasonable disagreement about what constitutes a similar crime.
  • Similarity would have to be with regard to a host of factors, including not just the crime type but also the modus operandi and the characteristics of the defendant.
  • A sensible way forward would be to derive alternative priors based on alternative assumptions of what constitutes similar crimes as well as on plausible structural models, and then see if the defendant's source probability is sufficiently high even after combining the likelihood ratio with the most conservative (i.e., lowest) probability estimate within this range.

B. Small Database Trawls

  • The analysis above is done under the stylized assumption that everyone in the database has the same prior probability of having been at the crime scene.
  • Take the case where a woman, who had a documented history of being a victim of spousal battery, is found murdered.
  • The authors again suggest that the prior can be inferred from adjusting the match rate in similar confirmation cases.
  • It might be tempting to infer that thirty percent of the time the husband was the source of the forensic DNA.

VII. ADMISSIBILITY

  • The authors' goal in this Part is to suggest specific ways an expert might present his or her opinion under existing law and when existing evidentiary rules should change to accommodate a more coherent factfinding process.
  • To be admitted, the proposed probability evidence must also be consistent with Rules 104(a), 702, 703, and 403, or their state equivalents.
  • However, changes in courts' approaches to similarity with respect to DNA evidence provide some reason for predicting that adjusted match probabilities are increasingly likely to be admissible.
  • Courts that consider the relevance of statistical evidence have different philosophies, are influenced by a host of situation-specific variables, and occasionally make rulings that might go the other way.


  • Prior data are just as important as the data that allow the authors to update their beliefs.
  • What ultimately matters is where the authors end up, and that they arrived at that destination via a path that employs sound logic and reasoning.

CONCLUSION

  • In the 2012 presidential election, Nate Silver caused a stir by correctly predicting the winner of all fifty states and the District of Columbia in the general election. Beyond accuracy, Silver's larger impact has been in changing the central polling metric and improving the way that metric is calculated.
  • In case 2, each company only has an 80% chance of being found liable for its accidents (as the eyewitness may read the license plate incorrectly), and thus each company only has 80% of the full incentive.
  • After Silver, the same evidence can be described as a 97.5% chance that Obama will win the election.
  • And, like Silver, the authors advocate that the method of estimating this probability be explicitly Bayesian.


THE RULE OF PROBABILITIES: A PRACTICAL APPROACH FOR APPLYING BAYES' RULE TO THE ANALYSIS OF DNA EVIDENCE

Ian Ayres* & Barry Nalebuff**

Stanford Law Review, Vol. 67:1447 (June 2015)

* William K. Townsend Professor, Yale Law School. E-mail: ian.ayres@yale.edu.
** Milton Steinbach Professor, Yale School of Management. E-mail: barry.nalebuff@yale.edu. Gregory Keating, Jonathan Koehler, Lewis Kornhauser, Steve Salop, Joseph Stiglitz, and Eric Talley provided helpful comments. Anthony Cozart, Ben Picozzi, and Robert Baker provided excellent research assistance. We are especially grateful to Kwixuan Maloof for providing us with court transcripts.

An Appreciation of Richard Craswell .... 1448
Introduction .... 1449
I. The Island of Eden .... 1453
II. Bayes for an Era of Big Data .... 1457
III. Comparative Statics .... 1466
   A. Trawling a Larger Database .... 1468
   B. Tom Versus Tom, Dick, and Harry .... 1470
IV. Application to People v. Collins .... 1472
V. Application to People v. Puckett .... 1476
   A. Minimal Priors .... 1482
   B. A Model for Calculating Priors .... 1483
   C. Application to Puckett .... 1487
VI. Empiricizing the Alibi and Prior Probabilities .... 1490
   A. Large Database Trawls .... 1491
      1. Empiricizing the alibi probability .... 1491
      2. Empiricizing the prior probability .... 1492
   B. Small Database Trawls .... 1493
VII. Admissibility .... 1496
Conclusion .... 1502

AN APPRECIATION OF RICHARD CRASWELL

It is entirely fitting for an Article on the legal implications of information economics to honor Dick Craswell.1 Large swaths of Dick's writings are careful workings-out of the ways that imperfect information can impact private behavior and constrain judicial decisionmaking.2 Dick's analysis of how consumers might mistakenly update their prior beliefs after a corrective advertising order3 is a close analogy to our claim that unguided juries are likely to mistakenly update in response to incomplete statistical DNA evidence.

1. Ian and Dick first connected in 1987, when Dick delivered a paper at the Law and Society Conference. Ian was just finishing his first year of teaching, and he hadn't yet overcome the kind of anxiety that caused him to tremble at the prospect of asking a question. Not only was Dick open and gentle in his response, but soon after the conference Ian received from him a letter of encouragement. Ian was floored by his letter because Dick, unbidden, had also included comments on a draft article of Ian's. That e-mail began a long-distance relationship that was immensely helpful to a young scholar. Over the decades, Dick and Ian mailed and e-mailed comments on over a dozen drafts of each other's writings. Though they never taught at the same school in the same semester, Ian nonetheless felt like Dick was one of his closest colleagues. Their academic havruta not only predates e-mail, it predates WordPerfect and Word, as those first comments were composed on the XyWrite word-processing software. No one else comes close to taking the time to have written as many or as high-quality comments on Ian's drafts. Dick has been a role model not only on the typed page, but also in the way he has carried himself in seminars, both while presenting and in the audience. Dick speaks softly but carries a collaborative and constructive stick. His ready smile is emblematic of his warmth. Just bringing to mind memories of Dick in seminar or talking over coffee lifts one's spirits. Of course, even very good academics can be acerbic and annoying. As with the Tsar, one sometimes wants to bless and keep them ... far away. Not so Dick Craswell. He is proof that you can be a serious, respected scholar, and be universally admired. Even in the boisterous milieu of the University of Chicago seminar, he was a calming presence. There is a balm in Gilead. Richard Craswell.

INTRODUCTION

With the recent Supreme Court decision allowing the collection of DNA samples from any person arrested and detained for a serious offense,4 it seems inevitable that the justice system will collect and use large DNA databases. Currently, DNA databases are widely used. As of April 2015, the Combined DNA Index System (CODIS) maintained by the Federal Bureau of Investigation (FBI) had more than "283,440 hits [and had assisted] in more than 270,211 investigations."5

There is concern that as database size increases, so too will the rate of false positives, and thus innocent people will be convicted when their DNA matches evidence left at a crime scene.6 This concern has led courts to a convoluted and misguided use of multiple lenses to evaluate DNA evidence.

In this Article, we argue that there is a single right answer for how to incorporate the use of DNA evidence. That answer is the application of Bayes' rule, a 250-year-old formula for updating a starting probability estimate for a hypothesis given additional evidence.7 Applying Bayes' rule, we argue that triers of fact evaluating DNA evidence should be presented with what we call the "source probability": the probability that a defendant whose DNA matches the DNA found at the crime scene was the true source of that evidence. As we discuss below, the source probability is not the same as the chance of a random DNA match and does not equal the probability of guilt; even if the defendant was the source of the forensic DNA, the defendant might not have committed the crime.8

Our primary contribution will be to show that the source probability may turn crucially on the size of two variables that have not been introduced (or relied upon by experts) in DNA matching cases: (i) the initial or prior probability that the source of the DNA is included in the database, and (ii) the relevant or adjusted size of the DNA database, a calculation that takes into account the demographic information known about the criminal and the probability that a nonsource in the DNA database would have an alibi.

Experts have shied away from helping jurors form baseline beliefs, which are more formally called prior probabilities, and from then helping them convert those priors into a conclusion. The problem is that, absent priors, it is not clear how to coherently employ the expert information. As we discuss in our analysis of People v. Puckett,9 an expert might well conclude that certain evidence makes it 100 times more likely that the suspect was at the scene of the crime. But 100 times more likely than what? The starting point or prior for a suspect who is identified from a large database trawl might well be less than 1 in 1000. In that case, 100-to-1 evidence is not persuasive. If the suspect was related to the victim and had motive and opportunity, then 100-to-1 would be much more convincing.

We will argue that there are practical means of estimating the prior probabilities and the relevant database size and that, as a legal matter, these parameters as well as the final source probability are admissible. In particular, changing the focus from a question about the prior probability that the defendant was the source to the prior probability that the "database is guilty" (that is, the probability that someone in the database is the source of the forensic evidence) not only is analytically and empirically more tractable, but also avoids the evidentiary limitations concerning a particular defendant's prior bad acts.

In People v. Johnson, a California Court of Appeal panel, in reviewing different types of DNA statistics, emphasized that "'the database is not on trial. Only the defendant is.' Thus, the question of how probable it is that the defendant, not the database, is the source of the crime scene DNA remains relevant."10 But to apply Bayes' rule, the probability that the database contains the source of the forensic DNA, assessed prior to any consideration of whether an individual in the database actually matches, becomes a crucial input in determining the (posterior) likelihood that a particular matching defendant is the source of the forensic DNA. Contrary to Johnson, assessing the prior probability that the database includes the source (colloquially, "the probability that the database is guilty") provides at once a readier means of estimation and a stronger argument for admissibility.

At the end of the day, we will acquit or convict a defendant, not a database. The problem is that it is very hard to directly estimate a starting point or prior probability for the likelihood that a specific defendant committed a crime. For example, what is the chance that some "John Doe" committed a crime before we have any evidence about Mr. Doe? In contrast, it is more coherent to ask the chance that a class of individuals, for example, convicted felons, would include the perpetrator of a crime.11 For example, if half of rapes are committed by convicted felons, then the starting point would be fifty percent, assuming that the database contains all convicted felons.

If jurors are to properly understand the implications of finding a match from a large database trawl, the size and characteristics of that database are relevant information. Some legal analysts have been dismayed by the ways in which evidence of a DNA match tends to eclipse any role for adversarial engagement, turning litigants into little more than potted plants.12 But appropriate application of . . .

2. See, e.g., Richard Craswell & John E. Calfee, Deterrence and Uncertain Legal Standards, 2 J.L. ECON. & ORG. 279 (1986); Richard Craswell, Taking Information Seriously: Misrepresentation and Nondisclosure in Contract Law and Elsewhere, 92 VA. L. REV. 565 (2006); see also Richard Craswell, Against Fuller and Perdue, 67 U. CHI. L. REV. 99 (2000).

3. Richard Craswell, Interpreting Deceptive Advertising, 65 B.U. L. REV. 657, 689 (1985); see also Howard Beales, Richard Craswell & Steven C. Salop, The Efficient Regulation of Consumer Information, 24 J.L. & ECON. 491, 532-33 (1981) (discussing how consumers may update in response to banning certain advertising claims).

4. Maryland v. King, 133 S. Ct. 1958, 1980 (2013) ("When officers make an arrest supported by probable cause to hold for a serious offense and they bring the suspect to the station to be detained in custody, taking and analyzing a cheek swab of the arrestee's DNA is, like fingerprinting and photographing, a legitimate police booking procedure that is reasonable under the Fourth Amendment."). Under Maryland law, a serious offense is the commission of or attempt to commit violence or burglary. See id. at 1967.

5. CODIS-NDIS Statistics, FED. BUREAU INVESTIGATION, http://www.fbi.gov/about-us/lab/biometric-analysis/codis/ndis-statistics (last visited June 8, 2015).

6. There is an additional concern of false positives occurring not by chance but by contamination or laboratory error. See, e.g., F.H.R. VINCENT, REPORT: INQUIRY INTO THE CIRCUMSTANCES THAT LED TO THE CONVICTION OF MR FARAH ABDULKADIR JAMA 21-23, 48 (2010); Jonathan J. Koehler, Error and Exaggeration in the Presentation of DNA Evidence at Trial, 34 JURIMETRICS 21, 23 (1993); William C. Thompson, Subjective Interpretation, Laboratory Error and the Value of Forensic DNA Evidence: Three Case Studies, 96 GENETICA 153 (1995); Julie Szego, Wrongfully Accused, SYDNEY MORNING HERALD (Mar. 29, 2014), http://www.smh.com.au/national/wrongfully-accused-20140324-35cga.html. There is also the possibility of an innocent defendant's DNA being left at the scene by chance. See Jonathan J. Koehler, DNA Matches and Statistics: Important Questions, Surprising Answers, 76 JUDICATURE 224 (1993) [hereinafter Koehler, DNA Matches and Statistics].

7. See Thomas Bayes, An Essay Towards Solving a Problem in the Doctrine of Chances, 53 PHIL. TRANSACTIONS 378-81 (1763). We discuss Bayes' rule in Part II and note 43 below. Propositions 2, 3, and 4 in that Part present the rule. For an introduction to Bayes' rule, see ERIC GOSSETT, DISCRETE MATHEMATICS WITH PROOF 316-22 (2d ed. 2009). For an application of Bayes' rule to trial evidence, see David L. Faigman & A.J. Baglioni, Jr., Bayes' Theorem in the Trial Process: Instructing Jurors on the Value of Statistical Evidence, 12 LAW & HUM. BEHAV. 1, 2 & n.1 (1988). For a colorful history of the rule's uses, see SHARON BERTSCH MCGRAYNE, THE THEORY THAT WOULD NOT DIE: HOW BAYES' RULE CRACKED THE ENIGMA CODE, HUNTED DOWN RUSSIAN SUBMARINES, & EMERGED TRIUMPHANT FROM TWO CENTURIES OF CONTROVERSY (2011).

8. Moreover, while we will speak of this source probability as the probability that the defendant was the source of the forensic DNA, the matching evidence cannot distinguish between identical twins and thus can only speak to the probability with regard to the defendant or the defendant's close genetic relatives. See Brief of 14 Scholars of Forensic Evidence as Amici Curiae Supporting Respondent at 34, King, 133 S. Ct. 1958 (No. 12-207), 2013 WL 476046; Koehler, DNA Matches and Statistics, supra note 6, at 224; Natalie Ram, Fortuity and Forensic Familial Identification, 63 STAN. L. REV. 751, 757, 758 n.35 (2011).

9. See infra notes 70-87.

10. No. C068950, 2014 WL 4724896, at *20 (Cal. Ct. App. Sept. 24, 2014) (citations omitted) (quoting People v. Nelson, 185 P.3d 49, 66 (Cal. 2008)).

11. If the defendant is a member of that group, then we can convert the group probability to an individual probability by dividing by the size of the group or the relevant subset of that group that matches any known facts of the case. While we could call this the prior probability for the individual, the heavy lifting has all been done at the group level, and we should not pretend otherwise.

12. A rich scholarly literature investigates jurors' tendencies to over- or underestimate the probative value of DNA statistical evidence. See Jonathan J. Koehler, The Psychology of Numbers in the Courtroom: How to Make DNA-Match Statistics Seem Impressive or Insufficient, 74 S. CAL. L. REV. 1275, 1277-80 (2001); Jonathan J. Koehler & Laura Macchi, Thinking About Low-Probability Events: An Exemplar-Cuing Theory, 15 PSYCHOL. SCI. 540 (2004); Jonathan J. Koehler, When Are People Persuaded by DNA Match Statistics?, 25 LAW & HUM. BEHAV. 493 (2001); Jason Schklar & Shari Seidman Diamond, Juror Reactions to DNA Evidence: Errors and Expectancies, 23 LAW & HUM. BEHAV. 159 (1999).

Citations
01 Jan 2014
TL;DR: In this paper, Cardozo et al. proposed a model for conflict resolution in the context of bankruptcy resolution, which is based on the work of the Cardozo Institute of Conflict Resolution.
Abstract: American Bankruptcy Institute Law Review 17 Am. Bankr. Inst. L. Rev., No. 1, Spring, 2009. Boston College Law Review 50 B.C. L. Rev., No. 3, May, 2009. Boston University Public Interest Law Journal 18 B.U. Pub. Int. L.J., No. 2, Spring, 2009. Cardozo Journal of Conflict Resolution 10 Cardozo J. Conflict Resol., No. 2, Spring, 2009. Cardozo Public Law, Policy, & Ethics Journal 7 Cardozo Pub. L. Pol’y & Ethics J., No. 3, Summer, 2009. Chicago Journal of International Law 10 Chi. J. Int’l L., No. 1, Summer, 2009. Colorado Journal of International Environmental Law and Policy 20 Colo. J. Int’l Envtl. L. & Pol’y, No. 2, Winter, 2009. Columbia Journal of Law & the Arts 32 Colum. J.L. & Arts, No. 3, Spring, 2009. Connecticut Public Interest Law Journal 8 Conn. Pub. Int. L.J., No. 2, Spring-Summer, 2009. Cornell Journal of Law and Public Policy 18 Cornell J.L. & Pub. Pol’y, No. 1, Fall, 2008. Cornell Law Review 94 Cornell L. Rev., No. 5, July, 2009. Creighton Law Review 42 Creighton L. Rev., No. 3, April, 2009. Criminal Law Forum 20 Crim. L. Forum, Nos. 2-3, Pp. 173-394, 2009. Delaware Journal of Corporate Law 34 Del. J. Corp. L., No. 2, Pp. 433-754, 2009. Environmental Law Reporter News & Analysis 39 Envtl. L. Rep. News & Analysis, No. 7, July, 2009. European Journal of International Law 20 Eur. J. Int’l L., No. 2, April, 2009. Family Law Quarterly 43 Fam. L.Q., No. 1, Spring, 2009. Georgetown Journal of International Law 40 Geo. J. Int’l L., No. 3, Spring, 2009. Georgetown Journal of Legal Ethics 22 Geo. J. Legal Ethics, No. 2, Spring, 2009. Golden Gate University Law Review 39 Golden Gate U. L. Rev., No. 2, Winter, 2009. Harvard Environmental Law Review 33 Harv. Envtl. L. Rev., No. 2, Pp. 297-608, 2009. International Review of Law and Economics 29 Int’l Rev. L. & Econ., No. 1, March, 2009. Journal of Environmental Law and Litigation 24 J. Envtl. L. & Litig., No. 1, Pp. 1-201, 2009. Journal of Legislation 34 J. Legis., No. 1, Pp. 1-98, 2008. 
Journal of Technology Law & Policy 14 J. Tech. L. & Pol’y, No. 1, June, 2009. Labor Lawyer 24 Lab. Law., No. 3, Winter/Spring, 2009. Michigan Journal of International Law 30 Mich. J. Int’l L., No. 3, Spring, 2009. New Criminal Law Review 12 New Crim. L. Rev., No. 2, Spring, 2009. Northern Kentucky Law Review 36 N. Ky. L. Rev., No. 4, Pp. 445-654, 2009. Ohio Northern University Law Review 35 Ohio N.U. L. Rev., No. 2, Pp. 445-886, 2009. Pace Law Review 29 Pace L. Rev., No. 3, Spring, 2009. Quinnipiac Health Law Journal 12 Quinnipiac Health L.J., No. 2, Pp. 209-332, 2008-2009. Real Property, Trust and Estate Law Journal 44 Real Prop. Tr. & Est. L.J., No. 1, Spring, 2009. Rutgers Race and the Law Review 10 Rutgers Race & L. Rev., No. 2, Pp. 441-629, 2009. San Diego Law Review 46 San Diego L. Rev., No. 2, Spring, 2009. Seton Hall Law Review 39 Seton Hall L. Rev., No. 3, Pp. 725-1102, 2009. Southern California Interdisciplinary Law Journal 18 S. Cal. Interdisc. L.J., No. 3, Spring, 2009. Stanford Environmental Law Journal 28 Stan. Envtl. L.J., No. 3, July, 2009. Tulsa Law Review 44 Tulsa L. Rev., No. 2, Winter, 2008. UMKC Law Review 77 UMKC L. Rev., No. 4, Summer, 2009. Washburn Law Journal 48 Washburn L.J., No. 3, Spring, 2009. Washington University Global Studies Law Review 8 Wash. U. Global Stud. L. Rev., No. 3, Pp.451-617, 2009. Washington University Journal of Law & Policy 29 Wash. U. J.L. & Pol’y, Pp. 1-401, 2009. Washington University Law Review 86 Wash. U. L. Rev., No. 6, Pp. 1273-1521, 2009. William Mitchell Law Review 35 Wm. Mitchell L. Rev., No. 4, Pp. 1235-1609, 2009. Yale Journal of International Law 34 Yale J. Int’l L., No. 2, Summer, 2009. Yale Journal on Regulation 26 Yale J. on Reg., No. 2, Summer, 2009.

1,336 citations

Journal ArticleDOI
31 Dec 2018
TL;DR: In this paper, the core problem of rational decision-making lies not in rationality per se, but in a lack of concept and/or insufficient attention to the behaviour of complex adaptive systems.
Abstract: Although effectiveness and efficiency are old comrades of public administrations, they still often cause unintended consequences. The relation between (absent) effectiveness and (overly emphasised) efficiency remains unresolved. The paper shows that effectiveness and efficiency are still used interchangeably, and despite the presence of negative effects, it comes as a surprise that important documents still address these terms without procedure or methodology to provide the content whereby they could be more clearly elaborated. Not only is the goal to achieve clearer meaning, but to accomplish results with the fewest possible negative effects. Alongside different management reforms, decision-makers must not lose sight of the whole; all reforms are only specific answers to inadequate previous ones, and it could be valuable to take a step back to see how/why different reforms emerge. The paper addresses the success/failure of reforms and the outcomes thereof. It claims the core problem of rational decision-making lies not in rationality per se, but in a lack of concept and/or insufficient attention to the behaviour of complex adaptive systems. With the help of complex adaptive systems, cybernetics, and combinations of effectiveness and efficiency, the paper presents the essential elements for adaptive (human) decision-making (such as diversity, variation, selection, adaptation, and integration) as the framework whereby unintended, reverse, and neutral effects can be reduced. New rules/decisions should be based on different levels of planning and adaptation, and on moving from the general to the more specific, in accordance with context specificity and unplanned, emergent things. It seems the hardest thing to address is the human character that does not (want to) recognise a situation as the situation in which some things must be spotted, evaluated, and changed if needed.

11 citations


Cites methods from "The Rule of Probabilities: A Practi..."

  • ...Numerical processing can be performed with the use of Bayes theorem (Ayres & Nalebuff, 2015; Carrier, 2012; Edwards et al., 2007; Finkelstein & Fairley, 1970; Howard, 1965) as one of the clearest mathematical examples whereby a computation of belief and/or evidence can be carried out; it is the…...


Journal ArticleDOI
TL;DR: In this paper, the authors explore the implications of asymmetry in a model of litigation with endogenous effort, and show that asymmetric stakes do not create any distortion, because the prospect of ex post (post-judgment) settlement makes the litigants behave as if the stakes are symmetric.
Abstract: Private antitrust litigation often involves a dominant firm being accused of exclusionary conduct by a smaller rival. In such cases, the defendant generally has a much larger financial stake in the outcome. We explore the implications of this asymmetry in a model of litigation with endogenous effort. Asymmetric stakes lead antitrust defendants to invest systematically more resources into litigation, causing a downward bias in the plaintiff's success probability---a distortion that carries over to ex ante settlements. Enhanced damages cannot prevent this systematic bias. We show that, in most private litigation contexts, asymmetric stakes do not create any distortion, because the prospect of ex post (post-judgment) settlement makes the litigants behave as if the stakes are symmetric. But this does not occur in antitrust, because it proscribes certain ex post settlements. We consider how courts might mitigate the distortion by altering the plaintiff's evidentiary burden.

7 citations


Cites background or methods from "The Rule of Probabilities: A Practi..."

  • ...Many papers attempt to derive various optimal legal standards by relying in part on Bayesian inference, such as Kaplan (1967), Posner (1998), Ayres and Nalebuff (2015), and Salop (2015)....


  • ...Many papers attempt to derive various optimal legal standards by relying in part on Bayesian inference, such as Kaplan (1967), Posner (1998), Ayres and Nalebuff (2015), and Salop (2015).6 In Section 4, we similarly rely on Bayesian inference in considering judicial reliance on presumptions to modify plaintiffs’ evidentiary burden....


Journal ArticleDOI
TL;DR: From the likelihood ratio, probabilistic statements about the ground truth cannot be made, and that therefore this will also not be possible from a weaker statement such as the LR exceeding some threshold value, and it is explained how to estimate that value from the obtained empirical distributions.
Abstract: Over the last years, several papers have been published that presented likelihood ratio distributions in kinship cases. These data are useful to assess the power of the discussed technology for certain types of kinship investigation, since they tell us what range of likelihood ratios we can expect given the ground truth of the relationship between the investigated individuals. However, in some publications the fraction of (in)correctly classified pairs (when based on a likelihood ratio threshold), are presented as accuracy or error rate, with the interpretational pitfall looming that these can be seen as probabilities that are generally applicable to the investigated type of kinship, on the investigated loci and with the obtained allele frequencies. In this publication we warn against such interpretations. We point out that from the likelihood ratio, probabilistic statements about the ground truth cannot be made, and that therefore this will also not be possible from a weaker statement such as the LR exceeding some threshold value. The statement that the LR exceeds a threshold in itself does has evidential value, and we will explain how to estimate that value from the obtained empirical distributions. We also explain that the concept of error does not apply to the likelihood ratio, but only to decisions. If one takes decisions based on a LR threshold, then it is possible to define error rates, but these are predictive and conditional. They tell us how often we will make wrong classifications in each group (the related pairs and the unrelated pairs) if we apply a LR threshold. They do not tell us how likely it is, once we have made a decision, that this decision is the one that we wanted to make had we known the true relationship. In order to make such a statement we need to have more information. We illustrate the points we address with examples that we have taken from the literature.

7 citations

Journal ArticleDOI
26 Mar 2021
TL;DR: This paper presents a new type of discretion, named employee discretion, and shows a promising approach towards the idea that officials or judges can decide similarly in similar matters, despite their differences in personal backgrounds, cognitive capabilities or emotional variances, commonly known as pre-existing preferences.
Abstract: This paper presents a new form of discretion that deals with subliminal (personal) preferences, which are present in discretionary decision-making (where the mental, cognitive functions of public servants, mixed with their character and "dressed" with sophistic, logically well-explained and legally allowed reasons are present). This paper presents employee discretion that could be a denominator of the public employees' will to do or not to do something, to give lesser or greater weight to something. The power to choose is hence not only possible in legal frameworks but also outside of them. So far, informal power has been viewed in the law as the illegal one, although there are many informal, especially personal elements involved in the legal decision-making that are never brought to light. This paper offers a promising approach to how decisions can be similar in similar matters, despite their differences in personal backgrounds, cognitive capabilities or emotional variances. This can be done if Bayes’ theorem is used. Probability can here be established based on how much we believe something after we have seen the evidence; this depends not only on what the evidence shows but also on our pre-existing preferences (pre-investigation, prior probability or just a prior) or weights that affect our view on evidence or how much we believed in the evidence from the start. By assessing priors, decision-makers can become more comfortable with probability and uncertainty, and at the same time, the "echo chambers" of unfounded claims can be avoided. This way, subjective preferences could be known to others, while the principles of equality and equity could be raised to a higher level. Further development of employee discretion is based on the same grounds as this type of discretion – on our personal (in)actions.

6 citations


Cites background from "The Rule of Probabilities: A Practi..."

  • ...BT is useful to assess other evidence against statistical and scientific evidence (Kadane, 2008) or to analyse DNA and other evidence (Ayres & Nalebuff, 2015; Finkelstein & Fairley, 1970; 1971), but it can serve also for a better elaboration of ED....


References

Posted Content
TL;DR: The issue of what the frequencies associated with DNA evidence do and do not mean and whether likelihood ratios belong in court and whether they may not if experts and jurors do not understand them are discussed.
Abstract: Part I of this paper discusses the issue of what the frequencies associated with DNA evidence do and do not mean. Part II describes an alternate way of presenting DNA statistics in court based on Bayesian likelihood ratios. This part also addresses issues associated with identifying the hypothesis of interest and characterizing the evidence. Part III offers a likelihood ratio and estimates its magnitude based on proficiency test data that were analyzed by Koehler, Chia, and Lindsey. Part IV identifies and rebuts arguments that have been offered for ignoring the possibility of error when computing DNA likelihood ratios. Part V considers whether likelihood ratios belong in court and concludes that they may not if experts and jurors do not understand them. Part VI reports an empirical study that compares mock jurors' reactions to various numerical representations of DNA evidence. The results of the study indicate that there are good reasons to suspect that likelihood ratio presentations will confuse jurors.

79 citations

Journal ArticleDOI
TL;DR: The widely held view that DNA evidence is weaker when it results from a database search seems to be based on a rationale that leads to absurd conclusions in some examples and is inconsistent with the principle, which enjoys substantial support, that evidential weight should be measured by likelihood ratios.
Abstract: The paper is concerned with the strength of DNA evidence when a suspect is identified via a search through a database of the DNA profiles of known individuals. Consideration of the appropriate likelihood ratio shows that in this setting the DNA evidence is (slightly) stronger than when a suspect is identified by other means, subsequently profiled, and found to match. The recommendation of the 1992 report of the US National Research Council that DNA evidence that is used to identify the suspect should not be presented at trial thus seems unnecessarily conservative. The widely held view that DNA evidence is weaker when it results from a database search seems to be based on a rationale that leads to absurd conclusions in some examples. Moreover, this view is inconsistent with the principle, which enjoys substantial support, that evidential weight should be measured by likelihood ratios. The strength of DNA evidence is shown also to be slightly increased for other forms of search procedure. While the DNA evidence is stronger after a database search, the overall case against the suspect may not be, and the problems of incorporating the DNA with the non-DNA evidence can be particularly important in such cases.

68 citations


Frequently Asked Questions (6)
Q1. What contributions have the authors mentioned in the paper "The rule of probabilities: a practical approach for applying bayes' rule to the analysis of dna evidence" ?

This Article shows that a correct application of Bayes' rule should lead fact-finders and litigants to focus on the size of two variables that influence the source probability: the probability that a non-source in the DNA database would have an alibi, and the probability that the source of the DNA is included in the database. This Article suggests practical means of estimating these two variables and argues that, as a legal matter, these parameters as well as the Bayesian posterior source probability are admissible in court.

It might, for example, be possible that African Americans, because of disparities in policing, are overrepresented in the database. 

To calculate the correct probability that there would be another couple that matches, it would be necessary to average the two numbers, weighting the 100% result by the chance the couple is innocent and the 1 − (1 − r)^T probability by the chance the couple is guilty.

With a 30% hit rate based on 100 trawls, the authors show that with 99% confidence one can reject the conclusion that the true source probability is anything below 19.7%. 
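The 19.7% figure comes from the article's own computation, which is not reproduced in this excerpt. A rough normal-approximation sketch (an assumption, not the authors' exact method) lands near the same one-sided lower bound:

```python
import math

# Rough sketch (not the article's exact computation): a one-sided 99%
# lower confidence bound for a 30% hit rate observed over 100 trawls,
# using the normal approximation to the binomial proportion.

def lower_bound(hits: int, trials: int, z: float = 2.326) -> float:
    """One-sided lower bound p_hat - z * sqrt(p_hat * (1 - p_hat) / n);
    z = 2.326 is the one-sided 99% normal critical value."""
    p_hat = hits / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat - z * se

print(round(lower_bound(30, 100), 3))  # 0.193, near the article's 19.7%
```

An exact (Clopper-Pearson style) bound would differ slightly, which may account for the gap between 19.3% here and the article's 19.7%.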

2. Empiricizing the prior probability. Courts have been particularly reluctant to introduce evidence about the prior probability of an individual defendant's being the source of forensic DNA¹³⁴ because the very process of constructing a prior probability seems inconsistent with the notion of a fair trial.

In this case, the chance that the source match is in the database is unchanged at p, but the expected number of innocent matches has gone up by r. Now the result of a trawl from a larger database is less convincing: Source Probability = Pr[S | M = 1] = p / (p + (1 − p) r (D + 1)(1 − a)). An increase in D to D + 1, holding p constant, increases the denominator and reduces the posterior source probability.
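The extraction has garbled the expression in the answer above. Reading it as Pr[S | M = 1] = p / (p + (1 − p) r (D + 1)(1 − a)), where p is the prior that the database contains the source, r the random match probability, D the database size, and a the probability that a non-source would have an alibi (this reading is an assumption), the claimed comparative static can be checked directly:

```python
# Sketch of the source-probability expression as reconstructed above:
#   Pr[S | M = 1] = p / (p + (1 - p) * r * (D + 1) * (1 - a))
# p: prior that the database contains the source
# r: random match probability; D: database size
# a: probability a non-source would have an alibi
# Parameter values below are illustrative only.

def source_probability(p: float, r: float, D: int, a: float) -> float:
    return p / (p + (1 - p) * r * (D + 1) * (1 - a))

small_db = source_probability(p=0.5, r=1e-6, D=300_000, a=0.5)
large_db = source_probability(p=0.5, r=1e-6, D=1_000_000, a=0.5)

# Holding p constant, a larger database adds more expected innocent
# matches, so the posterior source probability falls:
print(small_db > large_db)  # True
```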