Open Access Journal Article

Fault tolerance in networks of bounded degree

Cynthia Dwork, David Peleg, Nicholas Pippenger, Eli Upfal
01 Oct 1988
Vol. 17, Iss. 5, pp. 975-988
TLDR
It is believed that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed; this raises the question of whether the connectivity requirements can be avoided by slightly lowering the authors' expectations.
Abstract
Achieving processor cooperation in the presence of faults is a major problem in distributed systems. Popular paradigms such as Byzantine agreement have been studied principally in the context of a complete network. Indeed, Dolev [J. Algorithms, 3 (1982), pp. 14–30] and Hadzilacos [Issues of Fault Tolerance in Concurrent Computations, Ph.D. thesis, Harvard University, Cambridge, MA, 1984] have shown that $\Omega (t)$ connectivity is necessary if the requirement is that all nonfaulty processors decide unanimously, where t is the number of faults to be tolerated. We believe that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed. We therefore raise the question whether it is possible to avoid the connectivity requirements by slightly lowering our expectations. In many practical situations we may be willing to “lose” some correct processors and settle for cooperation between the vast majority of the processors. Thus motivated, ...



Claremont Colleges
Scholarship @ Claremont
(( 0(/30(& /&+*.*!"."- %  0(/3 %+(-.%&,

Fault Tolerance in Networks of Bounded Degree
Cynthia Dwork
IBM Almaden Research Center
David Peleg
IBM Almaden Research Center
Nicholas Pippenger
Harvey Mudd College
Eli Upfal
IBM Almaden Research Center
5&.-/& ("&.-+0$%//+3+0#+-#-""*!+,"* "..3/%" 0(/3 %+(-.%&,/ %+(-.%&,(-")+*//%.""* ",/"!#+-&* (0.&+*
&*(( 0(/30(& /&+*.*!"."- %3*0/%+-&4"!!)&*&./-/+-+# %+(-.%&,(-")+*/+-)+-"&*#+-)/&+*,("." +*/ /
. %+(-.%&, 0  (-")+*/"!0
" +))"*!"!&//&+*
3*/%&2+-'1&!"("$& %+(.&,,"*$"-*!(&,#(0(/+("-* "&*"/2+-'.+#+0*!"!"$-""+ &"/3#+-
*!0./-&(*!,,(&"!/%")/& .+0-*(+*+),0/&*$

SIAM J. COMPUT.
Vol. 17, No. 5, October 1988
© 1988 Society for Industrial and Applied Mathematics
FAULT TOLERANCE IN NETWORKS OF BOUNDED DEGREE*
CYNTHIA DWORK, DAVID PELEG, NICHOLAS PIPPENGER, AND ELI UPFAL
Abstract. Achieving processor cooperation in the presence of faults is a major problem in distributed systems. Popular paradigms such as Byzantine agreement have been studied principally in the context of a complete network. Indeed, Dolev [J. Algorithms, 3 (1982), pp. 14-30] and Hadzilacos [Issues of Fault Tolerance in Concurrent Computations, Ph.D. thesis, Harvard University, Cambridge, MA, 1984] have shown that $\Omega(t)$ connectivity is necessary if the requirement is that all nonfaulty processors decide unanimously, where $t$ is the number of faults to be tolerated. We believe that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed. We therefore raise the question whether it is possible to avoid the connectivity requirements by slightly lowering our expectations. In many practical situations we may be willing to "lose" some correct processors and settle for cooperation between the vast majority of the processors. Thus motivated, we present a general simulation technique by which vertices (processors) in almost any network of bounded degree can simulate an algorithm designed for the complete network. The simulation has the property that although some correct processors may be cut off from the majority of the network by faulty processors, the vast majority of the correct processors will be able to communicate among themselves undisturbed by the (arbitrary) behavior of the faulty nodes. We define a new paradigm for distributed computing, almost-everywhere agreement, in which we require only that almost all correct processors reach consensus. Unlike the traditional Byzantine agreement problem, almost-everywhere agreement can be solved on networks of bounded degree. Specifically, we can simulate any sufficiently resilient Byzantine agreement algorithm on a network of bounded degree using our communication scheme described above. Although we "lose" some correct processors, effectively treating them as faulty, the vast majority of correct processors decide on a common value.

Key words. fault tolerance, communication, bounded-degree network, expander graph

AMS(MOS) subject classifications. 68M10, 68M15, 68R10
1. Preliminaries. In 1982 Dolev [D] published the following damning result for distributed computing: "Byzantine agreement is achievable only if the number of faulty processors in the system is less than one-half of the connectivity of the system's network." Even in the absence of malicious failures, connectivity $t + 1$ is required to achieve agreement in the presence of $t$ faulty processors [H]. The results are viewed as damning because of the fundamental nature of the Byzantine agreement problem.

In this problem each processor begins with an initial value drawn from some domain V of possible values. At some point during the computation, during which processors repeatedly exchange messages and perform local computations, each processor must irreversibly decide on a value, subject to two conditions. No two correct processors may decide on different values, and if all correct processors begin with the same value v, then v must be the common decision value. (See [F] for a survey of related problems.) The ability to achieve this type of coordination is important in a wide range of applications, such as database management, fault-tolerant analysis of sensor readings, and coordinated control of multiple agents.
A simple corollary of the results of Dolev and Hadzilacos is that in order for a system to be able to reach Byzantine agreement in the presence of up to $t$ faulty processors, every processor must be directly connected to at least $\Omega(t)$ others. Such high connectivity, while feasible in a small system, cannot be implemented at reasonable cost in a large system.
As technology improves, increasingly large distributed systems and parallel computers will be constructed. However, in any forthcoming technology, the number of faulty processors in a given system will grow with the size of the system, whereas the degree of the interconnection network by which the processors communicate will, for all practical purposes, remain fixed.

* Received by the editors June 17, 1986; accepted for publication (in revised form) November 3, 1987.
† IBM Almaden Research Center, San Jose, California 95120-6099.
Despite these negative results, distributed systems are widely used and parallel computers are being built. This suggests that the correctness conditions for Byzantine agreement are too stringent to reflect practical situations. In particular, Byzantine agreement guarantees coordination among all correct processors, by necessarily omitting up to $t$ faulty processors. In many situations it may suffice to guarantee agreement among all but $O(t)$ processors. In other situations a simple majority consensus may suffice. Similarly, in clock synchronization, or in firing squad synchronization, it may suffice for a vast majority of the correct processors to be synchronized.
In the traditional paradigm for distributed computing described above, the correctness conditions describe the states of all nonfaulty processors. In this paper we propose a new paradigm for fault-tolerant computing in which correctness conditions are relaxed by "giving up for lost" those correct processors whose communication paths to the remainder of the network are excessively corrupted by faulty processors. Such a processor is called poor. While any network of bounded degree must contain some poor processors, in this paper we show that their number can often be kept quite small, even in networks of constant degree. Further, we argue that this type of cooperation may fit well most applications of, say, Byzantine agreement.
All known algorithms guarantee only that if at most $f \le t < n/3$ processors fail, then at least $k \ge n - f$ processors will mutually agree on a value. Our results show that we can eliminate the costly connectivity condition requiring $\Omega(nt)$ edges by employing an appropriately chosen bounded-degree network of $n + O(t)$ processors and still guarantee agreement among $n$ correct processors.
Our paradigm admits deterministic solutions in networks of small constant degree to such fundamental problems as atomic broadcast, Byzantine agreement, and clock synchronization. We present a general simulation technique by which, for almost any regular graph G, the vertices (processors) of G can simulate an algorithm designed for a complete network in such a way that the number of poor processors in G is small. The crux of the simulation is a transmission scheme for simulating the point-to-point transmissions of the complete network by sending messages along several paths of G in such a way that there will always be a large set of correct processors capable of communicating among themselves as if they comprise a fully connected subnetwork, independent of the behavior of the faulty processors.
For consensus problems we can often do better than in the general simulation by employing a compression procedure based on the existence of compressor graphs [P]. This procedure is iterative and local in nature, and cannot by itself guarantee agreement. However, it can be used to "sharpen" dichotomies, in that if a sufficiently large majority (e.g., all but $O(t \log t)$) of the correct processors have the same value, then the procedure converges and strengthens this majority (e.g., to all but $t + 1$).
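The compression procedure itself is not given in this excerpt. As a loose, hypothetical illustration of what an "iterative and local" majority-strengthening rule can look like (the function and argument names below are ours, not the paper's), consider synchronous rounds in which every node adopts the value held by a majority of a fixed set of designated neighbors; on a suitable graph such re-voting can only reinforce a sufficiently large initial majority.

```python
# Hypothetical sketch only -- not the paper's compression procedure.
# Each node repeatedly adopts the value held by a majority of a fixed set of
# designated neighbours, run for a fixed number of synchronous rounds.
from collections import Counter

def local_majority_rounds(values, designated, rounds):
    """values: dict node -> current value.
    designated: dict node -> list of neighbouring nodes consulted by that node.
    Returns the values held after the given number of rounds."""
    for _ in range(rounds):
        new_values = {}
        for v, nbrs in designated.items():
            ballots = Counter(values[u] for u in nbrs)
            new_values[v] = ballots.most_common(1)[0][0]  # adopt the local majority
        values = new_values
    return values
```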
Our model of computation is identical to that commonly used in the Byzantine literature. Specifically, each processor can be thought of as a (possibly infinite) state machine with special registers for communication with the outside world. The processors communicate by means of point-to-point links, which are assumed to be completely reliable. The entire system is synchronous, and can be thought of as controlled by a common clock. At each pulse of the common clock a processor may send a message on each of its incident communication links (possibly different messages on different links). Messages sent at one clock pulse are delivered before the next pulse. For each of our transmission schemes there is a specific lower bound $b$ on the number of clock pulses needed to simulate one complete round of message exchange in the simulated network. For simplicity we assume that the common clock sends a "super-pulse" every $b$ rounds. A processor simulates round $r$ of the original algorithm at the $r$th super-pulse.
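A minimal sketch of this timing discipline, assuming a processor object whose exchange_step and simulate_round methods stand in for the (unspecified here) transmission scheme and the original algorithm:

```python
class SimulatedProcessor:
    """Illustrative stub of the super-pulse discipline: b clock pulses of the
    transmission scheme make up one super-pulse, at which point round r of the
    algorithm designed for the complete network is simulated."""

    def __init__(self, b):
        self.b = b  # clock pulses needed to simulate one round of message exchange

    def exchange_step(self, phase):
        pass  # one clock pulse of the underlying transmission scheme (placeholder)

    def simulate_round(self, r):
        pass  # one round of the original, complete-network algorithm (placeholder)

    def run(self, total_pulses):
        for pulse in range(total_pulses):
            self.exchange_step(pulse % self.b)
            if pulse % self.b == self.b - 1:        # the r-th super-pulse
                self.simulate_round(pulse // self.b)
```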
Since we cannot hope to solve the Byzantine agreement problem exactly on networks of bounded degree, we introduce the notion of almost-everywhere agreement (denoted a.e. agreement), in which all but a small number of the correct processors must choose a common decision value. More precisely, a protocol P is said to achieve t-resilient X agreement, where X is any term, if in every execution of P in which at most $t$ processors fail, all but X of the correct processors eventually decide on a common value. Moreover, if all the correct processors share the same initial value, then that must be the value chosen. Note that the traditional Byzantine agreement problem is just 0 agreement. A protocol solves a.e. agreement if it solves X agreement for some X such that $X/(n-t) \to 0$ as $n \to \infty$.
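To make the definition concrete, here is a small, purely illustrative predicate (not from the paper) that checks whether the outcome of a single execution satisfies the two conditions of X agreement over the correct processors:

```python
from collections import Counter

def achieves_x_agreement(initial, decisions, correct, x):
    """initial, decisions: dict node -> value for the correct processors.
    correct: set of correct processors; x: number of correct processors
    we are allowed to 'lose'.  Checks one execution against the definition."""
    votes = Counter(decisions[p] for p in correct)
    value, count = votes.most_common(1)[0]
    # all but at most x of the correct processors decide on a common value
    if len(correct) - count > x:
        return False
    # if all correct processors share the same initial value, that value must be chosen
    starts = {initial[p] for p in correct}
    if len(starts) == 1 and value != next(iter(starts)):
        return False
    return True
```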
Our first result applies only to fail-stop, omission, or authenticated Byzantine faults.
THEOREM 1. For all $r \ge 5$ there exists a constant $\epsilon = \epsilon(r)$ such that for all $t < \epsilon n$, almost all r-regular graphs (i.e., all but a vanishingly small fraction of such graphs) admit a t-resilient algorithm for O(t) agreement.
The remaining results apply to unauthenticated Byzantine failures.
THEOREM 2. For all $r \ge 5$, almost all r-regular graphs admit a t-resilient algorithm for O(t) agreement, where $t \le n^{1-\epsilon}$ for some constant $\epsilon = \epsilon(r)$, where $\epsilon(r) \to 0$ as $r \to \infty$.
The next theorems describe explicit graphs for which the set of poor processors is small.
THEOREM 3. The n-node butterfly network (degree 4; see § 2.3 for definition) admits a t-resilient O(t log t)-agreement algorithm for $t \le cn/\log n$, for some constant c.
The result of Theorem 3 can be improved for a family of networks obtained by superimposing a compressor of degree 5 on a butterfly network.
THEOREM 4. There exists a constant c and a network of degree 9 that admits a t-resilient O(t)-agreement algorithm for $t \le cn/\log n$.
In the case of unauthenticated Byzantine failures, we achieve O(t) agreement only for $t \le cn/\log n$. If $t > O(n/\log n)$ then it is easy to show that the number of poor processors is linear in n. The existence problem for an O(t)-agreement algorithm in this case remains open. However, we solve this problem on graphs of unbounded but still relatively small degree.
THEOREM 5. For every $0 < \epsilon < 1$ there exist a constant $c = c(\epsilon)$, graphs G of degree $O(n^{\epsilon})$, and t-resilient O(t)-agreement algorithms for $t \le \epsilon n$.
Finally, we present a purely combinatorial characterization of networks which admit p(t) agreement for any function p. When $p(t) = 0$, our characterization coincides exactly with the $(2t + 1)$-connectivity requirement for the traditional Byzantine agreement cited above [D].
2. Simulation results. In § 2.1 we describe a general strategy for simulating on one network any algorithm designed for another network, describing what we mean by "simulation." In § 2.2 we discuss a general scheme for implementing our strategy, and in § 2.3 we make all of this more concrete by presenting the simulation of a complete network by a butterfly network. In § 2.4 we show that our general scheme can be implemented on almost all regular graphs of bounded degree. Finally, § 2.5 briefly discusses our results under more restrictive fault models.
2.1. The general simulation. For simplicity, we take the simulated network to be completely connected. Let A be an algorithm designed for a fully connected network H. Consider an arbitrary network G over the same set of vertices (processors) as in H, and suppose we wish to simulate A on G. We need only specify the simulation of communication between processors; a direct message from a processor u to its neighbor v in H can be simulated in G by sending the message from u to v through various paths, and supplying v with a method for determining the correct value of the message, e.g., by taking the value appearing in the majority of the paths. Taken together, the particular choice of paths and the supplied decision method constitute a transmission scheme. Of course, even if u and v are correct processors, the faulty processors may be so placed that all or most of the paths from u to v are corrupted. Thus, even if a processor is correct, it may be unable to properly communicate with the other processors.
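A minimal sketch of the decision rule just described, under the assumption that the paths have already been fixed and that a route function models forwarding a value along one path (a faulty node on a path may deliver an arbitrary value):

```python
from collections import Counter

def transmit(message, paths, route):
    """Simulate one point-to-point transmission of the complete network:
    send `message` along each of the fixed paths of G and let the receiver
    decide on the value appearing on a majority of the paths.
    paths: list of paths (each a list of nodes);
    route(message, path) -> the value that arrives at the far end of that path."""
    received = [route(message, path) for path in paths]
    return Counter(received).most_common(1)[0][0]  # receiver's decision
```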
Let a transmission scheme for G be fixed. Let T be a subset of the vertices of G (think of T as the set of faulty processors). A pair of nodes (u, v) of G is successful with respect to T if, whenever all the processors not in T follow the transmission scheme correctly, the simulation of a message transmission from u to v always succeeds (i.e., v decides correctly on the value sent by u). Let POOR(G, T) be a minimal set of correct nodes such that every pair of nodes $u, v \notin T \cup \mathrm{POOR}(G, T)$ is successful with respect to T. (Note that this set need not be unique.) Let $p(G, t) = \max\{|\mathrm{POOR}(G, T)| : T \subseteq V, |T| \le t\}$. As we will show in Theorem 2.1, there is a p(G, t)-agreement algorithm resilient to $t$ failures for every graph G and suitable choice of t. We are therefore interested in finding graphs G for which p(G, t) is small. Such graphs are the subject of §§ 2.3 and 2.4.
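For intuition, p(G, t) can in principle be evaluated directly from the definition on very small instances. The exponential sketch below is illustrative only; it assumes a successful(u, v, T) test for the chosen transmission scheme is available and reads "minimal" as "of minimum size":

```python
from itertools import combinations

def p_of_g(vertices, t, successful):
    """Brute-force evaluation of p(G, t) = max over |T| <= t of |POOR(G, T)|.
    successful(u, v, T) must report whether the simulated transmission from u
    to v always succeeds when the faulty set is T.  Exponential; sketch only."""
    worst = 0
    for size in range(t + 1):
        for faulty in combinations(vertices, size):
            T = set(faulty)
            correct = [v for v in vertices if v not in T]
            poor_size = len(correct)  # fallback: give up every correct node
            for k in range(len(correct) + 1):
                found = False
                for poor in combinations(correct, k):
                    keep = [v for v in correct if v not in set(poor)]
                    if all(successful(u, v, T)
                           for u in keep for v in keep if u != v):
                        poor_size, found = k, True
                        break
                if found:
                    break
            worst = max(worst, poor_size)
    return worst
```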
THEOREM 2.1. Let A be an algorithm for the traditional Byzantine agreement problem designed for network H, let G be a graph with the same number of vertices as H, and let TS be a transmission scheme for simulating on G message transmissions in H. Let A(TS) be the simulation of A on G using the transmission scheme TS to simulate messages sent on H. For every t, if A is guaranteed to work correctly on H in the presence of at most $t + p(G, t)$ faults, then A(TS) achieves p(G, t) agreement on G in the presence of up to $t$ faults.
Proof. Let T be a set of faulty processors in G. A(TS) simulates the execution of A on H by simulating the processors in a one-one fashion. By definition, the processors not in $T \cup \mathrm{POOR}(G, T)$ can communicate among themselves as if they comprised a fully connected subnetwork, so the simulated communication among this set of processors is successful. The behavior of the correct processors in POOR(G, T) may appear to be faulty. These are the processors we give up for lost. Since A is guaranteed to work correctly even in the presence of $t + p(G, t) \ge |T \cup \mathrm{POOR}(G, T)|$ failures, we are done. □
In order to use the transmission schemes described here, the processors must have some knowledge of the topology of the system. The amount of knowledge needed, and how this quantity depends on the types of faults considered, are subjects for further research.
2.2. A class of transmission schemes. We now describe in more detail a specific class of transmission schemes, called three-phase transmission schemes. Let G be any network in which we may specify the following sets. For every node v we specify sets of processors $\Gamma_{\mathrm{in}}(v), \Gamma_{\mathrm{out}}(v) \subseteq V$, each of fixed (but not necessarily constant) size s. For each node w in $\Gamma_{\mathrm{in}}(v)$ ($\Gamma_{\mathrm{out}}(v)$), a path from w to v (v to w) is specified. In addition, for each ordered pair of nodes (u, v) we specify s vertex-disjoint paths from $\Gamma_{\mathrm{out}}(u)$ to $\Gamma_{\mathrm{in}}(v)$. The transmission of a message x from u to v consists of three phases. In the first phase the message is broadcast from u to every node in $\Gamma_{\mathrm{out}}(u)$ through the specified paths. Thus, at the end of the first phase a copy of x is received by all nodes of $\Gamma_{\mathrm{out}}(u)$.
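As a sketch of the static data such a scheme fixes in advance, together with the first phase as just described (the remaining phases are cut off in this excerpt; the class layout and the route helper are illustrative assumptions, not the paper's notation):

```python
from dataclasses import dataclass

@dataclass
class ThreePhaseScheme:
    """Illustrative container for the data of a three-phase transmission scheme.
    gamma_in, gamma_out: node v -> the s designated processors Gamma_in(v), Gamma_out(v).
    in_paths:  (w, v) -> the specified path from w in Gamma_in(v) to v.
    out_paths: (v, w) -> the specified path from v to w in Gamma_out(v).
    cross_paths: (u, v) -> the s vertex-disjoint paths from Gamma_out(u) to Gamma_in(v)."""
    gamma_in: dict
    gamma_out: dict
    in_paths: dict
    out_paths: dict
    cross_paths: dict

    def phase_one(self, u, message, route):
        """Phase 1: broadcast the message from u along the specified paths so that
        every node of Gamma_out(u) receives a copy; route(message, path) models
        what arrives at the end of one path (faulty nodes on it may alter the value)."""
        return {w: route(message, self.out_paths[(u, w)]) for w in self.gamma_out[u]}
```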

Citations
Journal ArticleDOI

Perfectly secure message transmission

TL;DR: These are the first algorithms for secure communication in a general network to simultaneously achieve the three goals of perfect secrecy, perfect resiliency, and worst-case time linear in the diameter of the network.
Journal ArticleDOI

Local majorities, coalitions and monopolies in graphs: a review

TL;DR: This paper provides an overview of recent developments concerning the process of local majority voting in graphs, and its basic properties, from graph theoretic and algorithmic standpoints.
Book

Design and Analysis of Distributed Algorithms

TL;DR: The aim of this monograph is to provide a history of distributed computing in the context of elections, as well as some of the techniques used to design and implement these networks.
Proceedings ArticleDOI

Perfectly secure message transmission

TL;DR: These are the first algorithms for secure communication in a general network to achieve simultaneously the goals of perfect secrecy, perfect resiliency, and a worst case time which is linear in the diameter of the network.
Journal ArticleDOI

Dynamic Monopolies of Constant Size

TL;DR: In this article, it was shown that a set $W_0$ of vertices is a dynamic monopoly, or dynamo, if, starting the game with the vertices of $W_0$ colored white, the entire system is white after a finite number of rounds.
References
Book ChapterDOI

The Byzantine generals problem

TL;DR: In this article, a group of generals of the Byzantine army camped with their troops around an enemy city is shown to be able to agree upon a common battle plan using only oral messages if and only if more than two-thirds of the generals are loyal; thus a single traitor can confound two loyal generals.
Journal ArticleDOI

Reaching Agreement in the Presence of Faults

TL;DR: It is shown that the problem is solvable for, and only for, n ≥ 3m + 1, where m is the number of faulty processors and n the total number of processors; if unforgeable signed messages are allowed, the problem is solvable for any number of faults, an assumption that can be approximated in practice using cryptographic methods.
Journal ArticleDOI

The Byzantine Generals strike again

TL;DR: The results obtained in the present paper prove that unanimity is achievable in any distributed system if and only if the number of faulty processors in the system is less than one-third of the total number of processors and less than one-half of the connectivity of the system's network.
Journal ArticleDOI

Explicit constructions of linear-sized superconcentrators

TL;DR: A direct way to construct a family of linear concentrators using Pinsker's linear concentrators, disproving a conjecture that superconcentrators require more than a linear number of edges.
Frequently Asked Questions (5)
Q1. What have the authors contributed in "Fault tolerance in networks of bounded degree"?

Popular paradigms such as Byzantine agreement have been studied principally in the context of a complete network. Thus motivated, the authors present a general simulation technique by which vertices (processors) in almost any network of bounded degree can simulate an algorithm designed for the complete network. Specifically, the authors can simulate any sufficiently resilient Byzantine agreement algorithm on a network of bounded degree using their communication scheme described above. In 1982 Dolev [D] published the following damning result for distributed computing: "Byzantine agreement is achievable only if the number of faulty processors in the system is less than one-half of the connectivity of the system's network." The authors believe that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed.

THEOREM 5. For every $0 < \epsilon < 1$ there exist a constant $c = c(\epsilon)$, graphs G of degree $O(n^{\epsilon})$, and t-resilient O(t)-agreement algorithms for $t \le \epsilon n$.

Keeping p fixed and looking at all possible senders u whose paths to $\Gamma_{\mathrm{out}}(u)$ contain p, the authors see that p can block at most 1/2 of the outbound paths for its $2^i$ "distance i" neighbors.

THEOREM 1. For all $r \ge 5$ there exists a constant $\epsilon = \epsilon(r)$ such that for all $t < \epsilon n$, almost all r-regular graphs (i.e., all but a vanishingly small fraction of such graphs) admit a t-resilient algorithm for O(t) agreement.

Thus p can corrupt at most $r(r-1)^{i+3}$ elements in sets $\Gamma(u)$ for vertices u at distance $i$ from p. Summing up for all distances $\le d$ and all faulty processors, the authors see that the faulty processors can corrupt at most $t d r(r-1)^{d+3}$ paths in total.