Open Access Proceedings ArticleDOI

Multiparty unconditionally secure protocols

TLDR
It is shown that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest and the secrecy achieved is unconditional.
Abstract
Under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest. The secrecy achieved is unconditional. It does not rely on any assumption about computational intractability.



MULTIPARTY UNCONDITIONALLY SECURE PROTOCOLS
(Extended Abstract)

David Chaum*, Claude Crépeau†, Ivan Damgård‡

*Centre for Mathematics and Computer Science (C.W.I.), Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
†Laboratory for Computer Science, M.I.T., 545 Technology Square, Cambridge, MA 02139, USA
‡Matematisk Institut, Aarhus Universitet, Ny Munkegade, DK 8000 Aarhus C, Denmark
Abstract

Under the assumption that each pair of participants can communicate secretly, we show that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest. The secrecy achieved is unconditional. It does not rely on any assumption about computational intractability.
1. Introduction

In this paper, we show that essentially any general multiparty protocol problem can be solved, in such a way that each party's secrets are unconditionally secure, assuming the existence of authenticated secrecy channels between each pair of participants. In general, an input value xi is unconditionally secure if gaining information about xi is impossible beyond that available from the result r (so long as no more than one third of the participants cheat, in our model).
The problem of multiparty function computation is as follows: n participants P1, P2, ..., Pn agree on a multivariable function F and wish to compute and reveal to each participant r = F(x1, x2, ..., xn), where xi is a secret input provided by Pi. The goal is to preserve the maximum privacy of the xi's and to simultaneously guarantee the correctness of the common result r. (An intrinsic property of any solution to this problem is that for a non-trivial function F, the value of r reveals some information about the secret xi's.)
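To make the privacy goal concrete, here is a toy sketch in Python. The function F below (a simple sum) and the inputs are hypothetical examples of ours, not taken from the paper; the sketch only illustrates the information that r itself unavoidably reveals.

```python
# Toy illustration of the problem statement, not of the protocol.
# F and the inputs below are hypothetical examples.

def F(*xs):
    """An example non-trivial function: the sum of all secret inputs."""
    return sum(xs)

secret_inputs = [5, 12, 7]      # x1, x2, x3 held privately by P1, P2, P3
r = F(*secret_inputs)           # the common result every participant learns

# Intrinsic leakage: from r and his own input x1, P1 learns x2 + x3 = 19,
# but an unconditionally secure protocol must reveal nothing beyond that.
print(r, r - secret_inputs[0])  # -> 24 19
```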
This is stronger than the notion of cryptographic security that is often used for cryptographic protocols. Under that definition, xi is cryptographically secure if gaining information about it, other than that available from r, is thought to be computationally hard.
As explained below, in our model no more than one third of the participants may deviate from the protocol. Since our solution will tolerate up to this number of cheating participants, it is optimal.
1.1. Related Work
† Supported in part by an NSERC Postgraduate Scholarship,
supported in part by DARPA grant N00014-83-K-0125;
research conducted at the C.W.I.
‡ Research conducted at the C.W.I.
Other work has been able to provide unconditional privacy in multiparty protocols, but only for specific problems. One poker protocol of Bárány and Füredi [BF] used a model similar to ours, but was unable to tolerate active cheaters. The dining cryptographers problem of Chaum [Ch2] was also based on a similar model, and provided unconditional untraceability of messages even in the presence of active disruption.

The general problem of achieving secure multiparty function computation was first posed by Yao

[Ya], in a public key cryptographic setting. In this paper, he suggested that a general solution exists in his particular model. Goldreich, Micali and Wigderson showed in [GMW] that very general multiparty protocols (or mental games) could be achieved in a model where security is based on the notion of zero-knowledge protocols. Their solution, based on the existence of trapdoor one-way permutations, involves a "compiler" that transforms any mental game into a multiparty cryptographically secure protocol.
Chaum, Damgård and van de Graaf presented a more direct and practical solution [CDG] based on the existence of "unconditionally secure blobs" (see [BCC]) and trapdoor one-way functions. This solution was the first one to raise the hope that such protocols could be implemented in an unconditionally secure way. That paper showed how general multiparty protocols can provide unconditional privacy to one participant, and that this is the most that can be achieved in that model. Our result stems from that paper.
The major limitation of these results is due to the model they use: a setting where only public communications are possible. All these general constructions rely on trapdoor one-way functions, and therefore must assume essentially that public key cryptography is possible.
A much weaker assumption is to assert the existence of authenticated secrecy channels, i.e. a way of communicating in which the identity of the sender is known (authentication) and the data transferred is revealed only to the single person it is meant for (secrecy). Such channels are very practical and can be implemented easily: for example, by writing down messages on pieces of paper and physically handing them to the other parties. They can also be implemented using conventional cryptography (secret key systems).
Our work has drawn inspiration from and relies on a number of other earlier contributions. The Byzantine Generals problem proposed and solved by Lamport, Shostak and Pease [LPS] can be thought of as underlying our work and provided a foundation for our model. So-called secret sharing schemes proposed by Blakley [Bl] and Shamir [Sh] are basic building blocks. A very clever extension of these schemes was proposed by McEliece and Sarwate [MS] that provides some fault tolerance against active cheaters. The more specific notion of verifiable secret sharing (VSS) schemes was introduced to cryptography by Chor, Goldwasser, Micali and Awerbuch [CGMA]. The usefulness of the homomorphic structure of Shamir's secret sharing scheme was observed by Benaloh [Be], who proposed techniques very similar to ours in his own version of a VSS scheme.
Some concurrent and independent work [BGW] has been performed on the topic of our paper: during discussions with Shafi Goldwasser and Avi Wigderson, we learned that they were working with Michael Ben-Or on results similar to ours. At that time, all of us had results in a very early stage. By the time of submission to this conference, both groups had found almost identical results by quite different means.
1.2. Algorithm

The general structure of the algorithm is similar to the ones of [CDG] and [GMW] in the sense that it takes place in two steps: Commitment and Computation. First the participants enter a stage in which they commit to their inputs. This commitment is performed by means of a new non-cryptographic verifiable secret sharing (VSS) scheme. Up to now, all previous implementations of VSS schemes have relied on public key cryptography. We introduce the first scheme that does not rely on such assumptions.

If some participants try to commit to something improper or simply do not cooperate, this first phase will identify them and the remaining participants will take the appropriate action. This is the very best we can hope for: what else can you do with someone who does not want to participate? Once everyone has committed to his input, and every participant holds a share of everybody else's secret, they enter the second phase, in which they evaluate the function. The computation is performed locally by each participant on the shares he receives from the others.
Our construction satisfies the following properties:
• Unconditional Secrecy: In both stages, it is impossible for any subset of participants of size less than n/3 to gain any information about anyone else's inputs.
• Built-In Fault Tolerance: In the second phase, no such subset can prevent the honest participants from correctly evaluating the function.

Again, our solution does not depend on restricting the computing power of the participants. Earlier solutions relied on cryptographic assumptions for both secrecy of the inputs and correctness of the computation. Even if these assumptions turned out to be true, the secrecy and correctness would still be dependent on the limitations in computing power of the participants.
2. The Model

For convenience, the number of participants will be called n, which can always be written as n = 3d + a, where a = 1, 2 or 3. Our assumptions about at least 2d + a of the participants are that:
• they do not leak secret information to other participants; and
• they send the correct messages defined by the protocol.
We call a participant satisfying the above properties reliable.
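As a small worked illustration of the decomposition n = 3d + a, the following helper (our own, hypothetical naming) derives d, a and the reliability threshold 2d + a for a few values of n:

```python
# Sketch: derive d and a from n, and the resulting reliability threshold 2d + a.

def model_parameters(n: int):
    d, a = divmod(n - 1, 3)
    a += 1                          # force a into {1, 2, 3} instead of {0, 1, 2}
    assert n == 3 * d + a
    return d, a, 2 * d + a          # (max unreliable, remainder, min reliable)

for n in (4, 7, 10):
    d, a, reliable = model_parameters(n)
    print(f"n={n}: tolerates up to d={d} unreliable, needs {reliable} reliable")
# n=4 -> d=1, needs 3;  n=7 -> d=2, needs 5;  n=10 -> d=3, needs 7
```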
At the start of the protocol, it is of course not generally agreed which participants are reliable. Let P1, P2, ..., Pn be the participants. Our basic assumptions about the communication between reliable participants PA and PB are that:
• when PA sends a message to PB, nobody else can learn anything about its content;
• when PB receives a message from PA, PB can be certain that nobody but PA could have sent the message; and
• messages sent will be received in a timely manner.
Finally, we complete our model by assuming the following:
• all participants agree on the protocols to be followed; and
• participants can determine whether messages sent to them were sent before deadlines set in the protocol.
Our protocols ensure that all reliable participants obtain the correct result. It is proved constructively in [LPS], under a model like ours, that a necessary and sufficient condition for all reliable participants to agree on a message, such as the result of a protocol, is that at least 2d + a of the participants are reliable. Hence, our two-thirds assumption is optimal. A polynomial algorithm solving this problem is presented in [DS]. Their construction allows us to obtain an efficient "broadcast" channel: a means allowing any participant to make a message known to all participants, in such a way that all reliable participants will obtain the same value of the message. (Assuming a broadcast channel, moreover, would not enable us to weaken our other requirements.)
For simplicity in the following descriptions, we use the terminology of information theory, because we make the assumption that the channels are unconditionally secure. Notice, however, that in fact we get protocols as strong as the secrecy and authentication of the channels used. If the channels were not unconditionally secure, for example, the protocol would not be unconditionally secure for all participants, but its correctness would still be guaranteed.
3. Implementing Blobs using Secret Sharing

In [BCC], a fundamental protocol primitive is described: the blob. The purpose of blobs is to allow a participant PA to commit to a bit in such a way that she cannot later change her mind about the bit, but nobody else can discover it without her help. The defining properties of blobs are as follows:
(i) PA can obtain blobs representing 1 and blobs representing 0.
(ii) When presented with a blob, nobody can tell which bit it represents.
(iii) PA can open blobs by showing the other participants the single bit each represents; there is no blob she is able to "open" both as 0 and as 1.
(iv) Any other participant can at will obtain blobs representing 0 and 1. Moreover, these blobs must look exactly like the blobs obtained by PA.

To implement blobs in our model, we use a variation on Shamir's secret sharing scheme [Sh]. This variation was proposed by Blakley [Bl], who independently discovered secret sharing schemes, and it is more efficient than Shamir's original construction.
For our purposes, the scheme may be described as follows: a polynomial f of degree at most d over GF(2^k) is chosen uniformly, where k is an integer such that 2^k > n. The secret to be shared is defined for convenience as the value of f at 0. The protocol also assigns a distinct non-zero point i_B in the field to each participant PB. The secret can now be divided among the n participants by providing each PB with the value f(i_B). It is not hard to see that more than d shares completely determine f, and therefore the secret, while no Shannon information about the secret is revealed by any number of shares not exceeding d.
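The following minimal Python sketch shows the sharing and reconstruction just described, together with the homomorphic property mentioned in section 1.1. The paper works over GF(2^k) with 2^k > n; purely for brevity this sketch substitutes a prime field GF(p), which supports the same polynomial interpolation, and all function names are our own.

```python
import random

P = 2**31 - 1                                     # a prime, assumed larger than n

def share(secret, n, d):
    """Split `secret` into n shares f(1), ..., f(n) of a random polynomial f
    with deg(f) <= d and f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(d)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0; needs more than d correct shares."""
    secret = 0
    for i, y in shares:
        num = den = 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + y * num * pow(den, P - 2, P)) % P
    return secret

# Homomorphic structure: adding shares pointwise gives shares of the sum.
a_shares, b_shares = share(11, 7, 2), share(31, 7, 2)
sum_shares = [(i, (ya + yb) % P) for (i, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(sum_shares[:3]) == 42          # 3 > d shares suffice
```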
We generalize slightly by allowing blobs to represent any value in GF(2^k). Blobs are now readily achieved:
(i) To obtain a blob representing the value v, participant PA chooses uniformly a polynomial f with deg(f) <= d such that f(0) = v. She then calculates n shares as above and distributes one to each participant. Using the subprotocol described below, she convinces the other participants that she has distributed a consistent set of shares.
(ii) Since the number of unreliable participants is at most d, no collusion will gain any information in the Shannon sense about the value represented by a blob.
(iii) To open a blob, PA first broadcasts what its shares should be (the values f(i_B) for all B). Then each participant broadcasts a message stating whether he agrees with his share as broadcast by PA. If a participant does not agree, he is said to be complaining about PA. It is required that at least 2d + a of the participants do not complain. By the remarks below, this condition ensures that PA can only open a blob to reveal the single value it represents (a sketch of this opening step follows this list).
(iv) Any participant can choose a polynomial and distribute shares of it, whence it is impossible to tell from a blob who generated it.
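Here is a minimal sketch of the opening step (iii), reusing `share` and `reconstruct` from the previous sketch; the function name and the complaint bookkeeping are our own simplification of the broadcast round.

```python
def open_blob(claimed_shares, held_shares, d, a):
    """PA broadcasts what the shares should be; each participant compares this
    with the share he actually holds and complains on a mismatch."""
    complaints = [i for (i, claimed), (_, held) in zip(claimed_shares, held_shares)
                  if claimed != held]
    n = len(held_shares)
    if n - len(complaints) < 2 * d + a:           # too many complaints
        raise ValueError("opening rejected")
    return reconstruct([s for s in claimed_shares if s[0] not in complaints])

blob = share(7, 7, 2)                             # PA commits to the value 7 (n=7, d=2)
assert open_blob(blob, blob, d=2, a=1) == 7       # an honest opening succeeds
```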
By distributing inconsistent shares to reliable participants, a coalition of unreliable participants could allow PA to open a blob in two or more different ways. The following proof, which we informally call a "cut-and-choose" procedure (and which is similar to the construction of [Be]), enables us to remove this inconsistency. Let the original blob chosen by PA be β. Then the cut-and-choose works as follows:
(a) PA establishes a new, independently chosen blob δ.
(b) One of the other participants flips a coin and asks PA either
- to open δ, or
- to open δ+β, where δ+β denotes the blob defined by the sum of corresponding shares of δ and β.
(c) Steps (a) and (b) are repeated until no complaints have occurred in m consecutive rounds, or until more than d participants have complained about PA. In the first case the proof is accepted; otherwise it is rejected.
The participants take turns in executing step (b). By assumption, this means that PA will be unable to predict the coinflips at least (2d+a)/n of the time.
Note that the proof will always terminate: even if all unreliable participants work against an honest PA, they cannot enlarge the number of rounds by more than m·d.
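The loop below is a rough sketch of steps (a)-(c), again reusing `share` and `P` from the sharing sketch. It is our own simplified model: the fresh blob δ is assumed to be distributed honestly, and only the disagreement between what PA announces for β and what the participants actually hold is tracked.

```python
import random

def cut_and_choose(beta_claimed, beta_held, n, d, m=40):
    rounds_clean, complainers = 0, set()
    while rounds_clean < m and len(complainers) <= d:
        delta = share(random.randrange(P), n, d)      # (a) fresh independent blob
        if random.random() < 0.5:                     # (b) coin flip by another participant
            claimed, held = delta, delta              #     open delta itself ...
        else:                                         #     ... or open delta + beta
            claimed = [(i, (y + yb) % P) for (i, y), (_, yb) in zip(delta, beta_claimed)]
            held = [(i, (y + yb) % P) for (i, y), (_, yb) in zip(delta, beta_held)]
        new = {i for (i, c), (_, h) in zip(claimed, held) if c != h}
        complainers |= new
        rounds_clean = 0 if new else rounds_clean + 1
    return len(complainers) <= d                      # (c) accept or reject

beta = share(7, 7, 2)
assert cut_and_choose(beta, beta, n=7, d=2)           # a consistent blob is accepted
```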
When β is later opened, the shares held by complaining participants are of course ignored.
If the proof is accepted, then the following holds with probability exponentially close to 1 in m: all reliable participants who did not complain (of which there are at least d + a) have shares consistent with one polynomial of degree at most d.
Thus, with very high probability, PA cannot convincingly claim that her blob contains anything but the secret determined by the d + a valid shares guaranteed by the fact above, since otherwise the condition in step (iii) would be violated.
To see why this is satisfied, it suffices to consider the behavior of reliable participants, corresponding to the worst-case assumption that all unreliable participants will try to help PA by always agreeing with her. For any blob γ, consider a polynomial consistent with a maximal number of shares of γ, and let C(γ) be the number of remaining shares held by reliable and non-complaining participants. Thus C(γ) may vary over time. In other words, no matter how PA tries to open γ, at least C(γ) participants will complain. The case where PA created γ correctly corresponds of course to C(γ) = 0.
In any of the rounds of the subprotocol above, it is easy to see that, because the sum of δ and δ+β is just β, C(δ) + C(δ+β) ≥ C(β) must hold.
So if at any point C(β) > 0, then PA cannot go through m rounds without complaints unless she can predict roughly ((2d+a)/n)·m coinflips.
In [BCC], it is shown how one can construct, using only blobs, efficient minimum disclosure proofs for membership in a very large class of languages, including NP and BPP. Since we can construct blobs in our model, we can also perform all such proofs directly.
4. VSS and Fault Tolerant Blobs

When opening a blob, PA was to broadcast the shares she distributed in creating it. If PA is trying to prove some statement using the techniques of [BCC], the previous section's results imply that it is in PA's interest to create and broadcast the shares properly. But in other cases, communication failures or a change of heart, for example, might keep PA from ultimately broadcasting the shares. Even if the other participants were to make PA's shares public in an effort to open the blob without PA's help, they would be left with a computational problem: unreliable participants might make public false values for their shares, and finding the value represented may require searching the exponentially many subsets of shares of size 2d + a for one consistent with a polynomial of degree at most d. Even worse, if PA was already cheating when she created the blob, the majority of complainers could be reliable. In such cases, unreliable participants could choose at the time of opening between broadcasts that would leave no unique solution for the secret and broadcasts that would yield a particular value unambiguously.
This is where the secret sharing scheme becomes insufficient and a VSS is needed. To avoid the problems mentioned above, and to assist with things to be presented later, we provide for the "sharing of the shares of a blob" (as was done for similar reasons in [Ch]). Thus, to create a double blob δ, PA proceeds as follows:
(1) She creates an ordinary blob in the same way as in the previous section. This blob is called the top-level blob, and contains the secret she commits to.
(2) For each participant PB, the following is done: suppose PA sent the share s_B of her original blob to PB. Then PB creates a sub-blob, i.e. he creates a blob δ_B containing his share s_B.
(3) By the remarks in the previous section, all participants are now committed to their share of the top-level blob. A cut-and-choose procedure is now used to check that everybody has committed to the proper share: PA creates a number of additional double blobs δ_1, δ_2, ..., δ_m (for which each participant creates his own sub-blobs), and according to coin flips made by other participants, either all shares of the new double blob are made public, or the sums of corresponding shares of the new and the original double blob are broadcast. Thus in each round, every participant opens a sub-blob of his own (either a new one or a sum) to confirm his agreement or disagreement with PA on what she sent him originally. In order for the proof to be accepted, a subset consisting of at least 2d + a participants must agree with PA in all rounds. If a participant disagrees with PA at any point, then his share and sub-blob will be ignored when the original double blob is later opened.
It is easily seen that if the proof in (3) above is accepted, then the following holds with probability exponentially close to 1 in the number of coin flips:
• all sub-blobs accepted by the cut-and-choose contain a uniquely defined share of the top-level blob; and
• all these shares are consistent with one polynomial.

To open a double blob, all participants broadcast their shares of the top-level blob as well as all shares of their sub-blobs. The result of the opening is uniquely and easily determined, since in this case the effect of the sub-blobs is to prevent unreliable participants from issuing improper shares of the top-level blob: if a participant cannot confirm his share by opening his sub-blob correctly, it will just be ignored.
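A rough sketch of the "sharing of the shares" idea follows, with `share` and `reconstruct` from the earlier sketches; the data layout and names are our own, and the cut-and-choose check of step (3) is omitted for brevity.

```python
def create_double_blob(secret, n, d):
    top = share(secret, n, d)                        # (1) the top-level blob
    subs = {i: share(s, n, d) for i, s in top}       # (2) each PB re-shares his share s_B
    return top, subs

def open_double_blob(broadcast_top, subs, d):
    # A broadcast share of the top-level blob counts only if the corresponding
    # sub-blob opens to the same value; inconsistent shares are just ignored.
    confirmed = [(i, s) for i, s in broadcast_top
                 if reconstruct(subs[i][:d + 1]) == s]
    if len(confirmed) <= d:
        raise ValueError("not enough confirmed shares")
    return reconstruct(confirmed[:d + 1])

top, subs = create_double_blob(42, n=7, d=2)
assert open_double_blob(top, subs, d=2) == 42
```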
5. Multiparty Computations

This section considers general multiparty computations. These may involve secret input from each participant, and a single output which should become known to all reliable participants.

Citations
Journal ArticleDOI

L-diversity: Privacy beyond k-anonymity

TL;DR: This paper shows with two simple attacks that a k-anonymized dataset has some subtle but severe privacy problems, and proposes a novel and powerful privacy definition called ℓ-diversity, which is practical and can be implemented efficiently.
Book ChapterDOI

Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing

TL;DR: It is shown how to distribute a secret to n persons such that each person can verify that he has received correct information about the secret without talking with other persons.
Proceedings ArticleDOI

Completeness theorems for non-cryptographic fault-tolerant distributed computation

TL;DR: The authors show that every function of n inputs can be efficiently computed by a complete network of n processors in such a way that, if no faults occur, no set of fewer than n/2 players gains any additional information beyond the function value.
References
Journal ArticleDOI

How to share a secret

TL;DR: This technique enables the construction of robust key management schemes for cryptographic systems that can function securely and reliably even when misfortunes destroy half the pieces and security breaches expose all but one of the remaining pieces.
Book ChapterDOI

The Byzantine generals problem

TL;DR: A group of generals of the Byzantine army camped with their troops around an enemy city is shown to be able to agree upon a common battle plan using only oral messages if and only if more than two-thirds of the generals are loyal; a single traitor can thus confound two loyal generals.
Proceedings ArticleDOI

How to play ANY mental game

TL;DR: This work presents a polynomial-time algorithm that, given as a input the description of a game with incomplete information and any number of players, produces a protocol for playing the game that leaks no partial information, provided the majority of the players is honest.
Proceedings ArticleDOI

Protocols for secure computations

TL;DR: This paper describes three ways of solving the millionaires’ problem by use of one-way functions (i.e., functions which are easy to evaluate but hard to invert) and discusses the complexity question “How many bits need to be exchanged for the computation”.
Proceedings Article

Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation (Extended Abstract)

TL;DR: The above bounds on t, where t is the number of faulty players, are tight.
Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "Multiparty unconditionally secure protocols" ?

Under the assumption that each pair of participants can communicate secretly, the authors show that any reasonable multiparty protocol can be achieved if at least 2n/3 of the participants are honest.