Leakage-Resilient Cryptography
Stefan Dziembowski
University of Rome
La Sapienza
Krzysztof Pietrzak
CWI Amsterdam
Abstract
We construct a stream-cipher S whose implementation is
secure even if a bounded amount of arbitrary (adversarially
chosen) information on the internal state of S is leaked dur-
ing computation. This captures all possible side-channel
attacks on S where the amount of information leaked in a
given period is bounded, but overall can be arbitrarily large.
The only other assumption we make on the implementation
of S is that only data that is accessed during computation
leaks information.
The stream-cipher S generates its output in chunks K_1, K_2, . . ., and arbitrary but bounded information leakage is modeled by allowing the adversary to adaptively choose a function f_ℓ : {0,1}^* → {0,1}^λ before K_ℓ is computed; she then gets f_ℓ(τ_ℓ), where τ_ℓ is the internal state of S that is accessed during the computation of K_ℓ. One notion of security we prove for S is that K_ℓ is indistinguishable from random when given K_1, . . . , K_{ℓ−1}, f_1(τ_1), . . . , f_{ℓ−1}(τ_{ℓ−1}) and also the complete internal state of S after K_ℓ has been computed (i.e. S is forward-secure).
The construction is based on alternating extraction (used in the intrusion-resilient secret-sharing scheme from FOCS'07). We move this concept to the computational setting by proving a lemma that states that the output of any PRG has high HILL pseudoentropy (i.e. is indistinguishable from some distribution with high min-entropy) even if arbitrary information about the seed is leaked. The amount of leakage λ that we can tolerate in each step depends on the strength of the underlying PRG: it is at least logarithmic, but can be as large as a constant fraction of the internal state of S if the PRG is exponentially hard.
1. Introduction
When analyzing the security of a cryptosystem, we can
either think of the system as a mathematical object, exactly
specifying what kind of access to the functionality a poten-
tial adversary has, or try to analyze the security of an actual
implementation. Traditionally, cryptographers have mostly
considered the former view and analyzed the security of
the mathematical object, and it is generally believed that
our current knowledge of cryptography suffices to construct
schemes that, when modeled in this way, are extremely se-
cure. On the theoretical side, we know how to construct secure primitives under quite weak complexity-theoretic assumptions; for example, secret-key encryption can be based on any one-way function [17]. Also from the practical perspective, the currently used constructions have very strong security properties: e.g. after 30 years of intensive cryptanalytic effort, the most practical attack on the DES cipher is still exhaustive key search.
Side-Channel Attacks. The picture is much more
gloomy when the security of real-life implementations is
considered. This is because, when considering an imple-
mentation of a cryptosystem, one must take into account the
possibility of side-channels, which refers to leakage of any
kind of information from the cryptosystem during its execu-
tion which cannot be efficiently derived from access to the
mathematical object alone. In the last decade many attacks
against cryptosystems (still assumed to be sound as math-
ematical objects) have been found exploiting side-channels
like running-time [22], electromagnetic radiation [30, 15],
power consumption [23], fault detection [4, 3] and many
more (see e.g. [29, 27]).
A typical countermeasure against this type of attack is
to design hardware that minimizes the leakage of secret data
(e.g. by shielding any electromagnetic emissions), or to look
for an algorithm-specific solution, for example by masking
intermediate variables using randomization (see [27] for a
list of relevant papers). The problem with hardware-based
solutions is that protection against all possible types of leak-
age is very hard to achieve [1], if not impossible. On the
other hand, most algorithm-specific methods proposed so
far are only heuristic and do not offer any formal security
proof (we mention some exceptions in Sect. 1.1). Moreover,
they are ad-hoc in the sense that they protect only against
some specific attacks that are known at the moment, instead
of providing security against a large well-defined class of
attacks. This raises the following natural question: is there

a systematic method of designing cryptographic schemes so
that already their mathematical description guarantees that
they are provably-secure, even if they are implemented on
hardware that may be subject to a side-channel attack be-
longing to a large well-defined class of attacks? Ideally,
one would like to develop a theory that (1) provides precise
definition of such a class of attacks, and (2) shows how to
construct systems that are secure in this model (under the
assumptions that are as weak as possible). This should be
viewed as moving the task of constructing cryptosystems
secure against side-channel attacks from the realm of engi-
neering or security research to cryptography, which over the last three decades has been extremely successful in defining security models and constructing cryptosystems that are provably secure in those models.
General Model for Leakage Resilience. We propose a
model for cryptographic computation where the class of
possible side-channel attacks is extremely broad, yet sim-
ple and natural. Models similar to ours have been pro-
posed before, in particular Micali and Reyzin [25] explicitly
stated the “only computation leaks” assumption we will use.
The only other assumption on the implementation we make
is the (trivially necessary) requirement that the amount of
leakage in each round is bounded. This approach is inspired
by the bounded-storage and bounded-retrieval models and has, to the best of our knowledge, never been used in this context. We stress, however, that the main contribution of this
paper is not the definition of the model, but the construc-
tion of an actual cryptosystem (a stream-cipher) which is
provably secure in this model. Details follow.
Consider a cryptosystem CS, let M denote its memory and M_0 denote the data initially on M (i.e. the secret key). Clearly the most general side-channel attack against a cryptosystem CS(M_0) is one in which the adversary can choose any polynomial-time computable leakage function f and retrieve f(M_0) from the cryptographic machine.^1 Of course no security is achievable in this setting, as by defining f(M_0) = M_0 the adversary learns the complete random key. Thus a necessary restriction we must make on f is that its output range is bounded to {0,1}^λ where λ ≪ |M_0|.
We assume that the adversary can apply this attack many times throughout the lifetime of the device. Technically, this will be done by dividing the execution of the algorithm implementing CS into rounds, and allowing the adversary to evaluate a function on the internal state of CS in each of those rounds (let f_j denote the leakage function that she chooses in the jth round, for j = 1, 2, . . .). In particular, in this paper we construct a stream cipher which in each round outputs a few bits.
^1 Without loss of generality we can assume that the leakage function is applied only to M_0, since all the other internal variables used in computation are deterministic functions of M_0.
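To make the attack interface concrete, here is a minimal Python sketch of a single bounded-leakage query; the function and parameter names (leak, LAMBDA) are illustrative and not from the paper, and the only property enforced is that the adversarially chosen f has an output range of λ bits.

# Illustrative sketch (hypothetical names) of one bounded-leakage query: the
# adversary supplies an arbitrary function f, and only the lambda-bit bound on
# its output range is enforced.
from typing import Callable

LAMBDA = 16  # lambda: bits of leakage allowed per query (example value)

def leak(state: bytes, f: Callable[[bytes], int], lam: int = LAMBDA) -> int:
    """Return f(state), enforcing that the output lies in {0,1}^lam."""
    out = f(state)
    if not 0 <= out < 2 ** lam:
        raise ValueError("leakage function must output at most lam bits")
    return out

# Example adversary: leak the first lam bits of the state.
first_bits = lambda s: int.from_bytes(s, "big") >> (8 * len(s) - LAMBDA)
print(leak(b"\xde\xad\xbe\xef\x01\x02\x03\x04", first_bits))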
Let q be the number of rounds we want our cryptosystem CS to run, and let M_0 be the secret key that is used in the scheme. At first sight one may think that to hope for any security we would need to assume that q · λ < |M_0|, as otherwise the adversary can learn the entire M_0 by just retrieving λ different bits of it in every round. This trivial attack does not work any more if we consider cryptosystems which occasionally update their state. For this let M_j denote the state of CS after round j.
Unfortunately, no security is possible even if we allow CS to update its state (i.e. when M_j is not necessarily equal to M_{j+1}) if we allow any (poly-time computable) f_j: to see this, let t = ⌈|M|/λ⌉ and consider f_j, j ≤ t, where each f_j outputs different λ bits of M_t (note that the function f_j, j ≤ t, can compute the future state M_t from the current state M_j). After the tth round the adversary has learned the complete state M_t, and no security is possible beyond this point. We call this the key-precomputation attack.
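For illustration, the following sketch plays out the key-precomputation attack against a toy key-evolution scheme; the update function g and all sizes are hypothetical stand-ins, chosen only to show that t = ⌈|M|/λ⌉ rounds of λ-bit leakage suffice to reconstruct M_t.

# Illustrative key-precomputation attack (all names and sizes are hypothetical):
# in round j the leakage function first computes the future state M_t from M_j
# and then outputs a fresh lambda-bit chunk of it, so after t = ceil(|M|/lambda)
# rounds the adversary holds all of M_t.
import hashlib, math

LAMBDA = 16        # bits leaked per round (example value)
STATE_BYTES = 16   # |M| = 128 bits (example value)

def g(m: bytes) -> bytes:
    """Stand-in deterministic state update M_j -> M_{j+1}."""
    return hashlib.sha256(m).digest()[:STATE_BYTES]

def evolve(m: bytes, steps: int) -> bytes:
    for _ in range(steps):
        m = g(m)
    return m

def f_j(m_j: bytes, j: int, t: int) -> int:
    """Leakage function of round j: precompute M_t and output its j-th chunk."""
    m_t = evolve(m_j, t - j)
    bits = int.from_bytes(m_t, "big")
    total = 8 * STATE_BYTES
    return (bits >> (total - (j + 1) * LAMBDA)) & ((1 << LAMBDA) - 1)

M0 = b"\x01" * STATE_BYTES
t = math.ceil(8 * STATE_BYTES / LAMBDA)
chunks, m = [], M0
for j in range(t):
    chunks.append(f_j(m, j, t))   # adversary's lambda-bit leakage in round j
    m = g(m)                      # the cryptosystem updates its state
recovered = 0
for c in chunks:
    recovered = (recovered << LAMBDA) | c
assert recovered == int.from_bytes(evolve(M0, t), "big")
print("adversary reconstructed the full state M_t after", t, "rounds")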
Hence, we have to somehow restrict the leakage function if we want security even when the total amount of leaked information is (much) larger than the internal state. The restriction that we will use is that in each round, the leakage function f_j only gets as input the part of the state M_j that is actually accessed in the jth round by CS. This translates into a requirement on the implementation: we assume that only computation leaks information, and the "untouched memory cells" are completely secure. As illustrated in Fig. 1, in our construction of a stream-cipher, M will consist of just three parts M_0, M_1 and O (where O is the output tape), and in the jth round CS (and thus the leakage function f_j) will access only M_{j mod 2} and O. We give the leakage function (in the jth round) access to the complete M_{j mod 2}, O, even if the computation of CS only accesses a small part of it. Thus in an actual implementation, one must only ensure that in the jth round M_{j+1 mod 2} does not leak. This requirement should be easily realizable by an actual implementation having M_0 and M_1 use different static memory cells (here "static" refers to the fact that this memory needs not be refreshed, and thus should not leak any kind of radiation when not used).^2
Let us mention that the above restriction is not the only natural restriction that one could make on the leakage functions to avoid the key-precomputation attack. One other option might be to allow the state to be refreshed using external randomness. This option might be difficult to handle for many cryptosystems (including ciphers) for several reasons. For example, one must make sure that all legitimate parties get the randomness in each refresh cycle, which means that parties have to be often "online" to keep their key valid, even if they almost never actually use it. Another option is to require that the leakage function is in some very weak complexity class not including the function used for key evolution.^3
^2 Let us mention that this model also covers the case where (the not accessed) M_{j+1 mod 2} does leak in round j, as long as this leakage is independent of the leakage of (the accessed) M_{j mod 2} (i.e. when we consider an adversary Q′ who can in round j choose two functions f′_j and f″_j and then gets f′_j(M_{j mod 2}) and also f″_j(M_{j+1 mod 2})). The reason is that we can simulate Q′ by an adversary Q who just chooses one function f_j which outputs f′_j(M_{j mod 2}) and also f″_{j+1}(M_{j mod 2}) (thus Q in round j simply precomputes the information that Q′ will learn in round j+1 on the non-leaking part). Note that it is not a problem that Q′ might compute f″_{j+1} adaptively as a function of the information leaked in round j, as the leakage function f_j has this information too, and thus can compute the f″_{j+1} that Q′ would have chosen.
Leakage Resilient Stream-Cipher. The main contribution of this paper is the construction of a stream cipher S which is provably secure in the model described above. Let τ_ℓ denote the data on S's memory which is accessed in the ℓth round, and let K_ℓ denote the output written by S on its output tape O in the ℓth round.
The classical security notion for stream ciphers implies that one cannot distinguish K_ℓ from a random string given K_1, . . . , K_{ℓ−1}; of course our construction satisfies this notion. But we prove much more, namely that K_ℓ is indistinguishable from random even when not only given K_0, . . . , K_{ℓ−1}, but additionally Λ_1, . . . , Λ_{ℓ−1}, where Λ_j = f_j(τ_j) and each f_j is a function with range {0,1}^λ chosen adaptively (as a function of K_1, . . . , K_{j−1}, Λ_1, . . . , Λ_{j−1}) by an adversary. If the adversary also gets Λ_ℓ, we cannot hope that K_ℓ is indistinguishable from random any more, as f_ℓ could for example simply output the first λ bits of K_ℓ. The best we can hope for in this case is that K_ℓ is unpredictable (or equivalently, has high HILL-pseudoentropy); in the full version of this paper [14] we will show that for our construction this is indeed the case.
Forward Security. In many settings, it is not enough that K_ℓ is indistinguishable (or unpredictable) given the view of the adversary after round ℓ − 1 as just described; it should stay indistinguishable even if S leaks some information in the future. In our construction such "forward-security" comes up naturally, as the key K_ℓ is almost independent (in a computational sense) of the state of S after K_ℓ was output. Precise security definitions are given in Sect. 2.
Our Construction. The starting point of our construction is the concept of alternating extraction previously used in the intrusion-resilient secret-sharing scheme from [13]. We move this concept to the computational setting by proving a lemma that states that the output of any PRG has high HILL pseudoentropy (i.e. is indistinguishable from some distribution with high min-entropy) even if arbitrary information about the seed is leaked. Our construction can be instantiated with any pseudorandom generator, and the amount of leakage λ that we can tolerate in each step depends on the strength of the underlying PRG: it is at least logarithmic, but can be as large as a constant fraction of the internal state of S if the PRG is exponentially secure. The impatient reader might want to skip ahead to Section 2.2 and have a look at the actual definition.
^3 Interestingly, that would probably be the first case of a real-life cryptographic application where it makes sense to assume that the computational power of the adversary (in some parts of the attack scenario) is smaller than the computational power needed to execute the scheme.
On (Non-)Uniformity. Throughout, we always consider non-uniform adversaries.^4 In particular, our stream-cipher is secure against non-uniform adversaries, and we require the PRG used in the construction to be secure against non-uniform adversaries. The only step in the security proof where it matters that we are in a non-uniform setting is in Section 5, where we use a theorem due to Barak et al. [2] which shows that two notions of pseudoentropy (called HILL and metric-type) are equivalent for circuits. In [2] this equivalence is also proved in a uniform setting, and one could use this to get a stream-cipher secure against uniform adversaries from any PRG secure against uniform adversaries. We will not do so, as for one thing the non-uniform setting is the more interesting one in our context, and moreover the exact security we could get in the uniform setting is much worse (due to the security loss in the reduction from [2] in the uniform setting).
1.1. Related work
A general theory of side-channel attacks was put forward
by Micali and Reyzin [25], who propose a number of “ax-
ioms” on which such a theory should be based. In partic-
ular they formulate and motivate the assumption that “only
computation leaks information”, used subsequently in e.g.
[16, 28] and also in this work. As mentioned in the introduction, most published work on securing cryptosystems against side-channel attacks consists of ad-hoc solutions trying to prevent some particular attack, or heuristics coming without security proofs; we mention some notable exceptions below.
Exposure-resilient functions [5, 9, 20] are functions whose output remains secure even if an adversary can learn the value of some input bits; this model has been extensively investigated and very strong results have been obtained.
Ishai et al. [19, 18] consider the more general case of making circuits provably secure [19] and even tamper-resistant [18] against adversaries who can read or tamper with the value of a bounded number of arbitrary wires in the circuit (and not just the input bits). It is interesting to compare the result from this paper with the approach of Ishai et al. On one hand, their results are generic, in the sense that they provide a method to transform any cryptosystem given as a circuit C into another circuit C_t that is secure against an adversary that can read off up to t wires, whereas we only construct a particular primitive (a stream-cipher). On the other hand, we prove security against any side-channel attack, whereas Ishai et al. consider the particular case where the adversary can read off the values of a few individual wires. Moreover, Ishai et al. require special gates that can generate random bits; we do not assume any special hardware.
^4 Recall that a uniform adversary can be modelled as a Turing machine which gets a security parameter as input, whereas (more powerful) non-uniform adversaries will, for each security parameter, additionally get a different polynomial-length advice string. Equivalently, we can model non-uniform adversaries as a sequence of circuits (indexed by the security parameter).
Canetti et al. [6] consider the possibility of secure computation in a setting where perfect deletion of most of the memory is not possible. Although the goal is different, their model is conceptually very similar to ours: non-perfect deletion of X is modelled by giving an adversary f(X) for a sufficiently compressing function f of its choice. In their setting, the assumption that parts of the state can be perfectly erased is well motivated; unfortunately, in our context this would translate to the very unrealistic requirement that some computations can be done perfectly leakage-free.
The idea to define the set of leakage functions by restricting the length of the function's output is taken from the bounded-retrieval model [8, 11, 10, 7, 13], which in turn was inspired by the bounded-storage model [24].^5 Finally, let us mention that some constructions of ciphers secure against general leakage were also proposed in the literature; however, their security proofs rely on very strong assumptions like the ideal-cipher model [28], or one-way permutations which do not leak any information at all [25].
^5 The bounded-storage model is limited in its usability by the fact that the secret key must be larger than the memory of a potential adversary, which means in the range of terabytes. In the bounded-retrieval model, the key must only be larger than the amount of data the adversary can retrieve without being detected (say, by having a computer virus send the data from an infected machine), which means in the range of mega- or gigabytes. In our setting, on the other hand, the key length depends on the amount of side-channel information that leaks (in one round) from the cryptosystem considered, which (given a reasonable construction) we can assume to be as small as a few (or a few hundred) bits. In particular, unlike in the bounded-storage and bounded-retrieval models, our keys need not be made artificially huge.
1.2. Probability-theoretic preliminaries
We denote with U_n the random variable with distribution uniform over {0,1}^n. With X ∼ Y we denote that X and Y have the same distribution. Let random variables X_0, X_1 be distributed over some set X and let Y be a random variable distributed over Y. Define the statistical distance between X_0 and X_1 as δ(X_0; X_1) = 1/2 Σ_{x∈X} |P_{X_0}(x) − P_{X_1}(x)|. Moreover, let δ(X_0; X_1 | Y) := δ(X_0, Y; X_1, Y) be the statistical distance between X_0 and X_1 conditioned on Y. If X is distributed over {0,1}^n then let d(X) := δ(X; U_n) denote the statistical distance of X from the uniform distribution (over {0,1}^n), and let d(X|Y) := δ(X; U_n | Y) denote the statistical distance of X from the uniform distribution, given Y. If d(X) ≤ ε then we will say that X is ε-close to uniform. We will say that a variable X has min-entropy k, denoted H_∞(X) = k, if max_x Pr[X = x] = 2^{−k}.
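For small explicit distributions these quantities are easy to compute directly; the following illustrative Python snippet evaluates d(X) and H_∞(X) for a biased distribution over {0,1}^2.

import math

def stat_dist(p: dict, q: dict) -> float:
    """delta(P;Q) = 1/2 * sum_x |P(x) - Q(x)|."""
    xs = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in xs)

def min_entropy(p: dict) -> float:
    """H_inf(P) = -log2(max_x P(x))."""
    return -math.log2(max(p.values()))

# A biased distribution X over {0,1}^2 versus the uniform distribution U_2.
X = {"00": 0.40, "01": 0.30, "10": 0.20, "11": 0.10}
U2 = {x: 0.25 for x in X}

print("d(X)     =", stat_dist(X, U2))   # 0.5*(0.15+0.05+0.05+0.15) = 0.20
print("H_inf(X) =", min_entropy(X))     # -log2(0.40) ~ 1.32 bits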
Definition 1 (Extractor) A function ext : {0,1}^{k_ext} × {0,1}^r → {0,1}^{m_ext} is an (ε_ext, n_ext)-extractor if for any X with H_∞(X) ≥ n_ext and K ∼ U_{k_ext} we have that d((ext(K, X), K)) ≤ ε_ext.
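As one concrete (and hedged) example of an object satisfying this definition, the sketch below implements a Toeplitz-matrix hash over GF(2); by the leftover hash lemma such a universal hash family is an (ε_ext, n_ext)-extractor whenever n_ext ≥ m_ext + 2 log2(1/ε_ext). This is only an illustration and not necessarily the instantiation used later in the paper.

import secrets

def toeplitz_extract(seed_bits: list, x_bits: list, m: int) -> list:
    """Extract m bits from x via the Toeplitz matrix defined by the seed.

    seed_bits has length m + len(x_bits) - 1, and row i, column j of the
    matrix is seed_bits[i - j + len(x_bits) - 1]; output bit i is the GF(2)
    inner product of row i with x.
    """
    r = len(x_bits)
    assert len(seed_bits) == m + r - 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(r):
            acc ^= seed_bits[i - j + r - 1] & x_bits[j]
        out.append(acc)
    return out

# Example: extract m = 8 bits from a 32-bit source using a 39-bit seed K.
r, m = 32, 8
x = [secrets.randbits(1) for _ in range(r)]            # stand-in for a source X with high min-entropy
seed = [secrets.randbits(1) for _ in range(m + r - 1)] # the uniform seed K
print(toeplitz_extract(seed, x, m))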
2. A Leakage-Resilient Stream-Cipher
We will now formally define our security notions which
we already informally discussed and motivated in Sect. 1.
Initialization. The secret key of our stream cipher S consists of the three variables A, B ∈ {0,1}^r and K_0 ∈ {0,1}^k. The values A, B, K_0 should be sampled uniformly at random, but only A, B must be secret, K_0 need not be; one can think of K_0 as the first k bits of output of S. In an implementation, the memory of S is assumed to be split into three parts, M_0, M_1, O, and for j > 0 we denote with M_0^{j−1}, M_1^{j−1}, O^{j−1} the contents of M_0, M_1, O at the beginning of the jth round; in particular the initial state is M_0^0 = A, M_1^0 = B and O^0 = K_0.
Computation. As illustrated in Fig. 1, in the ℓth round S only accesses (which means reads and possibly rewrites) M_{ℓ mod 2} and the output tape O. Let τ_ℓ denote the value (on either M_0 or M_1) that is accessed in the ℓth round, and τ̄_ℓ the value which is not accessed, i.e.

    τ_ℓ := M^{ℓ−1}_{ℓ mod 2}        τ̄_ℓ := M^{ℓ−1}_{ℓ+1 mod 2}        (1)

We will refer to the output of the ℓth round (i.e. the value O^ℓ on the output tape O at the end of this round) as K_ℓ.
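The access pattern in (1) is simple enough to state in code; the snippet below (purely illustrative, with the two memory halves as plain variables) returns the accessed half τ_ℓ and the untouched half τ̄_ℓ for a given round ℓ.

def accessed_parts(M0, M1, ell: int):
    """Round ell of S touches M_{ell mod 2} and the output tape O, cf. (1)."""
    tau     = M0 if ell % 2 == 0 else M1   # accessed half  tau_ell
    tau_bar = M1 if ell % 2 == 0 else M0   # untouched half tau_bar_ell
    return tau, tau_bar

for ell in range(1, 5):                    # rounds alternate between the halves
    tau, _ = accessed_parts("M_0", "M_1", ell)
    print(f"round {ell}: the leakage function sees", tau, "and O")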
Adversary. As illustrated in Fig. 1, we consider adversaries Q which in the ℓth round can adaptively choose a function f_ℓ with range {0,1}^λ, and at the end of the round get the output K_ℓ and

    Λ_ℓ := f_ℓ(τ_ℓ),

i.e. the output of f_ℓ on input the data accessed by S in this round. We denote with A_λ adversaries as just described, restricted to choosing leakage functions with range {0,1}^λ. Let view_ℓ denote the view of the adversary after K_ℓ has been computed, i.e.

    view_ℓ = [K_0, . . . , K_ℓ, Λ_1, . . . , Λ_ℓ].

Indistinguishability. The security notion we consider requires that K_ℓ is indistinguishable from random, even when given view_{ℓ−1}.
[Figure 1. General structure of the random experiment S(A, K_0, B) ⇄^3 Q (the evaluation of S is black, the attack-related part is gray): starting from M_0^0 = A, O^0 = K_0, M_1^0 = B, in each round ℓ the cipher S updates M_{ℓ mod 2} and writes O^ℓ = K_ℓ, while the adversary Q chooses f_ℓ and receives f_ℓ(τ_ℓ).]
We denote with S(A, B, K_0) ⇄^ℓ Q the random experiment where an adversary Q ∈ A_λ attacks S (initialized with a key A, B, K_0) for ℓ rounds (cf. Fig. 1), and with view(S(A, B, K_0) ⇄^ℓ Q) we denote the view view_ℓ of Q at the end of the attack. For any circuit D^ind : {0,1}^* → {0,1} (with one bit output), we denote with AdvInd(D^ind, Q, S, ℓ) the advantage of D^ind in distinguishing K_ℓ from random given view(S ⇄^{ℓ−1} Q); formally, AdvInd(D^ind, Q, S, ℓ) = |p_real − p_rand| where

    p_rand := Pr_{A,B,K_0}[D^ind(view(S(A, B, K_0) ⇄^{ℓ−1} Q), U_k) = 1]
    p_real := Pr_{A,B,K_0}[D^ind(view(S(A, B, K_0) ⇄^{ℓ−1} Q), K_ℓ) = 1]
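The definition translates directly into a sampling experiment. The sketch below estimates AdvInd empirically for a placeholder round function, a toy adversary and a toy distinguisher; none of these stand-ins come from the paper (the real construction of S is given later), so this only illustrates the shape of the experiment S(A, B, K_0) ⇄^{ℓ−1} Q and of the two probabilities above.

import secrets, hashlib

LAMBDA, K_BITS, R_BITS = 8, 128, 256       # example parameters only

def round_fn(m_half: bytes, k_prev: bytes):
    """Placeholder for one round of S (NOT the paper's construction):
    returns the updated memory half and the round output K_j."""
    h = hashlib.sha256(m_half + k_prev).digest()
    return h, h[:K_BITS // 8]

def adversary(view):
    """Toy adversary in A_lambda: always leaks the low lambda bits of tau_j."""
    return lambda tau: int.from_bytes(tau, "big") & ((1 << LAMBDA) - 1)

def D_ind(view, challenge: bytes) -> int:
    """Toy distinguisher: outputs 1 iff the challenge starts with a zero byte."""
    return 1 if challenge[0] == 0 else 0

def run_S(A: bytes, B: bytes, K0: bytes, rounds: int, adv):
    """The random experiment S(A, B, K0) interacting with Q for `rounds` rounds;
    returns the view [K_0, (K_1, Lambda_1), ..., (K_rounds, Lambda_rounds)]."""
    M, K, view = [A, B], K0, [K0]
    for j in range(1, rounds + 1):
        f_j = adv(view)                        # adaptively chosen leakage function
        tau = M[j % 2] + K                     # data accessed in round j (half + output tape)
        M[j % 2], K = round_fn(M[j % 2], K)
        view.append((K, f_j(tau)))             # K_j and Lambda_j = f_j(tau_j)
    return view

def estimate_advind(ell: int, samples: int = 2000) -> float:
    real = rand = 0
    for _ in range(samples):
        A, B = secrets.token_bytes(R_BITS // 8), secrets.token_bytes(R_BITS // 8)
        K0 = secrets.token_bytes(K_BITS // 8)
        view = run_S(A, B, K0, ell, adversary)
        view_prev, K_ell = view[:-1], view[-1][0]   # view_{ell-1} and the challenge K_ell
        real += D_ind(view_prev, K_ell)                            # p_real sample
        rand += D_ind(view_prev, secrets.token_bytes(K_BITS // 8)) # p_rand sample
    return abs(real - rand) / samples

print("empirical AdvInd estimate:", estimate_advind(ell=4))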
In the full version of this paper, we will also consider the case where the distinguisher also gets Λ_ℓ, i.e. we assume that information leaks also in round ℓ. Although then we cannot hope for K_ℓ to be indistinguishable from random (as Λ_ℓ could for example be the first λ bits of K_ℓ), we can still require that K_ℓ is unpredictable.
Forward Security of S. As motivated in the introduction, we'll also consider "forward-secure" notions of the above definition. Informally, we'd like to extend the definition AdvInd just given, but additionally give the attacker D^ind the complete state M_0^ℓ, O^ℓ, M_1^ℓ of S after K_ℓ has been computed. Of course then K_ℓ = O^ℓ cannot be secure in any way, as it is given to D^ind entirely. We could simply not give O^ℓ to D^ind, but then we cannot claim that we leaked the state of S completely, as in our construction O^ℓ is needed to compute the future outputs of S. There are at least two ways around this problem. We could relax our requirement on forward security, and not leak the state after round ℓ, but only after round ℓ + 1 (in terms of the implementation, this would mean that the output K_ℓ is indistinguishable, if in rounds ℓ and ℓ + 1 nothing leaked, even given the complete state of S after round ℓ + 1).
Another possibility, which we'll use, is to split the value K_ℓ into two parts K_ℓ = K_ℓ^nxt ‖ K_ℓ^out, such that only the K_ℓ^nxt part is actually used by S to compute the future state. We then require that K_ℓ^out (and not the entire K_ℓ) is indistinguishable from random if in round ℓ nothing leaked, even when given the state of S after round ℓ, where K_ℓ^out is not considered to be part of the state.
Let state_ℓ := [M_0^ℓ, K_ℓ^nxt, M_1^ℓ] denote the state of S after round ℓ (not containing K_ℓ^out, as just explained). The forward-secure indistinguishability notion is given by

    AdvIndFwd(D^ind, Q, S, ℓ) = |p_real^fwd − p_rand^fwd|

where p_rand^fwd and p_real^fwd are the probabilities

    Pr_{A,B,K_0}[D^ind(view(S(A, B, K_0) ⇄^{ℓ−1} Q), state_ℓ, U_{|K_ℓ^out|}) = 1]
    Pr_{A,B,K_0}[D^ind(view(S(A, B, K_0) ⇄^{ℓ−1} Q), state_ℓ, K_ℓ^out) = 1]

respectively. The only difference to AdvInd is that now D^ind additionally gets state_ℓ, and we only require K_ℓ^out (and not the whole K_ℓ) to be indistinguishable. Thus one gets forward security at the price of discarding the K_ℓ^nxt part of S's output K_ℓ. In our construction, K_ℓ^nxt will just be a random seed for an extractor; using existing constructions we can make this part logarithmic in the total length of K_ℓ, and thus the efficiency loss one has to pay to get forward security is marginal.
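The splitting of K_ℓ is just a fixed partition of the output block; the few lines below (with an illustrative seed length) separate K_ℓ into the extractor-seed part K_ℓ^nxt, which is kept in state_ℓ, and the released part K_ℓ^out.

SEED_BITS = 16   # |K^nxt|: a short extractor seed (illustrative size)

def split_output(K_ell: bytes):
    """Split K_ell = K_nxt || K_out; only K_nxt is kept as part of state_ell."""
    cut = SEED_BITS // 8
    return K_ell[:cut], K_ell[cut:]

K_nxt, K_out = split_output(bytes(range(16)))
state_ell = ["M_0^ell", K_nxt, "M_1^ell"]   # state_ell = [M_0^ell, K^nxt_ell, M_1^ell]
print(len(K_nxt), "bytes stay in the state,", len(K_out), "bytes are released as K^out")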
2.1. The Ingredients
The main ingredients of our construction are the concept of alternating extraction, introduced in the intrusion-resilient secret-sharing scheme of [13] (which again was based on ideas from the bounded-storage model [12, 24, 31]), combined with the concept of HILL-pseudoentropy (cf. Def. 3, Sect. 5), which we use to get a computational version of alternating extraction.
Alternating Extraction. Let ext : {0,1}^{k_ext} × {0,1}^r → {0,1}^k be an (ε_ext, n_ext)-extractor (cf. Def. 1). Consider some uniformly random A, B ∈ {0,1}^r and some random K_0 ∈ {0,1}^k. As illustrated in Fig. 3 in Sect. 4, let [...]
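Since the detailed construction only appears in Sect. 4, the sketch below shows just the basic information-theoretic alternating-extraction pattern of [13] that it builds on: the halves A and B take turns acting as the extractor's source while the previous output serves as the seed. All names and sizes here are illustrative, the hash-based ext is merely a stand-in for an extractor as in Def. 1, and the computational construction of S additionally involves a PRG, which is omitted here.

import hashlib

def ext(K: bytes, X: bytes) -> bytes:
    """Hash-based stand-in for the (eps_ext, n_ext)-extractor of Def. 1
    (illustration only, not an actual extractor)."""
    return hashlib.sha256(K + X).digest()[:16]

def alternating_extraction(A: bytes, B: bytes, K0: bytes, rounds: int):
    """Basic alternating-extraction pattern of [13]: A and B take turns as the
    extractor's source, the previous output serves as the seed."""
    K, keys = K0, []
    for ell in range(1, rounds + 1):
        source = B if ell % 2 == 1 else A   # round ell accesses M_{ell mod 2}
        K = ext(K, source)                  # K_ell = ext(K_{ell-1}, source)
        keys.append(K)
    return keys

A, B, K0 = b"A" * 32, b"B" * 32, b"K" * 16
for ell, K in enumerate(alternating_extraction(A, B, K0, 4), start=1):
    print(f"K_{ell} =", K.hex())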
