
Software Grand Exposure: SGX Cache Attacks Are Practical
Ferdinand Brasser¹, Urs Müller², Alexandra Dmitrienko², Kari Kostiainen², Srdjan Capkun², and Ahmad-Reza Sadeghi¹

¹System Security Lab, Technische Universität Darmstadt, Germany
{ferdinand.brasser,ahmad.sadeghi}@trust.tu-darmstadt.de
²Institute of Information Security, ETH Zurich, Switzerland
muurs@student.ethz.ch, {alexandra.dmitrienko,kari.kostiainen,srdjan.capkun}@inf.ethz.ch
Abstract
Intel SGX isolates the memory of security-critical applications from the untrusted OS. However, it has been speculated that SGX may be vulnerable to side-channel attacks through shared caches. We developed new cache attack techniques customized for SGX. Our attack differs from other SGX cache attacks in that it is easy to deploy and avoids known detection approaches. We demonstrate the effectiveness of our attack on two case studies: RSA decryption and genomic processing. While cache timing attacks against RSA and other cryptographic operations can be prevented by using appropriately hardened crypto libraries, the same cannot easily be done for other computations, such as genomic processing. Our second case study therefore shows that attacks on non-cryptographic but privacy-sensitive operations are a serious threat. We analyze countermeasures and show that none of the known defenses eliminates the attack.
1 Introduction
Intel Software Guard Extensions (SGX) [14, 23] enables execution of security-critical application code, called enclaves, in isolation from the untrusted system software. SGX provides protections in the processor to prevent a malicious OS from directly reading or modifying enclave memory at runtime. The architecture is especially useful in cloud computing applications, where data and computation can be outsourced to an external computing infrastructure without having to fully trust the cloud provider and the entire software stack.

However, researchers have recently demonstrated that SGX isolation can be attacked by exploiting information leakage through various (software) side channels. One type of information leakage is based on page faults: in SGX, memory management (including paging) is left to the untrusted OS [14]. Consequently, the OS can force page faults at any point of enclave execution and, from the requested pages, learn the secret-dependent enclave control flow or data access patterns [53]. Another type of information leakage is based on observing caches shared between the enclave and the untrusted software, as pointed out in [14] and by Intel [27, p. 35]. Cache attacks have been studied extensively independent of SGX [43, 39, 30, 36, 54, 21, 20]. Recently, a number of cache-based attacks have targeted SGX platforms [44, 38, 19].

To tackle the information leakage problem in SGX, different countermeasures have been proposed. A promising system-level approach is to detect when the OS is interfering with enclave execution, as done in T-SGX [46] and Déjà Vu [10]. These solutions detect page faults and allow the enclave to defend itself against a possible attack (i.e., to stop its execution). Another approach against information leakage is hardware redesign, as taken by Sanctum [15]. Although new hardware designs like Sanctum are out of our scope, we elaborate on them in Section 6.
Our goals and contributions. First, we explore novel cache attack techniques customized for SGX that are easier to deploy than other SGX cache side-channel attacks and significantly harder to detect or prevent, particularly by the recently proposed defenses [46, 10] mentioned above. Second, we demonstrate that information leakage is a serious concern, since it can defeat one of the core benefits of SGX, namely secure computation over sensitive data on an untrusted platform. We show this in two case studies: first a cryptographic primitive and then a non-cryptographic privacy-preserving algorithm.

Novel attack techniques. Our attack enables the adversary to run both the victim enclave and its own process uninterrupted in parallel, so that the victim enclave is unaware of the attack and cannot take measures to defend itself. Uninterrupted attack execution imposes technical challenges, such as dealing with significant noise in cache monitoring. To realize our attack effectively in this setting, we developed a set of novel attack techniques. For instance, we leverage the capabilities of the privileged adversary to assign the victim process to a dedicated core, reduce the number of benign interrupts, and perform precise cache monitoring using CPU performance counters. Note that the SGX adversary model includes the capabilities of the OS.
Our attack differs from other recently proposed cache-based attacks on SGX [44, 38, 19] in various ways: CacheZoom [38] interrupts the victim repeatedly and can therefore be easily detected by the above-mentioned countermeasures T-SGX [46] and Déjà Vu [10]. Götzfried et al. [19] require synchronization between the victim enclave and the attacker. Schwarz et al. [44] implement their attack on the L3 cache (i.e., a cross-core attack). Our attack works on the L1 cache (i.e., a same-core attack) and does not require interrupts or synchrony between the victim and the attacker, which makes it significantly harder to detect and easier to deploy in practice. We provide a more detailed comparison in Section 7.
Case studies. We show the effectiveness of our attack techniques in two different case studies. The first is the canonical example of RSA decryption, where we extract 70% of the private key bits with approximately 300 repeated decryptions (70% is sufficient to recover the entire private key efficiently).

However, cache attacks can in principle be mitigated at the application level. In particular, many recent cryptographic libraries provide implementations that have been hardened against cache monitoring. For example, the scatter-and-gather technique [8] is a widely deployed protection, where every secret-dependent lookup table access is manually changed to touch memory addresses corresponding to all monitored cache sets. Hence, the accessed table element is effectively hidden from the adversary. The SGX SDK also includes cryptographic algorithm variants that use the scatter-gather protection [28]. Thus, cache attacks on cryptographic enclaves may not be a major threat in practice.
On the other hand, a more significant concern, and a problem that has not been studied extensively in the past, is information leakage from various and probably more complex computations that are not developed by security experts. While manual defenses like scatter-gather can effectively prevent cache attacks, they require significant expertise and effort from the developer. It seems unrealistic to assume that every enclave developer is aware of possible information leakage and able to manually harden his implementation against cache monitoring. Hence, as the second case study we demonstrate information leakage from a non-cryptographic but privacy-sensitive enclave running a genome indexing algorithm called PRIMEX [34], which uses a hash table to index a genome sequence. By monitoring the genome-dependent hash table accesses we can identify whether the processed human genome (DNA) includes particular sequences that are often used in forensic analysis and genomic fingerprinting [4]. We show that the information leaked through caches during indexing is sufficient to identify, with high probability, the person whose DNA is processed.

We argue that large classes of SGX enclaves, and therefore many practical cloud computing scenarios, are vulnerable to similar information leakage. Our analysis of existing countermeasures shows that none of the known defenses effectively prevents our attack.
Contributions. To summarize, this paper makes the following main contributions:

- Novel SGX cache attack techniques. We demonstrate that cache attacks are practical on SGX. In particular, we develop novel cache attack techniques for SGX that are easier to deploy and significantly harder to detect or prevent.

- Leakage from non-cryptographic applications. Through a case study on a genomic processing enclave we show that non-cryptographic but privacy-sensitive applications deployed as SGX enclaves are vulnerable to cache attacks.

- Countermeasure analysis. We show that none of the known defenses mitigates our attack in practice.
The rest of the paper is organized as follows. In Section 2 we provide background information. Section 3 introduces the system and adversary model, and Section 4 explains the attack design. Section 5 summarizes our RSA attack and details the genomic case study. We analyze countermeasures in Section 6, review related work in Section 7, and draw conclusions in Section 8.
2 Background
This section provides the necessary background on Intel SGX, the cache architecture, and performance monitoring counters.
2.1 Intel SGX
SGX introduces a set of new CPU instructions for creating and managing isolated software components [37, 25], called enclaves, that are isolated from all software running on the system, including privileged software like the operating system (OS) and the hypervisor. SGX assumes the CPU itself to be the only trustworthy hardware component of the system, i.e., enclave data is handled in plaintext only inside the CPU. Data is stored unencrypted in the CPU's caches and registers; however, whenever data is moved out of the CPU, e.g., into DRAM, it is encrypted and integrity protected.

The OS, although untrusted, is responsible for creating and managing enclaves. It allocates memory for the enclaves, manages virtual-to-physical address translation for the enclave's memory, and copies the initial data and code into the enclave. However, all actions of the OS are recorded securely by SGX and can be verified by an external party through (remote) attestation [3]. SGX's sealing capability enables persistent secure storage of data.

During enclave execution the OS can interrupt and resume the enclave like a normal process. To prevent information leakage, SGX handles the context saving of enclaves in hardware and erases the register content before passing control to the OS, in a procedure called asynchronous enclave exit (AEX). When an enclave is resumed, the hardware is again responsible for restoring the enclave's context, preventing manipulations.
2.2 Cache Architecture
In the following we provide details of the Intel x86 cache architecture [26, 24]; we use the terminology from Intel documents [1]. We focus on the Intel Skylake processor generation, i.e., the type of CPU we used for our implementation and evaluation. (At the time of writing, Intel SGX is available only on Skylake and Kaby Lake CPUs; to the best of our knowledge there are no differences in the cache architecture between Skylake and Kaby Lake.)

Memory caching "hides" the latency of accesses to the system's dynamic random access memory (DRAM) by keeping a copy of currently processed data in the cache. When a memory operation is performed, the cache controller checks whether the requested data is already cached; if so, the request is served from the cache (a cache hit), otherwise a cache miss occurs. Due to their higher cost (production, energy consumption), caches are orders of magnitude smaller than DRAM, and only a subset of the memory content can be present in the cache at any point in time. The cache controller aims to maximize the cache hit rate by predicting which data the CPU will use next. This prediction is based on the assumption of temporal and spatial locality of memory accesses.

For each memory access the cache controller has to check whether the data is present in the cache. Sequentially iterating through the entire cache would be very expensive. Therefore, the cache is divided into cache lines, and for each memory address the corresponding cache line can be determined quickly: the lower bits of the memory address select the cache line. Hence, multiple memory addresses map to the same cache line. Having only one cache entry per cache line quickly leads to conflicts, forcing the controller to evict data from the cache to make room for newly requested data. To minimize such conflicts, caches are usually set-associative: for each cache line index, multiple entries (ways) exist in parallel, forming a cache set, so that an n-way cache can hold data from n conflicting memory locations simultaneously.

Current Intel CPUs have a three-level cache hierarchy. The last level cache (LLC), also known as the level 3 (L3) cache, is the largest and slowest cache; it is shared between all CPU cores. Each CPU core has a dedicated L1 and L2 cache, but they are shared between the core's simultaneous multi-threading (SMT) execution units (also known as hyper-threading).

A unique feature of the L1 cache is its separation into a data cache and an instruction cache. Code fetches only affect the instruction cache and leave the data cache unmodified, and vice versa. In the L2 and L3 caches, code memory and data memory compete for the available cache space.
2.3 Performance Monitoring Counters
Performance Monitoring Counters (PMCs) are a CPU feature for recording hardware events. Their primary goal is to give software developers insight into their program's effects on the hardware, in order to let them optimize their programs.

The CPU has a set of PMCs, which can be configured to monitor different events, for instance executed cycles, cache hits or misses in the different caches, mispredicted branches, etc. PMCs are configured by selecting the event to monitor as well as the mode of operation. This is done by writing to model-specific registers (MSRs), which can only be done by privileged software. PMCs are read via the RDPMC instruction (read performance monitoring counters), which can be configured to be available in unprivileged mode.
Hardware events recorded by PMCs could be misused as side channels, e.g., to monitor cache hits or misses of a victim process or enclave. Therefore, SGX enclaves can disable PMCs on entry by activating a feature called "Anti Side-channel Interference" (ASCI) [26]. This suppresses all thread-specific performance monitoring, except for fixed cycle counters. Hence, hardware events triggered by an enclave cannot be monitored through the PMC feature. For instance, cache misses of memory loaded by an enclave will not be recorded in the PMCs.
3 System and Adversary Model
We assume a system equipped with Intel SGX, i.e., a hardware mechanism to isolate data and execution of a software component from the rest of the system's software, which is considered untrusted. The resources used to execute the isolated component (or enclave), however, are shared with the untrusted software on the system. The system's resources are managed by untrusted, privileged software (the operating system, OS). Figure 1 shows an abstract view of the adversary model: an enclave executing on a system with a compromised OS, sharing a CPU core with an attacker process.

The adversary's objective is to learn secret information from the enclave, e.g., a secret key generated inside the enclave through a hardware random number generator, or sensitive data supplied to the enclave after initialization through a secure channel.
Adversary capabilities. The adversary is in control of all system software, except for the software executed inside the enclave. (Due to integrity verification, the adversary cannot modify the software executed inside the enclave, since SGX remote attestation would reveal tampering.) Although the attacker cannot control the program inside the enclave, he does know the initial state of the enclave, i.e., the program code of the enclave and its initial data. In particular, randomization through mechanisms like address space layout randomization (ASLR) is visible to the adversary. The attacker knows the mapping of memory addresses to cache lines and can reinitialize the enclave and replay inputs; hence, he can run the enclave arbitrarily often. Further, since the adversary has control over the OS, he controls the allocation of resources to the enclave, including the time of execution and the processing unit (CPU core) the enclave is running on. Similarly, the adversary can configure the system's hardware arbitrarily, e.g., define the system's behavior on interrupts, or set the frequency of timers. However, the adversary cannot directly access the memory of an enclave. Moreover, he cannot retrieve the register state of an enclave, neither during the enclave's execution nor on interrupts.

Figure 1: High-level view of our attack; victim and attacker's Prime+Probe code run in parallel on a dedicated core. The malicious OS ensures that no other code shares that core, minimizing noise in the L1/L2 cache.

Figure 2: Prime+Probe side-channel attack technique; first the attacker primes the cache, next the victim executes and occupies some of the cache, afterwards the attacker probes to identify which cache lines have been used by the victim. This information allows the attacker to draw conclusions about secret data processed by the victim process.

Attack goals. The adversary aims to learn about the victim's cache usage by observing effects on the cache availability to his own program. In particular, he leverages the knowledge of the mapping of cache lines to memory locations in order to infer information about access patterns of the enclave to secret-dependent memory locations, which in turn allows him to draw conclusions about sensitive data processed by the victim. We show two concrete attacks, recovering an RSA key and identifying individuals in a genome processing application, in Section 5.

4 Our Attack Design

Our attack technique is based on the Prime+Probe cache monitoring technique [39]. We first explain the "classical" variant of Prime+Probe, then we discuss our improvements beyond the basic approach.
4.1 Prime+Probe
The main steps of the Prime+Probe attack are depicted in Figure 2. First, at time t0, the attacker primes the cache, i.e., the attacker accesses memory such that the entire cache is filled with data of the attacker process. (To prime all cache sets, the attacker needs to write to #cachesets cache pages; see Section 2.2 for details.) Afterwards, at time t1, the victim executes code whose memory accesses depend on the victim's sensitive data, such as a cryptographic key: the victim accesses different memory locations depending on the currently processed key bit. In the example in Figure 2 the key bit is zero, therefore address X is read. Address X is mapped to cache line 2; hence, the data stored at X is loaded into the cache and the data that was present in cache line 2 before gets evicted. The data at address Y is not accessed, and therefore the data in cache line 0 remains unchanged.

At time t2 the attacker probes which of his cache lines got evicted, i.e., which cache lines were used by the victim. A common technique to check for cache line eviction is to measure access times: the attacker reads from memory mapped to each cache line and measures the access time. If the attacker's data is still in the cache, the read returns quickly; if the read takes longer, the data was evicted from the cache. In Figure 2, the attacker will observe an increased access time for cache line 2. Since the attacker knows the code and access pattern of the victim, he knows that address X of the victim maps to cache line 2, and hence that the sensitive key bit must be zero. This cycle is repeated by the attacker for each sensitive key bit processed by the victim, until the attacker learns the entire key.
4.2 Prime+Probe for SGX
Cache monitoring techniques like Prime+Probe experience significant noise. Therefore, most previously reported attacks (that, e.g., extract a full cryptographic key) require thousands or even millions of repeated executions to average out the noise (e.g., [55, 56]). Our goal is to build an efficient attack, i.e., one that works with far fewer executions. The key to this is reducing the noise (or pollution) in the cache monitoring channel. Two main aspects guide our selection of possible noise reduction techniques and also distinguish us from most previous attacks: (1) our goal is to build an attack that cannot easily be detected using the recently proposed detection approaches [46, 10]; this requirement limits the possible noise reduction techniques we can use (e.g., no interrupts). (2) In our setting the adversary is the privileged OS; this condition enables us to leverage new methods that were previously inaccessible to the attacker (e.g., performance counters).
Challenges. Given these conditions, we list the main challenges in realizing our attack:

1. Minimizing cache pollution caused by other tasks.
2. Minimizing cache pollution by the victim itself.
3. Ensuring uninterrupted victim execution, to counter side-channel protection techniques and prevent cache pollution by the OS.
4. Reliably identifying cache evictions.
5. Performing cache monitoring at a high frequency.
Next, we describe a set of new attack techniques that
we developed to address each of the challenges above.
4.3 Noise Reduction Techniques
(1.) Isolated attack core. We isolate the attack core from other processes in order to minimize noise in the side channel. Figure 1 shows our approach: the victim enclave runs on a dedicated CPU core, which executes only the victim and our attacker Prime+Probe code. By default, Linux schedules the processes of a system on any available CPU core, thereby impacting all caches, and the attacker cannot distinguish cache evictions caused by the victim from those caused by any other process. By modifying the Linux scheduler, the adversary can make sure that one core (we call it the attack core) is used exclusively by the victim and the attacker ("Core 0" in Figure 1). This way, no other process can pollute this core's L1/L2 cache.
(2.) Self-pollution. The attacker needs to observe specific cache lines that correspond to memory locations relevant for the attack. From the attacker's point of view it is undesirable if those cache lines are used by the victim for any reason other than accessing these specific memory locations, e.g., by accessing unrelated data or code that maps to the same cache line.

In our attack we use the L1 cache. It has the advantage of being divided into a data cache (L1D) and an instruction cache (L1I). Therefore, code accesses, regardless of the memory location of the code, never map to the cache lines of interest to the attacker. Victim accesses to unrelated data mapping to relevant cache lines do lead to noise in the side channel; this noise source cannot be influenced by the attacker, given that the memory layout of the victim is fixed.
(3.) Uninterrupted execution. Interrupting the victim enclave causes two relevant problems. (1) When an enclave is interrupted, an asynchronous enclave exit (AEX) is performed and the operating system's interrupt service routine (ISR) is invoked (see Section 2.1). Both the AEX and the ISR use the cache and hence induce noise. (2) By means of transactional memory accesses, an enclave can detect that it has been interrupted; this feature has been used in side-channel defense mechanisms [46, 10], whose details we discuss in Section 6. Hence, making the enclave execute uninterrupted ensures that the enclave remains unaware of the side-channel attack.

In order to monitor the changes in the victim's cache throughout the execution, we need to access the cache of the attack core in parallel. For this we execute the attacker code on the same core: the victim runs on the first SMT (simultaneous multi-threading) execution unit while the attacker runs on the second SMT execution unit (see Figure 1). As the victim and attacker code compete for the L1 cache, the attacker can observe the victim's effect on the cache.

The attacker code is, like the victim code, executed uninterrupted by the OS. Interrupts usually occur at a high frequency, e.g., due to arriving network packets, user input, etc. By default, interrupts are handled by all available CPU cores, including the attack core, and thus the victim and attacker code are likely to be interrupted. To overcome this problem we configured the interrupt controller such that interrupts are not delivered to the attack core, i.e., it can run uninterrupted. The only exception is the timer interrupt, which is delivered per core: each CPU core has a dedicated timer, and the interrupt generated by that timer can only be handled by the associated core. However, we reduced the interrupt frequency of the timer to 100 Hz, which allows victim and attacker code to run for 10 ms uninterrupted. This time frame is sufficiently large to run the complete attack cycle undisturbed with high probability. (When an interrupt does occur by chance, the attack can be repeated; if the time frame is too short, the timer frequency can be reduced further.) As a result, the OS is not executed on the attack core while the attack is in progress (depicted by the dashed-line OS box in Figure 1). Also, the victim is not interrupted and thus remains unaware of the attack.
(4.) Monitoring cache evictions. In previous Prime+Probe attacks, the attacker determines the eviction of a cache line by measuring the time required to access memory that maps to that cache line. These timing-based measurements represent an additional source
