[Open access. Published in the International Workshop on Quality of Service (IWQoS), 2011, pp. 1-9; 10 citations to date.]
Adaptive Data-Driven Service Integrity Attestation
for Multi-Tenant Cloud Systems
Juan Du, Department of Computer Science, North Carolina State University (jdu@ncsu.edu)
Nidhi Shah, Cisco Systems (nidshah@cisco.com)
Xiaohui Gu, Department of Computer Science, North Carolina State University (gu@csc.ncsu.edu)
Abstract—Cloud systems provide a cost-effective service hosting infrastructure for application service providers (ASPs). However, cloud systems are often shared by multiple tenants from different security domains, which makes them vulnerable to various malicious attacks. Moreover, cloud systems often host long-running applications such as massive data processing, which provides more opportunities for attackers to exploit the system vulnerability and perform strategic attacks. In this paper, we present AdapTest, a novel adaptive data-driven runtime service integrity attestation framework for multi-tenant cloud systems. AdapTest can significantly reduce attestation overhead and shorten detection delay by adaptively selecting attested nodes based on dynamically derived trust scores. Our scheme treats attested services as black boxes and does not impose any special hardware or software requirements on the cloud system or ASPs. We have implemented AdapTest on top of the IBM System S stream processing system and tested it within a virtualized computing cluster. Our experimental results show that AdapTest can reduce attestation overhead by up to 60% and shorten the detection delay by up to 40% compared to previous approaches.
I. INTRODUCTION
Cloud systems [1] have recently emerged as popular resource leasing infrastructures. Application service providers (ASPs) can lease a set of resources from the cloud system to offer software as a service [3] without paying the expensive cost of owning and maintaining their own computing infrastructures. Cloud systems are particularly amenable to data processing services [2], [10], [15], [21], which are often extremely resource-intensive. In particular, our work focuses on dataflow processing systems [5], [15], [16] that have many real-world applications such as security surveillance and business intelligence. As shown in Figure 1, users can feed data from various data sources into the cloud system to perform various data processing functions and receive the final data processing results from the cloud.
However, cloud systems are often shared by multiple tenants that belong to different security domains, which makes them vulnerable to various malicious attacks. Moreover, data processing services are often long-running, which provides more opportunities for attackers to exploit system vulnerabilities and perform strategic colluding attacks. Although virtualization ensures certain isolation between users, malicious attackers can still leverage the shared hardware to launch attacks [6], [26] from the VMs they own or by compromising the VMs of benign users. One of the top security concerns¹ for cloud users is to verify the integrity of data processing results. For example, a malicious (or compromised) credit checking service may provide an incorrect credit score that leads to a wrong mortgage application decision. Note that the integrity attack is the most prevalent, affecting both public and private data processing.

978-1-4577-0103-0/11/$26.00 © 2011 IEEE.

Fig. 1. Integrity attack in cloud-based data processing. [Figure: users and sensor networks feed data into the cloud system, where service instances s1-s7 run on VMs across shared hosts; processed results flow back to the user and to data storage.]
Although previous work has proposed various remote integrity attestation techniques [7], [30], [31], existing solutions often require trusted hardware or a secure kernel to coexist with the remote computing platform, which is difficult to deploy in cloud systems. Although traditional Byzantine Fault Tolerance (BFT) techniques (e.g., [9], [19]) can detect malicious behavior using replicated services, those techniques often incur high overhead and impose an agreement protocol over all replicas. To this end, we explore a data-driven integrity attestation approach that relies only on result consistency to detect malicious attacks, and is thus completely transparent to the attested services, imposing no special software or hardware requirements.
In this paper, we present AdapTest, a novel adaptive runtime service integrity attestation framework for large-scale cloud systems. AdapTest builds on top of our previously developed system RunTest [13], which performs randomized probabilistic attestation and employs a clique-based algorithm to pinpoint malicious nodes. However, randomized attestation still imposes significant overhead for high-throughput multi-hop data processing services. In contrast, AdapTest dynamically evaluates the trustworthiness of different services based on previous attestation results and adaptively selects attested services during attestation. Thus, AdapTest can significantly reduce the attestation overhead and shorten the detection delay. Specifically, this paper makes the following major contributions:

¹ Note that confidentiality and user data privacy are important orthogonal issues addressed by previous work [20], [35].
- We provide a novel adaptive multi-hop integrity attestation framework based on a new weighted attestation graph model. We derive both per-node trust scores and pairwise trust scores to efficiently guide probabilistic attestation.
- We have implemented AdapTest on top of the IBM System S stream processing system [15] and tested it on the Virtual Computing Lab (VCL) [4], a production virtualized computing cluster that operates in a similar way to Amazon EC2 [1]. Our experimental results show that AdapTest can significantly reduce the attestation overhead for reaching a 100% detection rate by up to 60%, and shorten detection time by up to 40%, compared to previous randomized attestation approaches.
The rest of the paper is organized as follows. Section II provides brief background on cloud service integrity attacks and an overview of our approach together with our assumptions. Section III presents the design details. Section IV presents the prototype implementation and experimental results. Section V compares our work with related work. Finally, the paper concludes in Section VI.
II. OVERVIEW
In this section, we first describe our system model for cloud-
based data processing services and service integrity attestation.
Next, we present the service integrity attack model followed
by an overview of our approach and our key assumptions.
A. System Model
Cloud systems are shared computing infrastructures consisting of a set of physical hosts interconnected via networks. Each host can run multiple virtual machines (VMs) that may belong to different owners. An application service provider (ASP) can lease a collection of VMs to host its software services. Each service instance, denoted by s_i, provides a specific data analysis function, denoted by f_i, such as sorting, filtering, correlation, or data mining utilities. Multiple service instances can be functionally equivalent, providing the same service function for load balancing or fault tolerance purposes. Moreover, popular services naturally attract different service providers for profit. A multi-party service provisioning infrastructure usually employs portal nodes [17], [28] to aggregate different service components into composite services based on the user's requirements. The user accesses cloud services by submitting input data to the portal node, which forwards the user data to different service instances for processing and then delivers the final results back to the user. Portal nodes can authenticate users so that only authorized users can access the cloud services.
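The portal-side composition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the representation of a hop as a pool of functionally-equivalent instances, and the random load-balancing choice are all assumptions for the example.

```python
import random

def compose_and_process(hops, data):
    """Hypothetical sketch of portal-side composition: each hop is a pool
    {instance_name: function} of functionally-equivalent instances; the
    portal picks one instance per hop and chains the data through them."""
    path = []
    for pool in hops:
        name = random.choice(list(pool))   # simple random load balancing
        data = pool[name](data)            # forward the intermediate result
        path.append(name)
    return data, path

hops = [
    {"s1": sorted, "s2": sorted},                  # f1: sorting (two replicas)
    {"s4": lambda xs: [x for x in xs if x > 0]},   # f2: filtering
]
result, path = compose_and_process(hops, [3, -1, 2])
print(result)  # [2, 3]
```

Whichever replica of f1 is chosen, the composite result f2(f1(d)) is the same; this determinism is what the consistency checks in Section II-C rely on.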
B. Attack Model
In a shared cloud infrastructure, malicious attackers can pretend to be legitimate service providers to provide fake service instances, or compromise vulnerable benign service instances by exploiting their security holes. Our work focuses on detecting the service integrity attack, where a malicious (or compromised) service instance gives untruthful data processing results.
To escape detection, malicious attackers may perform selective cheating: they can misbehave on a selected subset of received data while pretending to be benign on all other received data. Thus, the attack detection scheme must be able to capture misbehavior that is both unpredictable and occasional without losing scalability. Although we could perform integrity attestation on all service instances all the time, the overhead of integrity attestation would be very high, especially for high-throughput data processing services in large-scale cloud systems. Thus, an effective attack detection scheme must perform sneaky attestation, which prevents attackers from gaining knowledge about our attestation scheme (i.e., when and which set of data will be attested). Otherwise, the attacker could compromise the integrity of selected data processing results without being detected at all.
Furthermore, cloud computing infrastructures often comprise a large number of hosts running many more VMs and application service instances. This creates new opportunities for colluding attacks, where multiple malicious attackers launch coordinated attacks or multiple benign service instances are simultaneously compromised and controlled by a single malicious attacker. Colluders can communicate with each other in arbitrary ways and produce the same incorrect results on the same input. Attackers can also change their attacking and colluding strategies arbitrarily. However, we assume that attackers have no knowledge of benign service instances that they do not interact with.
C. Approach Overview & Assumptions
Our service integrity attestation scheme has two major design goals: 1) support runtime continuous attestation with low overhead; and 2) pinpoint malicious (or compromised) service instances among a large number of interacting service instances without assuming any prior knowledge about which instances are trusted. AdapTest adopts a data-driven approach to achieve these goals without imposing any special hardware or software requirements on remote attested services, as illustrated by Figure 2. AdapTest leverages the portal node to perform service integrity attestation. To achieve non-repudiation, each service instance is required to produce a receipt for each data item it receives and to sign the data it has processed [12].

AdapTest performs attack detection using replay-based consistency checking [13]. The basic idea is to duplicate some original inputs and re-send them as attestation data to different functionally-equivalent service instances for a consistency check. Note that attestation data and original data are made indistinguishable to service instances. Moreover, our attestation

scheme does not affect the original data processing. In other words, original data can be routed as before to different service instances for processing, based on certain load balancing and quality-of-service (QoS) management objectives. The attestation data are replayed after the portal receives the original data processing results, rather than being sent concurrently with the original data. This prevents two colluding attackers from detecting attestation by comparing their received data and thereby escaping detection. Although the replay scheme may delay the processing of a single data item, we can overlap the attestation and normal processing of consecutive data items to hide the attestation delay from the user.

Fig. 2. Data-driven service integrity attestation. [Figure: the user submits d1, d2, ... to the portal, which routes them through the cloud system (f1, then f2) and returns f2(f1(d1)), f2(f1(d2)), ...; a duplicated input d1' is replayed through functionally-equivalent instances so the portal can verify f1(d1) = f1(d1') and catch f2(f1(d1)) ≠ f2(f1(d1')), updating the attestation graphs.]
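The replay-based consistency check above can be sketched as a few lines of code. This is a simplified illustration, not the paper's protocol: `portal_send` is a hypothetical stand-in for routing a data item to a service instance and collecting its result, and the sketch replays to every functionally-equivalent replica, whereas AdapTest selects attested replicas adaptively.

```python
def replay_attest(portal_send, functionally_equiv, d, original_node):
    """Minimal sketch of replay-based consistency checking (Section II-C)."""
    result = portal_send(original_node, d)    # normal processing happens first
    # Replay the duplicated input only AFTER the original result arrives,
    # so colluders cannot detect attestation by comparing received data.
    checks = {}
    for node in functionally_equiv:
        if node != original_node:
            checks[node] = (portal_send(node, d) == result)
    return result, checks  # False entries become inconsistency links

# Toy example: s3 cheats (returns its input unchanged instead of doubling it).
portal_send = lambda node, d: d * 2 if node != "s3" else d
result, checks = replay_attest(portal_send, ["s1", "s3", "s5"], 5, "s1")
print(result, checks)  # 10 {'s3': False, 's5': True}
```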
AdapTest leverages our previously developed clique-based algorithm [13] to pinpoint malicious nodes, illustrated by Figure 3. The portal node constructs an attestation graph whose nodes are functionally-equivalent service instances. If two nodes always give consistent output, we connect them with a consistent link. Otherwise, if they give inconsistent results on at least one input data item, we link them with an inconsistent link. Since all benign nodes always give consistent correct results, they form a consistency clique in the attestation graph. In contrast, malicious nodes are exposed with inconsistent links when their misbehavior is caught by our attestation scheme. Note that colluding malicious nodes may try to form a consistency clique by always giving the same wrong results. However, if we assume benign nodes are the majority, a node is definitely malicious if it is outside of all cliques whose sizes are larger than half of the total nodes [13]. For example, in Figure 3, the attestation graph includes two cliques {s1, s4, s5} and {s2, s3}. Since the size of the first clique is larger than half of the total nodes, s2 and s3 are successfully identified as malicious nodes even though they also try to form a clique through colluding.
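The clique-based pinpointing rule can be sketched as follows. This is a brute-force illustration of the rule as stated (a node is flagged only if it lies outside every consistency clique larger than half of the nodes), not the algorithm of [13] itself; the function name and input encoding are assumptions, and exhaustive enumeration is only practical for the small per-function node groups used here.

```python
from itertools import combinations

def pinpoint_malicious(nodes, inconsistent_pairs):
    """Flag nodes outside all majority consistency cliques (zero false positives
    under the benign-majority assumption)."""
    incon = {frozenset(p) for p in inconsistent_pairs}
    n = len(nodes)
    majority_cliques = []
    # A set of nodes is a consistency clique iff no pair inside it is inconsistent.
    for size in range(n // 2 + 1, n + 1):   # only cliques larger than n/2
        for clique in combinations(nodes, size):
            if all(frozenset(pair) not in incon
                   for pair in combinations(clique, 2)):
                majority_cliques.append(set(clique))
    covered = set().union(*majority_cliques) if majority_cliques else set()
    return [s for s in nodes if s not in covered]

# Figure 3 example: s2 and s3 collude but are inconsistent with all benign nodes.
nodes = ["s1", "s2", "s3", "s4", "s5"]
inconsistent = [("s2", "s1"), ("s2", "s4"), ("s2", "s5"),
                ("s3", "s1"), ("s3", "s4"), ("s3", "s5")]
print(pinpoint_malicious(nodes, inconsistent))  # ['s2', 's3']
```

The colluding clique {s2, s3} is too small to cover its members, so both are flagged, matching the Figure 3 discussion.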
AdapTest performs adaptive attestation to quickly expose malicious nodes. We make three key observations. First, we should attest suspicious nodes more often in order to capture selective cheating with minimum attestation data. Second, in order to quickly pinpoint malicious nodes, we need to expose as many inconsistency links as possible. Therefore, AdapTest dynamically derives a set of trust scores for each node based on previous attestation results and uses those trust scores to guide future attestation. Specifically, AdapTest attests nodes with lower trust scores with higher probability, and gives priority to those node pairs that have not been attested before or have always been consistent before. Third, attesting multi-hop data processing services requires additional consideration, since inconsistent intermediate processing results from upstream hops will invalidate attestation for all downstream hops. To address this problem, AdapTest intentionally picks good nodes, based on previous attestation results, for upstream hops in order to effectively attest downstream hops.

Fig. 3. Clique-based malicious node pinpointing. [Figure: attestation graph for f1 with benign nodes s1, s4, s5 joined by consistency links and malicious nodes s2, s3 attached via inconsistency links.]
Note that AdapTest does not use the trust scores to directly pinpoint malicious nodes. Without assuming the trustworthiness of any node, the trust scores only represent the relative goodness of different nodes. The trust score of a specific node can change dynamically after the node is attested with different nodes. Even if the node trust scores stabilize, it is very difficult, if not impossible, to pre-define a proper trust score threshold separating the malicious from the benign nodes. Such a threshold depends on a set of unknown factors such as the percentage of malicious nodes and the misbehaving probability of those malicious nodes. Thus, AdapTest only uses trust scores to guide attestation, and still uses the clique-based malicious node pinpointing algorithm to guarantee zero false positives [13].
Assumptions. First, we assume that data processing services are stateless and deterministic; that is, given the same input, a benign node always produces the same output. Many data processing functions, such as projection, selection, and filtering, fall into this category [15]. We can also extend our scheme to support stateful data processing services [11], which however is beyond the scope of this paper. Second, we assume that benign nodes are the majority within each group of functionally-equivalent service instances. This assumption is the same as in other common attack detection schemes [25]. Third, we assume that the portal node is trusted; it is solely managed by the portal service provider, whose goal is to provide trustworthy data processing services for its clients. The portal node plays a similar role to the dispatcher used by previous remote attestation schemes [30], which is also assumed to be trusted. Further, the portal node can employ authentication to easily protect itself from malicious clients or malicious application service providers.

Fig. 4. Weighted attestation graph. [Figure: attestation graph for f1 over {s1, ..., s5}; each edge carries a weight (I_{i,j}, C_{i,j}), where I_{i,j} is the inconsistency counter and C_{i,j} the consistency counter (edge weights shown: (4, 2), (3, 2), (0, 5), (3, 1)). The derived trust scores of s1 are α1 = 0.5 and pairwise scores β1,2 = 0.33 and β1,4 = 1, with 0.4 and 0.25 for the remaining pairs.]
III. DESIGN AND ALGORITHMS
In this section, we present the design and algorithm details
of the AdapTest system. We first describe a weighted attesta-
tion graph model that serves as the basis of our approach. Next,
we present the details of the per-hop adaptive attestation algorithm, followed by the multi-hop adaptive attestation scheme.
A. Weighted Attestation Graph
AdapTest strives to pinpoint malicious service instances without making any prior assumption about the trustworthiness of any service instance. Moreover, malicious attackers can perform selective cheating during long-running data processing services, which means the trust score of a service instance must be continuously monitored and updated. Thus, AdapTest employs a weighted attestation graph to aggregate previous attestation results and dynamically derive a set of trust scores for each service instance, as illustrated by Figure 4. We formally define the weighted attestation graph as follows.
Definition 1: A weighted attestation graph is an undirected
complete graph consisting of all functionally equivalent ser-
vice instances as nodes. The weight of each edge consists of
a pair of counters denoting the number of inconsistent results
and the number of consistent results respectively.
For example, in Figure 4, s1 produces three inconsistent results and two consistent results with s5. Two nodes are connected by a consistent link only if they have zero inconsistent results. We can derive a node trust score and a set of pairwise trust scores for each node from the weighted attestation graph. The node trust score denotes how trustworthy a node is, and the pairwise trust score denotes how much two nodes trust each other. We formally define both the node trust score and the pairwise trust score as follows.

Definition 2: The trust score of node s_i, denoted by α_i, is defined as the fraction of consistent results returned by s_i when attested with all the other nodes. Node trust scores range within [0, 1] and are initialized to 1.

Definition 3: The pairwise trust score between two service instances s_i and s_j, denoted by β_{i,j}, is the fraction of consistent results when s_i is attested against s_j. Pairwise trust scores range within [0, 1] and are initialized to -1, meaning that s_i and s_j have not yet been attested with each other.
The trust score of a node takes into account the consistency relationships between this node and all the other nodes. For example, in Figure 4, s1 has a node trust score of 0.5 since it has 10 consistent results and 10 inconsistent results in total with {s2, s3, s4, s5}. Intuitively, malicious nodes should have higher probabilities than benign ones of being inconsistent with other nodes, given that benign nodes are the majority. Thus, we assign node trust scores according to how consistent a node is with the other nodes. Nodes that are more consistent with the others have higher trust scores and are considered more trustworthy. The trust score of a node s_i decreases if i) s_i is inconsistent with more nodes, or ii) s_i is inconsistent with other nodes more frequently.
The pairwise trust scores reflect how consistent two nodes are and therefore how much they trust each other: the more frequently two nodes give inconsistent results, the lower the pairwise trust score between them. Note that we initialize pairwise trust scores to -1 to indicate that the two nodes have not been attested together before. In Figure 4, if the pairwise trust score between two nodes equals 1, we draw a solid line between them; otherwise, if two nodes do not always agree with each other, we use a dashed line to represent the inconsistency relationship. The pairwise trust score between s1 and s2 is 2/(4+2) = 0.33 since they produce 4 inconsistent results and 2 consistent results.
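Definitions 2 and 3 can be computed directly from the edge counters. The sketch below assumes a hypothetical `counters` encoding of Figure 4's weighted attestation graph; the assignment of the remaining edge weights to specific pairs (beyond the s1-s2 edge stated in the text) is inferred from the β values shown in the figure, not confirmed by the paper.

```python
def trust_scores(node, counters):
    """Derive the node trust score alpha_i (Definition 2) and the pairwise
    trust scores beta_{i,j} (Definition 3) from counters mapping
    (i, j) -> (inconsistent, consistent)."""
    beta = {}
    total_inc = total_con = 0
    for (i, j), (inc, con) in counters.items():
        if node in (i, j):
            other = j if i == node else i
            # -1 marks pairs that have never been attested together.
            beta[other] = con / (inc + con) if (inc + con) else -1.0
            total_inc += inc
            total_con += con
    total = total_inc + total_con
    alpha = total_con / total if total else 1.0   # scores initialized to 1
    return alpha, beta

# Figure 4 edge weights (I_{1,j}, C_{1,j}) for node s1 (pair assignment inferred).
counters = {("s1", "s2"): (4, 2), ("s1", "s3"): (3, 1),
            ("s1", "s4"): (0, 5), ("s1", "s5"): (3, 2)}
alpha, beta = trust_scores("s1", counters)
print(round(alpha, 2), round(beta["s2"], 2))  # 0.5 0.33
```

With 10 consistent and 10 inconsistent results in total, α1 = 0.5, matching the running example.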
B. Per-Hop Adaptive Attestation
AdapTest leverages dynamically derived trust scores to
intelligently guide probabilistic service attestation. The goal of
our adaptive attestation scheme is to expose malicious nodes
faster. We achieve the goal by capturing more inconsistency
relationships for malicious nodes so that they can be pushed
out from the maximum consistency clique.
AdapTest expedites the exposure of inconsistency relationships, and therefore shortens detection time, using two adaptive node selection schemes. First, AdapTest selects suspicious nodes that have low trust scores and attests those suspicious nodes more frequently. The rationale is that nodes that have already delivered more inconsistent results are likely to deliver even more inconsistent results in future attestation; by intensively attesting suspicious service nodes, we have a higher probability of finding inconsistent results. Second, AdapTest strives to attest suspicious nodes together with benign nodes, since two colluding malicious nodes will try to avoid producing inconsistent results with each other. Attesting a suspicious node together with a benign one is more effective in producing inconsistent results.
Fig. 5. Per-hop adaptive attestation. [Figure: the portal (1) sends the original d1 to s1 and (2) receives f(d1); it then (3) replays d1' to suspicious node s3 and d1'' to benign node s5, (4) receives f(d1') and f(d1''), and (5) cross-checks f(d1), f(d1'), and f(d1''); instances are annotated with trust scores such as α3 = 0.2 and β3,5 = 1.0.]

For scalability, AdapTest performs probabilistic attestation by randomly selecting a subset of input data for consistency checking. When an input data item is selected for attestation by the portal, AdapTest first identifies a pool of suspicious nodes based on node trust scores and randomly selects a suspicious node from this pool to attest. Given the assumption
that malicious nodes are no more than half of the total nodes, we rank all nodes in increasing order of trust scores, mark the first N/2 nodes as the suspicious node pool B and the rest as the benign node pool G, where N is the total number of nodes providing the function. We then randomly pick one node, s_i, from the suspicious node pool, excluding the node processing the original data, for attestation by comparing with the original node. Note that we do not always attest the node with the lowest trust score, in order to maintain attestation coverage and tolerate imprecise trust scores. Moreover, we want to avoid alerting a malicious node by continuously attesting it. At the beginning, since all nodes have the same initial trust scores, AdapTest randomly picks nodes from the whole node pool to attest.

AdapTest may send multiple attestation data items to attest different nodes concurrently. To maximize the chance of capturing inconsistent results, we want to attest a suspicious node and a benign node together. Thus, after picking a suspicious node s_i from B, AdapTest picks the other attested node from the benign node pool G using the following rules. First, if there are benign nodes that have not been attested with s_i before (i.e., the pairwise trust score equals -1 in the weighted attestation graph), we randomly pick one of them. Second, if all nodes in the benign set have been attested with s_i, we randomly pick one node from G that has always been consistent with s_i. We avoid attesting two nodes that have already been inconsistent with each other, since further attestation will not produce new inconsistency links. Thus, if all nodes in G are inconsistent with s_i, instead of attesting s_i, we randomly select another node, s_j, from the suspicious node set to attest, and select a node from the benign node set according to the above rules to pair with s_j. Note that if all nodes in the suspicious node set have inconsistency links with all the nodes in the benign node set, we randomly pick s_i and then pick the node from G that has the highest pairwise trust score with s_i. Our scheme thus achieves good coverage while avoiding wasted attestation traffic on node pairs that have already exhibited inconsistency relationships.
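The selection rules above can be sketched as a single pair-picking routine. This is a simplified illustration, not AdapTest's implementation: the function name and data encoding are assumptions, and it departs slightly from the text by excluding the original node before splitting the pools and by returning just one (suspicious, benign) pair.

```python
import random

def pick_attestation_pair(nodes, alpha, beta, original_node):
    """alpha: node -> trust score; beta: frozenset({i, j}) -> pairwise score,
    with -1 meaning 'never attested together'."""
    ranked = sorted((n for n in nodes if n != original_node),
                    key=lambda n: alpha[n])          # most suspicious first
    half = len(nodes) // 2
    suspicious, benign = ranked[:half], ranked[half:]
    random.shuffle(suspicious)   # don't always attest the lowest-score node
    for s in suspicious:
        scores = {g: beta.get(frozenset((s, g)), -1.0) for g in benign}
        never = [g for g in benign if scores[g] == -1.0]
        if never:                       # rule 1: prefer unattested pairs
            return s, random.choice(never)
        consistent = [g for g in benign if scores[g] == 1.0]
        if consistent:                  # rule 2: else always-consistent pairs
            return s, random.choice(consistent)
    # Fallback: every suspicious/benign pair is already inconsistent; pick a
    # random suspicious node and its highest-beta benign partner.
    s = random.choice(suspicious)
    return s, max(benign, key=lambda g: beta.get(frozenset((s, g)), -1.0))

# Example: no attestation history yet, so any suspicious/benign pair qualifies.
alpha = {"s1": 0.9, "s2": 0.6, "s3": 0.2, "s4": 0.1, "s5": 0.8}
suspect, partner = pick_attestation_pair(["s1", "s2", "s3", "s4", "s5"],
                                         alpha, {}, "s1")
# suspect comes from {s3, s4}; partner from the benign pool {s2, s5}
```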
Fig. 6. Multi-hop adaptive attestation. [Figure: a three-hop pipeline f1 → f2 → f3 with instance pools {s1, ..., s5}, {s6, ..., s10}, and {s11, ..., s15}; to attest the target hop f2, the portal (1) sends the original d1, (2) receives f3(f2(f1(d1))), (3) replays d1' and d1'', and (4) receives f3(f2(f1(d1'))) and f3(f2(f1(d1''))), pairing suspicious node s8 (α8 = 0.2) with nodes from the benign pool.]

Figure 5 shows an example of adaptive per-hop attestation. The number associated with each protocol step indicates the execution order; if two steps have the same number, they are executed concurrently. The portal first sends the original data d1 to s1 for processing. After the portal receives the result from s1, it decides to perform attestation by replaying d1 on two service instances. The portal first randomly picks one node from the suspicious node set {s3, s4} to attest, say s3. Then the portal picks the node with the highest pairwise trust score with s3 from the benign set {s1, s2, s5}, say s5.
Note that our scheme is robust to strategic attacks. For example, a malicious node s3 may behave benignly at the beginning to join the benign node pool and then start to misbehave by colluding with its colluder s4 in the suspicious node pool, intending to hide its own misbehavior by sacrificing s4. However, s3 and s4 are not only attested against each other but also against the node that processes the original data, which is randomly selected. Even if s3 and s4 are consistent with each other, they will have inconsistency links with all benign nodes. Thus, s3 and s4 will eventually be pinpointed by our clique-based algorithm. If either s3 or s4 is selected as the original node, our replay-based attestation scheme prevents it from knowing whether it will be compared with benign nodes or with its colluding party. Thus, if s3 tries to pretend to be benign, it will have an inconsistency link with s4 with high probability. In this case, AdapTest will rarely attest s3 and s4 together, since we avoid attesting two nodes that already have an inconsistency link between them. Note that s3 and s4 can help each other increase their trust scores and decrease other nodes' trust scores by providing consistent wrong results. However, as mentioned in Section II-C, AdapTest does not rely on the trust scores to pinpoint malicious nodes. Instead, AdapTest considers only the consistency/inconsistency links and uses the clique-based algorithm to pinpoint malicious nodes. The trust scores are used only to expose more inconsistency links with less attestation data.
C. Multi-Hop Adaptive Attestation
Complicated data processing services often comprise multiple data processing functions called service hops. Malicious

References (partial)
- L. Lamport, R. Shostak, and M. Pease. The Byzantine Generals Problem.
- M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly. Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks.
- C. Erway, A. Küpçü, C. Papamanthou, and R. Tamassia. Dynamic Provable Data Possession.
- R. Kotla, L. Alvisi, M. Dahlin, A. Clement, and E. Wong. Zyzzyva: Speculative Byzantine Fault Tolerance.
- K. Bowers, A. Juels, and A. Oprea. HAIL: A High-Availability and Integrity Layer for Cloud Storage.