Achieving Secure, Scalable, and Fine-grained Data
Access Control in Cloud Computing
Shucheng Yu, Cong Wang, Kui Ren, and Wenjing Lou
Dept. of ECE, Worcester Polytechnic Institute, Email: {yscheng, wjlou}@ece.wpi.edu
Dept. of ECE, Illinois Institute of Technology, Email: {cong, kren}@ece.iit.edu
Abstract—Cloud computing is an emerging computing
paradigm in which resources of the computing infrastructure
are provided as services over the Internet. As promising as it is,
this paradigm also brings forth many new challenges for data
security and access control when users outsource sensitive data
for sharing on cloud servers, which are not within the same
trusted domain as data owners. To keep sensitive user data
confidential against untrusted servers, existing solutions usually
apply cryptographic methods by disclosing data decryption keys
only to authorized users. However, in doing so, these solutions
inevitably introduce a heavy computation overhead on the data
owner for key distribution and data management when fine-
grained data access control is desired, and thus do not scale
well. The problem of simultaneously achieving fine-grainedness,
scalability, and data confidentiality of access control actually still
remains unresolved. This paper addresses this challenging open
issue by, on one hand, defining and enforcing access policies based
on data attributes, and, on the other hand, allowing the data
owner to delegate most of the computation tasks involved in fine-
grained data access control to untrusted cloud servers without
disclosing the underlying data contents. We achieve this goal by
exploiting and uniquely combining techniques of attribute-based
encryption (ABE), proxy re-encryption, and lazy re-encryption.
Our proposed scheme also has salient properties of user access
privilege confidentiality and user secret key accountability. Exten-
sive analysis shows that our proposed scheme is highly efficient
and provably secure under existing security models.
I. INTRODUCTION
Cloud computing is a promising computing paradigm which
recently has drawn extensive attention from both academia and
industry. By combining a set of existing and new techniques
from research areas such as Service-Oriented Architectures
(SOA) and virtualization, cloud computing is regarded as a
computing paradigm in which resources in the computing
infrastructure are provided as services over the Internet. Along
with this new paradigm, various business models have been
developed, which can be described by the terminology "X as a
service (XaaS)" [1], where X could be software, hardware,
data storage, etc. Successful examples are Amazon's EC2
and S3 [2], Google App Engine [3], and Microsoft Azure [4]
which provide users with scalable resources in the pay-as-you-
use fashion at relatively low prices. For example, Amazon’s S3
data storage service just charges $0.12 to $0.15 per gigabyte-
month. As compared to building their own infrastructures,
users are able to save their investments significantly by migrat-
ing businesses into the cloud. With the increasing development
of cloud computing technologies, it is not hard to imagine that
in the near future more and more businesses will be moved
into the cloud.
As promising as it is, cloud computing is also facing many
challenges that, if not well resolved, may impede its fast
growth. Data security, as it exists in many other applications,
is among these challenges that would raise great concerns
from users when they store sensitive information on cloud
servers. These concerns originate from the fact that cloud
servers are usually operated by commercial providers which
are very likely to be outside of the trusted domain of the users.
Data confidentiality against cloud servers is hence frequently
desired when users outsource data for storage in the cloud. In
some practical application systems, data confidentiality is not
only a security/privacy issue, but also a matter of juristic concern. For
example, in healthcare application scenarios, use and disclosure
of protected health information (PHI) should meet the require-
ments of Health Insurance Portability and Accountability Act
(HIPAA) [5], and keeping user data confidential against the
storage servers is not just an option, but a requirement.
Furthermore, we observe that there are also cases in which
cloud users themselves are content providers. They publish
data on cloud servers for sharing and need fine-grained data
access control in terms of which user (data consumer) has the
access privilege to which types of data. In the healthcare case,
for example, a medical center would be the data owner who
stores millions of healthcare records in the cloud. It would
allow data consumers such as doctors, patients, and researchers
to access various types of healthcare records under
policies admitted by HIPAA. To enforce these access policies,
the data owners on one hand would like to take advantage of
the abundant resources that the cloud provides for efficiency
and economy; on the other hand, they may want to keep the
data contents confidential against cloud servers.
As a significant research area for system protection, data
access control has been evolving in the past thirty years and
various techniques [6]–[9] have been developed to effectively
implement fine-grained access control, which allows flexibility
in specifying differential access rights of individual users. Tra-
ditional access control architectures usually assume the data
owner and the servers storing the data are in the same trusted
domain, where the servers are fully entrusted as an omniscient
reference monitor [10] responsible for defining and enforcing
access control policies. This assumption however no longer
holds in cloud computing since the data owner and cloud
servers are very likely to be in two different domains. On one
hand, cloud servers are not entitled to access the outsourced
data content for data confidentiality; on the other hand, the
data resources are not physically under the full control of
978-1-4244-5837-0/10/$26.00 ©2010 IEEE
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE INFOCOM 2010 proceedings
This paper was presented as part of the main Technical Program at IEEE INFOCOM 2010.

the owner. For the purpose of helping the data owner enjoy
fine-grained access control of data stored on untrusted cloud
servers, a feasible solution would be encrypting data through
certain cryptographic primitive(s), and disclosing decryption
keys only to authorized users. Unauthorized users, including
cloud servers, are not able to decrypt since they do not have
the data decryption keys. This general method actually has
been widely adopted by existing works [11]–[14] which aim
at securing data storage on untrusted servers. One critical issue
with this branch of approaches is how to achieve the desired
security goals without introducing a high complexity on key
management and data encryption. These existing works, as
we will discuss in section V-C, resolve this issue either by
introducing a per file access control list (ACL) for fine-grained
access control, or by categorizing files into several filegroups
for efficiency. As the system scales, however, the complexity
of the ACL-based scheme would be proportional to the number
of users in the system. The filegroup-based scheme, on
the other hand, is just able to provide coarse-grained data
access control. It actually still remains open to simultaneously
achieve the goals of fine-grainedness, scalability, and data
confidentiality for data access control in cloud computing.
In this paper, we address this open issue and propose a
secure and scalable fine-grained data access control scheme
for cloud computing. Our proposed scheme is partially based
on our observation that, in practical application scenarios each
data file can be associated with a set of attributes which are
meaningful in the context of interest. The access structure of
each user can thus be defined as a unique logical expression
over these attributes to reflect the scope of data files that
the user is allowed to access. As the logical expression can
represent any desired data file set, fine-grainedness of data
access control is achieved. To enforce these access structures,
we define a public key component for each attribute. Data files
are encrypted using public key components corresponding to
their attributes. User secret keys are defined to reflect their
access structures so that a user is able to decrypt a ciphertext
if and only if the data file attributes satisfy his access structure.
Such a design also brings an efficiency benefit, as
compared to previous works, in that, 1) the complexity of
encryption is just related to the number of attributes associated
with the data file, and is independent of the number of users
in the system; and 2) data file creation/deletion and new user
grant operations just affect the current file/user without involving
system-wide data file update or re-keying. One extremely
challenging issue with this design is the implementation of
user revocation, which would inevitably require re-encryption
of data files accessible to the leaving user, and may need
update of secret keys for all the remaining users. If all these
tasks are performed by the data owner himself/herself, it would
introduce a heavy computation overhead on him/her and may
also require the data owner to be always online. To resolve
this challenging issue, our proposed scheme enables the data
owner to delegate tasks of data file re-encryption and user
secret key update to cloud servers without disclosing data
contents or user access privilege information. We achieve our
design goals by exploiting a novel cryptographic primitive,
namely key policy attribute-based encryption (KP-ABE) [15],
and uniquely combine it with the technique of proxy re-
encryption (PRE) [16] and lazy re-encryption [11].
Main contributions of this paper can be summarized as
follows. 1) To the best of our knowledge, this paper is the first
that simultaneously achieves fine-grainedness, scalability and
data confidentiality for data access control in cloud computing;
2) Our proposed scheme enables the data owner to delegate
most of computation intensive tasks to cloud servers without
disclosing data contents or user access privilege information;
3) The proposed scheme is provably secure under the standard
security model. In addition, our proposed scheme is able to
support user accountability with minor extension.
The rest of this paper is organized as follows. Section II
discusses models and assumptions. Section III reviews some
technique preliminaries pertaining to our construction. Section
IV presents our construction. In section V, we analyze our
proposed scheme in terms of its security and performance.
We conclude this paper in Section VI.
II. MODELS AND ASSUMPTIONS
A. System Models
Similar to [17], we assume that the system is composed of
the following parties: the Data Owner, many Data Consumers,
many Cloud Servers, and a Third Party Auditor if necessary.
To access data files shared by the data owner, Data Consumers,
or users for brevity, download data files of their interest from
Cloud Servers and then decrypt. Neither the data owner nor
users will be always online. They come online just on the
necessity basis. For simplicity, we assume that the only access
privilege for users is data file reading. Extending our proposed
scheme to support data file writing is trivial by asking the data
writer to sign the new data file on each update as [12] does.
From now on, we will also call data files by files for brevity.
Cloud Servers are always online and operated by the Cloud
Service Provider (CSP). They are assumed to have abundant
storage capacity and computation power. The Third Party
Auditor is also an online party which is used for auditing every
file access event. In addition, we also assume that the data
owner can not only store data files but also run his own code
on Cloud Servers to manage his data files. This assumption
coincides with the unified ontology of cloud computing which
is recently proposed by Youseff et al. [18].
B. Security Models
In this work, we just consider Honest but Curious Cloud
Servers as [14] does. That is to say, Cloud Servers will follow
our proposed protocol in general, but try to find out as much
secret information as possible based on their inputs. More
specifically, we assume Cloud Servers are more interested
in file contents and user access privilege information than
other secret information. Cloud Servers might collude with a
small number of malicious users for the purpose of harvesting
file contents when it is highly beneficial. Communication
channels between the data owner/users and Cloud Servers are
assumed to be secured under existing security protocols such
as SSL. Users would try to access files either within or outside
the scope of their access privileges. To achieve this goal,

unauthorized users may work independently or cooperatively.
In addition, each party is preloaded with a public/private key
pair and the public key can be easily obtained by other parties
when necessary.
C. Design Goals
Our main design goal is to help the data owner achieve
fine-grained access control on files stored by Cloud Servers.
Specifically, we want to enable the data owner to enforce a
unique access structure on each user, which precisely des-
ignates the set of files that the user is allowed to access.
We also want to prevent Cloud Servers from being able to
learn both the data file contents and user access privilege
information. In addition, the proposed scheme should be able
to achieve security goals like user accountability and support
basic operations such as user grant/revocation as a general
one-to-many communication system would require. All these
design goals should be achieved efficiently in the sense that
the system is scalable.
III. TECHNIQUE PRELIMINARIES
A. Key Policy Attribute-Based Encryption (KP-ABE)
KP-ABE [15] is a public key cryptography primitive for
one-to-many communications. In KP-ABE, data are associated
with attributes for each of which a public key component is
defined. The encryptor associates the set of attributes to the
message by encrypting it with the corresponding public key
components. Each user is assigned an access structure which
is usually defined as an access tree over data attributes, i.e.,
interior nodes of the access tree are threshold gates and leaf
nodes are associated with attributes. User secret key is defined
to reflect the access structure so that the user is able to decrypt
a ciphertext if and only if the data attributes satisfy his access
structure. A KP-ABE scheme is composed of four algorithms
which can be defined as follows:
Setup This algorithm takes as input a security parameter κ
and the attribute universe U = {1, 2, ..., N} of cardinality N.
It defines a bilinear group G_1 of prime order p with a
generator g, and a bilinear map e : G_1 × G_1 → G_2 which has the
properties of bilinearity, computability, and non-degeneracy.
It returns the public key PK as well as a system master key
MK as follows:

    PK = (Y, T_1, T_2, ..., T_N)
    MK = (y, t_1, t_2, ..., t_N)

where T_i ∈ G_1 and t_i ∈ Z_p are for attribute i, 1 ≤ i ≤ N, and
Y ∈ G_2 is another public key component. We have T_i = g^{t_i}
and Y = e(g, g)^y, y ∈ Z_p. While PK is publicly known to
all the parties in the system, MK is kept as a secret by the
authority party.
Encryption This algorithm takes a message M, the public key
PK, and a set of attributes I as input. It outputs the ciphertext
E with the following format:

    E = (I, Ẽ, {E_i}_{i∈I})

where Ẽ = M · Y^s, E_i = T_i^s, and s is randomly chosen from
Z_p.
Key Generation This algorithm takes as input an access tree
T, the master key MK, and the public key PK. It outputs
a user secret key SK as follows. First, it defines a random
polynomial p_i(x) for each node i of T in the top-down manner
starting from the root node r. For each non-root node j,
p_j(0) = p_{parent(j)}(idx(j)), where parent(j) represents j's
parent and idx(j) is j's unique index given by its parent. For
the root node r, p_r(0) = y. Then it outputs SK as follows:

    SK = {sk_i}_{i∈L}

where L denotes the set of attributes attached to the leaf
nodes of T and sk_i = g^{p_i(0)/t_i}.
Decryption This algorithm takes as input the ciphertext E
encrypted under the attribute set I, the user's secret key
SK for access tree T, and the public key PK. It first
computes e(E_i, sk_i) = e(g, g)^{p_i(0)·s} for leaf nodes. Then, it
aggregates these pairing results in the bottom-up manner using
the polynomial interpolation technique. Finally, it may recover
the blind factor Y^s = e(g, g)^{ys} and output the message M if
and only if I satisfies T.
Please refer to [15] for more details on KP-ABE algorithms.
[19] is an enhanced KP-ABE scheme which supports user
secret key accountability.
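Whether an attribute set satisfies an access tree needs no cryptography: recursively, a k-of-n threshold gate is satisfied iff at least k of its children are. The following sketch is ours, not from [15]; the `Gate`/`Leaf` names are illustrative, and the sample policy mirrors the dummy-attribute convention used later in Section IV-B:

```python
from dataclasses import dataclass, field

@dataclass
class Leaf:
    attribute: str          # leaf node: associated with one data attribute

@dataclass
class Gate:
    threshold: int          # k-of-n threshold gate (k = n is AND, k = 1 is OR)
    children: list = field(default_factory=list)

def satisfies(node, attributes):
    """Return True iff the attribute set satisfies the access (sub)tree."""
    if isinstance(node, Leaf):
        return node.attribute in attributes
    satisfied = sum(satisfies(child, attributes) for child in node.children)
    return satisfied >= node.threshold

# Root is an AND gate over the dummy attribute and the real policy.
policy = Gate(2, [Leaf("Att_D"),
                  Gate(1, [Gate(2, [Leaf("Hospital:A"), Leaf("Illness:diabetes")]),
                           Leaf("Race:asian")])])

assert satisfies(policy, {"Att_D", "Race:asian"})
assert not satisfies(policy, {"Hospital:A", "Illness:diabetes"})  # dummy attr missing
```

In the real KP-ABE decryption the same recursion is carried out on pairing results via polynomial interpolation, but the satisfiability condition it enforces is exactly this one.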
B. Proxy Re-Encryption (PRE)
Proxy Re-Encryption (PRE) is a cryptographic primitive in
which a semi-trusted proxy is able to convert a ciphertext
encrypted under Alice's public key into another ciphertext
that can be opened by Bob's private key without seeing the
underlying plaintext. More formally, a PRE scheme allows the
proxy, given the proxy re-encryption key rk_{a↔b}, to translate
ciphertexts under public key pk_a into ciphertexts under public
key pk_b and vice versa. Please refer to [16] for more details
on proxy re-encryption schemes.
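To make the primitive concrete, here is a toy ElGamal-style PRE sketch in the spirit of [16] (rk_{a↔b} = b/a, so the proxy re-keys by a single exponentiation). All parameters are our own illustrations and far too small for real use:

```python
import secrets

# Toy group: the subgroup of order q in Z_p*, with p = 2q + 1 a safe prime.
p, q, g = 23, 11, 4            # 4 generates the order-11 subgroup of Z_23*

def keygen():
    sk = secrets.randbelow(q - 1) + 1       # sk in [1, q-1]
    return sk, pow(g, sk, p)                # (private, public = g^sk)

def encrypt(pk, m):
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))   # (m·g^r, g^{a·r})

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q      # rk_{a->b} = b/a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))             # g^{a·r} -> g^{b·r}

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)        # recover g^r
    return c1 * pow(g_r, -1, p) % p

a, pk_a = keygen()
b, pk_b = keygen()
ct_a = encrypt(pk_a, 9)                     # encrypted for Alice
ct_b = reencrypt(rekey(a, b), ct_a)         # proxy translates, never sees 9
assert decrypt(b, ct_b) == 9                # Bob opens the translated ciphertext
```

Note that with rk = b/a the key works in both directions (raise to rk or its inverse), matching the "and vice versa" above; unidirectional schemes exist but are more involved.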
IV. OUR PROPOSED SCHEME
A. Main Idea
In order to achieve secure, scalable and fine-grained access
control on outsourced data in the cloud, we utilize and
uniquely combine the following three advanced crypto-
graphic techniques: KP-ABE, PRE, and lazy re-encryption. More
specifically, we associate each data file with a set of attributes,
and assign each user an expressive access structure which is
defined over these attributes. To enforce this kind of access
control, we utilize KP-ABE to escort data encryption keys of
data files. Such a construction enables us to immediately enjoy
fine-grainedness of access control. However, this construc-
tion, if deployed alone, would introduce heavy computation
overhead and cumbersome online burden towards the data
owner, as he is in charge of all the operations of data/user
management. Specifically, such an issue is mainly caused by
the operation of user revocation, which inevitably requires
the data owner to re-encrypt all the data files accessible to
the leaving user, or even needs the data owner to stay online
to update secret keys for users. To resolve this challenging
issue and make the construction suitable for cloud computing,
we uniquely combine PRE with KP-ABE and enable the

[Fig. 1: An exemplary case in the healthcare scenario. The
data owner outsources encrypted files to Cloud Servers, which
users then access. A file carries attributes such as "Illness:
diabetes", "Hospital: A", and "Race: asian", plus the dummy
attribute; a user access structure is a tree of AND/OR gates
over these attributes, rooted at an AND gate with the dummy
attribute as one child.]
data owner to delegate most of the computation intensive
operations to Cloud Servers without disclosing the underlying
file contents. Such a construction allows the data owner to
control access of his data files with a minimal overhead
in terms of computation effort and online time, and thus
fits well into the cloud environment. Data confidentiality is
also achieved since Cloud Servers are not able to learn the
plaintext of any data file in our construction. For further
reducing the computation overhead on Cloud Servers and thus
saving the data owner’s investment, we take advantage of
the lazy re-encryption technique and allow Cloud Servers to
“aggregate” computation tasks of multiple system operations.
As we will discuss in section V-B, the computation complexity
on Cloud Servers is either proportional to the number of
system attributes, or linear in the size of the user access
structure/tree, which is independent of the number of users
in the system. Scalability is thus achieved. In addition, our
construction also protects user access privilege information
against Cloud Servers. Accountability of user secret keys can
also be achieved by using an enhanced scheme of KP-ABE.
B. Definition and Notation
For each data file the owner assigns a set of meaningful
attributes which are necessary for access control. Different
data files can have a subset of attributes in common. Each
attribute is associated with a version number for the purpose
of attribute update as we will discuss later. Cloud Servers
keep an attribute history list AHL which records the version
evolution history of each attribute and PRE keys used. In
addition to these meaningful attributes, we also define one
dummy attribute, denoted by the symbol Att_D, for the purpose
of key management. Att_D is required to be included in every
data file's attribute set and will never be updated. The access
structure of each user is implemented by an access tree.
Interior nodes of the access tree are threshold gates. Leaf nodes
of the access tree are associated with data file attributes. For
the purpose of key management, we require the root node
to be an AND gate (i.e., n-of-n threshold gate) with one
child being the leaf node which is associated with the dummy
attribute, and the other child node being any threshold gate.
The dummy attribute will not be attached to any other node in
the access tree. Fig.1 illustrates our definitions by an example.
In addition, Cloud Servers also keep a user list UL which
records IDs of all the valid users in the system. Fig.2 gives
the description of notation to be used in our scheme.
Notation      Description
PK, MK        system public key and master key
T_i           public key component for attribute i
t_i           master key component for attribute i
SK            user secret key
sk_i          user secret key component for attribute i
E_i           ciphertext component for attribute i
I             attribute set assigned to a data file
DEK           symmetric data encryption key of a data file
P             user access structure
L_P           set of attributes attached to leaf nodes of P
Att_D         the dummy attribute
UL            the system user list
AHL_i         attribute history list for attribute i
rk_{i↔i'}     proxy re-encryption key for attribute i from
              its current version to the updated version i'
δ_{O,X}       the data owner's signature on message X

Fig. 2: Notation used in our scheme description
C. Scheme Description
For clarity we will present our proposed scheme in two
levels: System Level and Algorithm Level. At system level,
we describe the implementation of high level operations, i.e.,
System Setup, New File Creation, New User Grant, and User
Revocation, File Access, File Deletion, and the interaction
between involved parties. At algorithm level, we focus on the
implementation of low level algorithms that are invoked by
system level operations.
1) System Level Operations: System level operations in our
proposed scheme are designed as follows.
System Setup In this operation, the data owner chooses a
security parameter κ and calls the algorithm level interface
ASetup(κ), which outputs the system public parameter PK
and the system master key MK. The data owner then signs
each component of PK and sends PK along with these
signatures to Cloud Servers.
New File Creation Before uploading a file to Cloud Servers,
the data owner processes the data file as follows.
• select a unique ID for this data file;
• randomly select a symmetric data encryption key DEK ←R K,
  where K is the key space, and encrypt the data file using DEK;
• define a set of attributes I for the data file and encrypt DEK
  with I using KP-ABE, i.e., (Ẽ, {E_i}_{i∈I}) ← AEncrypt(I, DEK, PK).

    header: ID, I, Ẽ, {E_i}_{i∈I}    |    body: {DataFile}_DEK

Fig. 3: Format of a data file stored on the cloud
Finally, each data file is stored on the cloud in the format
as is shown in Fig.3.
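The hybrid per-file pattern (a fresh symmetric DEK for the bulk data, with the DEK escorted under the file's attributes by KP-ABE) can be sketched as follows. The KP-ABE step is stubbed out, and the SHA-256 counter keystream is our illustrative stand-in for the unspecified symmetric cipher:

```python
import hashlib, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (illustrative only; a vetted AEAD
    such as AES-GCM should be used in any real deployment)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_file(data: bytes, attributes: set):
    dek = secrets.token_bytes(32)                       # DEK ←R K
    nonce = secrets.token_bytes(16)
    body = bytes(x ^ y for x, y in zip(data, keystream(dek, nonce, len(data))))
    # Stand-in for (Ẽ, {E_i}) ← AEncrypt(I, DEK, PK): in the real scheme the
    # DEK is encrypted under the attribute set with KP-ABE, not stored raw.
    header = {"ID": secrets.token_hex(8), "I": attributes,
              "E_tilde": "AEncrypt(I, DEK, PK) -- stub", "nonce": nonce}
    return header, body, dek                            # dek returned only for the demo

header, body, dek = encrypt_file(b"patient record #42",
                                 {"Illness:diabetes", "Hospital:A", "Att_D"})
plain = bytes(x ^ y for x, y in zip(body, keystream(dek, header["nonce"], len(body))))
assert plain == b"patient record #42"
```

Because only the short DEK goes through KP-ABE, encryption cost grows with the number of attributes, not with the file size or the number of users.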
New User Grant When a new user wants to join the system,
the data owner assigns an access structure and the correspond-
ing secret key to this user as follows.

// to revoke user v
// stage 1: attribute update.
The Data Owner:
  1. D ← AMinimalSet(P), where P is v's access structure;
  2. for each attribute i in D: (t_i', T_i', rk_{i↔i'}) ← AUpdateAtt(i, MK);
  3. send Att = (v, D, {i, T_i', δ_{O,(i,T_i')}, rk_{i↔i'}}_{i∈D}) to Cloud Servers.
Cloud Servers, on receiving Att:
  remove v from the system user list UL;
  for each attribute i ∈ D: store (i, T_i', δ_{O,(i,T_i')}) and add rk_{i↔i'} to i's history list AHL_i.

// stage 2: data file and user secret key update.
User u:
  1. generate data file access request REQ and send it to Cloud Servers;
  2. wait for the response from Cloud Servers;
  3. on receiving RESP, verify each δ_{O,(j,T_j')} and sk_j'; proceed if all correct;
  4. replace each sk_j in SK with sk_j';
  5. decrypt each file in FL with SK.
Cloud Servers:
  1. on receiving REQ, proceed if u ∈ UL;
  2. get the tuple (u, {j, sk_j}_{j∈L_P\Att_D});
     for each attribute j ∈ L_P\Att_D: sk_j' ← AUpdateSK(j, sk_j, AHL_j);
     for each requested file f in REQ, for each attribute k ∈ I_f:
       E_k' ← AUpdateAtt4File(k, E_k, AHL_k);
  3. send RESP = ({j, sk_j', T_j', δ_{O,(j,T_j')}}_{j∈L_P\Att_D}, FL).

Fig. 4: Description of the process of user revocation
• assign the new user a unique identity w and an access
  structure P;
• generate a secret key SK for w, i.e., SK ← AKeyGen(P, MK);
• encrypt the tuple (P, SK, PK, δ_{O,(P,SK,PK)}) with user
  w's public key, denoting the ciphertext by C;
• send the tuple (T, C, δ_{O,(T,C)}) to Cloud Servers, where
  T denotes the tuple (w, {j, sk_j}_{j∈L_P\Att_D}).
On receiving the tuple (T, C, δ_{O,(T,C)}), Cloud Servers
proceed as follows.
• verify δ_{O,(T,C)} and proceed if correct;
• store T in the system user list UL;
• forward C to the user.
On receiving C, the user first decrypts it with his private
key. Then he verifies the signature δ_{O,(P,SK,PK)}. If correct,
he accepts (P, SK, PK) as his access structure, secret key,
and the system public key.
As described above, Cloud Servers store all the secret key
components of SK except for the one corresponding to the
dummy attribute Att_D. Such a design allows Cloud Servers
to update these secret key components during user revocation,
as we will describe soon. As there still exists one undisclosed
secret key component (the one for Att_D), Cloud Servers
cannot use the known ones to correctly decrypt ciphertexts.
Actually, these disclosed secret key components, if given to
any unauthorized user, do not give him any extra advantage
in decryption, as we will show in our security analysis.
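The split of SK between the user and Cloud Servers can be pictured as a trivial partition; attribute names and component values here are illustrative placeholders:

```python
DUMMY = "Att_D"  # the dummy attribute; its key component is never disclosed

def server_visible(sk_components: dict) -> dict:
    """The part of a user's SK that Cloud Servers are allowed to store:
    every component except the one for the dummy attribute."""
    return {attr: comp for attr, comp in sk_components.items() if attr != DUMMY}

# Illustrative SK for one user; the full SK is delivered to the user via C.
sk = {"Hospital:A": "sk_hosp", "Illness:diabetes": "sk_ill", DUMMY: "sk_dummy"}
on_server = server_visible(sk)

assert DUMMY not in on_server            # servers can update components, not decrypt
assert set(sk) - set(on_server) == {DUMMY}
```

Withholding exactly one component is enough because the root AND gate forces every decryption to use the Att_D component, which only the user holds.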
User Revocation We start with the intuition of the user
revocation operation as follows. Whenever there is a user to
be revoked, the data owner first determines a minimal set of
attributes without which the leaving user’s access structure
will never be satisfied. Next, he updates these attributes by
redefining their corresponding system master key components
in MK. Public key components of all these updated attributes
in PK are redefined accordingly. Then, he updates user secret
keys accordingly for all the users except for the one to be
revoked. Finally, DEKs of affected data files are re-encrypted
with the latest version of PK. The main issue with this
intuitive scheme is that it would introduce a heavy computation
overhead for the data owner to re-encrypt data files and
might require the data owner to be always online to provide
secret key update service for users. To resolve this issue, we
combine the technique of proxy re-encryption with KP-ABE
and delegate tasks of data file re-encryption and user secret
key update to Cloud Servers. More specifically, we divide the
user revocation scheme into two stages as is shown in Fig.4.
In the first stage, the data owner determines the minimal set
of attributes, redefines MK and PK for involved attributes,
and generates the corresponding PRE keys. He then sends
the user’s ID, the minimal attribute set, the PRE keys, the
updated public key components, along with his signatures on
these components to Cloud Servers, and can go off-line again.
Cloud Servers, on receiving this message from the data owner,
remove the revoked user from the system user list UL, store
the updated public key components as well as the owner's
signatures on them, and record the PRE key of the latest
version in the attribute history list AHL for each updated
attribute. The AHL of each attribute records the version
evolution history of this attribute as well as the PRE keys
used. Every attribute has its own AHL. With AHL, Cloud
Servers are able to compute a single PRE key that enables
them to update an attribute from any historical version to the
latest version. This property allows Cloud Servers to update
user secret keys and data files in the "lazy" way as follows.
Once a user revocation event occurs, Cloud Servers just record
the information submitted by the data owner as discussed
above. Only when there is a file access request from a user
do Cloud Servers re-encrypt the requested files and update
the requesting user's secret key. This statistically saves a lot
of computation overhead since Cloud Servers are
This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE INFOCOM 2010 proceedings
This paper was presented as part of the main Technical Program at IEEE INFOCOM 2010.
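The AHL mechanism above can be sketched in code. This is an illustrative toy, not the paper's actual KP-ABE/PRE construction: it assumes, as in ElGamal-style PRE schemes, that a re-encryption key between consecutive attribute versions is an exponent ratio modulo a group order, so a chain of historical keys collapses into a single key by multiplication. The names `AttributeHistory`, `record_update`, and `key_from` are hypothetical.

```python
# Illustrative sketch: how an attribute history list (AHL) lets the
# server compose a chain of proxy re-encryption (PRE) keys into one
# key updating any historical version to the latest. PRE keys are
# modeled as exponent ratios t_new/t_old mod Q, which compose by
# multiplication; real schemes work over pairing groups.
Q = 2**127 - 1  # Mersenne prime, standing in for the group order

class AttributeHistory:
    """Records the PRE key for each version update of one attribute."""
    def __init__(self):
        self.pre_keys = []  # pre_keys[i] updates version i -> i+1

    def record_update(self, pre_key):
        self.pre_keys.append(pre_key % Q)

    def key_from(self, version):
        """Single PRE key updating `version` to the latest version."""
        k = 1
        for rk in self.pre_keys[version:]:
            k = (k * rk) % Q
        return k

# Lazy re-encryption: the owner records updates and goes offline; the
# server only composes and applies keys when a file is actually read.
ahl = AttributeHistory()
t = [7]                        # secret attribute exponents per version
for _ in range(3):             # three revocation events
    t.append((t[-1] * 5) % Q)
    ahl.record_update(t[-1] * pow(t[-2], -1, Q))

# Composing the whole history equals the direct key t_latest / t_0.
assert ahl.key_from(0) == (t[-1] * pow(t[0], -1, Q)) % Q
```

Because composition is done once per requested file rather than once per revocation, the server's work stays proportional to actual accesses, which is the point of the "lazy" strategy.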

Citations
Proceedings ArticleDOI

Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing

TL;DR: This paper utilizes and uniquely combines the public-key-based homomorphic authenticator with random masking to achieve a privacy-preserving public cloud data auditing system that meets all the above requirements.
Proceedings ArticleDOI

A Survey of Fog Computing: Concepts, Applications and Issues

TL;DR: The definition of fog computing and similar concepts are discussed, representative application scenarios are introduced, and various aspects of issues the authors may encounter when designing and implementing fog computing systems are identified.
Journal ArticleDOI

Scalable and Secure Sharing of Personal Health Records in Cloud Computing Using Attribute-Based Encryption

TL;DR: A novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers are proposed and a high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE.
Journal ArticleDOI

Privacy-Preserving Multi-Keyword Ranked Search over Encrypted Cloud Data

TL;DR: This paper proposes a basic idea for the MRSE based on secure inner product computation, and gives two significantly improved MRSE schemes to achieve various stringent privacy requirements in two different threat models and further extends these two schemes to support more search semantics.
Journal ArticleDOI

Security Challenges for the Public Cloud

TL;DR: The authors outline several critical security challenges and motivate further investigation of security solutions for a trustworthy public cloud environment.
References
Journal Article

Above the Clouds: A Berkeley View of Cloud Computing

TL;DR: This work focuses on SaaS Providers (Cloud Users) and Cloud Providers, which have received less attention than SaaS Users, and uses the term Private Cloud to refer to internal datacenters of a business or other organization, not made available to the general public.
Proceedings ArticleDOI

Attribute-based encryption for fine-grained access control of encrypted data

TL;DR: This work develops a new cryptosystem for fine-grained sharing of encrypted data that is compatible with Hierarchical Identity-Based Encryption (HIBE), and demonstrates the applicability of the construction to sharing of audit-log information and broadcast encryption.
Journal ArticleDOI

Improved proxy re-encryption schemes with applications to secure distributed storage

TL;DR: Performance measurements of the experimental file system demonstrate the usefulness of proxy re-encryption as a method of adding access control to a secure file system and present new re-Encryption schemes that realize a stronger notion of security.
Book ChapterDOI

Divertible protocols and atomic proxy cryptography

TL;DR: A definition of protocol divertibility is given that applies to arbitrary 2-party protocols and is compatible with Okamoto and Ohta's definition in the case of interactive zero-knowledge proofs and generalizes to cover several protocols not normally associated with divertibility.
Journal ArticleDOI

Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing

TL;DR: To achieve efficient data dynamics, the existing proof of storage models are improved by manipulating the classic Merkle Hash Tree construction for block tag authentication, and an elegant verification scheme is constructed for the seamless integration of these two salient features in the protocol design.
Frequently Asked Questions (11)
Q1. What contributions have the authors mentioned in the paper "Achieving secure, scalable, and fine-grained data access control in cloud computing" ?

Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents.

In their proposed scheme, the authors exploit the technique of hybrid encryption to protect data files, i.e., they encrypt data files using symmetric DEKs and encrypt the DEKs with KP-ABE. 
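The hybrid-encryption pattern can be sketched as follows. This is a minimal illustration, not the paper's scheme: the KP-ABE step is mocked with a placeholder `kp_abe_encrypt`, and the symmetric cipher is a toy SHA-256 counter keystream; a real deployment would use a vetted AEAD such as AES-GCM and an actual ABE library.

```python
# Sketch of hybrid encryption: each file is encrypted under a fresh
# symmetric DEK, and only the short DEK is encrypted with KP-ABE.
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256 counter keystream."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def kp_abe_encrypt(dek: bytes, attributes: list) -> dict:
    # Placeholder: a real KP-ABE ciphertext is decryptable only by
    # users whose access structure the file's attributes satisfy.
    return {"attrs": attributes, "sealed_dek": dek}

def encrypt_file(plaintext: bytes, attributes: list) -> dict:
    dek = os.urandom(32)                        # fresh DEK per file
    return {
        "body": keystream_xor(dek, plaintext),  # bulk data under DEK
        "header": kp_abe_encrypt(dek, attributes),
    }

ct = encrypt_file(b"patient record", ["cardiology", "2010"])
dek = ct["header"]["sealed_dek"]   # stands in for KP-ABE decryption
assert keystream_xor(dek, ct["body"]) == b"patient record"
```

The benefit shown here is the division of labor: the expensive attribute-based operation touches only 32 bytes, while the bulk file is handled by fast symmetric encryption.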

In addition, the proposed scheme should be able to achieve security goals like user accountability and support basic operations such as user grant/revocation as a general one-to-many communication system would require. 

User secret key is defined to reflect the access structure so that the user is able to decrypt a ciphertext if and only if the data attributes satisfy his access structure. 

In order to achieve secure, scalable and fine-grained access control on outsourced data in the cloud, the authors utilize and uniquely combine the following three advanced cryptographic techniques: KP-ABE, PRE and lazy re-encryption. 

Using KP-ABE, the authors are able to immediately enjoy fine-grained data access control and efficient operations such as file creation/deletion and new user grant. 

To resolve this issue, the authors combine the technique of proxy re-encryption with KP-ABE and delegate tasks of data file re-encryption and user secret key update to Cloud Servers. 

As the authors will discuss in section V-B, the computation complexity on Cloud Servers is either proportional to the number of system attributes or linear in the size of the user access structure/tree, and is independent of the number of users in the system. 

In this operation, the data owner chooses a security parameter κ and calls the algorithm-level interface ASetup(κ), which outputs the system public parameter PK and the system master key MK. 
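The shape of this setup interface can be sketched as below. This is a hypothetical toy, not the paper's construction: real KP-ABE setup works over bilinear pairing groups, whereas this sketch uses plain modular exponentiation over a fixed toy prime (standing in for the group a real κ would select) just to show that MK holds one secret exponent per attribute while PK exposes only the corresponding group elements.

```python
# Hypothetical sketch of an ASetup-style interface: the master key MK
# holds a secret exponent t per system attribute; the public parameter
# PK exposes only g^t mod p, from which t is not recoverable.
import secrets

P = 2**127 - 1   # toy prime modulus (a real kappa would pick the size)
G = 5            # generator stand-in

def a_setup(attributes):
    mk = {a: secrets.randbelow(P - 2) + 1 for a in attributes}
    pk = {a: pow(G, t, P) for a, t in mk.items()}
    return pk, mk

pk, mk = a_setup(["doctor", "nurse", "cardiology"])
# Every public component is consistent with its master-key exponent.
assert all(pk[a] == pow(G, mk[a], P) for a in pk)
```

The owner keeps MK private and publishes PK; later operations (file creation, user grant) only ever need the public components, which is what lets most work be delegated.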

Extending their proposed scheme to support data file writing is trivial by asking the data writer to sign the new data file on each update as [12] does. 

To resolve the challenging issue of user revocation, the authors combine the technique of proxy re-encryption with KP-ABE and delegate most of the burdensome computational task to Cloud Servers.