Journal ArticleDOI

Dynamic-Hash-Table Based Public Auditing for Secure Cloud Storage

TL;DR: A novel public auditing scheme for secure cloud storage based on a dynamic hash table (DHT), a new two-dimensional data structure located at a third party auditor (TPA) to record data property information for dynamic auditing.
Abstract: Cloud storage is an increasingly popular application of cloud computing, which can provide on-demand outsourcing data services for both organizations and individuals. However, users may not fully trust the cloud service providers (CSPs) in that it is difficult to determine whether the CSPs meet their legal expectations for data security. Therefore, it is critical to develop efficient auditing techniques to strengthen data owners’ trust and confidence in cloud storage. In this paper, we present a novel public auditing scheme for secure cloud storage based on a dynamic hash table (DHT), which is a new two-dimensional data structure located at a third party auditor (TPA) to record the data property information for dynamic auditing. Differing from the existing works, the proposed scheme migrates the authorized information from the CSP to the TPA, and thereby significantly reduces the computational cost and communication overhead. Meanwhile, exploiting the structural advantages of the DHT, our scheme can also achieve higher updating efficiency than the state-of-the-art schemes. In addition, we extend our scheme to support privacy preservation by combining the homomorphic authenticator based on the public key with the random masking generated by the TPA, and achieve batch auditing by employing the aggregate BLS signature technique. We formally prove the security of the proposed scheme, and evaluate the auditing performance by detailed experiments and comparisons with the existing ones. The results demonstrate that the proposed scheme can effectively achieve secure auditing for cloud storage, and outperforms the previous schemes in computation complexity, storage costs and communication overhead.
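The dynamic hash table the abstract describes can be pictured as an array of file entries, each heading a linked list of per-block records (version number and timestamp). The Python sketch below is a minimal illustration under that reading; the class names, fields, and methods are invented for the example and are not the paper's exact definitions:

```python
import time

class BlockNode:
    """One block element: version number and timestamp, plus a link
    to the next block record of the same file."""
    def __init__(self, version=1, timestamp=None):
        self.version = version
        self.timestamp = timestamp if timestamp is not None else time.time()
        self.next = None

class DynamicHashTable:
    """Two-dimensional structure kept at the TPA: file entries, each
    heading a linked list of its blocks' property records."""
    def __init__(self):
        self.files = {}  # file_id -> head BlockNode of that file's list

    def add_file(self, file_id, num_blocks):
        head = None
        for _ in range(num_blocks):
            node = BlockNode()
            node.next = head  # prepend; all fresh records are identical here
            head = node
        self.files[file_id] = head

    def _node_at(self, file_id, index):
        node = self.files[file_id]
        for _ in range(index):
            node = node.next
        return node

    def update_block(self, file_id, index):
        """Record a block modification: bump version, refresh timestamp."""
        node = self._node_at(file_id, index)
        node.version += 1
        node.timestamp = time.time()

    def insert_block(self, file_id, index):
        """Insert a new block record before position `index` by relinking
        pointers; no array shifting is needed."""
        node = BlockNode()
        if index == 0:
            node.next = self.files[file_id]
            self.files[file_id] = node
        else:
            prev = self._node_at(file_id, index - 1)
            node.next = prev.next
            prev.next = node

    def delete_block(self, file_id, index):
        if index == 0:
            self.files[file_id] = self.files[file_id].next
        else:
            prev = self._node_at(file_id, index - 1)
            prev.next = prev.next.next

    def num_blocks(self, file_id):
        n, node = 0, self.files[file_id]
        while node:
            n, node = n + 1, node.next
        return n
```

Because an insertion or deletion only relinks a node's neighbours rather than shifting array elements, per-block updates stay cheap, which is the structural advantage the abstract credits for the scheme's updating efficiency.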
Citations
Journal ArticleDOI
TL;DR: This paper proposes an efficient public auditing protocol with global and sampling blockless verification as well as batch auditing, where data dynamics are substantially more efficiently supported than is the case with the state of the art.
Abstract: With the rapid development of cloud computing, cloud storage has been accepted by an increasing number of organizations and individuals, therein serving as a convenient and on-demand outsourcing application. However, upon losing local control of data, it becomes an urgent need for users to verify whether cloud service providers have stored their data securely. Hence, many researchers have devoted themselves to the design of auditing protocols directed at outsourced data. In this paper, we propose an efficient public auditing protocol with global and sampling blockless verification as well as batch auditing, where data dynamics are substantially more efficiently supported than is the case with the state of the art. Note that the novel dynamic structure in our protocol consists of a doubly linked info table and a location array. Moreover, with such a structure, computational and communication overheads can be reduced substantially. Security analysis indicates that our protocol can achieve the desired properties. Moreover, numerical analysis and real-world experimental results demonstrate that the proposed protocol achieves a given efficiency in practice.

305 citations


Cites background or methods or result from "Dynamic-Hash-Table Based Public Aud..."

  • ...Definition 3 (BLS-HVA): Given a data file that contains n blocks, F = {m_1, m_2, m_3, · · · , m_i, · · · , m_n}, let G and G_T be two multiplicative cyclic groups of a large prime order p, and let e : G × G → G_T be a bilinear pairing....

  • ...Specifically, our protocol is less computationally expensive both in performing a single auditing task and in batch auditing tasks compared to [17]....

  • ...Inspired by [17], we introduce a novel dynamic structure composed of a doubly linked info table and a location array in our auditing protocol, making it substantially more effective....

  • ...For verification, the auditor will simply check whether e(∏_{i∈Q} h(m_i), v) = e(σ, g) holds, where Q is the set of challenged blocks and σ is the aggregated authenticator of these blocks’ BLS-HVAs....

  • ...Proof: This theorem suggests that forging a BLS-HVA is computationally infeasible....

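The doubly linked info table with a location array described in the abstract above can be sketched minimally in Python. The per-entry fields (version, timestamp) and all names below are illustrative, not the paper's exact definitions; the point is that the location array gives O(1) access to a block's entry, and an insert or delete only relinks neighbouring nodes:

```python
class InfoNode:
    """One entry of the doubly linked info table, holding a block's
    auditing metadata (version and timestamp here, illustratively)."""
    def __init__(self, version=1, timestamp=0):
        self.version, self.timestamp = version, timestamp
        self.prev = self.next = None

class DoublyLinkedInfoTable:
    """Doubly linked info table plus a location array: the array maps a
    logical block index straight to its node, so lookup is O(1) and an
    insert/delete only relinks two pointers before fixing the array."""
    def __init__(self, num_blocks):
        self.loc = [InfoNode() for _ in range(num_blocks)]  # location array
        for a, b in zip(self.loc, self.loc[1:]):            # link neighbours
            a.next, b.prev = b, a

    def update(self, i, timestamp):
        node = self.loc[i]        # O(1) lookup via the location array
        node.version += 1
        node.timestamp = timestamp

    def insert(self, i, timestamp):
        """Insert a new block's entry before logical index i."""
        node = InfoNode(timestamp=timestamp)
        succ = self.loc[i]
        node.prev, node.next = succ.prev, succ
        if succ.prev:
            succ.prev.next = node
        succ.prev = node
        self.loc.insert(i, node)  # array shift; the list relink was O(1)

    def delete(self, i):
        node = self.loc.pop(i)
        if node.prev:
            node.prev.next = node.next
        if node.next:
            node.next.prev = node.prev
```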
Journal ArticleDOI
TL;DR: This survey presents the main security and privacy challenges in this field, which have attracted much interest from academia and the research community, and identifies the corresponding security solutions that researchers have proposed in the literature to counter these challenges.

221 citations

Journal ArticleDOI
TL;DR: This study contributes towards identifying a unified taxonomy of security requirements, threats, vulnerabilities, and countermeasures to carry out the proposed end-to-end mapping, and highlights security challenges in other related areas such as trust-based security models, cloud-enabled applications of Big Data, the Internet of Things, Software Defined Networking (SDN), and Network Function Virtualization (NFV).

152 citations

Journal ArticleDOI
TL;DR: A cloud storage auditing scheme for group users, which greatly reduces the computation burden on the user side and blinds data using simple operations during data uploading and data auditing to protect data privacy against the TPM.

102 citations

References
Book ChapterDOI
09 Dec 2001
TL;DR: A short signature scheme based on the Computational Diffie-Hellman assumption on certain elliptic and hyperelliptic curves is introduced, designed for systems where signatures are typed in by a human or signatures are sent over a low-bandwidth channel.
Abstract: We introduce a short signature scheme based on the Computational Diffie-Hellman assumption on certain elliptic and hyperelliptic curves. The signature length is half the size of a DSA signature for a similar level of security. Our short signature scheme is designed for systems where signatures are typed in by a human or signatures are sent over a low-bandwidth channel.

3,697 citations


"Dynamic-Hash-Table Based Public Aud..." refers background in this paper

  • ...unforgeable, in that the BLS short signature scheme is secure under the assumption that the CDH problem is hard in bilinear groups [17]....

Proceedings ArticleDOI
28 Oct 2007
TL;DR: A model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it.
Abstract: We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking supports large data sets in widely-distributed storage systems. We present two provably-secure PDP schemes that are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation.

2,238 citations
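The challenge/response idea in the PDP abstract, probabilistic spot-checking of randomly sampled blocks while the client keeps only constant metadata, can be sketched with a simple MAC-based stand-in. The real PDP schemes use homomorphic tags so the response stays constant-size regardless of the challenge; this toy version keeps only the sampling logic, and every name in it is illustrative:

```python
import hmac, hashlib, random

def make_tags(key, blocks):
    """Tag each block bound to its index, so the server cannot answer a
    challenge for block i with some other block."""
    return [hmac.new(key, str(i).encode() + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]

def server_respond(blocks, tags, challenge):
    """Server returns the challenged blocks with their stored tags."""
    return [(i, blocks[i], tags[i]) for i in challenge]

def client_verify(key, response):
    """Client recomputes each tag; constant metadata (just the key)."""
    for i, block, tag in response:
        expect = hmac.new(key, str(i).encode() + block, hashlib.sha256).digest()
        if not hmac.compare_digest(expect, tag):
            return False
    return True

# Client outsources blocks and tags, keeping only the secret key.
key = b"secret-audit-key"
blocks = [f"block-{i}".encode() for i in range(100)]
tags = make_tags(key, blocks)  # stored alongside the blocks at the server

# Audit: spot-check 10 of 100 blocks chosen at random.
challenge = random.sample(range(100), 10)
assert client_verify(key, server_respond(blocks, tags, challenge))

# A corrupted block fails any challenge that samples it.
blocks[7] = b"tampered"
assert not client_verify(key, server_respond(blocks, tags, [7]))
```

Sampling is what makes the I/O cost a small parameter c rather than the file size: a server that corrupted a fraction t of the blocks escapes a c-block challenge only with probability (1 − t)^c.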

Posted Content
TL;DR: Ateniese et al. introduce the provable data possession (PDP) model, which allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it.
Abstract: We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking supports large data sets in widely-distributed storage systems. We present two provably-secure PDP schemes that are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation.

2,127 citations

Book ChapterDOI
04 May 2003
TL;DR: This article introduces the concept of an aggregate signature, presents security models for such signatures, and gives several applications for aggregate signatures, constructing an efficient aggregate signature from the short signature scheme of Boneh, Lynn, and Shacham.
Abstract: An aggregate signature scheme is a digital signature that supports aggregation: Given n signatures on n distinct messages from n distinct users, it is possible to aggregate all these signatures into a single short signature. This single signature (and the n original messages) will convince the verifier that the n users did indeed sign the n original messages (i.e., user i signed message Mi for i = 1, . . . , n). In this paper we introduce the concept of an aggregate signature, present security models for such signatures, and give several applications for aggregate signatures. We construct an efficient aggregate signature from a recent short signature scheme based on bilinear maps due to Boneh, Lynn, and Shacham. Aggregate signatures are useful for reducing the size of certificate chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols such as SBGP. We also show that aggregate signatures give rise to verifiably encrypted signatures. Such signatures enable the verifier to test that a given ciphertext C is the encryption of a signature on a given message M. Verifiably encrypted signatures are used in contract-signing protocols. Finally, we show that similar ideas can be used to extend the short signature scheme to give simple ring signatures.

1,859 citations
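The aggregation property can be illustrated without bilinear pairings. The sketch below uses an RSA-style homomorphic authenticator (in the spirit of the PDP line of work) as a pairing-free stand-in for BLS: per-block tags multiply into one short value that the verifier checks against all challenged blocks at once. The parameters are toy-sized and all names are illustrative:

```python
import hashlib

# Toy RSA parameters (Mersenne primes; far too small for real security,
# illustration only). e and d form a standard RSA exponent pair.
p, q = 131071, 524287
N, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)

def H(i, block):
    """Hash a block, bound to its index, to an integer mod N."""
    digest = hashlib.sha256(str(i).encode() + block).digest()
    return int.from_bytes(digest, "big") % N

def tag(i, block):
    """Homomorphic authenticator for one block: sigma_i = H(i, m_i)^d mod N."""
    return pow(H(i, block), d, N)

blocks = [b"alpha", b"beta", b"gamma"]
tags = [tag(i, m) for i, m in enumerate(blocks)]

# Aggregation: the prover multiplies the per-block tags into ONE value...
sigma = 1
for t in tags:
    sigma = (sigma * t) % N

# ...and the verifier checks all blocks at once against that single value,
# since sigma^e = (prod H_i^d)^e = prod H_i (mod N).
lhs = pow(sigma, e, N)
rhs = 1
for i, m in enumerate(blocks):
    rhs = (rhs * H(i, m)) % N
assert lhs == rhs

# Tampering with any block breaks the aggregate check.
blocks[1] = b"BETA!"
rhs = 1
for i, m in enumerate(blocks):
    rhs = (rhs * H(i, m)) % N
assert lhs != rhs
```

In the DHT-PA paper itself, aggregate BLS signatures play the analogous role: many blocks, or many users' auditing tasks, collapse into a single short value verified in one pairing equation, which is what enables batch auditing.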

Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs): a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.

1,783 citations


"Dynamic-Hash-Table Based Public Aud..." refers background in this paper

  • ...PoRs [8]: O(1), O(c), O(c), —, —, 1 − (1 − t)^c
       PDP [9]: O(1), O(c), O(c), —, —, 1 − (1 − t)^c
       CPDP [13]: O(c+s), O(c+s), O(c+s), —, —, 1 − (1 − t)^(cs)
       DAP [14]: O(c), O(c), O(c·s), O(n), O(w), 1 − (1 − t)^(cs)
       DPDP (skip list) [15]: c·O(log n), c·O(log n), c·O(log n), w·O(log n), w·O(log n), 1 − (1 − t)^c
       DPDP (MHT) [6]: c·O(log n), c·O(log n), c·O(log n), w·O(log n), w·O(log n), 1 − (1 − t)^c
       IHT-PA [16]: O(c+s), O(c+s), O(c+s), O(n), O(w), 1 − (1 − t)^(cs)
       DHT-PA: O(c), O(c), O(c) (O(c·s)), O(w), O(w), 1 − (1 − t)^c(1 − (1 − t)^s)...

  • ...[8] in 2007, which can check the correctness of data stored on the CSP and ensure data’s retrievability with the use of error-correcting code....

  • ...[8], in which the verification operation is performed directly between data owners and CSPs with relatively low cost....
