Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing
Citations
Data-intensive applications, challenges, techniques and technologies: A survey on Big Data
Review: A survey on security issues in service delivery models of cloud computing
Achieving Secure, Scalable, and Fine-grained Data Access Control in Cloud Computing
PORs: Proofs of Retrievability for Large Files
Cryptographic cloud storage
References
Random oracles are practical: a paradigm for designing efficient protocols
Short Signatures from the Weil Pairing
Provable data possession at untrusted stores
Error Control Coding
Frequently Asked Questions (16)
Q2. What are the future works in "Enabling public auditability and data dynamics for storage security in cloud computing" ?
To support efficient handling of multiple auditing tasks, the authors further explore the technique of bilinear aggregate signature to extend their main result into a multiuser setting, where the TPA can perform multiple auditing tasks simultaneously. For any signature query on a message m, the authors can submit this message to the BLS signing oracle and get σ = H(m)^x. Therefore, the signing oracle of this new signature scheme can be simulated as σ' = σ · y^{m·x0} = (H(m) · g^{m·x0})^x, where y = g^x is the BLS public key and u = g^{x0}. Finally, if any adversary can forge a new signature σ' = (H(m') · u^{m'})^x on a message m' that has never been queried, the authors can obtain a forged BLS signature on the message m' as σ = σ' / y^{m'·x0} = H(m')^x.
Q3. How does the paper improve the existing proof of storage models?
To achieve efficient data dynamics, the authors improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication.
Q4. What is the way to verify data integrity?
The naive way of realizing data integrity verification is to use the hashes of the original data blocks as the leaves in the MHT, so that integrity verification can be conducted without the tag authentication and signature aggregation steps.
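The MHT mechanics behind both variants (authenticating block tags, or the naive hash-leaf scheme above) can be sketched in a few lines of Python. This is an illustration only, not the paper's construction: the helper names `build_mht`, `auth_path`, and `verify_leaf` are invented here, and SHA-256 stands in for whatever hash function the deployment actually uses.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash for MHT nodes (SHA-256 here; the paper's choice may differ)."""
    return hashlib.sha256(data).digest()

def build_mht(leaves):
    """Build a Merkle Hash Tree bottom-up; returns the levels, leaves first.
    An unpaired last node is promoted to the next level unchanged."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        nxt = [cur[i] + cur[i + 1] if i + 1 < len(cur) else None
               for i in range(0, len(cur), 2)]
        levels.append([h(pair) if pair is not None else cur[-1] for pair in nxt])
    return levels

def auth_path(levels, index):
    """Sibling hashes from a leaf up to the root, i.e., the auxiliary
    authentication information the server returns with a challenged block."""
    path = []
    for level in levels[:-1]:
        sib = index ^ 1
        if sib < len(level):
            path.append((sib % 2, level[sib]))  # 0 = left sibling, 1 = right
        index //= 2
    return path

def verify_leaf(root, leaf, index, path):
    """Recompute the root from a leaf and its auth path; compare it to the
    (signed) root the verifier holds."""
    node = leaf
    for side, sib in path:
        node = h(sib + node) if side == 0 else h(node + sib)
    return node == root
```

In the naive scheme the leaves are hashes of the raw blocks; in the paper's scheme they authenticate block tags instead, but the path verification logic is the same.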
Q5. What is the importance of ensuring that the data is being correctly stored and maintained?
As clients no longer possess their data locally, it is of critical importance for the clients to ensure that their data are being correctly stored and maintained.
Q6. Why is the BLS-based instantiation faster than the other two?
Due to its smaller block size (i.e., 20 bytes) compared to the RSA-based instantiation, their BLS-based instantiation is more than two times faster than the other two in terms of server computation time.
Q7. What is the simplest way to verify the integrity of a file?
It takes as input the public key pk, the challenge chal, and the proof P returned from the server, and outputs TRUE if the integrity of the file is verified as correct, or FALSE otherwise.
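The shape of that interface can be illustrated with a deliberately simplified sketch. Note the hedge: this toy uses HMAC tags and is only privately verifiable, whereas the paper's scheme uses homomorphic BLS tags to achieve public auditability; all function names here are hypothetical.

```python
import hmac, hashlib, secrets

def key_gen():
    """Toy KeyGen: a single secret key (the paper's KeyGen outputs (pk, sk))."""
    return secrets.token_bytes(32)

def tag_block(sk, i, block):
    """Owner-side tagging of block i before outsourcing."""
    return hmac.new(sk, i.to_bytes(8, "big") + block, hashlib.sha256).digest()

def gen_proof(blocks, tags, chal):
    """Server: answer the challenge chal (a list of block indices) with the
    challenged blocks and their tags as the proof P."""
    return [(i, blocks[i], tags[i]) for i in chal]

def verify_proof(sk, chal, proof):
    """Verifier: TRUE iff the proof covers exactly the challenged indices and
    every returned block's tag checks out; FALSE otherwise."""
    if [i for i, _, _ in proof] != list(chal):
        return False
    return all(hmac.compare_digest(tag_block(sk, i, b), t) for i, b, t in proof)
```

Unlike this sketch, the paper's proof aggregates blocks and signatures so the response stays compact regardless of how many blocks are challenged.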
Q8. What is the way to verify the correctness of F?
To verify the correctness of F, the data owner can adopt a spot-checking approach, i.e., requesting a number of randomly selected blocks and their corresponding signatures to be returned.
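The effectiveness of spot-checking is commonly quantified with the standard sampling analysis from the PDP literature: if t of n blocks are corrupted and c blocks are sampled without replacement, misbehavior is detected with probability 1 - prod_{i=0}^{c-1} (n-t-i)/(n-i). A small helper (the function name is invented here) makes the numbers concrete:

```python
def detection_probability(n, t, c):
    """Probability that sampling c of n blocks (without replacement) hits at
    least one of the t corrupted blocks."""
    p_miss = 1.0
    for i in range(c):
        p_miss *= (n - t - i) / (n - i)
    return 1 - p_miss

# With 1% of 10,000 blocks corrupted, a few hundred samples already give
# high assurance, which is why spot-checking scales to large files.
print(detection_probability(10000, 100, 460))  # roughly 0.99
print(detection_probability(10000, 100, 300))  # roughly 0.95
```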
Q9. How does the batch auditing technique help to reduce the number of expensive pairing operations?
To support efficient handling of multiple auditing tasks, the authors further explore the technique of bilinear aggregate signature to extend their main result into a multiuser setting, where TPA can perform multiple auditing tasks simultaneously.
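A rough way to see where the saving comes from: verifying K BLS signatures one by one costs about 2 pairings each, while checking a single aggregate signature against K distinct signers costs about K + 1 pairings (the standard operation count for bilinear aggregate verification). The helper below is an illustrative cost model only, not part of the paper's scheme:

```python
def pairing_ops(tasks: int, batched: bool) -> int:
    """Approximate number of pairing evaluations for `tasks` auditing tasks:
    2 per individual BLS verification, versus tasks + 1 when the signatures
    are aggregated and verified in one equation."""
    return tasks + 1 if batched else 2 * tasks

# Batching roughly halves the pairing count as the number of tasks grows.
print(pairing_ops(10, batched=False))  # 20 pairings individually
print(pairing_ops(10, batched=True))   # 11 pairings batched
```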
Q10. How can the client or TPA verify the correctness of the cloud data?
The client or TPA can periodically challenge the storage server to ensure the correctness of the cloud data, and the original files can be recovered by interacting with the server.
Q11. What is the core of the data integrity problem?
Considering the large size of the outsourced electronic data and the client's constrained resource capability, the core of the problem can be generalized as: how can the client find an efficient way to perform periodic integrity verifications without a local copy of the data files?
Q12. What is the importance of ensuring that clients have the right to access their data?
That is, clients should be equipped with certain security means so that they can periodically verify the correctness of the remote data even without the existence of local copies.
Q13. What is the definition of a proof-of-retrievability protocol?
A proof-of-retrievability protocol is sound if any cheating prover that convinces the verification algorithm that it is storing a file F is actually storing that file, which the authors define by saying that it yields up the file F to an extractor algorithm that interacts with it using the proof-of-retrievability protocol.
Q14. How does the verifier sign the metadata R?
As the authors have described, in the setup phase the verifier signs the metadata R and stores it on the server to achieve stateless verification.
Q15. How much computation overhead does batch auditing save?
Following the same experiment settings, with detection probabilities of 99 and 97 percent, batch auditing indeed saves the TPA's computation overhead by about 5 and 14 percent, respectively.
Q16. What is the consequence of this variance?
The consequence of this variance is a serious problem: it gives the adversary more opportunities to cheat the verifier by manipulating H(m_i) or m_i.