Open Access · Posted Content

PORs: Proofs of Retrievability for Large Files

Ari Juels, +1 more
- 01 Jan 2007
- Vol. 2007, pp. 243
TLDR
This paper defines and explores proofs of retrievability (PORs): a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract
In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.

A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes.

In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work.

We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
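The paper's main construction hides randomly valued "sentinel" blocks inside an encrypted, error-correction-encoded file and later spot-checks a few of them: a prover that has deleted or altered a noticeable fraction of the file will, with high probability, fail to return some challenged sentinel correctly. The following Python sketch illustrates only that sentinel-style challenge-response idea; the block size, the keyed shuffle used to hide sentinel positions, and all function names are assumptions of this sketch, and the actual scheme additionally encrypts the file and applies an error-correcting code so that lightly damaged files remain fully recoverable.

# Minimal, illustrative sketch of a sentinel-style POR challenge-response.
# All names and parameters are hypothetical; the real Juels-Kaliski scheme
# also encrypts the file and applies an error-correcting code so sentinels
# are indistinguishable from file blocks and small damage is repairable.
import hmac, hashlib, os, random

BLOCK = 32  # bytes per block (illustrative choice)

def derive_sentinel(key: bytes, index: int) -> bytes:
    """Pseudorandomly derive the sentinel value for a given sentinel index."""
    return hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()[:BLOCK]

def encode(file_blocks: list[bytes], key: bytes, num_sentinels: int) -> list[bytes]:
    """Verifier-side setup: append sentinels and permute them into the file.

    (A real scheme hides sentinel positions by encrypting and permuting;
    here the permutation is simply recoverable from the keyed RNG seed.)
    """
    blocks = list(file_blocks) + [derive_sentinel(key, i) for i in range(num_sentinels)]
    rng = random.Random(key)          # positions recoverable from the key
    rng.shuffle(blocks)
    return blocks

def sentinel_positions(key: bytes, total_blocks: int, num_file_blocks: int) -> dict[int, int]:
    """Recompute where each sentinel landed after the keyed shuffle."""
    ids = list(range(total_blocks))
    rng = random.Random(key)
    rng.shuffle(ids)
    # ids[pos] is the original index of the block now stored at position pos
    return {pos: orig - num_file_blocks for pos, orig in enumerate(ids) if orig >= num_file_blocks}

def verify(stored: list[bytes], key: bytes, num_file_blocks: int, checks: int = 10) -> bool:
    """Challenge a few random sentinel positions and compare to recomputed values."""
    positions = sentinel_positions(key, len(stored), num_file_blocks)
    rng = random.Random(os.urandom(16))
    for pos in rng.sample(sorted(positions), min(checks, len(positions))):
        if stored[pos] != derive_sentinel(key, positions[pos]):
            return False              # archive altered or dropped this region
    return True

# Usage: the verifier encodes once, ships the blocks, keeps only `key`,
# and later audits the archive's copy.
key = os.urandom(32)
file_blocks = [os.urandom(BLOCK) for _ in range(1000)]
stored = encode(file_blocks, key, num_sentinels=100)   # handed to the archive
assert verify(stored, key, num_file_blocks=1000)

Because sentinel values and positions are derived pseudorandomly from a single key, the verifier stores only that key and each audit touches a handful of blocks, in line with the paper's goal of communication, prover I/O, and verifier storage essentially independent of the length of F.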


Citations
Proceedings Article

A digital forensic model for providing better data provenance in the cloud

TL;DR: A model is proposed that aims to identify the physical location of data in the cloud, both where it originated and where it has been as it passes through the cloud, by making use of data provenance.
Journal Article

Identity-based remote data checking with a designated verifier

TL;DR: Li et al. propose an identity-based auditing protocol with a designated verifier, which not only avoids the introduction of certificates but also has the desired property of allowing only a specific verifier to audit.
Journal Article

Efficient verifiable data streaming

TL;DR: A more efficient VDS scheme is proposed that is secure under any collision-resistant hash function and unforgeable signature scheme without random oracles; the client's secret has length only O(1), while the length of a proof and the complexity of appending an element are O(log i).
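The cited construction is not reproduced here, but the asymptotics it claims are the familiar authenticated-tree trade-off: the client keeps a constant-size digest while each element is accompanied by a logarithmic number of sibling hashes. As a hedged, generic illustration (a plain Merkle tree, not the paper's VDS scheme; all names below are hypothetical):

# Generic Merkle-tree membership proof: the verifier stores only the O(1)-size
# root, and a proof for any element consists of O(log n) sibling hashes.
# Illustrative sketch only, not the specific VDS construction cited above.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Return all tree levels, from hashed leaves up to the root."""
    level = [h(b"leaf:" + x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(b"node:" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels: list[list[bytes]], index: int) -> list[tuple[bytes, bool]]:
    """Collect (sibling_hash, sibling_is_right) pairs from leaf to root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        index //= 2
    return proof

def check(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
    """Recompute the root from a leaf and its sibling path."""
    node = h(b"leaf:" + leaf)
    for sibling, sibling_is_right in proof:
        node = h(b"node:" + (node + sibling if sibling_is_right else sibling + node))
    return node == root

# Usage: the client keeps only `root`; the server supplies element + proof.
data = [f"element {i}".encode() for i in range(11)]
levels = build_tree(data)
root = levels[-1][0]
assert check(root, data[4], prove(levels, 4))

In a streaming setting the tree is extended as elements are appended; the cited scheme shows how to support such appends without random oracles while keeping the client's state at O(1).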
Journal Article

The Proposed Model to Increase Security of Sensitive Data in Cloud Computing

TL;DR: A new approach to security controlled by the IT Security Specialist (ITSS) of the company or organization is proposed, based on multiple strategies of file encryption, partitioning, and distribution among multiple storage providers, resulting in increased confidentiality.
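As a rough illustration of the general "encrypt, partition, and distribute" strategy described above (not the cited ITSS model itself), the sketch below encrypts data locally with the third-party cryptography package's Fernet recipe, splits the ciphertext, and hands each chunk to a different provider; the in-memory dictionaries standing in for storage providers and all function names are assumptions of the sketch.

# Illustrative sketch: encrypt locally, split the ciphertext into chunks, and
# store each chunk with a different provider so no single provider holds the
# whole (or readable) file. Provider "APIs" are stand-in dictionaries.
from cryptography.fernet import Fernet

def partition(data: bytes, parts: int) -> list[bytes]:
    """Split a byte string into `parts` roughly equal chunks."""
    size = -(-len(data) // parts)               # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def distribute(plaintext: bytes, providers: list[dict], key: bytes) -> None:
    """Encrypt locally, then hand one ciphertext chunk to each provider."""
    ciphertext = Fernet(key).encrypt(plaintext)
    for provider, chunk in zip(providers, partition(ciphertext, len(providers))):
        provider["file.bin"] = chunk            # stand-in for an upload call

def retrieve(providers: list[dict], key: bytes) -> bytes:
    """Fetch every chunk, reassemble the ciphertext, and decrypt locally."""
    ciphertext = b"".join(p["file.bin"] for p in providers)
    return Fernet(key).decrypt(ciphertext)

# Usage: three hypothetical providers; the key is held only by the data owner.
key = Fernet.generate_key()
clouds = [{}, {}, {}]
distribute(b"sensitive records ..." * 100, clouds, key)
assert retrieve(clouds, key) == b"sensitive records ..." * 100

No single provider sees either the key or the whole ciphertext, which is the confidentiality gain the cited approach aims for.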
Journal Article

Dynamic large branching hash tree based secure and efficient dynamic auditing protocol for cloud environment

TL;DR: A secure public auditing scheme based on a Dynamic Large Branching Hash Tree (DLBHT) and a Homomorphic Verifiable Authenticator (HVA) based aggregate signature scheme is proposed, which performs auditing with less communication and computational overhead.
References
Journal Article

Review: A survey on security issues in service delivery models of cloud computing

TL;DR: A survey of the different security risks that pose a threat to the cloud is presented, observing that a new model aimed at improving features of an existing model must not risk or threaten other important features of the current model.
Journal Article

Efficient dispersal of information for security, load balancing, and fault tolerance

TL;DR: The Information Dispersal Algorithm (IDA) has numerous applications to secure and reliable storage of information in computer networks and even on single disks, to fault-tolerant and efficient transmission of information in networks, and to communications between processors in parallel computers.
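The core IDA property is that data split into n pieces can be reconstructed from any m of them, with total storage only about n/m times the original size. The sketch below illustrates that property for a single group of bytes using a polynomial (Vandermonde) encoding over GF(257); it is an assumption-laden toy, not Rabin's exact algorithm nor a production erasure code (which would also handle padding and the share value 256).

# Minimal sketch of the dispersal idea behind IDA: each group of m data
# symbols is encoded into n shares (n > m) so that ANY m shares suffice to
# reconstruct the group. Bytes are treated as elements of GF(257).
P = 257          # small prime > 255, so every byte is a field element

def encode_group(group: list[int], n: int) -> list[int]:
    """Share for x = 1..n is the evaluation of the data polynomial at x."""
    return [sum(d * pow(x, k, P) for k, d in enumerate(group)) % P
            for x in range(1, n + 1)]

def solve_mod_p(matrix: list[list[int]], rhs: list[int]) -> list[int]:
    """Gauss-Jordan elimination over GF(P); the Vandermonde system is invertible."""
    m = len(rhs)
    aug = [row[:] + [b] for row, b in zip(matrix, rhs)]
    for col in range(m):
        pivot = next(r for r in range(col, m) if aug[r][col])
        aug[col], aug[pivot] = aug[pivot], aug[col]
        inv = pow(aug[col][col], P - 2, P)              # modular inverse
        aug[col] = [v * inv % P for v in aug[col]]
        for r in range(m):
            if r != col and aug[r][col]:
                factor = aug[r][col]
                aug[r] = [(v - factor * w) % P for v, w in zip(aug[r], aug[col])]
    return [aug[r][m] for r in range(m)]

def decode_group(shares: dict[int, int], m: int) -> list[int]:
    """Reconstruct the m data symbols from any m shares {x: f(x)}."""
    xs = sorted(shares)[:m]
    vandermonde = [[pow(x, k, P) for k in range(m)] for x in xs]
    return solve_mod_p(vandermonde, [shares[x] for x in xs])

# Usage: disperse 4 bytes into 7 shares, then recover from shares 2, 3, 5, 7.
data = [10, 200, 33, 7]
shares = {x: s for x, s in zip(range(1, 8), encode_group(data, 7))}
subset = {x: shares[x] for x in (2, 3, 5, 7)}
assert decode_group(subset, 4) == data

The storage blowup is n/m (here 7/4), and any n - m = 3 shares can be lost without affecting recoverability, which is what makes information dispersal attractive for fault-tolerant distributed storage.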
Posted Content

Provable Data Possession at Untrusted Stores.

TL;DR: Ateniese et al. introduce the provable data possession (PDP) model, which allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it.
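As a hedged illustration of the spot-checking idea behind PDP (not Ateniese et al.'s actual construction, which uses homomorphic tags so the server never has to return whole blocks), the sketch below has the client tag each block with a keyed MAC at setup, keep only the key, and later challenge a few random positions; all names and parameters are assumptions of the sketch.

# Simplified spot-checking sketch in the spirit of PDP: at setup the client
# tags every block, stores blocks + tags at the server, and keeps only the
# key. To audit, it challenges a few random indices and checks the returned
# (block, tag) pairs.
import hmac, hashlib, secrets

def tag(key: bytes, index: int, block: bytes) -> bytes:
    """Keyed tag binding a block to its position."""
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def outsource(key: bytes, blocks: list[bytes]) -> list[tuple[bytes, bytes]]:
    """Client setup: what gets handed to the (untrusted) server."""
    return [(b, tag(key, i, b)) for i, b in enumerate(blocks)]

def audit(key: bytes, server_store: list[tuple[bytes, bytes]], challenges: int = 5) -> bool:
    """Challenge random positions; any bad block/tag pair fails the audit."""
    indices = [secrets.randbelow(len(server_store)) for _ in range(challenges)]
    return all(hmac.compare_digest(tag(key, i, server_store[i][0]), server_store[i][1])
               for i in indices)

# Usage with hypothetical 4 KiB blocks held only by the server.
key = secrets.token_bytes(32)
blocks = [secrets.token_bytes(4096) for _ in range(64)]
store = outsource(key, blocks)
assert audit(key, store)
store[17] = (b"tampered", store[17][1])       # corruption at block 17
# audit(key, store) now fails whenever index 17 is among the challenged blocks.

A server that has discarded even a modest fraction of the blocks is caught after only a few random challenges, which is why such audits stay far cheaper than downloading the file.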
Book Chapter

The knowledge complexity of interactive proof-systems

Proceedings Article

Raptor codes

TL;DR: For a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols with high probability.
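As a toy illustration of the fountain-coding idea underlying Raptor codes (not a Raptor code itself: real Raptor codes add a fixed-rate precode and a carefully designed degree distribution to reach the k(1 + ε) bound), the sketch below XORs random subsets of the k source blocks to produce output symbols and recovers the source with a peeling decoder; the degree choice and all names are assumptions of the sketch.

# Toy fountain-code sketch (LT-style): each output symbol is the XOR of a
# random subset of the k source blocks; a peeling decoder recovers the source
# once enough symbols have arrived. The uniform degree choice here is only
# for illustration.
import os, random
from typing import Optional

def encode_symbol(source: list[bytes], rng: random.Random) -> tuple[frozenset[int], bytes]:
    """One output symbol: XOR of a random nonempty subset of source blocks."""
    degree = rng.randint(1, max(1, len(source) // 4))
    neighbors = frozenset(rng.sample(range(len(source)), degree))
    value = bytes(len(source[0]))
    for i in neighbors:
        value = bytes(a ^ b for a, b in zip(value, source[i]))
    return neighbors, value

def peel_decode(symbols: list[tuple[frozenset[int], bytes]], k: int) -> Optional[list[bytes]]:
    """Repeatedly resolve degree-1 symbols and subtract recovered blocks."""
    recovered: dict[int, bytes] = {}
    pending = [(set(n), v) for n, v in symbols]
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for neighbors, value in pending:
            unresolved = [i for i in neighbors if i not in recovered]
            if len(unresolved) == 1:                    # degree 1 after peeling
                i = unresolved[0]
                for j in neighbors:
                    if j != i:
                        value = bytes(a ^ b for a, b in zip(value, recovered[j]))
                recovered[i] = value
                progress = True
    return [recovered[i] for i in range(k)] if len(recovered) == k else None

# Usage: 20 source blocks; keep pulling symbols from the "infinite" stream
# until the peeling decoder succeeds.
rng = random.Random(7)
source = [os.urandom(64) for _ in range(20)]
stream: list[tuple[frozenset[int], bytes]] = []
decoded = None
while decoded is None:
    stream.extend(encode_symbol(source, rng) for _ in range(4))
    decoded = peel_decode(stream, 20)
assert decoded == source
print(f"decoded from {len(stream)} symbols for k = 20")

Such rateless erasure coding is attractive in archival settings because the file can be recovered from any sufficiently large subset of intact symbols, tolerating the loss or corruption of the rest.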