Giuseppe Ateniese
Researcher at Stevens Institute of Technology
Publications - 144
Citations - 20034
Giuseppe Ateniese is an academic researcher at Stevens Institute of Technology. His research focuses on cryptography and encryption. He has an h-index of 50 and has co-authored 143 publications receiving 17,685 citations. Previous affiliations of Giuseppe Ateniese include George Mason University & Sapienza University of Rome.
Papers
Proceedings ArticleDOI
Provable data possession at untrusted stores
Giuseppe Ateniese,Randal Burns,Reza Curtmola,Joseph Herring,Lea Kissner,Zachary N. J. Peterson,Dawn Song +6 more
TL;DR: The provable data possession (PDP) model allows a client that has stored data at an untrusted server to verify that the server still possesses the original data, without retrieving it.
Posted Content
Provable Data Possession at Untrusted Stores.
Giuseppe Ateniese,Randal Burns,Reza Curtmola,Joseph Herring,Lea Kissner,Zachary N. J. Peterson,Dawn Song +6 more
TL;DR: Introduces the provable data possession (PDP) model, which allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it.
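The core idea behind the PDP paper summarized above can be sketched in a few lines: the client attaches an RSA-signed homomorphic tag to each block before outsourcing, and later verifies a short aggregated proof over a random subset of blocks without holding the data itself. The sketch below is a drastically simplified toy (tiny insecure parameters, a stand-in for the full-domain hash, none of the paper's sampling or extraction machinery), not the actual construction.

```python
import hashlib

# Toy RSA-style parameters, for illustration only (insecure).
P, Q = 1009, 1013
N = P * Q
PHI = (P - 1) * (Q - 1)
E = 5                      # public exponent, coprime to PHI
D = pow(E, -1, PHI)        # private exponent
G = 2                      # public group element

def h(i):
    # Toy hash of a block index into the group; a power of 3 is
    # always coprime to N here (a real scheme uses a full-domain hash).
    exp = int.from_bytes(hashlib.sha256(str(i).encode()).digest(), "big")
    return pow(3, exp, N)

def tag(i, block):
    # Homomorphic verifiable tag for block value b_i:
    #   T_i = (h(i) * G^{b_i})^D mod N
    return pow(h(i) * pow(G, block, N) % N, D, N)

def prove(blocks, tags, challenge):
    # Server side: aggregate the challenged tags and block values
    # into one short proof (T, M).
    T, M = 1, 0
    for i, a in challenge:
        T = T * pow(tags[i], a, N) % N
        M += a * blocks[i]
    return T, M

def verify(challenge, proof):
    # Client side: check T^E == prod h(i)^{a_i} * G^M mod N
    # using only public values -- the blocks are never retrieved.
    T, M = proof
    rhs = pow(G, M, N)
    for i, a in challenge:
        rhs = rhs * pow(h(i), a, N) % N
    return pow(T, E, N) == rhs
```

Because the tags are multiplicatively homomorphic, the proof stays constant-size no matter how many blocks are challenged, which is what makes verification cheap for the client.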
Journal ArticleDOI
Improved proxy re-encryption schemes with applications to secure distributed storage
TL;DR: Presents new proxy re-encryption schemes that realize a stronger notion of security, and demonstrates through performance measurements of an experimental file system that proxy re-encryption is a practical method of adding access control to a secure file system.
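Proxy re-encryption lets a semi-trusted proxy turn a ciphertext for Alice into one for Bob without learning the plaintext. The paper's improved schemes are pairing-based and unidirectional; the sketch below instead shows the earlier ElGamal-based, bidirectional style (BBS-style) that this line of work builds on, with tiny insecure parameters chosen purely for illustration.

```python
import secrets

# Toy Schnorr-group parameters, insecure, for illustration only.
p, q, g = 467, 233, 4      # p = 2q + 1; g generates the order-q subgroup

def rand_exp():
    return secrets.randbelow(q - 1) + 1

a, b = rand_exp(), rand_exp()      # Alice's and Bob's secret keys
rk = b * pow(a, -1, q) % q         # re-encryption key b/a mod q (bidirectional)

def encrypt(y, m):
    # ElGamal-style encryption under public key y = g^sk.
    r = rand_exp()
    return (m * pow(g, r, p) % p, pow(y, r, p))

def decrypt(sk, c):
    c1, c2 = c
    gr = pow(c2, pow(sk, -1, q), p)    # (g^{sk*r})^{1/sk} = g^r
    return c1 * pow(gr, -1, p) % p

def reencrypt(rk, c):
    # The proxy re-keys the second component without seeing the plaintext:
    # g^{a*r} -> (g^{a*r})^{b/a} = g^{b*r}.
    c1, c2 = c
    return (c1, pow(c2, rk, p))
```

The bidirectionality (rk also converts Bob's ciphertexts to Alice's) is exactly one of the weaknesses the paper's new schemes remove.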
Proceedings ArticleDOI
Scalable and efficient provable data possession
TL;DR: Proposes a provably secure PDP technique based entirely on symmetric-key cryptography, which supports outsourcing of dynamic data, i.e., block modification, deletion, and append.
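The symmetric-key approach trades public verifiability for efficiency: before outsourcing, the owner precomputes a bounded number of challenge answers ("tokens") with a PRF, then discards the data. The toy below (class and method names are hypothetical) captures only that precomputed-token idea; the actual scheme additionally encrypts and outsources the tokens themselves and handles block updates.

```python
import hmac, hashlib, secrets

def prf(key, msg):
    # HMAC-SHA256 as the pseudorandom function.
    return hmac.new(key, msg, hashlib.sha256).digest()

class Owner:
    """Data owner: precomputes one expected answer per future
    challenge round, after which the data can be deleted locally."""
    def __init__(self, blocks, rounds=8):
        self.master = secrets.token_bytes(32)
        self.n = len(blocks)
        self.tokens = [prf(self._key(i), blocks[self._idx(i)])
                       for i in range(rounds)]

    def _key(self, i):
        return prf(self.master, b"k" + i.to_bytes(4, "big"))

    def _idx(self, i):
        raw = prf(self.master, b"i" + i.to_bytes(4, "big"))
        return int.from_bytes(raw, "big") % self.n

    def challenge(self, i):
        # Reveal the one-time round key and the challenged block index.
        return self._key(i), self._idx(i)

    def verify(self, i, proof):
        return hmac.compare_digest(self.tokens[i], proof)

class Server:
    """Untrusted store: must still hold the block to answer correctly."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def prove(self, round_key, idx):
        return prf(round_key, self.blocks[idx])
```

Since only PRF evaluations are involved, both sides are far cheaper than the RSA-based scheme, at the cost of supporting a fixed number of challenges.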
Proceedings ArticleDOI
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
TL;DR: Shows that privacy-preserving collaborative deep learning is susceptible to a powerful attack: by exploiting the real-time nature of the learning process, an adversary can train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set, which was meant to be private; the GAN's samples are intended to come from the same distribution as the training data.