Vitaly Shmatikov

Researcher at Cornell University

Publications: 153
Citations: 22,828

Vitaly Shmatikov is an academic researcher from Cornell University. The author has contributed to research on topics including Anonymity and Information privacy. The author has an h-index of 64 and co-authored 148 publications receiving 17,801 citations. Previous affiliations of Vitaly Shmatikov include the University of Texas at Austin and the French Institute for Research in Computer Science and Automation.

Papers
Posted Content

Overlearning Reveals Sensitive Attributes

TL;DR: It is shown that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes, and that an overlearned model can be "re-purposed" for a different, privacy-violating task even in the absence of the original training data.
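The "re-purposing" described above can be pictured as training a small probe on a model's frozen internal representation to predict an attribute the model was never trained on. The sketch below is a hypothetical illustration on synthetic data, not the paper's method or code; the toy network, the data, and all names and numbers are invented for the example.

```python
# Hypothetical sketch: reuse the internal representation of a model trained
# for one task to predict a different, sensitive attribute.
# Not the paper's code; data and names are invented.
import numpy as np

rng = np.random.default_rng(1)

# synthetic data: the inputs carry both the main-task label and a
# "sensitive" attribute that the main-task training never sees
X = rng.normal(size=(1000, 10))
y_main = (X[:, :3].sum(axis=1) > 0).astype(float)    # main-task label
y_sensitive = (X[:, 3] + X[:, 4] > 0).astype(float)  # hidden from main task

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# train a tiny one-hidden-layer network on the main task only
W1 = rng.normal(scale=0.1, size=(10, 16))
W2 = rng.normal(scale=0.1, size=16)
for _ in range(500):
    h = np.tanh(X @ W1)            # internal representation
    p = sigmoid(h @ W2)
    dW2 = h.T @ (p - y_main) / len(X)
    dW1 = X.T @ (((p - y_main)[:, None] * W2) * (1 - h**2)) / len(X)
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

# "re-purposing": fit a linear probe on the frozen representation to recover
# the sensitive attribute the model was never asked to predict
h = np.tanh(X @ W1)
w_probe = np.zeros(16)
for _ in range(500):
    p = sigmoid(h @ w_probe)
    w_probe -= 0.5 * h.T @ (p - y_sensitive) / len(h)

acc = ((sigmoid(h @ w_probe) > 0.5) == y_sensitive).mean()
print(f"sensitive-attribute accuracy from frozen features: {acc:.2f}")
```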
Posted Content

Towards Computationally Sound Symbolic Analysis of Key Exchange Protocols (extended abstract)

TL;DR: In this article, the authors present a cryptographically sound formal method for proving correctness of key exchange protocols based on a fragment of a symbolic protocol logic, and demonstrate that proofs of key agreement and key secrecy in this logic imply simulatability in Shoup's secure multi-party framework for key exchange.
Posted Content

The Tao of Inference in Privacy-Protected Databases.

TL;DR: A new inference technique is presented that is analytically optimal and empirically outperforms prior heuristic attacks against PPE-encrypted data and, unlike any prior technique, also infers attributes, such as incomes and medical diagnoses, protected by strong encryption.
Book Chapter

Secure verification of location claims with simultaneous distance modification

TL;DR: It is demonstrated that the SDM property guarantees secure verification of location claims with a small number of verifiers even if some of them maliciously collude with the device.
Posted Content

Differential Privacy Has Disparate Impact on Model Accuracy

TL;DR: Differential privacy is a popular mechanism for training machine learning models with bounded leakage about the presence of specific points in the training data. This paper shows that in neural networks trained with differentially private stochastic gradient descent (DP-SGD), accuracy drops much more for underrepresented classes and subgroups than for the rest of the data.
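The DP-SGD mechanism the summary refers to clips each example's gradient and adds calibrated Gaussian noise before the update. The sketch below is a minimal, generic NumPy illustration of that step, not code from the paper; the logistic-regression model, the hyperparameters, and the function names are all assumptions made for the example.

```python
# Hypothetical illustration of the core DP-SGD step (per-example gradient
# clipping + Gaussian noise) for a toy logistic-regression model.
# Not the paper's code; names and parameters are invented.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD update on a minibatch (X, y) for logistic regression."""
    rng = rng or np.random.default_rng()
    grads = []
    for xi, yi in zip(X, y):
        # per-example gradient of the log loss
        g = (sigmoid(w @ xi) - yi) * xi
        # clip each example's gradient to L2 norm <= clip_norm
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        grads.append(g)
    # sum the clipped gradients and add Gaussian noise scaled to the clip norm
    noisy_sum = np.sum(grads, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

# toy usage on random data
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    idx = rng.choice(len(X), size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx], rng=rng)
```

Because every example's gradient is clipped to the same norm and shares the same noise, rare classes whose examples need larger or more distinctive updates lose proportionally more signal, which is one intuition for the disparate accuracy drop the paper reports.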