
Ninghui Li

Researcher at Purdue University

Publications - 266
Citations - 19,897

Ninghui Li is an academic researcher at Purdue University. His research focuses on the topics of access control and differential privacy. He has an h-index of 70, having co-authored 262 publications that have received 17,748 citations. Previous affiliations of Ninghui Li include New York University and National Chiao Tung University.

Papers
Book Chapter

Secure anonymization for incremental datasets

TL;DR: The authors analyze the inference channels that can arise across multiple anonymized releases of a dataset and discuss how to avoid such inferences. They then present an approach to securely anonymizing a continuously growing dataset in an efficient manner while preserving high data quality.
Journal Article

Understanding hierarchical methods for differentially private histograms

TL;DR: This paper examines the factors affecting the accuracy of hierarchical approaches by studying the mean squared error (MSE) of answers to range queries. It analyzes how the MSE changes with different branching factors, after applying constrained inference, and under different methods of allocating the privacy budget among hierarchy levels.
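To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of a hierarchically noised histogram: a tree of partial sums is built over the bins, an equal share of the privacy budget is spent at each level, and a range query is answered by covering the range with the largest aligned subtrees. The equal budget split and binary branching are illustrative assumptions; the paper studies precisely how such choices affect the MSE.

```python
import numpy as np

def dp_hierarchical_histogram(counts, epsilon, branching=2, rng=None):
    # Build a hierarchy of partial sums over the histogram bins, then
    # add Laplace noise to every node. With h levels and an equal
    # budget split, each level gets epsilon/h, so the per-node noise
    # scale is h/epsilon (sensitivity 1 per level for unit counts).
    rng = rng or np.random.default_rng(0)
    levels = [np.asarray(counts, dtype=float)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        pad = (-len(prev)) % branching          # pad to a full fan-out
        prev = np.concatenate([prev, np.zeros(pad)])
        levels.append(prev.reshape(-1, branching).sum(axis=1))
    h = len(levels)
    scale = h / epsilon
    return [lvl + rng.laplace(0.0, scale, size=lvl.shape) for lvl in levels]

def range_sum(levels, lo, hi, branching=2, level=0):
    # Answer the range query [lo, hi) by peeling off unaligned nodes at
    # the current level, then recursing on whole parent blocks; this
    # touches only O(branching * height) noisy values.
    if lo >= hi:
        return 0.0
    if level + 1 < len(levels):
        total = 0.0
        while lo < hi and lo % branching != 0:
            total += levels[level][lo]
            lo += 1
        while hi > lo and hi % branching != 0:
            hi -= 1
            total += levels[level][hi]
        return total + range_sum(levels, lo // branching, hi // branching,
                                 branching, level + 1)
    return float(levels[level][lo:hi].sum())
```

With a very large epsilon the noise is negligible, so `range_sum` over bins 2..5 of `[1, 2, 3, 4, 5, 6, 7, 8]` returns approximately 3 + 4 + 5 + 6 = 18.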
Journal Article

PrivBasis: frequent itemset mining with differential privacy

TL;DR: In this article, the authors proposed an approach, called PrivBasis, which leverages a novel notion called basis sets, and introduced algorithms for privately constructing a basis set and then using it to find the most frequent itemsets.
Proceedings Article

Automated trust negotiation using cryptographic credentials

TL;DR: Introduces a policy language that enables negotiators to specify the authorization requirements an opponent must meet in order to receive varying amounts of information about certified attributes and the credentials that contain them; the language also supports the use of uncertified attributes.
Journal Article

Closeness: A New Privacy Measure for Data Publishing

TL;DR: It is shown that ℓ-diversity has a number of limitations and is neither necessary nor sufficient to prevent attribute disclosure; a new privacy notion called "closeness" is proposed that offers higher utility.
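The two notions above can be made concrete with a small sketch. An equivalence class is ℓ-diverse if it contains at least ℓ distinct sensitive values; t-closeness instead bounds the distance between each class's sensitive-value distribution and the overall distribution. The paper measures that distance with the Earth Mover's Distance; the sketch below uses variational distance, which coincides with EMD for categorical attributes under the uniform ground distance. Names and the example data are illustrative.

```python
from collections import Counter

def is_l_diverse(classes, ell):
    # Each equivalence class must contain at least ell distinct
    # sensitive values.
    return all(len(set(c)) >= ell for c in classes)

def is_t_close(classes, t):
    # Each class's sensitive-value distribution must lie within
    # distance t of the table-wide distribution. Variational distance
    # stands in for the paper's Earth Mover's Distance here.
    all_vals = [v for c in classes for v in c]
    overall, n = Counter(all_vals), len(all_vals)

    def dist(c):
        cc, m = Counter(c), len(c)
        keys = set(overall) | set(cc)
        return 0.5 * sum(abs(cc[k] / m - overall[k] / n) for k in keys)

    return all(dist(c) <= t for c in classes)
```

For example, two classes `['flu', 'flu', 'hiv']` and `['hiv', 'cancer', 'flu']` are 2-diverse but not 3-diverse, and each sits at distance 1/6 from the overall distribution, so they satisfy 0.2-closeness but not 0.1-closeness.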