
Wendy Hui Wang

Researcher at Stevens Institute of Technology

Publications: 34
Citations: 427

Wendy Hui Wang is an academic researcher from Stevens Institute of Technology who has contributed to research on topics including Service provider and Computer science. The author has an h-index of 7 and has co-authored 29 publications receiving 326 citations.

Papers
Proceedings ArticleDOI

Anonymizing moving objects: how to hide a MOB in a crowd?

TL;DR: This paper argues that in a moving object database (MOD) there is no fixed set of quasi-identifier (QID) attributes shared by all moving objects (MOBs), and proposes two approaches, extreme-union and symmetric anonymization, to build anonymization groups that provably satisfy the proposed k-anonymity requirement while yielding low information loss.
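
To make the idea concrete, here is a minimal sketch of k-anonymity grouping for moving objects in the spirit of extreme-union: each MOB is placed in an anonymization group of size at least k, and generalization covers the union of the group members' QID timestamps. The greedy grouping heuristic, data layout, and names are illustrative assumptions, not the paper's exact algorithms.

```python
# Minimal sketch: group MOBs into anonymization groups of size >= k,
# then take the union of the group members' QID timestamps
# (the extreme-union idea). Greedy chunking is an assumption.
from itertools import islice

def build_groups(mobs, k):
    """Greedily partition MOB ids into groups of size >= k."""
    groups, it = [], iter(list(mobs))
    while True:
        chunk = list(islice(it, k))
        if not chunk:
            break
        if len(chunk) < k and groups:
            groups[-1].extend(chunk)  # fold leftovers into the last group
        else:
            groups.append(chunk)
    return groups

def extreme_union_qids(qid_map, group):
    """Union the QID timestamps of every MOB in the group."""
    qids = set()
    for mob in group:
        qids |= qid_map[mob]
    return qids

if __name__ == "__main__":
    # toy data: each MOB's set of timestamps known to its attacker
    qid_map = {"m1": {1, 2}, "m2": {2, 3}, "m3": {4},
               "m4": {1}, "m5": {3, 4}, "m6": {2}}
    for group in build_groups(qid_map, k=3):
        print(group, "-> generalize over", sorted(extreme_union_qids(qid_map, group)))
```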
Proceedings ArticleDOI

Answering XML queries using materialized views revisited

TL;DR: An original approach for materializing views is proposed that stores, for every view node, only the list of XML nodes needed to compute the view's answer; a stack-based algorithm is designed that compactly encodes, in polynomial time and space, all the homomorphisms from a view to a query.
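
As a rough illustration of what is being computed, the sketch below naively enumerates homomorphisms from a view tree pattern into a query tree pattern, i.e. label- and edge-preserving mappings of view nodes to query nodes. The paper's contribution is a stack-based algorithm that encodes all such mappings compactly; this brute-force enumeration and the tuple-based tree encoding are assumptions made purely for illustration.

```python
# Minimal sketch: enumerate all homomorphisms from a view pattern
# into a query pattern. A node is (id, label, [children]); a
# homomorphism preserves labels and parent/child edges.
def homomorphisms(view, query):
    """Yield dicts mapping view node ids to query node ids."""
    def match(vnode, qnode, mapping):
        vid, vlabel, vkids = vnode
        qid, qlabel, qkids = qnode
        if vlabel != qlabel:
            return
        mapping = {**mapping, vid: qid}
        if not vkids:
            yield mapping
            return
        def assign(kids, m):
            # map each remaining view child to some child of qnode
            if not kids:
                yield m
                return
            first, rest = kids[0], kids[1:]
            for qkid in qkids:
                for m2 in match(first, qkid, m):
                    yield from assign(rest, m2)
        yield from assign(vkids, mapping)
    yield from match(view, query, {})

if __name__ == "__main__":
    # view //a/b embedded into query //a[b][c]/b -> two mappings
    view  = ("v0", "a", [("v1", "b", [])])
    query = ("q0", "a", [("q1", "b", []), ("q2", "c", []), ("q3", "b", [])])
    for h in homomorphisms(view, query):
        print(h)
```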
Book ChapterDOI

Privacy-Preserving Distributed Movement Data Aggregation

TL;DR: This work proposes a novel approach to privacy-preserving analytical processing in a distributed setting, tackling the problem of obtaining aggregate information about vehicle traffic in a city from movement data collected by individual vehicles and shipped to a central server.
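
A minimal sketch of the distributed setting, assuming a simple local perturbation scheme: each vehicle adds Laplace noise (a standard differential-privacy mechanism) to its per-segment traversal counts before shipping them, and the central server only aggregates the noisy reports. The epsilon value and per-vehicle Laplace mechanism are illustrative assumptions, not necessarily the paper's exact construction.

```python
# Minimal sketch: vehicle-side Laplace perturbation of local road
# segment counts, server-side aggregation of the noisy reports.
import math
import random
from collections import Counter

def laplace(scale):
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_local_counts(counts, epsilon, sensitivity=1.0):
    """Vehicle-side: add Laplace(sensitivity/epsilon) noise per segment."""
    scale = sensitivity / epsilon
    return {seg: c + laplace(scale) for seg, c in counts.items()}

def aggregate(reports):
    """Server-side: sum noisy per-segment counts across vehicles."""
    total = Counter()
    for report in reports:
        total.update(report)
    return dict(total)

if __name__ == "__main__":
    vehicles = [{"seg_a": 3, "seg_b": 1},
                {"seg_a": 2, "seg_c": 5},
                {"seg_b": 4, "seg_c": 1}]
    reports = [perturb_local_counts(v, epsilon=1.0) for v in vehicles]
    print(aggregate(reports))
```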
Proceedings ArticleDOI

Towards Fair Truth Discovery from Biased Crowdsourced Answers

TL;DR: Among the three fairness-enhancing methods, FairTD produces the best accuracy under θ-disparity; in some settings its accuracy even exceeds that of truth discovery without fairness, since it removes some low-quality answers as a side effect.
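
To illustrate the disparity notion referenced above, here is a minimal sketch that runs plain majority-vote truth discovery and then checks whether the gap in positive-label rates between two task groups stays within a threshold θ. The binary labels, two-group setup, and majority voting are simplifying assumptions; FairTD itself iteratively estimates worker reliability and may discard low-quality answers.

```python
# Minimal sketch: majority-vote truth discovery plus a disparity
# check in the spirit of theta-disparity.
from collections import Counter

def majority_vote(answers):
    """answers: {task: [label, ...]} -> {task: most common label}"""
    return {t: Counter(labs).most_common(1)[0][0] for t, labs in answers.items()}

def theta_disparity(truths, groups, positive=1):
    """Absolute gap in positive-label rate between two task groups."""
    rates = {}
    for g in set(groups.values()):
        tasks = [t for t in truths if groups[t] == g]
        rates[g] = sum(truths[t] == positive for t in tasks) / len(tasks)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

if __name__ == "__main__":
    answers = {"t1": [1, 1, 0], "t2": [0, 0, 1],
               "t3": [1, 0, 1], "t4": [0, 0, 0]}
    groups = {"t1": "A", "t2": "A", "t3": "B", "t4": "B"}
    truths, theta = majority_vote(answers), 0.2
    gap = theta_disparity(truths, groups)
    print(truths, "disparity:", gap, "within theta:", gap <= theta)
```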
Posted ContentDOI

Differential Privacy Protection Against Membership Inference Attack on Machine Learning for Genomic Data

TL;DR: The results demonstrate that, in addition to preventing overfitting, model sparsity can work together with DP to significantly mitigate the risk of membership inference attacks (MIA), and that a smaller privacy budget provides a stronger privacy guarantee at the cost of more model accuracy loss.
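
A minimal sketch combining the two defenses discussed: DP-SGD-style per-example gradient clipping plus Gaussian noise (the differential-privacy side) and an L1 proximal step (the sparsity side) on logistic regression. The synthetic data, clipping norm, noise multiplier, and regularization strength are assumptions; computing the actual privacy budget ε would require a separate privacy accountant and is omitted.

```python
# Minimal sketch: noisy clipped gradient descent with an L1
# soft-thresholding step, combining DP-style noise and sparsity.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n) > 0).astype(float)

w = np.zeros(d)
lr, clip, noise_mult, lam = 0.1, 1.0, 1.0, 0.01

for epoch in range(50):
    # per-example gradients of the logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grads = (p - y)[:, None] * X                      # shape (n, d)
    # clip each example's gradient to L2 norm <= clip (DP step 1)
    norms = np.maximum(np.linalg.norm(grads, axis=1), 1e-12)
    grads *= np.minimum(1.0, clip / norms)[:, None]
    # average and add calibrated Gaussian noise (DP step 2)
    g = grads.mean(axis=0) + rng.normal(scale=noise_mult * clip / n, size=d)
    w -= lr * g
    # soft-thresholding proximal step enforces sparsity (L1 side)
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

print("nonzero weights:", int((w != 0).sum()), "of", d)
print("train accuracy:", float(((X @ w > 0) == y).mean()))
```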