
Wenting Zheng

Researcher at University of California, Berkeley

Publications: 25
Citations: 1517

Wenting Zheng is an academic researcher at the University of California, Berkeley. The author has contributed to research on topics including computer science and inference, has an h-index of 10, and has co-authored 20 publications receiving 1054 citations. Previous affiliations of Wenting Zheng include the Massachusetts Institute of Technology and St. Jude Children's Research Hospital.

Papers
Proceedings Article

Speedy transactions in multicore in-memory databases

TL;DR: A commit protocol based on optimistic concurrency control that provides serializability while avoiding all shared-memory writes for records that were only read, achieving excellent performance and scalability on modern multicore machines.
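
At a high level, each record carries a transaction ID (TID); at commit time a transaction locks only the records it wrote, then checks that every record it read still carries the TID observed during execution, so records that were only read are never written to. The Python sketch below illustrates that validation pattern only; the class names and fields are illustrative, not Silo's actual implementation.

```python
import threading

class Record:
    """A record with a version (TID) and a per-record lock. Illustrative only."""
    def __init__(self, value):
        self.value = value
        self.tid = 0
        self.lock = threading.Lock()

class Transaction:
    """Minimal optimistic-concurrency-control transaction sketch.

    Read-only records are never locked or written; they are only
    re-checked at commit time. Not Silo's actual code.
    """
    def __init__(self):
        self.read_set = {}    # record -> TID observed at read time
        self.write_set = {}   # record -> new value

    def read(self, rec):
        self.read_set[rec] = rec.tid
        return rec.value

    def write(self, rec, value):
        self.write_set[rec] = value

    def commit(self, next_tid):
        # Phase 1: lock only the records we intend to write,
        # in a fixed order to avoid deadlock.
        locked = sorted(self.write_set, key=id)
        for rec in locked:
            rec.lock.acquire()
        try:
            # Phase 2: validate the read set -- abort if any record we read
            # changed version or is currently locked by another transaction.
            for rec, seen_tid in self.read_set.items():
                if rec.tid != seen_tid:
                    return False
                if rec not in self.write_set and rec.lock.locked():
                    return False
            # Phase 3: install writes and bump versions.
            for rec, value in self.write_set.items():
                rec.value = value
                rec.tid = next_tid
            return True
        finally:
            for rec in locked:
                rec.lock.release()
```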
Proceedings Article

Opaque: an oblivious and encrypted distributed analytics platform

TL;DR: Opaque introduces new distributed oblivious relational operators that hide access patterns, along with new query-planning techniques that optimize these operators to improve performance.
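
An oblivious operator keeps its memory access pattern and output size independent of the data, typically by touching every row and padding the result with dummy entries. The sketch below illustrates that idea for a filter in plain Python; Opaque's real operators run inside hardware enclaves and use data-independent sorting networks, and all names here are illustrative.

```python
def oblivious_filter(rows, predicate, out_size):
    """Illustrative sketch of an access-pattern-hiding filter.

    Every input row is read and tagged, and the output always has
    exactly `out_size` entries, so neither the touched rows nor the
    result length reveal which rows matched. Opaque's real operators
    replace the sort below with an oblivious sorting network (e.g.
    bitonic sort) running inside an enclave; names are illustrative.
    """
    DUMMY = object()
    # Tag every row: 0 = keep, 1 = dummy. All rows are touched.
    tagged = [(0 if predicate(r) else 1, r) for r in rows]
    # Move matching rows to the front.
    tagged.sort(key=lambda t: t[0])
    result = [r if tag == 0 else DUMMY for tag, r in tagged]
    # Pad or truncate to a fixed, data-independent output size.
    result += [DUMMY] * max(0, out_size - len(result))
    return result[:out_size]
```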
Proceedings Article

Delphi: A Cryptographic Inference Service for Neural Networks

TL;DR: This work designs, implements, and evaluates DELPHI, a secure prediction system that allows two parties to execute neural network inference without revealing either party's data, using a hybrid cryptographic protocol that improves on the communication and computation costs of prior work.
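
The general structure is a heavy, input-independent preprocessing phase followed by a lightweight online phase in which the client sends only a masked input and the two parties end up with additive shares of each linear layer's output. The toy sketch below mirrors only that masking structure; Delphi's actual protocol produces the offline values under homomorphic encryption and handles nonlinear layers with garbled circuits and polynomial approximations, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of masked linear-layer evaluation with additive
# secret sharing. Computed in the clear for readability; in a real
# protocol neither party would learn the other's secrets.
W = rng.standard_normal((4, 8))      # server's private layer weights
x = rng.standard_normal(8)           # client's private input

# --- Offline (input-independent) phase ---
r = rng.standard_normal(8)           # client's random mask
s = rng.standard_normal(4)           # server's random share
client_share_offline = W @ r + s     # in a real system this value is
                                     # produced under homomorphic
                                     # encryption, hiding r from the
                                     # server and W from the client

# --- Online phase ---
masked_x = x - r                     # client sends only the masked input
server_share = W @ masked_x - s      # server's additive share of W @ x
client_share = client_share_offline  # client's additive share of W @ x

# The two shares reconstruct the true linear-layer output.
assert np.allclose(server_share + client_share, W @ x)
```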
Proceedings Article

Fast databases with fast durability and recovery through multicore parallelism

TL;DR: It is shown that naive logging and checkpoints make normal-case execution slower, but that frequent disk synchronization allows the system to keep up with many workloads with only a modest reduction in throughput.
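
The durability recipe is value logging plus periodic checkpoints, with the log flushed and fsync-ed at short intervals so that many commits share the cost of each disk synchronization (group commit). The sketch below shows that buffering-then-fsync pattern; the class name and interval are illustrative, not the paper's implementation.

```python
import json
import os
import time

class GroupCommitLog:
    """Buffer committed transactions and flush + fsync periodically.

    Illustrative sketch of the group-commit / frequent-sync pattern:
    a transaction becomes durable when the interval containing it is
    synced to disk, amortizing fsync cost over many commits. The name
    and sync interval are illustrative.
    """
    def __init__(self, path, sync_interval=0.04):
        self.f = open(path, "ab")
        self.sync_interval = sync_interval
        self.buffer = []
        self.last_sync = time.monotonic()

    def append(self, txn_id, writes):
        # Value logging: record the new values a transaction wrote,
        # not the operations that produced them.
        self.buffer.append({"txn": txn_id, "writes": writes})
        self.maybe_sync()

    def maybe_sync(self):
        now = time.monotonic()
        if now - self.last_sync >= self.sync_interval:
            for entry in self.buffer:
                self.f.write((json.dumps(entry) + "\n").encode())
            self.f.flush()
            os.fsync(self.f.fileno())   # make the whole batch durable at once
            self.buffer.clear()
            self.last_sync = now

    def close(self):
        # Flush whatever remains before closing.
        self.last_sync = 0.0
        self.maybe_sync()
        self.f.close()
```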
Proceedings Article

Helen: Maliciously Secure Coopetitive Learning for Linear Models

TL;DR: Helen is a system that allows multiple parties to train a linear model without revealing their data, a setting called coopetitive learning; it achieves up to five orders of magnitude of performance improvement compared to training with an existing state-of-the-art secure multi-party computation framework.
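
For linear models, each party only needs to contribute aggregate statistics of its own data rather than the raw rows, and the parties jointly fit a model on the combined aggregates. The plaintext sketch below illustrates that data flow for ridge regression; Helen performs such aggregation under secret sharing with zero-knowledge proofs to tolerate malicious parties, which the sketch omits entirely, and the function names are illustrative.

```python
import numpy as np

def combine_and_solve(party_data, lam=0.1):
    """Plaintext illustration of coopetitive linear-model training.

    Each party contributes only summary statistics of its own data
    (X_i^T X_i and X_i^T y_i); the parties jointly solve a ridge
    regression on the combined summaries. The cryptographic
    protections Helen adds (secret sharing, zero-knowledge proofs
    against malicious parties) are omitted here.
    """
    d = party_data[0][0].shape[1]
    A = lam * np.eye(d)
    b = np.zeros(d)
    for X_i, y_i in party_data:
        A += X_i.T @ X_i          # per-party summary, never the raw rows
        b += X_i.T @ y_i
    return np.linalg.solve(A, b)  # shared model weights

# Example: three parties with private datasets of different sizes.
rng = np.random.default_rng(1)
true_w = rng.standard_normal(5)
parties = []
for n in (40, 60, 80):
    X = rng.standard_normal((n, 5))
    parties.append((X, X @ true_w + 0.01 * rng.standard_normal(n)))
print(combine_and_solve(parties))  # close to true_w
```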