
Showing papers by "Srinivas Devadas" published in 2020


Proceedings ArticleDOI
18 May 2020
TL;DR: Techniques are presented that scale threshold signature schemes, verifiable secret sharing, and distributed key generation protocols to hundreds of thousands of participants and beyond; the techniques generalize to any Lagrange-based threshold scheme, not just threshold signatures.
Abstract: The resurging interest in Byzantine fault tolerant systems will demand more scalable threshold cryptosystems. Unfortunately, current systems scale poorly, requiring time quadratic in the number of participants. In this paper, we present techniques that help scale threshold signature schemes (TSS), verifiable secret sharing (VSS) and distributed key generation (DKG) protocols to hundreds of thousands of participants and beyond. First, we use efficient algorithms for evaluating polynomials at multiple points to speed up computing Lagrange coefficients when aggregating threshold signatures. As a result, we can aggregate a 130,000 out of 260,000 BLS threshold signature in just 6 seconds (down from 30 minutes). Second, we show how "authenticating" such multipoint evaluations can speed up proving polynomial evaluations, a key step in communication-efficient VSS and DKG protocols. As a result, we reduce the asymptotic (and concrete) computational complexity of VSS and DKG protocols from quadratic time to quasilinear time, at a small increase in communication complexity. For example, using our DKG protocol, we can securely generate a key for the BLS scheme above in 2.3 hours (down from 8 days). Our techniques improve performance for thresholds as small as 255 and generalize to any Lagrange-based threshold scheme, not just threshold signatures. Our work has certain limitations: we require a trusted setup, we focus on synchronous VSS and DKG protocols and we do not address the worst-case complaint overhead in DKGs. Nonetheless, we hope it will spark new interest in designing large-scale distributed systems.
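
The quadratic bottleneck that the first technique removes is the computation of Lagrange coefficients during aggregation. The sketch below shows that naive O(t^2) computation for a toy Shamir-style recombination over a prime field; the modulus, the recombine helper, and the toy polynomial are illustrative assumptions, not the paper's code, and real BLS aggregation would apply the same coefficients in the exponent of a pairing-friendly group.

```python
# Minimal sketch (not the paper's optimized code): naive O(t^2) computation of the
# Lagrange coefficients used when recombining t-out-of-n threshold shares.
# The paper replaces exactly this quadratic step with fast multipoint evaluation.

P = 2**61 - 1  # a prime standing in for the group order (assumption for the sketch)

def lagrange_coeffs_at_zero(xs, p=P):
    """lambda_i = prod_{j != i} x_j / (x_j - x_i) mod p, so that
       f(0) = sum_i lambda_i * f(x_i) for any polynomial of degree < len(xs)."""
    coeffs = []
    for i, xi in enumerate(xs):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * xj % p
                den = den * (xj - xi) % p
        coeffs.append(num * pow(den, p - 2, p) % p)  # modular inverse via Fermat
    return coeffs

def recombine(shares, p=P):
    """shares: list of (x_i, f(x_i)); returns f(0), the recombined secret."""
    xs = [x for x, _ in shares]
    lam = lagrange_coeffs_at_zero(xs, p)
    return sum(l * y for l, (_, y) in zip(lam, shares)) % p

# Toy example: secret f(0) = 42 shared with f(x) = 42 + 7x over GF(P).
shares = [(x, (42 + 7 * x) % P) for x in (1, 2, 5)]
assert recombine(shares[:2]) == 42  # any 2 shares suffice for a degree-1 polynomial
```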

44 citations


Proceedings Article
12 Jul 2020
TL;DR: This paper proposes a method based on the sample-and-aggregate framework with an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (omitting other factors), and a gradient smoothing and trimming based scheme that achieves excess population risks of $\tilde{O}(\frac{d^2}{n\epsilon^2})$ and $\tilde{O}(\frac{d^{\frac{2}{3}}}{(n\epsilon^2)^{\frac{1}{3}}})$ for strongly convex and general convex loss functions, respectively, with high probability.
Abstract: In this paper, we consider the problem of designing Differentially Private (DP) algorithms for Stochastic Convex Optimization (SCO) on heavy-tailed data. The irregularity of such data violates some key assumptions used in almost all existing DP-SCO and DP-ERM methods, resulting in failure to provide the DP guarantees. To better understand these challenges, we provide in this paper a comprehensive study of DP-SCO under various settings. First, we consider the case where the loss function is strongly convex and smooth. For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data. Then, we show that with some additional assumptions on the loss functions, it is possible to reduce the \textit{expected} excess population risk to $\tilde{O}(\frac{ d^2}{ n\epsilon^2 })$. To lift these additional conditions, we also provide a gradient smoothing and trimming based scheme to achieve excess population risks of $\tilde{O}(\frac{ d^2}{n\epsilon^2})$ and $\tilde{O}(\frac{d^\frac{2}{3}}{(n\epsilon^2)^\frac{1}{3}})$ for strongly convex and general convex loss functions, respectively, \textit{with high probability}. Experiments suggest that our algorithms can effectively deal with the challenges caused by data irregularity.
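
As a rough illustration of why trimming helps on heavy-tailed data, the sketch below shows a generic "trim, average, add Gaussian noise" gradient step. The loss, truncation threshold tau, and noise scale sigma are assumptions made for the example; this is not the paper's calibrated smoothing-and-trimming estimator or its privacy accounting.

```python
# Illustrative sketch only: a generic trimmed, noisy gradient step in the spirit of
# DP-SCO with heavy-tailed data. Parameters tau and sigma and the least-squares loss
# are assumptions for the example, not the paper's method.
import numpy as np

def trimmed_private_gradient(per_sample_grads, tau, sigma, rng):
    """Truncate each per-sample gradient coordinate-wise to [-tau, tau], average,
    and add Gaussian noise scaled to the (order tau/n) sensitivity the trimming creates."""
    n, d = per_sample_grads.shape
    trimmed = np.clip(per_sample_grads, -tau, tau)      # bound each sample's influence
    noise = rng.normal(0.0, sigma * tau / n, size=d)    # Gaussian mechanism on the mean
    return trimmed.mean(axis=0) + noise

# Toy usage: one noisy gradient step for least squares on heavy-tailed data.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.standard_t(df=2.5, size=(n, d))                # heavy-tailed features
y = X @ np.ones(d) + rng.standard_t(df=2.5, size=n)
w = np.zeros(d)
grads = (X @ w - y)[:, None] * X                        # per-sample gradients of 0.5*(x^T w - y)^2
w -= 0.1 * trimmed_private_gradient(grads, tau=5.0, sigma=2.0, rng=rng)
```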

33 citations


Proceedings Article
01 Feb 2020
TL;DR: XRD, a metadata private messaging system, is presented; it provides cryptographic privacy while scaling easily to support more users by adding more servers, using a novel technique the authors call aggregate hybrid shuffle.
Abstract: Even as end-to-end encrypted communication becomes more popular, private messaging remains a challenging problem due to metadata leakages, such as who is communicating with whom. Most existing systems that hide communication metadata either (1) do not scale easily, (2) incur significant overheads, or (3) provide weaker guarantees than cryptographic privacy, such as differential privacy or heuristic privacy. This paper presents XRD (short for Crossroads), a metadata private messaging system that provides cryptographic privacy, while scaling easily to support more users by adding more servers. At a high level, XRD uses multiple mix networks in parallel with several techniques, including a novel technique we call aggregate hybrid shuffle. As a result, XRD can support 2 million users with 251 seconds of latency using 100 servers. This is 12x and 3.7x faster than Atom and Pung, respectively, which are prior scalable messaging systems with cryptographic privacy.

26 citations


Book ChapterDOI
16 Nov 2020
TL;DR: This paper shows how to achieve Byzantine Broadcast in expected \(O((n/(n-f))^2)\) rounds, and shows that even when 99% of the nodes are corrupt, the protocol still achieves expected constant rounds.
Abstract: Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it is long known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes n. However, whether we can match the expected constant round complexity in the corrupt majority setting—or more precisely, when \(f \ge n/2 + \omega (1)\)—remains unknown, where f denotes the number of corrupt nodes. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected \(O((n/(n-f))^2)\) rounds. Our results hold under a weakly adaptive adversary who cannot perform “after-the-fact removal” of messages already sent by a node before it becomes corrupt. We also assume trusted setup and the Decision Linear (DLIN) assumption in bilinear groups.
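
To make the claim about 99% corruption concrete, substituting f = 0.99n into the bound from the abstract gives a round complexity that does not depend on n:

```latex
% Substituting f = 0.99\,n into the expected round bound O\!\left((n/(n-f))^2\right):
\left(\frac{n}{n-f}\right)^{2}
  = \left(\frac{n}{n - 0.99\,n}\right)^{2}
  = \left(\frac{1}{0.01}\right)^{2}
  = 10^{4},
% a constant independent of n, so the expected round complexity remains O(1)
% even when 99\% of the nodes are corrupt.
```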

21 citations


Book ChapterDOI
16 Nov 2020
TL;DR: This paper is the first to construct a BB protocol with sublinear round complexity in the corrupt majority setting, and shows how to achieve BB in \((\frac{n}{n-f})^2 \cdot \mathrm{poly}\log \lambda\) rounds with \(1 - \mathrm{negl}(\lambda)\) probability.
Abstract: The round complexity of Byzantine Broadcast (BB) has been a central question in distributed systems and cryptography. In the honest majority setting, expected constant round protocols have been known for decades even in the presence of a strongly adaptive adversary. In the corrupt majority setting, however, no protocol with sublinear round complexity is known, even when the adversary is allowed to strongly adaptively corrupt only 51% of the players, and even under reasonable setup or cryptographic assumptions. Recall that a strongly adaptive adversary can examine what original message an honest player would have wanted to send in some round, adaptively corrupt the player in the same round and make it send a completely different message instead.

13 citations


Journal ArticleDOI
01 Oct 2020
TL;DR: Taurus, an efficient parallel logging scheme, is presented; it uses multiple log streams, is compatible with both data and command logging, and tracks and encodes transaction dependencies using a vector of log sequence numbers (LSNs).

9 citations


Journal ArticleDOI
TL;DR: Path oblivious RAM is an ORAM protocol that simultaneously enjoys simplicity and efficiency and holds promise to provide cryptographic-grade and practical access pattern protection in multiple application domains.
Abstract: Path oblivious RAM (ORAM) is an ORAM protocol that simultaneously enjoys simplicity and efficiency. As a result, it holds promise to provide cryptographic-grade and practical access pattern protection in multiple application domains, including but not limited to secure hardware. In this paper, we review Path ORAM’s key ideas and contribution, summarize its impact and subsequent works, and discuss future directions.
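
For readers unfamiliar with the protocol being reviewed, the sketch below simulates Path ORAM's access procedure in memory: read the path assigned to a block, remap the block to a fresh random leaf, and write the path back greedily from the stash. The tree height, bucket size, and plain-dictionary "server" are simplifications for illustration; a real deployment also encrypts buckets and recurses on the position map.

```python
# Minimal in-memory sketch of Path ORAM's access procedure (illustrative only).
import random

L = 3            # tree height (assumption for the sketch); leaves at level L
Z = 4            # blocks per bucket, as in the original Path ORAM parameters
N_LEAVES = 2 ** L

# "Server": one bucket (list of (addr, data)) per node, heap-indexed 1..2^(L+1)-1.
tree = {i: [] for i in range(1, 2 ** (L + 1))}
# Client state: position map (block address -> leaf) and stash.
position = {}
stash = {}

def path(leaf):
    """Nodes from the root (index 1) down to the given leaf."""
    node = N_LEAVES + leaf
    nodes = []
    while node >= 1:
        nodes.append(node)
        node //= 2
    return list(reversed(nodes))

def access(addr, new_data=None):
    """Read (and optionally update) block `addr`, touching one random-looking path."""
    leaf = position.get(addr, random.randrange(N_LEAVES))
    position[addr] = random.randrange(N_LEAVES)           # remap before writing back
    # 1. Read every bucket on the path into the stash.
    for node in path(leaf):
        for a, d in tree[node]:
            stash[a] = d
        tree[node] = []
    data = stash.get(addr)
    if new_data is not None:
        stash[addr] = new_data
    # 2. Write the path back, placing each stash block as deep as its leaf allows.
    for node in reversed(path(leaf)):                      # leaf first, root last
        bucket = []
        for a in list(stash):
            if node in path(position[a]) and len(bucket) < Z:
                bucket.append((a, stash.pop(a)))
        tree[node] = bucket
    return data

access(7, "hello")
assert access(7) == "hello"
```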

5 citations


Posted Content
TL;DR: In this article, the authors considered the problem of designing differentially private algorithms for stochastic convex optimization on heavy-tailed data and provided a comprehensive study of DP-SCO under various settings.
Abstract: In this paper, we consider the problem of designing Differentially Private (DP) algorithms for Stochastic Convex Optimization (SCO) on heavy-tailed data. The irregularity of such data violates some key assumptions used in almost all existing DP-SCO and DP-ERM methods, resulting in failure to provide the DP guarantees. To better understand these challenges, we provide in this paper a comprehensive study of DP-SCO under various settings. First, we consider the case where the loss function is strongly convex and smooth. For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data. Then, we show that with some additional assumptions on the loss functions, it is possible to reduce the \textit{expected} excess population risk to $\tilde{O}(\frac{ d^2}{ n\epsilon^2 })$. To lift these additional conditions, we also provide a gradient smoothing and trimming based scheme to achieve excess population risks of $\tilde{O}(\frac{ d^2}{n\epsilon^2})$ and $\tilde{O}(\frac{d^\frac{2}{3}}{(n\epsilon^2)^\frac{1}{3}})$ for strongly convex and general convex loss functions, respectively, \textit{with high probability}. Experiments suggest that our algorithms can effectively deal with the challenges caused by data irregularity.

1 citation


Posted Content
TL;DR: Taurus tracks and encodes transaction dependencies using a vector of log sequence numbers (LSNs), which ensures that the dependencies are fully captured in logging and correctly enforced in recovery.
Abstract: Existing single-stream logging schemes are unsuitable for in-memory database management systems (DBMSs) as the single log is often a performance bottleneck. To overcome this problem, we present Taurus, an efficient parallel logging scheme that uses multiple log streams, and is compatible with both data and command logging. Taurus tracks and encodes transaction dependencies using a vector of log sequence numbers (LSNs). These vectors ensure that the dependencies are fully captured in logging and correctly enforced in recovery. Our experimental evaluation with an in-memory DBMS shows that Taurus's parallel logging achieves up to 9.9x and 2.9x speedups over single-streamed data logging and command logging, respectively. It also enables the DBMS to recover up to 22.9x and 75.6x faster than these baselines for data and command logging, respectively. We also compare Taurus with two state-of-the-art parallel logging schemes and show that the DBMS achieves up to 2.8x better performance on NVMe drives and 9.2x on HDDs.
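
A minimal sketch of the LSN-vector idea as described in the abstract follows. The number of streams, the record layout, and the recovery rule are assumptions made for illustration; this is not Taurus's actual logging or recovery protocol.

```python
# Illustrative sketch of per-stream LSN vectors for dependency tracking (assumptions
# only, not Taurus's protocol): each record carries one LSN per log stream, recording
# how far each stream must be replayed before this record may be applied in recovery.
from dataclasses import dataclass, field

NUM_STREAMS = 4  # assumed number of parallel log streams

@dataclass
class LogRecord:
    txn_id: int
    stream: int                 # which log stream this record is appended to
    lsn: int                    # its position within that stream
    dep_vector: list            # per-stream LSNs this transaction depends on
    payload: dict = field(default_factory=dict)

class ParallelLog:
    def __init__(self):
        self.streams = [[] for _ in range(NUM_STREAMS)]

    def append(self, txn_id, dep_vector, payload, stream=None):
        s = stream if stream is not None else txn_id % NUM_STREAMS
        rec = LogRecord(txn_id, s, len(self.streams[s]), list(dep_vector), payload)
        self.streams[s].append(rec)
        return rec

    def recover(self):
        """Replay streams in parallel order-independently, applying a record only once
        every stream has been replayed past that record's dependency vector."""
        replayed = [0] * NUM_STREAMS        # next un-replayed LSN per stream
        cursors = [0] * NUM_STREAMS
        order, progress = [], True
        while progress:
            progress = False
            for s in range(NUM_STREAMS):
                if cursors[s] < len(self.streams[s]):
                    rec = self.streams[s][cursors[s]]
                    if all(replayed[t] >= rec.dep_vector[t] for t in range(NUM_STREAMS)):
                        order.append(rec.txn_id)
                        cursors[s] += 1
                        replayed[s] = rec.lsn + 1
                        progress = True
        return order

# Toy usage: T2 (stream 2) depends on stream 1 having replayed up to LSN 1 (T1's record).
log = ParallelLog()
log.append(1, [0, 0, 0, 0], {"x": 1}, stream=1)
log.append(2, [0, 1, 0, 0], {"x": 2}, stream=2)
assert log.recover() == [1, 2]
```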