
Showing papers presented at "Theory of Cryptography Conference in 2020"


Book ChapterDOI
16 Nov 2020
TL;DR: In this paper, the authors seek solutions that allow a public blockchain to act as a trusted long-term repository of secret information, which enables many powerful applications, including signing statements on behalf of the blockchain, using it as the control plane for a storage system, performing decentralized program-obfuscation-as-a-service, and many more.
Abstract: Blockchains are gaining traction and acceptance, not just for cryptocurrencies, but increasingly as an architecture for distributed computing. In this work we seek solutions that allow a public blockchain to act as a trusted long-term repository of secret information: Our goal is to deposit a secret with the blockchain, specify how it is to be used (e.g., the conditions under which it is released), and have the blockchain keep the secret and use it only in the specified manner (e.g., release it only once the conditions are met). This simple functionality enables many powerful applications, including signing statements on behalf of the blockchain, using it as the control plane for a storage system, performing decentralized program-obfuscation-as-a-service, and many more.
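As a concrete (and deliberately idealized) way to read this functionality, the sketch below mocks up the "secret repository" interface as a single trusted object: a secret is deposited together with a usage policy, and the repository applies the secret only when the policy is satisfied, never releasing it otherwise. The names (SecretRepository, deposit, request_use) are hypothetical illustrations, not the paper's construction, which realizes this functionality on a public blockchain.

```python
# Idealized, single-machine mock-up of the "blockchain as secret keeper"
# functionality described above (names are hypothetical; the paper realizes
# this on a public blockchain, not on one trusted machine).
from dataclasses import dataclass
from typing import Any, Callable, Dict, Optional


@dataclass
class Deposit:
    secret: bytes
    policy: Callable[[Any], bool]        # condition under which the secret may be used
    use: Callable[[bytes, Any], Any]     # how the secret is used (e.g., sign a statement)


class SecretRepository:
    def __init__(self) -> None:
        self._deposits: Dict[int, Deposit] = {}

    def deposit(self, secret: bytes, policy: Callable[[Any], bool],
                use: Callable[[bytes, Any], Any]) -> int:
        """Store a secret together with its usage policy; return a handle."""
        handle = len(self._deposits)
        self._deposits[handle] = Deposit(secret, policy, use)
        return handle

    def request_use(self, handle: int, request: Any) -> Optional[Any]:
        """Apply the secret to `request` only if its policy is satisfied."""
        dep = self._deposits[handle]
        return dep.use(dep.secret, request) if dep.policy(request) else None


# Example: "sign" statements on behalf of the repository only after a deadline.
repo = SecretRepository()
h = repo.deposit(b"signing-key", policy=lambda req: req["height"] >= 1000,
                 use=lambda key, req: f"signed({req['msg']}) with {key!r}")
print(repo.request_use(h, {"height": 999, "msg": "too early"}))   # None
print(repo.request_use(h, {"height": 1200, "msg": "hello"}))      # signed(...)
```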

54 citations


Book ChapterDOI
16 Nov 2020
TL;DR: Existing approaches to recursive proof composition require that the SNARK verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built.
Abstract: Recursive proof composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD). All existing approaches to recursive composition take a succinct non-interactive argument of knowledge (SNARK) and use it to prove a statement about its own verifier. This technique requires that the verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built. This in turn restricts the efficiency and security properties of the resulting scheme.

30 citations


Book ChapterDOI
16 Nov 2020
TL;DR: In this paper, the authors proposed a multi-key fully homomorphic encryption (FHE) scheme for the multiparty setting, where each party can individually choose a key pair and use it to encrypt its own private input.
Abstract: The notion of multi-key fully homomorphic encryption (multi-key FHE) [Lopez-Alt, Tromer, Vaikuntanathan, STOC’12] was proposed as a generalization of fully homomorphic encryption to the multiparty setting. In a multi-key FHE scheme for n parties, each party can individually choose a key pair and use it to encrypt its own private input. Given n ciphertexts computed in this manner, the parties can homomorphically evaluate a circuit C over them to obtain a new ciphertext containing the output of C, which can then be decrypted via a decryption protocol. The key efficiency property is that the size of the (evaluated) ciphertext is independent of the size of the circuit.

29 citations


Book ChapterDOI
16 Nov 2020
TL;DR: In this article, it is shown that classical delegation of quantum computation to an untrusted quantum prover can be performed non-interactively (with setup) and in zero-knowledge.
Abstract: In a recent breakthrough, Mahadev constructed an interactive protocol that enables a purely classical party to delegate any quantum computation to an untrusted quantum prover. We show that this same task can in fact be performed non-interactively (with setup) and in zero-knowledge.

28 citations


Book ChapterDOI
16 Nov 2020
TL;DR: In this article, a four-round secure MPC protocol in the plain model is proposed whose security relies only on four-round oblivious transfer, yielding round-efficient MPC from minimal assumptions (at least w.r.t. black-box simulation).
Abstract: We construct a four round secure multiparty computation (MPC) protocol in the plain model that achieves security against any dishonest majority. The security of our protocol relies only on the existence of four round oblivious transfer. This culminates a long line of research on constructing round-efficient MPC from minimal assumptions (at least w.r.t. black-box simulation).

26 citations


Book ChapterDOI
16 Nov 2020
TL;DR: This work shows asynchronous BA protocols with (expected) subquadratic communication complexity tolerating an adaptive adversary who can corrupt f ≤ (1−ε)n/3 of the parties (for any ε > 0) and shows a secure-computation protocol in the same threat model that has o(n) communication when computing no-input functionalities with short output.
Abstract: Understanding the communication complexity of Byzantine agreement (BA) is a fundamental problem in distributed computing. In particular, for protocols involving a large number of parties (as in, e.g., the context of blockchain protocols), it is important to understand the dependence of the communication on the number of parties n. Although adaptively secure BA protocols with \(o(n^2)\) communication are known in the synchronous and partially synchronous settings, no such protocols are known in the fully asynchronous case.

26 citations


Book ChapterDOI
16 Nov 2020
TL;DR: It is shown that it is possible to encrypt classical data into a quantum ciphertext such that the recipient of the ciphertext can produce a classical string which proves to the originator that the recipient has relinquished any chance of recovering the plaintext should the key be revealed.
Abstract: Given a ciphertext, is it possible to prove the deletion of the underlying plaintext? Since classical ciphertexts can be copied, clearly such a feat is impossible using classical information alone. In stark contrast to this, we show that quantum encodings enable certified deletion. More precisely, we show that it is possible to encrypt classical data into a quantum ciphertext such that the recipient of the ciphertext can produce a classical string which proves to the originator that the recipient has relinquished any chance of recovering the plaintext should the key be revealed. Our scheme is feasible with current quantum technology: the honest parties only require quantum devices for single-qubit preparation and measurements; the scheme is also robust against noise in these devices. Furthermore, we provide an analysis that is suitable in the finite-key regime.

25 citations


Book ChapterDOI
16 Nov 2020
TL;DR: In this article, the authors give the first hardness result for the sequential-squaring conjecture in a non-generic model of computation: in a quantitative version of the algebraic group model (AGM) that they call the strong AGM, they show that any speed-up of sequential squaring is as hard as factoring N.
Abstract: Time-lock puzzles—problems whose solution requires some amount of sequential effort—have recently received increased interest (e.g., in the context of verifiable delay functions). Most constructions rely on the sequential-squaring conjecture that computing \(g^{2^T} \bmod N\) for a uniform g requires at least T (sequential) steps. We study the security of time-lock primitives from two perspectives: 1. We give the first hardness result about the sequential-squaring conjecture in a non-generic model of computation. Namely, in a quantitative version of the algebraic group model (AGM) that we call the strong AGM, we show that any speed up of sequential squaring is as hard as factoring N. 2. We then focus on timed commitments, one of the most important primitives that can be obtained from time-lock puzzles. We extend existing security definitions to settings that may arise when using timed commitments in higher-level protocols, and give the first construction of non-malleable timed commitments. As a building block of independent interest, we also define (and give constructions for) a related primitive called timed public-key encryption.
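For concreteness, the conjecture can be exercised directly: solving a time-lock puzzle means computing \(g^{2^T} \bmod N\) by T sequential squarings, while anyone who knows the factorization of N can shortcut the computation by reducing the exponent modulo φ(N). The toy sketch below (with tiny, illustrative parameters) demonstrates only this gap; it is not the paper's strong-AGM analysis.

```python
# Toy illustration of the sequential-squaring conjecture behind time-lock
# puzzles: computing g^(2^T) mod N takes T sequential squarings, but the
# factorization of N gives a shortcut. Parameters are tiny and illustrative.
p, q = 10007, 10009          # in practice: large random primes kept secret
N = p * q
g, T = 5, 50_000

# Slow path (no trapdoor): T sequential modular squarings.
x = g % N
for _ in range(T):
    x = (x * x) % N

# Fast path (with the factorization): reduce the exponent 2^T modulo phi(N).
phi = (p - 1) * (q - 1)
y = pow(g, pow(2, T, phi), N)

assert x == y                # both paths compute g^(2^T) mod N
print(x)
```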

23 citations


Book ChapterDOI
16 Nov 2020
TL;DR: This work formally studies the trade-off between PCS, concurrency, and communication overhead in the context of group ratcheting, and proves an almost matching upper bound of O(t·(1+log(n/t))), which smoothly increases from O(log n) with no concurrency to O(n) with unbounded concurrency.
Abstract: Post-Compromise Security, or PCS, refers to the ability of a given protocol to recover—by means of normal protocol operations—from the exposure of local states of its (otherwise honest) participants. While PCS in the two-party setting has attracted a lot of attention recently, the problem of achieving PCS in the group setting—called group ratcheting here—is much less understood. On the one hand, one can achieve excellent security by simply executing, in parallel, a two-party ratcheting protocol (e.g., Signal) for each pair of members in a group. However, this incurs \(\mathcal {O}(n)\) communication overhead for every message sent, where n is the group size. On the other hand, several related protocols were recently developed in the context of the IETF Messaging Layer Security (MLS) effort that improve the communication overhead per message to \(\mathcal {O}(\log n)\). However, this reduction of communication overhead involves a great restriction: group members are not allowed to send and recover from exposures concurrently such that reaching PCS is delayed up to n communication time slots (potentially even more).
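To make the trade-off concrete, the upper bound O(t·(1 + log(n/t))) from the TL;DR above can simply be evaluated at the two extremes of concurrency t; the quick check below (base-2 logarithm chosen purely for illustration) shows the smooth interpolation between O(log n) and O(n) overhead.

```python
# Evaluate the group-ratcheting communication-overhead bound t*(1 + log(n/t))
# at different concurrency levels t for an illustrative group size n.
import math

def overhead_bound(n: int, t: int) -> float:
    return t * (1 + math.log2(n / t))   # base-2 log chosen for illustration

n = 1024
print(overhead_bound(n, 1))       # 11.0   -> no concurrency, O(log n)
print(overhead_bound(n, 32))      # 192.0  -> partial concurrency
print(overhead_bound(n, n))       # 1024.0 -> unbounded concurrency, O(n)
```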

22 citations


Book ChapterDOI
16 Nov 2020
TL;DR: This paper shows how to achieve Byzantine Broadcast in expected O((n/(n − f))²) rounds, and shows that even when 99% of the nodes are corrupt, expected constant round complexity can be achieved.
Abstract: Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it is long known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes n. However, whether we can match the expected constant round complexity in the corrupt majority setting—or more precisely, when \(f \ge n/2 + \omega (1)\)—remains unknown, where f denotes the number of corrupt nodes. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected \(O((n/(n-f))^2)\) rounds. Our results hold under a weakly adaptive adversary who cannot perform “after-the-fact removal” of messages already sent by a node before it becomes corrupt. We also assume trusted setup and the Decision Linear (DLIN) assumption in bilinear groups.
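The claim that 99% corruption still gives expected constant rounds follows directly from the bound \(O((n/(n-f))^2)\): the expression depends only on the fraction of honest nodes, not on n. A quick numeric check (hidden constants ignored):

```python
# The expected-round bound O((n / (n - f))**2) depends only on the corruption
# fraction, so it stays constant as n grows (hidden constants ignored).
def round_bound(n: int, f: int) -> float:
    return (n / (n - f)) ** 2

for n in (1_000, 100_000, 10_000_000):
    f = (99 * n) // 100               # 99% of the nodes are corrupt
    print(n, round_bound(n, f))       # always 10000.0
```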

21 citations


Book ChapterDOI
16 Nov 2020
TL;DR: This work improves upon Agrawal and Yamada's result by providing a new construction and proof in the standard model, and relies on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) on bilinear groups.
Abstract: Broadcast Encryption with optimal parameters was a long-standing problem, whose first solution was provided in an elegant work by Boneh, Waters and Zhandry [BWZ14]. However, this work relied on multilinear maps of logarithmic degree, which is not considered a standard assumption. Recently, Agrawal and Yamada [AY20] improved this state of affairs by providing the first construction of optimal broadcast encryption from Bilinear Maps and Learning With Errors (LWE). However, their proof of security was in the generic bilinear group model. In this work, we improve upon their result by providing a new construction and proof in the standard model. In more detail, we rely on the Learning With Errors (LWE) assumption and the Knowledge of OrthogonALity Assumption (KOALA) [BW19] on bilinear groups.

Book ChapterDOI
16 Nov 2020
TL;DR: An Ω(log n) overhead lower bound for any k-server ORAM is presented that limits any PPT adversary to distinguishing advantage at most 1/4k when only one server is corrupted.
Abstract: In this work, we consider the construction of oblivious RAMs (ORAM) in a setting with multiple servers and the adversary may corrupt a subset of the servers. We present an \(\varOmega (\log n)\) overhead lower bound for any k-server ORAM that limits any PPT adversary to distinguishing advantage at most 1/4k when only one server is corrupted. In other words, if one insists on negligible distinguishing advantage, then multi-server ORAMs cannot be faster than single-server ORAMs even with polynomially many servers of which only one unknown server is corrupted. Our results apply to ORAMs that may err with probability at most 1/128 as well as scenarios where the adversary corrupts larger subsets of servers. We also extend our lower bounds to other important data structures including oblivious stacks, queues, deques, priority queues and search trees.

Book ChapterDOI
16 Nov 2020
TL;DR: In this paper, the authors extend the classical verification of quantum computations (CVQC) protocol proposed by Mahadev to make the verification efficient and obtain a result in three steps.
Abstract: In this paper, we extend the protocol of classical verification of quantum computations (CVQC) recently proposed by Mahadev to make the verification efficient. Our result is obtained in the following three steps:

Book ChapterDOI
16 Nov 2020
TL;DR: In this article, the authors proposed a hybrid multiparty reusable non-interactive secure computation (mrNISC) model, where parties publish encodings of their private inputs on a public bulletin board, once and for all.
Abstract: Reducing interaction in Multiparty Computation (MPC) is a highly desirable goal in cryptography. It is known that 2-round MPC can be based on the minimal assumption of 2-round Oblivious Transfer (OT) [Benhamouda and Lin, Garg and Srinivasan, EC 2018], and 1-round MPC is impossible in general. In this work, we propose a natural “hybrid” model, called multiparty reusable Non-Interactive Secure Computation (mrNISC). In this model, parties publish encodings of their private inputs \(x_i\) on a public bulletin board, once and for all. Later, any subset I of them can compute on-the-fly a function f on their inputs \(\varvec{x}_I = {\{x_i\}}_{i \in I}\) by just sending a single message to a stateless evaluator, conveying the result \(f(\varvec{x}_I)\) and nothing else. Importantly, the input encodings can be reused in any number of on-the-fly computations, and the same classical simulation security guaranteed by multi-round MPC is achieved. In short, mrNISC has a minimal yet “tractable” interaction pattern.
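The mrNISC interaction pattern, publish an input encoding once, then send one message per computation, can be mocked up as an interface. The sketch below is a completely insecure, plain-text illustration of that pattern only; the function names are hypothetical and nothing here reflects the paper's actual construction.

```python
# Insecure, plain-text mock-up of the mrNISC interaction pattern described
# above: each party publishes an encoding of its input once, and any subset
# later evaluates a function by sending a single message to a stateless
# evaluator. Here the "encodings" are the inputs themselves, so this captures
# only the communication pattern, not the paper's construction or security.
from typing import Any, Callable, Dict, List, Tuple

bulletin_board: Dict[str, Any] = {}

def publish_encoding(party: str, x: Any) -> None:
    """One-time step: post a (reusable) encoding of the private input."""
    bulletin_board[party] = x

def single_message(f: Callable[..., Any], subset: List[str]) -> Tuple[Callable[..., Any], Tuple[str, ...]]:
    """On-the-fly step: the one message a party sends for this computation."""
    return (f, tuple(subset))

def evaluate(messages: List[Tuple[Callable[..., Any], Tuple[str, ...]]]) -> Any:
    """Stateless evaluator: combines the messages with the public encodings."""
    f, subset = messages[0]
    return f(*(bulletin_board[p] for p in subset))

# Encodings are published once and reused across many computations.
for party, value in (("P1", 3), ("P2", 4), ("P3", 5)):
    publish_encoding(party, value)

print(evaluate([single_message(lambda a, b: a + b, ["P1", "P2"])] * 2))   # 7
print(evaluate([single_message(lambda a, c: a * c, ["P1", "P3"])] * 2))   # 15
```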

Book ChapterDOI
16 Nov 2020
TL;DR: A construction of constant-round quantum zero-knowledge argument systems for NP is presented that guarantees security even against malicious quantum verifiers; however, soundness only holds against classical probabilistic polynomial-time adversaries.
Abstract: Knowledge extraction, typically studied in the classical setting, is at the heart of several cryptographic protocols. The prospect of quantum computers forces us to revisit the concept of knowledge extraction in the presence of quantum adversaries.

Book ChapterDOI
16 Nov 2020
TL;DR: Continuous Group Key Agreement (CGKA) as mentioned in this paper is a secure group key agreement protocol that allows a long-lived group of parties to agree on a continuous stream of fresh secret key material.
Abstract: A continuous group key agreement (CGKA) protocol allows a long-lived group of parties to agree on a continuous stream of fresh secret key material. CGKA protocols allow parties to join and leave mid-session but may neither rely on special group managers, trusted third parties, nor on any assumptions about if, when, or for how long members are online. CGKA captures the core of an emerging generation of highly practical end-to-end secure group messaging (SGM) protocols.

Book ChapterDOI
16 Nov 2020
TL;DR: The round complexity of 2PC with one-sided statistical security (with respect to black-box simulation) is settled by the following tight results: in a setting where only one party obtains an output, 2PC is achieved in 4 rounds with statistical security against receivers and computational security against senders; in a setting where both parties obtain outputs, statistical security is achievable at no extra cost to round complexity.
Abstract: There has been a large body of work characterizing the round complexity of general-purpose maliciously secure two-party computation (\(\mathsf {2PC}\)) against probabilistic polynomial time adversaries. This is particularly true for zero-knowledge, which is a special case of \(\mathsf {2PC}\). In fact, in the special case of zero knowledge, optimal protocols with unconditional security against one of the two players have also been meticulously studied and constructed.

Book ChapterDOI
16 Nov 2020
TL;DR: This work builds upon previous constructions by designing injective PRGs that are provably secure from the LWE assumption, together with an alternative last-level testing procedure that has additional structure to prevent correctness errors.
Abstract: In a lockable obfuscation scheme [28, 39] a party takes as input a program P, a lock value \(\alpha \), a message \(\mathsf {msg}\) and produces an obfuscated program \(\tilde{P}\). The obfuscated program can be evaluated on an input x to learn the message \(\mathsf {msg}\) if \(P(x)= \alpha \). The security of such schemes states that if \(\alpha \) is randomly chosen (independent of P and \(\mathsf {msg}\)), then one cannot distinguish an obfuscation of P from a “dummy” obfuscation. Existing constructions of lockable obfuscation achieve provable security under the Learning with Errors assumption. One limitation of these constructions is that they achieve only statistical correctness and allow for a possible one-sided error where the obfuscated program could output the \(\mathsf {msg}\) on some value x where \(P(x) \ne \alpha \).
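The correctness contract of lockable obfuscation (but none of its security) fits in a few lines: the obfuscated program reveals \(\mathsf {msg}\) on input x exactly when \(P(x) = \alpha\). The toy sketch below illustrates only that contract, with perfect rather than statistical correctness; it hides nothing and is not the LWE-based construction discussed above.

```python
# Toy, non-hiding illustration of the lockable-obfuscation functionality:
# evaluating the "obfuscated" program on x reveals msg iff P(x) == alpha.
# A real scheme (e.g., from LWE) must also hide P, msg and alpha; this sketch
# captures correctness only, with no one-sided error.
from typing import Callable, Optional

def lockable_obfuscate(P: Callable[[int], int], alpha: int, msg: str) -> Callable[[int], Optional[str]]:
    def P_tilde(x: int) -> Optional[str]:
        return msg if P(x) == alpha else None
    return P_tilde

P_tilde = lockable_obfuscate(lambda x: x % 7, alpha=3, msg="msg")
print(P_tilde(10))   # 10 % 7 == 3 -> "msg"
print(P_tilde(11))   # 11 % 7 != 3 -> None
```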

Book ChapterDOI
16 Nov 2020
TL;DR: In particular, it remains a challenging open problem to construct a succinct argument where the prover runs in linear time and the verifier runs in polylogarithmic time.
Abstract: Minimizing the computational cost of the prover is a central goal in the area of succinct arguments. In particular, it remains a challenging open problem to construct a succinct argument where the prover runs in linear time and the verifier runs in polylogarithmic time.

Book ChapterDOI
16 Nov 2020
TL;DR: This paper is the first to construct a BB protocol with sublinear round complexity in the corrupt majority setting, and shows how to achieve BB in (n/(n−f))² · polylog(λ) rounds with 1 − negl(λ) probability.
Abstract: The round complexity of Byzantine Broadcast (BB) has been a central question in distributed systems and cryptography. In the honest majority setting, expected constant round protocols have been known for decades even in the presence of a strongly adaptive adversary. In the corrupt majority setting, however, no protocol with sublinear round complexity is known, even when the adversary is allowed to strongly adaptively corrupt only 51% of the players, and even under reasonable setup or cryptographic assumptions. Recall that a strongly adaptive adversary can examine what original message an honest player would have wanted to send in some round, adaptively corrupt the player in the same round and make it send a completely different message instead.

Book ChapterDOI
16 Nov 2020
TL;DR: It is proved that any one-round balls-in-bins ORAM that does not duplicate balls must have either Ω(√N) bandwidth or Ω(√N) client memory, where N is the number of memory slots being simulated.
Abstract: We initiate a fine-grained study of the round complexity of Oblivious RAM (ORAM). We prove that any one-round balls-in-bins ORAM that does not duplicate balls must have either \(\varOmega (\sqrt{N})\) bandwidth or \(\varOmega (\sqrt{N})\) client memory, where N is the number of memory slots being simulated. This shows that such schemes are strictly weaker than general (multi-round) ORAMs or those with server computation, and in particular implies that a one-round version of the original square-root ORAM of Goldreich and Ostrovsky (J. ACM 1996) is optimal. We prove this bound via new techniques that differ from those of Goldreich and Ostrovsky, and of Larsen and Nielsen (CRYPTO 2018), which achieved an \(\varOmega (\log N)\) bound for balls-in-bins and general multi-round ORAMs respectively. Finally we give a weaker extension of our bound that allows for limited duplication of balls, and also show that our bound extends to multiple-round ORAMs of a restricted form that include the best known constructions.

Book ChapterDOI
16 Nov 2020
TL;DR: In this article, the authors present public-key functional encryption (FE) schemes for quadratic functions with constant-size keys and ciphertexts shorter than all prior schemes based on static assumptions.
Abstract: We present simple and improved constructions of public-key functional encryption (FE) schemes for quadratic functions. Our main results are: an FE scheme for quadratic functions with constant-size keys as well as shorter ciphertexts than all prior schemes based on static assumptions; a public-key partially-hiding FE that supports NC1 computation on public attributes and quadratic computation on the private message, with ciphertext size independent of the length of the public attribute.

Book ChapterDOI
16 Nov 2020
TL;DR: In this article, a black-box security-amplifying combiner based on parallel composition of m blockchains is proposed to achieve worst-case constant-time settlement for conflict-free transactions.
Abstract: Blockchain protocols based on variations of the longest-chain rule—whether following the proof-of-work paradigm or one of its alternatives—suffer from a fundamental latency barrier. This arises from the need to collect a sufficient number of blocks on top of a transaction-bearing block to guarantee the transaction’s stability while limiting the rate at which blocks can be created in order to prevent security-threatening forks. Our main result is a black-box security-amplifying combiner based on parallel composition of m blockchains that achieves \(\varTheta (m)\)-fold security amplification for conflict-free transactions or, equivalently, \(\varTheta (m)\)-fold reduction in latency. Our construction breaks the latency barrier to achieve, for the first time, a ledger based purely on Nakamoto longest-chain consensus guaranteeing worst-case constant-time settlement for conflict-free transactions: settlement can be accelerated to a constant multiple of block propagation time with negligible error.

Book ChapterDOI
16 Nov 2020
TL;DR: Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. This article extends these authenticity and deniability guarantees to the group setting.
Abstract: Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. The challenge in group OTR is to enable the sender to sign his messages so that group members can verify who sent a message (signatures should be unforgeable, even by group members). Also, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that if any group member accepts a signature, then all of them do.

Book ChapterDOI
Victor Shoup
16 Nov 2020
TL;DR: It is shown that a slight variant of Protocol SPAKE2+ is a secure asymmetric password-authenticated key exchange protocol (PAKE), meaning that the protocol still provides good security guarantees even if a server is compromised and the password file stored on the server is leaked to an adversary.
Abstract: We show that a slight variant of Protocol \( SPAKE2 +\), which was presented but not analyzed in [17], is a secure asymmetric password-authenticated key exchange protocol (PAKE), meaning that the protocol still provides good security guarantees even if a server is compromised and the password file stored on the server is leaked to an adversary. The analysis is done in the UC framework (i.e., a simulation-based security model), under the computational Diffie-Hellman (CDH) assumption, and modeling certain hash functions as random oracles. The main difference between our variant and the original Protocol \( SPAKE2 +\) is that our variant includes standard key confirmation flows; also, adding these flows allows some slight simplification to the remainder of the protocol. Along the way, we also (i) provide the first proof (under the same assumptions) that a slight variant of Protocol \( SPAKE2 \) from [5] is a secure symmetric PAKE in the UC framework (previous security proofs were all in the weaker BPR framework [7]); (ii) provide a proof (under very similar assumptions) that a variant of Protocol \( SPAKE2 +\) that is currently being standardized is also a secure asymmetric PAKE; (iii) repair several problems in earlier UC formulations of secure symmetric and asymmetric PAKE.

Book ChapterDOI
16 Nov 2020
TL;DR: A generic compiler is given to upgrade a NIZK for all NP languages with non-adaptive zero-knowledge to one with adaptive zero-knowledge, and a generic conversion from a SNARK to a zero-knowledge SNARG is given.
Abstract: We give a construction of a non-interactive zero-knowledge (NIZK) argument for all \(\mathsf {NP}\) languages based on a succinct non-interactive argument (SNARG) for all \(\mathsf {NP}\) languages and a one-way function. The succinctness requirement for the SNARG is rather mild: We only require that the proof size be \(|\pi |=\mathsf {poly}(\lambda )(|x|+|w|)^c\) for some constant \(c<1/2\), where |x| is the statement length, |w| is the witness length, and \(\lambda \) is the security parameter. In particular, we do not require anything about the efficiency of the verification.

Book ChapterDOI
16 Nov 2020
TL;DR: This work presents a general framework to equip a broad class of PKC primitives with an efficient watermarking scheme by connecting the canonical ABO reduction technique with the puncturable pseudorandom function (PRF) technique used to achieve watermarkable PRFs.
Abstract: Program watermarking enables users to embed an arbitrary string called a mark into a program while preserving the functionality of the program. Adversaries cannot remove the mark without destroying the functionality. Although there exist generic constructions of watermarking schemes for public-key cryptographic (PKC) primitives, those schemes are constructed from scratch and not efficient.

Book ChapterDOI
16 Nov 2020
TL;DR: This work studies the possibility of bypassing this limitation in the case where the database is the truth table of a “simple” function, such as a union of (multi-dimensional) intervals or convex shapes, a decision tree, or a DNF formula.
Abstract: Information-theoretic private information retrieval (PIR) schemes have attractive concrete efficiency features. However, in the standard PIR model, the computational complexity of the servers must scale linearly with the database size.

Book ChapterDOI
16 Nov 2020
TL;DR: This work presents the first such results, where entropic security is established either under RLWE or under the Decisional Small Polynomial Ratio (DSPR) assumption, which is a mild variant of the NTRU assumption.
Abstract: The hardness of the Ring Learning with Errors problem (RLWE) is a central building block for efficiency-oriented lattice-based cryptography. Many applications use an “entropic” variant of the problem where the so-called “secret” is not distributed uniformly as prescribed but instead comes from some distribution with sufficient min-entropy. However, the hardness of the entropic variant has not been substantiated thus far.

Book ChapterDOI
16 Nov 2020
TL;DR: In this article, a reusable two-round MPC protocol from the Decisional Diffie-Hellman (DDH) assumption is presented, which allows reusability of the first message across multiple computations.
Abstract: We present a reusable two-round multi-party computation (MPC) protocol from the Decisional Diffie-Hellman assumption (DDH). In particular, we show how to upgrade any secure two-round MPC protocol to allow reusability of its first message across multiple computations, using Homomorphic Secret Sharing (HSS) and pseudorandom functions in \(NC^1\)—each of which can be instantiated from DDH.