
Showing papers presented at "Theory of Cryptography Conference in 2018"


Book ChapterDOI
11 Nov 2018
TL;DR: In this paper, a two-message oblivious transfer (OT) protocol without setup is presented that guarantees statistical privacy for the sender even against malicious receivers; receiver privacy is game-based and relies on the hardness of learning with errors (LWE).
Abstract: We construct a two-message oblivious transfer (OT) protocol without setup that guarantees statistical privacy for the sender even against malicious receivers. Receiver privacy is game based and relies on the hardness of learning with errors (LWE). This flavor of OT has been a central building block for minimizing the round complexity of witness indistinguishable and zero knowledge proof systems, non-malleable commitment schemes and multi-party computation protocols, as well as for achieving circuit privacy for homomorphic encryption in the malicious setting. Prior to this work, all candidates in the literature from standard assumptions relied on number theoretic assumptions and were thus insecure in the post-quantum setting. This work provides the first (presumed) post-quantum secure candidate and thus allows to instantiate the aforementioned applications in a post-quantum secure manner.

51 citations


Book ChapterDOI
11 Nov 2018
TL;DR: The recent works of Garg and Srinivasan and Benhamouda and Lin essentially settle the question by showing that protocols that require only two rounds of interaction are implied by the minimal assumption that a two-round oblivious transfer protocol exists.
Abstract: We continue the study of protocols for secure multiparty computation (MPC) that require only two rounds of interaction. The recent works of Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) essentially settle the question by showing that such protocols are implied by the minimal assumption that a two-round oblivious transfer (OT) protocol exists. However, these protocols inherently make a non-black-box use of the underlying OT protocol, which results in poor concrete efficiency. Moreover, no analogous result was known in the information-theoretic setting, or alternatively based on one-way functions, given an OT correlations setup or an honest majority.

43 citations


Book ChapterDOI
11 Nov 2018
TL;DR: In this paper, the authors introduce a new notion of one-message zero-knowledge (1ZK) arguments satisfying a weak soundness guarantee: the number of false statements that a polynomial-time non-uniform adversary can convince the verifier to accept is not much larger than the size of its non-uniform advice; the zero-knowledge guarantee is given by a simulator that runs in (mildly) super-polynomial time.
Abstract: We introduce a new notion of one-message zero-knowledge (1ZK) arguments that satisfy a weak soundness guarantee—the number of false statements that a polynomial-time non-uniform adversary can convince the verifier to accept is not much larger than the size of its non-uniform advice. The zero-knowledge guarantee is given by a simulator that runs in (mildly) super-polynomial time. We construct such 1ZK arguments based on the notion of multi-collision-resistant keyless hash functions, recently introduced by Bitansky, Kalai, and Paneth (STOC 2018). Relying on the constructed 1ZK arguments, subexponentially-secure time-lock puzzles, and other standard assumptions, we construct one-message fully-concurrent non-malleable commitments. This is the first construction that is based on assumptions that do not already incorporate non-malleability, as well as the first based on (subexponentially) falsifiable assumptions.

36 citations


Book ChapterDOI
11 Nov 2018
TL;DR: Consider the following secret-sharing problem: distribute a long file s between n servers such that \((d-1)\)-subsets cannot recover the file, \((d+1)\)-subsets can recover the file, and d-subsets can recover s if and only if they appear in some predefined list L.
Abstract: Consider the following secret-sharing problem. Your goal is to distribute a long file s between n servers such that \((d-1)\)-subsets cannot recover the file, \((d+1)\)-subsets can recover the file, and d-subsets should be able to recover s if and only if they appear in some predefined list L. How small can the information ratio (i.e., the number of bits stored on a server per each bit of the secret) be?
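The list-dependent requirement can be sketched with an assumed naive baseline (a folklore approach, not the paper's construction, with illustrative names throughout): for every d-subset in L, the dealer hands its members a fresh XOR additive sharing of s, so a d-subset recovers s iff it appears in L; the complementary (d+1)-threshold component (e.g., Shamir sharing) is omitted here. Each extra listing costs every member |s| more bits, which is exactly the information-ratio cost the paper studies.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def deal(s: bytes, L):
    """For each listed subset, XOR-share s among its members."""
    storage = {}  # server id -> {list index -> share}
    for idx, subset in enumerate(L):
        shares = [secrets.token_bytes(len(s)) for _ in subset[:-1]]
        last = s
        for sh in shares:
            last = xor(last, sh)  # all shares XOR together to s
        for server, sh in zip(subset, shares + [last]):
            storage.setdefault(server, {})[idx] = sh
    return storage

def reconstruct(storage, L, subset):
    if tuple(subset) not in [tuple(t) for t in L]:
        return None  # this d-subset holds no complete sharing
    idx = [tuple(t) for t in L].index(tuple(subset))
    out = storage[subset[0]][idx]
    for server in subset[1:]:
        out = xor(out, storage[server][idx])
    return out

# usage: servers 1..4 with d = 2; only {1,2} and {2,3} appear in L
L = [(1, 2), (2, 3)]
st = deal(b"secret", L)
assert reconstruct(st, L, (1, 2)) == b"secret"
assert reconstruct(st, L, (1, 3)) is None
# server 2 appears in two listed sets, so it stores 2 shares of |s| bits each
assert len(st[2]) == 2
```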

35 citations


Book ChapterDOI
11 Nov 2018
TL;DR: The GGH15 multilinear maps have served as the foundation for a number of cutting-edge cryptographic proposals, but none of the current indistinguishability obfuscation (iO) candidates from GGH15 have any formal security guarantees against zeroizing attacks.
Abstract: The GGH15 multilinear maps have served as the foundation for a number of cutting-edge cryptographic proposals. Unfortunately, many schemes built on GGH15 have been explicitly broken by so-called “zeroizing attacks,” which exploit leakage from honest zero-test queries. The precise settings in which zeroizing attacks are possible have remained unclear. Most notably, none of the current indistinguishability obfuscation (iO) candidates from GGH15 have any formal security guarantees against zeroizing attacks.

34 citations


Book ChapterDOI
11 Nov 2018
TL;DR: PRF design has traditionally followed either the “practitioner’s approach” of building concretely-efficient constructions from known heuristics or the “theoretician’s approach” of proposing constructions and reducing their security to a previously-studied hardness assumption; the resulting candidates vary greatly in concrete efficiency and design complexity.
Abstract: Pseudorandom functions (PRFs) are one of the fundamental building blocks in cryptography. Traditionally, there have been two main approaches for PRF design: the “practitioner’s approach” of building concretely-efficient constructions based on known heuristics and prior experience, and the “theoretician’s approach” of proposing constructions and reducing their security to a previously-studied hardness assumption. While both approaches have their merits, the resulting PRF candidates vary greatly in terms of concrete efficiency and design complexity.

33 citations


Book ChapterDOI
11 Nov 2018
TL;DR: Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO); the later LWE-based scheme of Kim and Wu (CRYPTO ’17) can be completely broken using extraction queries.
Abstract: A software watermarking scheme can embed some information called a mark into a program while preserving its functionality. No adversary can remove the mark without damaging the functionality of the program. Cohen et al. (STOC ’16) gave the first positive results for watermarking, showing how to watermark certain pseudorandom function (PRF) families using indistinguishability obfuscation (iO). Their scheme has a secret marking procedure to embed marks in programs and a public extraction procedure to extract the marks from programs; security holds even against an attacker that has access to a marking oracle. Kim and Wu (CRYPTO ’17) later constructed a PRF watermarking scheme under only the LWE assumption. In their scheme, both the marking and extraction procedures are secret, but security only holds against an attacker with access to a marking oracle but not an extraction oracle. In fact, it is possible to completely break the security of the latter scheme using extraction queries, which is a significant limitation in any foreseeable application.

31 citations


Book ChapterDOI
11 Nov 2018
TL;DR: In this article, the authors show that PRAMs can be obliviously simulated with perfect security, incurring only \(O(\log N \log \log N)\) blowup in parallel runtime and O(1) blowup in space relative to the original PRAM.
Abstract: We show that PRAMs can be obliviously simulated with perfect security, incurring only \(O(\log N \log \log N)\) blowup in parallel runtime, \(O(\log ^3 N)\) blowup in total work, and O(1) blowup in space relative to the original PRAM. Our results advance the theoretical understanding of Oblivious (Parallel) RAM in several respects. First, prior to our work, no perfectly secure Oblivious Parallel RAM (OPRAM) construction was known; and we are the first in this respect. Second, even for the sequential special case of our algorithm (i.e., perfectly secure ORAM), we not only achieve logarithmic improvement in terms of space consumption relative to the state-of-the-art, but also significantly simplify perfectly secure ORAM constructions. Third, our perfectly secure OPRAM scheme matches the parallel runtime of earlier statistically secure schemes with negligible failure probability. Since we remove the dependence (in performance) on the security parameter, our perfectly secure OPRAM scheme in fact asymptotically outperforms known statistically secure ones if (sub-)exponentially small failure probability is desired. Our techniques for achieving small parallel runtime are novel and we employ special expander graphs to derandomize earlier statistically secure OPRAM techniques—this is the first time such techniques are used in the constructions of ORAMs/OPRAMs.

30 citations


Book ChapterDOI
11 Nov 2018
TL;DR: A new construction of polynomial-degree multilinear maps is provided and it is shown that this scheme is provably immune to zeroizing attacks under a strengthening of the Branching Program Un-Annihilatability Assumption.
Abstract: All known multilinear map candidates have suffered from a class of attacks known as “zeroizing” attacks, which render them unusable for many applications. We provide a new construction of polynomial-degree multilinear maps and show that our scheme is provably immune to zeroizing attacks under a strengthening of the Branching Program Un-Annihilatability Assumption (Garg et al., TCC 2016-B).

28 citations


Book ChapterDOI
11 Nov 2018
TL;DR: It is shown that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000).
Abstract: We show that any multi-party functionality can be evaluated using a two-round protocol with perfect correctness and perfect semi-honest security, provided that the majority of parties are honest. This settles the round complexity of information-theoretic semi-honest MPC, resolving a longstanding open question (cf. Ishai and Kushilevitz, FOCS 2000). The protocol is efficient for \({\mathrm {NC}}^1\) functionalities. Furthermore, given black-box access to a one-way function, the protocol can be made efficient for any polynomial functionality, at the cost of only guaranteeing computational security.

22 citations


Book ChapterDOI
11 Nov 2018
TL;DR: This work presents a distinguishing attack in \(2n^{1/2}2^{3n/4}\) queries against a generalized version of the Cascaded LRW2 tweakable block cipher, and proves that if every tweak value occurs at most \(2^{n/4}\) times, Cascaded LRW2 is secure up to \(2^{3n/4}\) queries.
Abstract: The Cascaded LRW2 tweakable block cipher was introduced by Landecker et al. at CRYPTO 2012, and proven secure up to \(2^{2n/3}\) queries. There has not been any attack on the construction faster than the generic attack in \(2^n\) queries. In this work we initiate the quest towards a tight bound. We first present a distinguishing attack in \(2n^{1/2}2^{3n/4}\) queries against a generalized version of the scheme. The attack is supported with an experimental verification and a formal success probability analysis. We subsequently discuss non-trivial bottlenecks in proving tight security, most importantly the distinguisher’s freedom in choosing the tweak values. Finally, we prove that if every tweak value occurs at most \(2^{n/4}\) times, Cascaded LRW2 is secure up to \(2^{3n/4}\) queries.
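The construction under study can be sketched with a toy instantiation (all parameters hypothetical: 8-bit blocks, a shuffled table standing in for the block cipher \(E_k\), and a keyed hash standing in for the xor-universal hash h): one LRW2 layer computes \(E_k(x \oplus h(t)) \oplus h(t)\), and Cascaded LRW2 chains two independent layers under the same tweak.

```python
import hashlib
import random

BLOCK = 256  # toy 8-bit block space, far too small for real use

def keyed_perm(key: int) -> list:
    table = list(range(BLOCK))
    random.Random(key).shuffle(table)  # toy stand-in for a block cipher
    return table

def h(key: bytes, tweak: int) -> int:
    # toy stand-in for an xor-universal hash of the tweak
    return hashlib.sha256(key + bytes([tweak])).digest()[0]

def lrw2(perm: list, hkey: bytes, tweak: int, x: int) -> int:
    mask = h(hkey, tweak)
    return perm[x ^ mask] ^ mask  # E_k(x XOR h(t)) XOR h(t)

def cascaded_lrw2(k1: int, k2: int, h1: bytes, h2: bytes,
                  tweak: int, x: int) -> int:
    p1, p2 = keyed_perm(k1), keyed_perm(k2)
    return lrw2(p2, h2, tweak, lrw2(p1, h1, tweak, x))

# for any fixed tweak, x -> ciphertext is a permutation of the block space;
# each tweak value selects a different-looking permutation
c0 = cascaded_lrw2(1, 2, b"h1", b"h2", 0, 5)
c1 = cascaded_lrw2(1, 2, b"h1", b"h2", 1, 5)
```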

Book ChapterDOI
11 Nov 2018
TL;DR: A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys; even if a coalition of users colludes and combines their keys into a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.
Abstract: A traitor tracing scheme is a public key encryption scheme for which there are many secret decryption keys. Any of these keys can decrypt a ciphertext; moreover, even if a coalition of users collude, put together their decryption keys and attempt to create a new decryption key, there is an efficient algorithm to trace the new key to at least one of the colluders.

Book ChapterDOI
11 Nov 2018
TL;DR: This paper presents modes of operation for encryption and authentication that guarantee security beyond \(2^n\) encrypted/authenticated messages, as long as the adversary’s memory is restricted to be less than \(2^n\) bits.
Abstract: We initiate the study of symmetric encryption in a regime where the memory of the adversary is bounded. For a block cipher with n-bit blocks, we present modes of operation for encryption and authentication that guarantee security beyond \(2^n\) encrypted/authenticated messages, as long as (1) the adversary’s memory is restricted to be less than \(2^n\) bits, and (2) the key of the block cipher is long enough to mitigate memory-less key-search attacks. This is the first proposal of a setting which allows to bypass the \(2^n\) barrier under a reasonable assumption on the adversarial resources.

Book ChapterDOI
11 Nov 2018
TL;DR: This work introduces the notion of registration-based encryption (RBE for short) with the goal of removing the trust parties need to place in the private-key generator in an IBE scheme.
Abstract: In this work, we introduce the notion of registration-based encryption (RBE for short) with the goal of removing the trust parties need to place in the private-key generator in an IBE scheme. In an RBE scheme, users sample their own public and secret keys. There will also be a “key curator” whose job is only to aggregate the public keys of all the registered users and update the “short” public parameter whenever a new user joins the system. Encryption can still be performed to a particular recipient using the recipient’s identity and any public parameters released subsequent to the recipient’s registration. Decryption requires some auxiliary information connecting users’ public (and secret) keys to the public parameters. Because of this, as the public parameters get updated, a decryptor may need to obtain “a few” additional auxiliary information for decryption. More formally, if n is the total number of identities and \(\mathrm {\kappa }\) is the security parameter, we require the following.

Book ChapterDOI
11 Nov 2018
TL;DR: This work presents the first two-round multiparty computation protocols secure against malicious adaptive corruption in the common reference string (CRS) model, based on DDH, LWE, or QR.
Abstract: We present the first two-round multiparty computation (MPC) protocols secure against malicious adaptive corruption in the common reference string (CRS) model, based on DDH, LWE, or QR. Prior two-round adaptively secure protocols were known only in the two-party setting against semi-honest adversaries, or in the general multiparty setting assuming the existence of indistinguishability obfuscation (iO).

Book ChapterDOI
11 Nov 2018
TL;DR: This work constructs Indistinguishability Obfuscation and Functional Encryption schemes in the Turing machine model from the minimal assumption of compact \(\mathsf {FE}\) for circuits and overcome the barrier of sub-exponential loss incurred by all prior work.
Abstract: We construct Indistinguishability Obfuscation (\(\mathsf {iO}\)) and Functional Encryption (\(\mathsf {FE}\)) schemes in the Turing machine model from the minimal assumption of compact \(\mathsf {FE}\) for circuits (\(\mathsf {CktFE}\)). Our constructions overcome the barrier of sub-exponential loss incurred by all prior work. Our contributions are: 1. We construct \(\mathsf {iO}\) in the Turing machine model from the same assumptions as required in the circuit model, namely, sub-exponentially secure \(\mathsf {FE}\) for circuits. The previous best constructions [6, 41] require sub-exponentially secure \(\mathsf {iO}\) for circuits, which in turn requires sub-exponentially secure \(\mathsf {FE}\) for circuits [5, 15]. 2. We provide a new construction of single input \(\mathsf {FE}\) for Turing machines with unbounded length inputs and optimal parameters from polynomially secure, compact \(\mathsf {FE}\) for circuits. The previously best known construction by Ananth and Sahai [7] relies on \(\mathsf {iO}\) for circuits, or equivalently, sub-exponentially secure \(\mathsf {FE}\) for circuits. 3. We provide a new construction of multi-input \(\mathsf {FE}\) for Turing machines. Our construction supports a fixed number of encryptors (say k), who may each encrypt a string \(\mathbf {x}_i\) of unbounded length. We rely on sub-exponentially secure \(\mathsf {FE}\) for circuits, while the only previous construction [10] relies on a strong knowledge type assumption, namely, public coin differing inputs obfuscation.

Book ChapterDOI
11 Nov 2018
TL;DR: A simple construction of indistinguishability obfuscation for Turing machines where the time to obfuscate grows only with the description size of the machine and otherwise, independent of the running time and the space used is given.
Abstract: We give a simple construction of indistinguishability obfuscation for Turing machines where the time to obfuscate grows only with the description size of the machine and otherwise, independent of the running time and the space used. While this result is already known [Koppula, Lewko, and Waters, STOC 2015] from \(i\mathcal {O}\) for circuits and injective pseudorandom generators, our construction and its analysis are conceptually much simpler. In particular, the main technical component in the proof of our construction is a simple combinatorial pebbling argument [Garg and Srinivasan, EUROCRYPT 2018]. Our construction makes use of indistinguishability obfuscation for circuits and \(\mathrm {somewhere\, statistically\, binding\, hash\, functions}\).

Book ChapterDOI
11 Nov 2018
TL;DR: This work studies a simulation paradigm, referred to as local simulation, in garbling schemes, in which the simulator consists of many local simulators that generate different blocks of the garbled circuit.
Abstract: We study a simulation paradigm, referred to as local simulation, in garbling schemes. This paradigm captures simulation proof strategies in which the simulator consists of many local simulators that generate different blocks of the garbled circuit. A useful property of such a simulation strategy is that only a few of these local simulators depend on the input, whereas the rest of the local simulators only depend on the circuit.

Book ChapterDOI
11 Nov 2018
TL;DR: This work constructs the first fully non-interactive adaptively secure DPRF in the standard model, proved secure under the \(\mathsf {LWE}\) assumption against adversaries that may adaptively decide which servers they want to corrupt.
Abstract: In distributed pseudorandom functions (DPRFs), a PRF secret key SK is secret shared among N servers so that each server can locally compute a partial evaluation of the PRF on some input X. A combiner that collects t partial evaluations can then reconstruct the evaluation F(SK, X) of the PRF under the initial secret key. So far, all non-interactive constructions in the standard model are based on lattice assumptions. One caveat is that they are only known to be secure in the static corruption setting, where the adversary chooses the servers to corrupt at the very beginning of the game, before any evaluation query. In this work, we construct the first fully non-interactive adaptively secure DPRF in the standard model. Our construction is proved secure under the \(\mathsf {LWE}\) assumption against adversaries that may adaptively decide which servers they want to corrupt. We also extend our construction in order to achieve robustness against malicious adversaries.
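The DPRF syntax above can be sketched with a toy key-homomorphic construction (a folklore random-oracle-style design, not the paper's standard-model scheme; all parameters here are hypothetical and far too small for real security): \(F(SK, X) = H(X)^{SK}\) in a group of prime order q, with SK Shamir-shared so that t partial evaluations combine via Lagrange coefficients in the exponent.

```python
import hashlib
import random

q = 509           # toy prime group order (hypothetical, insecure size)
p = 2 * q + 1     # safe prime: quadratic residues mod p form a group of order q
random.seed(7)

def H(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)  # squaring maps into the order-q subgroup

def share_key(sk: int, t: int, n: int) -> dict:
    """Shamir-share sk over Z_q with threshold t among servers 1..n."""
    coeffs = [sk] + [random.randrange(q) for _ in range(t - 1)]
    f = lambda i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
    return {i: f(i) for i in range(1, n + 1)}

def partial_eval(share_i: int, x: bytes) -> int:
    return pow(H(x), share_i, p)  # server i's local evaluation

def lagrange_at_zero(ids, i) -> int:
    num, den = 1, 1
    for j in ids:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, q - 2, q) % q  # q is prime: invert via Fermat

def combine(partials: dict, x: bytes) -> int:
    """Reconstruct H(x)^sk from t partial evaluations, in the exponent."""
    ids = list(partials)
    out = 1
    for i in ids:
        out = out * pow(partials[i], lagrange_at_zero(ids, i), p) % p
    return out

# usage: 3-of-5 sharing; any 3 partial evaluations reconstruct F(sk, x)
sk = random.randrange(1, q)
shares = share_key(sk, 3, 5)
x = b"some input"
partials = {i: partial_eval(shares[i], x) for i in (1, 3, 5)}
assert combine(partials, x) == pow(H(x), sk, p)
```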

Book ChapterDOI
Charanjit S. Jutla, Arnab Roy
11 Nov 2018
TL;DR: It is shown that the single group element quasi-adaptive NIZK (QA-NIZK) of Jutla and Roy and Kiltz and Wee for linear subspaces can be easily extended to be computationally smooth.
Abstract: We introduce a novel notion of smooth (-verifier) non- interactive zero-knowledge proofs (NIZK) which parallels the familiar notion of smooth projective hash functions (SPHF). We also show that the single group element quasi-adaptive NIZK (QA-NIZK) of Jutla and Roy (CRYPTO 2014) and Kiltz and Wee (EuroCrypt 2015) for linear subspaces can be easily extended to be computationally smooth. One important distinction of the new notion from SPHFs is that in a smooth NIZK the public evaluation of the hash on a language member using the projection key does not require the witness of the language member, but instead just requires its NIZK proof.

Book ChapterDOI
11 Nov 2018
TL;DR: A general notion of certifiably injective doubly enhanced trapdoor functions (DECITDFs) is proposed, which provides a way of certifying that a given key defines an injective function over the domain defined by it, even when that domain is not efficiently recognizable and sampleable.
Abstract: The modeling of trapdoor permutations has evolved over the years. Indeed, finding an appropriate abstraction that bridges between the existing candidate constructions and the needs of applications has proved to be challenging. In particular, the notions of certifying permutations (Bellare and Yung, 96), enhanced and doubly enhanced trapdoor permutations (Goldreich, 04, 08, 11, Goldreich and Rothblum, 13) were added to bridge the gap between the modeling of trapdoor permutations and needs of applications. We identify an additional gap in the current abstraction of trapdoor permutations: Previous works implicitly assumed that it is easy to recognize elements in the domain, as well as uniformly sample from it, even for illegitimate function indices. We demonstrate this gap by using the (Bitansky-Paneth-Wichs, 16) doubly-enhanced trapdoor permutation family to instantiate the Feige-Lapidot-Shamir (FLS) paradigm for constructing non-interactive zero-knowledge (NIZK) protocols, and show that the resulting proof system is unsound. To close the gap, we propose a general notion of certifiably injective doubly enhanced trapdoor functions (DECITDFs), which provides a way of certifying that a given key defines an injective function over the domain defined by it, even when that domain is not efficiently recognizable and sampleable. We show that DECITDFs suffice for instantiating the FLS paradigm; more generally, we argue that certifiable injectivity is needed whenever the generation process of the function is not trusted. We then show two very different ways to construct DECITDFs: One is via the traditional method of RSA/Rabin with the Bellare-Yung certification mechanism, and the other using indistinguishability obfuscation and injective pseudorandom generators. In particular the latter is the first candidate injective trapdoor function, from assumptions other than factoring, that suffices for the FLS paradigm. 
Finally we observe that a similar gap appears also in other paths proposed in the literature for instantiating the FLS paradigm, specifically via verifiable pseudorandom generators and verifiable pseudorandom functions. Closing the gap there can be done in similar ways to the ones proposed here.

Book ChapterDOI
11 Nov 2018
TL;DR: A two-party coin-flipping protocol is \(\varepsilon \)-fair if no efficient adversary can bias the output of the honest party (who always outputs a bit, even if the other party aborts) by more than \(\varepsilon \).
Abstract: A two-party coin-flipping protocol is \(\varepsilon \)-fair if no efficient adversary can bias the output of the honest party (who always outputs a bit, even if the other party aborts) by more than \(\varepsilon \). Cleve [STOC ’86] showed that r-round o(1 / r)-fair coin-flipping protocols do not exist. Awerbuch et al. [Manuscript ’85] constructed a \(\varTheta (1/\sqrt{r})\)-fair coin-flipping protocol, assuming the existence of one-way functions. Moran et al. [Journal of Cryptology ’16] constructed an r-round coin-flipping protocol that is \(\varTheta (1/r)\)-fair (thus matching the aforementioned lower bound of Cleve [STOC ’86]), assuming the existence of oblivious transfer.
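A small simulation makes the fairness gap concrete (a hypothetical naive protocol, not any construction from the paper): P1 commits to a bit a, P2 sends a bit b, P1 opens a, and the output is a XOR b. A malicious P1 learns the outcome before opening and aborts whenever it is 1; the honest P2 must still output a bit, so it falls back to a fresh local coin, and the output is biased toward 0.

```python
import random

def one_run(rng: random.Random) -> int:
    a = rng.randrange(2)          # adversarial P1's committed bit
    b = rng.randrange(2)          # honest P2's bit
    if a ^ b == 1:                # P1 dislikes the outcome and aborts
        return rng.randrange(2)   # P2's fallback local coin
    return a ^ b                  # equals 0 on this branch

rng = random.Random(1)
freq = sum(one_run(rng) for _ in range(100_000)) / 100_000
# freq comes out close to 0.25 rather than 0.5: a constant bias of 1/4,
# far above the Theta(1/r) achievable by optimal r-round protocols.
```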

Book ChapterDOI
Kai-Min Chung, Yue Guo, Wei-Kai Lin, Rafael Pass, Elaine Shi
11 Nov 2018
TL;DR: It is well-understood that two-party coin toss is impossible if one of the parties can prematurely abort; further, this impossibility generalizes to multiple parties with a corrupt majority (even if the adversary is computationally bounded and fail-stop only).
Abstract: Coin toss has been extensively studied in the cryptography literature, and the well-accepted notion of fairness (henceforth called strong fairness) requires that a corrupt coalition cannot cause non-negligible bias. It is well-understood that two-party coin toss is impossible if one of the parties can prematurely abort; further, this impossibility generalizes to multiple parties with a corrupt majority (even if the adversary is computationally bounded and fail-stop only).

Book ChapterDOI
11 Nov 2018
TL;DR: The main result shows that the existence of any linear-preserving black-box reduction for basing the security of unique signatures on some bounded-round assumption implies that the assumption can be broken in polynomial time.
Abstract: We consider the question of whether the security of unique digital signature schemes can be based on game-based cryptographic assumptions using linear-preserving black-box security reductions—that is, black-box reductions for which the security loss (i.e., the ratio between the “work” of the adversary and the “work” of the reduction) is some a priori bounded polynomial. A seminal result by Coron (Eurocrypt ’02) shows limitations of such reductions; however, his impossibility result and its subsequent extensions all suffer from two notable restrictions: (1) they only rule out so-called “simple” reductions, where the reduction is restricted to only sequentially invoke “straight-line” instances of the adversary; and (2) they only rule out reductions to non-interactive (two-round) assumptions. In this work, we present the first full impossibility result: our main result shows that the existence of any linear-preserving black-box reduction for basing the security of unique signatures on some bounded-round assumption implies that the assumption can be broken in polynomial time.

Book ChapterDOI
11 Nov 2018
TL;DR: Information-theoretic secret-key agreement between two parties Alice and Bob is a well-studied problem that is provably impossible in a plain model with public (authenticated) communication, but is known to be possible in a model where the parties also have access to some correlated randomness.
Abstract: Information-theoretic secret-key agreement between two parties Alice and Bob is a well-studied problem that is provably impossible in a plain model with public (authenticated) communication, but is known to be possible in a model where the parties also have access to some correlated randomness. One particular type of such correlated randomness is the so-called satellite setting, where uniform random bits (e.g., sent by a satellite) are received by the parties and the adversary Eve over inherently noisy channels. The antenna size determines the error probability, and the antenna is the adversary’s limiting resource much as computing power is the limiting resource in traditional complexity-based security. The natural assumption about the adversary is that her antenna is at most Q times larger than both Alice’s and Bob’s antenna, where, to be realistic, Q can be very large.

Book ChapterDOI
11 Nov 2018
TL;DR: A commit-and-prove protocol allows a prover to commit to a value and prove a statement about this value while guaranteeing that the committed value remains hidden while only making black-box use of cryptography.
Abstract: Motivated by theoretical and practical considerations, an important line of research is to design secure computation protocols that only make black-box use of cryptography. An important component in nearly all the black-box secure computation constructions is a black-box commit-and-prove protocol. A commit-and-prove protocol allows a prover to commit to a value and prove a statement about this value while guaranteeing that the committed value remains hidden. A black-box commit-and-prove protocol implements this functionality while only making black-box use of cryptography.

Book ChapterDOI
11 Nov 2018
TL;DR: With this framework, one can argue purely classically about the quantum-security of hash functions; this is in contrast to previous proofs which are in terms of sophisticated quantum-information-theoretic and quantum-algorithmic reasoning.
Abstract: Hash functions are of fundamental importance in theoretical and in practical cryptography, and with the threat of quantum computers possibly emerging in the future, it is an urgent objective to understand the security of hash functions in the light of potential future quantum attacks. To this end, we reconsider the collapsing property of hash functions, as introduced by Unruh, which replaces the notion of collision resistance when considering quantum attacks. Our contribution is a formalism and a framework that offers significantly simpler proofs for the collapsing property of hash functions. With our framework, we can prove the collapsing property for hash domain extension constructions entirely by means of decomposing the iteration function into suitable elementary composition operations. In particular, given our framework, one can argue purely classically about the quantum-security of hash functions; this is in contrast to previous proofs which are in terms of sophisticated quantum-information-theoretic and quantum-algorithmic reasoning.

Book ChapterDOI
01 Dec 2018
TL;DR: In the most general form of secret sharing, the access structure can be any monotone language, and any qualified subset of parties can reconstruct the secret, but no unqualified subset can learn anything about the secret.
Abstract: Secret sharing is a mechanism by which a trusted dealer holding a secret “splits” the secret into many “shares” and distributes the shares to a collection of parties. Associated with the sharing is a monotone access structure, that specifies which parties are “qualified” and which are not: any qualified subset of parties can (efficiently) reconstruct the secret, but no unqualified subset can learn anything about the secret. In the most general form of secret sharing, the access structure can be any monotone \({\mathsf {NP}}\) language.
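The monotonicity condition on access structures can be made concrete with a short sketch (illustrative names only): describe the structure by its minimal qualified sets; a subset is qualified iff it contains one of them, so every superset of a qualified set is automatically qualified.

```python
# A monotone access structure given by its minimal qualified sets.
def qualified(subset, minimal_sets) -> bool:
    return any(m <= set(subset) for m in minimal_sets)

minimal = [{"A", "B"}, {"B", "C", "D"}]
assert qualified({"A", "B"}, minimal)
assert qualified({"A", "B", "E"}, minimal)      # monotonicity: supersets qualify
assert not qualified({"A", "C", "D"}, minimal)  # contains no minimal qualified set
```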

Book ChapterDOI
11 Nov 2018
TL;DR: In this paper, a new notion of information-theoretic security against a semi-honest adversary is proposed, which is strictly stronger than the standard one, and which they argue to be the best possible.
Abstract: We reconsider the security guarantee that can be achieved by general protocols for secure multiparty computation in the most basic of settings: information-theoretic security against a semi-honest adversary. Since the 1980s, we have elegant solutions to this problem that offer full security, as long as the adversary controls a minority of the parties, but fail completely when that threshold is crossed. In this work, we revisit this problem, questioning the optimality of the standard notion of security. We put forward a new notion of information-theoretic security which is strictly stronger than the standard one, and which we argue to be “best possible.” This notion still requires full security against dishonest minority in the usual sense, and adds a meaningful notion of information-theoretic security even against dishonest majority.

Book ChapterDOI
11 Nov 2018
TL;DR: Byzantine broadcast is a fundamental primitive for secure computation in a setting with n parties in the presence of an adversary controlling at most t parties; while a lot of progress in optimizing communication complexity has been made for \(t < n/2\), little progress has been made for the general case \(t
Abstract: Byzantine broadcast is a fundamental primitive for secure computation. In a setting with n parties in the presence of an adversary controlling at most t parties, while a lot of progress in optimizing communication complexity has been made for \(t < n/2\), little progress has been made for the general case \(t