
Showing papers in "IACR Cryptology ePrint Archive in 2015"


Posted Content
TL;DR: This work documents the experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind.
Abstract: We document our experiences in teaching smart contract programming to undergraduate students at the University of Maryland, the first pedagogical attempt of its kind. Since smart contracts deal directly with the movement of valuable currency units between contractual parties, security of a contract program is of paramount importance. Our lab exposed numerous common pitfalls in designing safe and secure smart contracts. We document several typical classes of mistakes students made, suggest ways to fix/avoid them, and advocate best practices for programming smart contracts. Finally, our pedagogical efforts have also resulted in online open course materials for programming smart contracts, which may be of independent interest to the community.

375 citations


Posted Content
TL;DR: This survey covers lattice-based cryptography, the use of conjectured hard problems on point lattices in R^n as the foundation for secure cryptographic systems, focusing on the SIS and LWE problems, their provable hardness, and their cryptographic applications.
Abstract: Lattice-based cryptography is the use of conjectured hard problems on point lattices in R^n as the foundation for secure cryptographic systems. Attractive features of lattice cryptography include apparent resistance to quantum attacks (in contrast with most number-theoretic cryptography), high asymptotic efficiency and parallelism, security under worst-case intractability assumptions, and solutions to long-standing open problems in cryptography. This work surveys most of the major developments in lattice cryptography over the past ten years. The main focus is on the foundational short integer solution (SIS) and learning with errors (LWE) problems (and their more efficient ring-based variants), their provable hardness assuming the worst-case intractability of standard lattice problems, and their many cryptographic applications.

330 citations


Posted Content
TL;DR: In this article, the authors present hardness results for concrete instances of LWE and give concrete estimates for various families of instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
Abstract: The Learning with Errors (LWE) problem has become a central building block of modern cryptographic constructions. This work collects and presents hardness results for concrete instances of LWE. In particular, we discuss algorithms proposed in the literature and give the expected resources required to run them. We consider both generic instances of LWE as well as small secret variants. Since for several methods of solving LWE we require a lattice reduction step, we also review lattice reduction algorithms and use a refined model for estimating their running times. We also give concrete estimates for various families of LWE instances, provide a Sage module for computing these estimates and highlight gaps in the knowledge about algorithms for solving the Learning with Errors problem.
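The abstract's running-time estimates hinge on models of lattice-reduction strength. As an illustrative sketch (function names and the use of Chen's asymptotic model are my own choices, not the paper's exact methodology), one can relate a BKZ blocksize to the root-Hermite factor it achieves and search for the smallest blocksize reaching a target factor:

```python
import math

def delta_from_blocksize(beta: int) -> float:
    """Root-Hermite factor achieved by BKZ with blocksize beta
    (Chen's asymptotic model, a common choice in LWE estimators)."""
    return (beta / (2 * math.pi * math.e)
            * (math.pi * beta) ** (1.0 / beta)) ** (1.0 / (2 * (beta - 1)))

def blocksize_for_delta(target: float) -> int:
    """Smallest blocksize whose predicted root-Hermite factor is at
    most the target (linear scan, adequate for illustration)."""
    beta = 50
    while delta_from_blocksize(beta) > target:
        beta += 1
    return beta
```

Larger blocksizes give a smaller factor but exponentially higher running time; quantifying that trade-off with a refined model is exactly the lattice-reduction step the paper reviews.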

295 citations


Posted Content
TL;DR: In this paper, the authors show that the selfish mining attack is not (in general) optimal and show how a miner can further amplify its gain by non-trivially composing mining attacks with network-level eclipse attacks.
Abstract: Selfish mining, originally discovered by Eyal et al. [9], is a well-known attack where a selfish miner, under certain conditions, can gain a disproportionate share of reward by deviating from the honest behavior. In this paper, we expand the mining strategy space to include novel "stubborn" strategies that, for a large range of parameters, earn the miner more revenue. Consequently, we show that the selfish mining attack is not (in general) optimal. Further, we show how a miner can further amplify its gain by non-trivially composing mining attacks with network-level eclipse attacks. We show, surprisingly, that given the attacker's best strategy, in some cases victims of an eclipse attack can actually benefit from being eclipsed!
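For context, the relative revenue of the original selfish-mining strategy of Eyal et al. (the baseline the stubborn strategies outperform for a large range of parameters) has a closed form in the attacker's hashrate share alpha and the fraction gamma of honest miners that mine on the attacker's block during a tie. A quick Python check of that baseline, under the assumption that the published formula is transcribed correctly:

```python
def selfish_revenue(alpha: float, gamma: float) -> float:
    """Relative revenue of Eyal-Sirer selfish mining.
    alpha = attacker's hashrate share, gamma = tie-breaking fraction."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den
```

At gamma = 0 the profitability threshold is alpha = 1/3: at exactly one third of the hashrate, selfish mining earns exactly its fair share, and above it the strategy beats honest mining.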

262 citations


Posted Content
TL;DR: The U.S. National Security Agency developed the Simon and Speck families of lightweight block ciphers as an aid for securing applications in very constrained environments where AES may not be suitable.
Abstract: The U.S. National Security Agency (NSA) developed the Simon and Speck families of lightweight block ciphers as an aid for securing applications in very constrained environments where AES may not be suitable. This paper summarizes the algorithms and their design rationale, along with current cryptanalysis and implementation results.
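To give a concrete feel for how compact these ciphers are, here is a complete Python implementation of the smallest variant, Speck32/64 (16-bit words, 22 rounds, rotation amounts 7 and 2), checked against the test vector from the design paper:

```python
def speck32_64_encrypt(pt, key):
    """Speck32/64 encryption. pt = (x, y) and key = (l2, l1, l0, k0),
    all 16-bit words in the design paper's printing order."""
    MASK = 0xFFFF
    ror = lambda v, r: ((v >> r) | (v << (16 - r))) & MASK
    rol = lambda v, r: ((v << r) | (v >> (16 - r))) & MASK

    # Key schedule: expand (k0, l0, l1, l2) into 22 round keys.
    l = [key[2], key[1], key[0]]          # l0, l1, l2
    k = [key[3]]                          # k0
    for i in range(21):
        l.append(((k[i] + ror(l[i], 7)) & MASK) ^ i)
        k.append(rol(k[i], 2) ^ l[i + 3])

    # 22 rounds: add-rotate-xor, no S-boxes.
    x, y = pt
    for rk in k:
        x = ((ror(x, 7) + y) & MASK) ^ rk
        y = rol(y, 2) ^ x
    return x, y
```

The whole cipher is a handful of add, rotate, and XOR operations per round, which is what makes these designs attractive on constrained microcontrollers.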

259 citations


Posted Content
TL;DR: This work identifies three key components of Bitcoin's design that can be decoupled, and maps the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools.

179 citations


Posted Content
TL;DR: In this paper, the authors introduce a new language that allows application developers to program secure computations without being experts in cryptography, while enabling programmers to create abstractions such as oblivious RAM and width-limited integers, or even new protocols without needing to modify the compiler.
Abstract: Many techniques for secure or private execution depend on executing programs in a data-oblivious way, where the same instructions execute independently of the private inputs, which are kept in encrypted form throughout the computation. Designers of such computations today must either put substantial effort into constructing a circuit representation of their algorithm, or use a high-level language and lose the opportunity to make important optimizations or experiment with protocol variations. We show how extensibility can be improved by judiciously exposing the nature of data-oblivious computation. We introduce a new language that allows application developers to program secure computations without being experts in cryptography, while enabling programmers to create abstractions such as oblivious RAM and width-limited integers, or even new protocols, without needing to modify the compiler. This paper explains the key language features that safely enable such extensibility and describes the simple implementation approach we use to ensure security properties are preserved.
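The essence of data-oblivious computation can be seen in a branch-free multiplexer: the same instructions run whichever input is selected, so nothing about the condition leaks through the control flow. A minimal sketch (the function name is mine, not from the paper's language):

```python
def oblivious_select(bit: int, a: int, b: int) -> int:
    """Return a if bit == 1 else b, executing identical instructions
    either way. bit must be 0 or 1."""
    mask = -bit          # Python ints: -1 acts as an all-ones bitmask
    return (a & mask) | (b & ~mask)
```

A conventional `if bit: return a else: return b` takes different paths depending on the secret; the masked version above is one instance of the circuit-style computation the paper's language is designed to express.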

146 citations


Posted Content
TL;DR: In this article, the authors proposed encrypting all genomic data in the database under a homomorphic encryption scheme and showed how basic genomic algorithms such as the Pearson Goodness-of-Fit test, the D′ and r^2 measures of linkage disequilibrium, the Expectation Maximization (EM) algorithm for haplotyping, and the Cochran-Armitage Test for Trend can work on encrypted data.
Abstract: A number of databases around the world currently host a wealth of genomic data that is invaluable to researchers conducting a variety of genomic studies. However, patients who volunteer their genomic data run the risk of privacy invasion. In this work, we give a cryptographic solution to this problem: to maintain patient privacy, we propose encrypting all genomic data in the database. To allow meaningful computation on the encrypted data, we propose using a homomorphic encryption scheme. Specifically, we take basic genomic algorithms which are commonly used in genetic association studies and show how they can be made to work on encrypted genotype and phenotype data. In particular, we consider the Pearson Goodness-of-Fit test, the D′ and r^2 measures of linkage disequilibrium, the Expectation Maximization (EM) algorithm for haplotyping, and the Cochran-Armitage Test for Trend. We also provide performance numbers for running these algorithms on encrypted data.
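As a plaintext reference for one of the statistics named above, the Pearson goodness-of-fit statistic on genotype counts is a very short computation; the paper's contribution is evaluating formulas like this on homomorphically encrypted counts rather than on cleartext as in this sketch:

```python
def pearson_goodness_of_fit(observed, expected):
    """Chi-square statistic: sum over categories of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Because the statistic is built from additions, subtractions, and multiplications of counts (division by known expected values can be deferred), it maps naturally onto a somewhat-homomorphic scheme.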

141 citations


Posted Content
TL;DR: This paper identifies the verifier's dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains, and proposes a resolution that incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify.
Abstract: Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires nontrivial computation effort, practical attacks exist which either waste miners’ computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the verifier’s dilemma, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier’s dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally, we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.

141 citations


Posted Content
TL;DR: Fully homomorphic encryption (FHE) has been dubbed the holy grail of cryptography, an elusive goal which could solve the IT world's problems of security and trust.
Abstract: Fully homomorphic encryption (FHE) has been dubbed the holy grail of cryptography, an elusive goal which could solve the IT world’s problems of security and trust. Research in the area exploded after 2009 when Craig Gentry showed that FHE can be realised in principle. Since that time considerable progress has been made in finding more practical and more efficient solutions. Whilst research quickly developed, terminology and concepts became diverse and confusing so that today it can be difficult to understand what the achievements of different works actually are. The purpose of this paper is to address three fundamental questions: What is FHE? What can FHE be used for? What is the state of FHE today? As well as surveying the field, we clarify different terminology in use and prove connections between different FHE notions.
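A minimal way to see a (partial) homomorphism in action is textbook RSA, which is multiplicatively homomorphic: multiplying ciphertexts multiplies the underlying plaintexts. This toy uses insecure demonstration parameters and is emphatically not FHE, only a taste of the "compute on encrypted data" idea the survey formalizes:

```python
# Toy, insecure RSA parameters for demonstration only.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

# Multiplying ciphertexts multiplies the plaintexts "inside" the encryption:
product = decrypt(encrypt(7) * encrypt(6) % n)   # recovers 7 * 6 = 42
```

Fully homomorphic encryption demands both addition and multiplication on ciphertexts, for arbitrarily many operations; achieving that is what Gentry showed possible in 2009 and what the surveyed work makes progressively more practical.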

136 citations


Posted Content
TL;DR: This paper analyzes three methods to detect cache-based side-channel attacks in real time, preventing or limiting the amount of leaked information; two of the three methods are based on machine learning techniques, and all of them can successfully detect an attacker in about one fifth of the time required to complete the attack.
Abstract: In this paper we analyze three methods to detect cache-based side-channel attacks in real time, preventing or limiting the amount of leaked information. Two of the three methods are based on machine learning techniques, and all three of them can successfully detect an attacker in about one fifth of the time required to complete the attack. There were no false positives in our test environment. Moreover, we could not measure a change in the execution time of the processes involved in the attack, meaning there is no perceivable overhead. We also analyze how the detection systems behave with a modified version of one of the spy processes. With some optimization, we are confident these systems can be used in real world scenarios.

Posted Content
TL;DR: The SSE protocol is extended, adding support for range, substring, wildcard, and phrase queries, in addition to the Boolean queries supported in the original protocol, to extend the performance results of [Cash et al., NDSS’14] to these rich and comprehensive query types.
Abstract: We extend the searchable symmetric encryption (SSE) protocol of [Cash et al., Crypto’13] adding support for range, substring, wildcard, and phrase queries, in addition to the Boolean queries supported in the original protocol. Our techniques apply to the basic single-client scenario underlying the common SSE setting as well as to the more complex Multi-Client and Outsourced Symmetric PIR extensions of [Jarecki et al., CCS’13]. We provide performance information based on our prototype implementation, showing the practicality and scalability of our techniques to very large databases, thus extending the performance results of [Cash et al., NDSS’14] to these rich and comprehensive query types.

Posted Content
TL;DR: This work gives a rigorous proof that the log-unit lattice of the ring is indeed efficiently decodable for any cyclotomic of prime-power index, which yields a quantum polynomial-time algorithm for the approximate shortest vector problem on principal ideal lattices.
Abstract: A handful of recent cryptographic proposals rely on the conjectured hardness of the following problem in the ring of integers of a cyclotomic number field: given a basis of a principal ideal that is guaranteed to have a “rather short” generator, find such a generator. Recently, Bernstein and Campbell-Groves-Shepherd sketched potential attacks against this problem; most notably, the latter authors claimed a polynomial-time quantum algorithm. (Alternatively, replacing the quantum component with an algorithm of Biasse and Fieker would yield a classical subexponential-time algorithm.) A key claim of Campbell et al. is that one step of their algorithm—namely, decoding the log-unit lattice of the ring to recover a short generator from an arbitrary one—is classically efficient (whereas the standard approach on general lattices takes exponential time). However, very few convincing details were provided to substantiate this claim. In this work, we clarify the situation by giving a rigorous proof that the log-unit lattice is indeed efficiently decodable, for any cyclotomic of prime-power index. Combining this with the quantum algorithm from a recent work of Biasse and Song confirms the main claim of Campbell et al. Our proof consists of two main technical contributions: the first is a geometrical analysis, using tools from analytic number theory, of the standard generators of the group of cyclotomic units. The second shows that for a wide class of typical distributions of the short generator, a standard lattice-decoding algorithm can recover it, given any generator. By extending our geometrical analysis, as a second main contribution we obtain an efficient algorithm that, given any generator of a principal ideal (in a prime-power cyclotomic), finds a 2^(Õ(√n))-approximate shortest vector in the ideal. Combining this with the result of Biasse and Song yields a quantum polynomial-time algorithm for the 2^(Õ(√n))-approximate Shortest Vector Problem on principal ideal lattices.
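The "efficiently decodable" claim reduces to the standard round-off decoder: write the target as t = v + e, where v lies in the log-unit lattice (spanned by logs of cyclotomic units b_j) and e = Log g is the contribution of the short generator. In LaTeX, the standard success condition reads as follows; the paper's technical work is showing the dual vectors of the cyclotomic-unit basis are short enough that typical distributions of g satisfy it:

```latex
v' \;=\; \sum_j \Big\lfloor \langle b_j^{\vee},\, t \rangle \Big\rceil\, b_j
\quad\text{satisfies}\quad v' = v
\quad\text{whenever}\quad
\big|\langle b_j^{\vee},\, e \rangle\big| < \tfrac{1}{2}\ \text{for all } j,
```

where $\{b_j^{\vee}\}$ is the dual basis and $\lfloor \cdot \rceil$ denotes rounding to the nearest integer. Bounding $\|b_j^{\vee}\|$ is where the analytic number theory of the cyclotomic units enters.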

Posted Content
TL;DR: This paper shows that the method proposed at CHES 2010 to do mask refreshing introduces a security flaw in the overall masking scheme, and proposes a new solution which avoids the use of mask refreshing, and proves its security.
Abstract: Masking is a widely used countermeasure to protect block cipher implementations against side-channel attacks. The principle is to split every sensitive intermediate variable occurring in the computation into d + 1 shares, where d is called the masking order and plays the role of a security parameter. A masked implementation is then said to achieve d-order security if any set of d (or less) intermediate variables does not reveal key-dependent information. At CHES 2010, Rivain and Prouff proposed a higher-order masking scheme for AES that works for any arbitrary order d. This scheme, and its subsequent extensions, are based on an improved version of the shared multiplication processing published by Ishai et al. at CRYPTO 2003. This improvement enables better memory/timing performances but its security relies on the refreshing of the masks at some points in the algorithm. In this paper, we show that the method proposed at CHES 2010 to do such mask refreshing introduces a security flaw in the overall masking scheme. Specifically, we show that it is vulnerable to an attack of order ⌈d/2⌉ + 1, whereas the scheme is supposed to achieve d-order security. After exhibiting and analyzing the flaw, we propose a new solution which avoids the use of mask refreshing, and we prove its security. We also provide an implementation trick that makes our proposed solution not only secure, but also faster than the original scheme.
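To ground the terminology: d-order Boolean masking splits a sensitive byte into d + 1 shares whose XOR equals the secret, so any d of them are jointly uniform. A minimal Python sketch of sharing and recombination (illustrative only, not the CHES 2010 multiplication or refresh gadgets the paper analyzes):

```python
import secrets

def share(x: int, d: int) -> list:
    """Split byte x into d + 1 shares: d uniformly random bytes plus
    one correction share so the XOR of all shares equals x."""
    shares = [secrets.randbelow(256) for _ in range(d)]
    correction = x
    for s in shares:
        correction ^= s
    return shares + [correction]

def unshare(shares: list) -> int:
    """Recombine by XORing all shares together."""
    x = 0
    for s in shares:
        x ^= s
    return x
```

The hard part, and where the paper's flaw lives, is not sharing itself but computing nonlinear operations (like the AES S-box) on shares while keeping every d-tuple of intermediates independent of the secret.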

Posted Content
TL;DR: In this article, the authors formalize the use of Bitcoin as a source of publicly verifiable randomness and show that any attack on this beacon would form an attack on Bitcoin itself and hence have a monetary cost that they can bound, unlike any other construction for a public randomness beacon in the literature.
Abstract: We formalize the use of Bitcoin as a source of publicly verifiable randomness. As a side-effect of Bitcoin’s proof-of-work-based consensus system, random values are broadcast every time new blocks are mined. We can derive strong lower bounds on the computational min-entropy in each block: currently, at least 68 bits of min-entropy are produced every 10 minutes, from which one can derive over 32 near-uniform bits using standard extractor techniques. We show that any attack on this beacon would form an attack on Bitcoin itself and hence have a monetary cost that we can bound, unlike any other construction for a public randomness beacon in the literature. In our simplest construction, we show that a lottery producing a single unbiased bit is manipulation-resistant against an attacker with a stake of less than 50 bitcoins in the output, or about US$12,000 today. Finally, we propose making the beacon output available to smart contracts and demonstrate that this simple tool enables a number of interesting applications.
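The extraction step ("over 32 near-uniform bits using standard extractor techniques") can be pictured with a simple hash-as-extractor sketch. The paper's min-entropy analysis is what justifies the output length; everything below is an illustrative assumption, not the paper's exact extractor:

```python
import hashlib

def beacon_output(block_header: bytes, n_bits: int = 32) -> int:
    """Derive n_bits of beacon output from a mined block header by
    hashing (the hash modeled as an extractor). Determinism means
    every party recomputes the same value from public chain data."""
    digest = hashlib.sha256(block_header).digest()
    return int.from_bytes(digest, "big") >> (256 - n_bits)
```

The security argument is economic rather than purely cryptographic: a miner who wants to bias the output must discard valid blocks, forfeiting block rewards, which is the bounded monetary cost the abstract refers to.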

Posted Content
TL;DR: In this article, the authors proposed a new signature scheme with the same features as the Camenisch-Lysyanskaya (CL) signature scheme but without the linear-size drawback.
Abstract: Digital signature is a fundamental primitive with numerous applications. Following the development of pairing-based cryptography, several schemes taking advantage of this setting have been proposed. Among them, the Camenisch-Lysyanskaya (CL) signature scheme is one of the most flexible and has been used as a building block for many other protocols. Unfortunately, this scheme suffers from a size linear in the number of messages to be signed, which limits its use in many situations. In this paper, we propose a new signature scheme with the same features as CL-signatures but without the linear-size drawback: our signature consists of only two elements, whatever the message length, and our algorithms are more efficient. This construction takes advantage of type 3 pairings, which are already widely used for security and efficiency reasons. We prove the security of our scheme without random oracles but in the generic group model. Finally, we show that protocols using CL-signatures can easily be instantiated with ours, leading to much more efficient constructions.
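For flavor, the two-element shape described above (the construction later known as Pointcheval-Sanders signatures) can be sketched for a single message m in a type 3 setting with pairing e: G1 × G2 → GT. This is a reconstruction from the abstract's description and may differ from the paper in details:

```latex
% Keys: sk = (x, y); pk = (\tilde{g},\ \tilde{X} = \tilde{g}^{x},\ \tilde{Y} = \tilde{g}^{y}) in G_2.
% Sign(m): pick random h \in G_1 \setminus \{1\} and output
\sigma = \bigl(h,\; h^{\,x + y \cdot m}\bigr).
% Verify(m, (\sigma_1, \sigma_2)): accept iff \sigma_1 \neq 1 and
e\bigl(\sigma_1,\; \tilde{X}\,\tilde{Y}^{\,m}\bigr) \;=\; e\bigl(\sigma_2,\; \tilde{g}\bigr).
```

Both signature components live in G1 regardless of how many messages are signed (the secret key simply gains one y_i per message block), which is exactly the constant-size property contrasted with CL signatures.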


Posted Content
TL;DR: Hawk is a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public's view, and is the first to formalize the blockchain model of cryptography.
Abstract: Emerging smart contract systems over decentralized cryptocurrencies allow mutually distrustful parties to transact safely without trusted third parties. In the event of contractual breaches or aborts, the decentralized blockchain ensures that honest parties obtain commensurate compensation. Existing systems, however, lack transactional privacy. All transactions, including flow of money between pseudonyms and amount transacted, are exposed on the blockchain. We present Hawk, a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public’s view. A Hawk programmer can write a private smart contract in an intuitive manner without having to implement cryptography, and our compiler automatically generates an efficient cryptographic protocol where contractual parties interact with the blockchain, using cryptographic primitives such as zero-knowledge proofs. To formally define and reason about the security of our protocols, we are the first to formalize the blockchain model of cryptography. The formal modeling is of independent interest. We advocate that the community adopt such a formal model when designing applications atop decentralized blockchains.

Posted Content
TL;DR: In this article, the authors introduce an open framework for the benchmarking of lightweight block ciphers on a multitude of embedded platforms, allowing a user to define a custom "figure of merit" according to which all evaluated candidates can be ranked.
Abstract: In this paper we introduce an open framework for the benchmarking of lightweight block ciphers on a multitude of embedded platforms. Our framework is able to evaluate execution time, RAM footprint, as well as (binary) code size, and allows a user to define a custom “figure of merit” according to which all evaluated candidates can be ranked. We used the framework to benchmark various implementations of 13 lightweight ciphers, namely AES, Fantomas, HIGHT, LBlock, LED, Piccolo, PRESENT, PRINCE, RC5, Robin, Simon, Speck, and TWINE, on three different platforms: 8-bit ATmega, 16-bit MSP430, and 32-bit ARM. Our results give new insights to the question of how well these ciphers are suited to secure the Internet of Things (IoT). The benchmarking framework provides cipher designers with a tool to compare new algorithms with the state-of-the-art and allows standardization bodies to conduct a fair and comprehensive evaluation of a large number of candidates.
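A custom "figure of merit" of the kind described can be as simple as a weighted combination of normalized metrics. In the sketch below, the normalization rule and weights are illustrative choices of mine, not the framework's actual definition:

```python
def figure_of_merit(metrics: dict, weights: dict) -> dict:
    """metrics: {cipher: {"time": t, "ram": r, "code": c}}, lower is
    better for every metric. Normalize each metric by the best value
    across candidates, then combine with user-chosen weights; a lower
    figure of merit ranks a cipher higher."""
    best = {m: min(vals[m] for vals in metrics.values()) for m in weights}
    return {
        cipher: sum(weights[m] * vals[m] / best[m] for m in weights)
        for cipher, vals in metrics.items()
    }
```

Because the weights are user-defined, the same benchmark data can rank candidates differently for a RAM-starved sensor node than for a speed-critical gateway, which is the point of making the figure of merit configurable.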


Posted Content
TL;DR: This work shows that co-location can be achieved and detected by monitoring the last level cache in public clouds, and presents a full-fledged attack that exploits subtle leakages to recover RSA decryption keys from a co-located instance.
Abstract: It has been six years since Ristenpart et al. [29] demonstrated the viability of co-location and provided the first concrete evidence for sensitive information leakage on a commercial cloud. We show that co-location can be achieved and detected by monitoring the last level cache in public clouds. More significantly, we present a full-fledged attack that exploits subtle leakages to recover RSA decryption keys from a co-located instance. We target a recently patched Libgcrypt RSA implementation by mounting Cross-VM Prime and Probe cache attacks in combination with other tests to detect co-location in Amazon EC2. In a preparatory step, we reverse engineer the unpublished nonlinear slice selection function for the 10-core Intel Xeon processor used in Amazon EC2, which significantly accelerates our attack. After co-location is detected and verified, we perform the Prime and Probe attack to recover noisy keys from a carefully monitored Amazon EC2 VM running the aforementioned vulnerable Libgcrypt library. We subsequently process the noisy data and obtain the complete 2048-bit RSA key used during encryption. This work reaffirms the privacy concerns and underlines the need for deploying stronger isolation techniques in public clouds.

Posted Content
TL;DR: GRECS (GRaph EnCryption for approximate Shortest distance queries) comprises three graph encryption schemes that are provably secure against any semi-honest server; the third scheme is both computationally efficient and achieves optimal communication complexity at the cost of a small amount of additional leakage.
Abstract: We propose graph encryption schemes that efficiently support approximate shortest distance queries on large-scale encrypted graphs. Shortest distance queries are one of the most fundamental graph operations and have a wide range of applications. Using such graph encryption schemes, a client can outsource large-scale privacy-sensitive graphs to an untrusted server without losing the ability to query it. Other applications include encrypted graph databases and controlled disclosure systems. We propose GRECS (stands for GRaph EnCryption for approximate Shortest distance queries) which includes three schemes that are provably secure against any semi-honest server. Our first construction makes use of only symmetric-key operations, resulting in a computationally-efficient construction. Our second scheme, makes use of somewhat-homomorphic encryption and is less computationally-efficient but achieves optimal communication complexity (i.e., uses a minimal amount of bandwidth). Finally, our third scheme is both computationally-efficient and achieves optimal communication complexity at the cost of a small amount of additional leakage. We implemented and evaluated the efficiency of our constructions experimentally. The experiments demonstrate that our schemes are efficient and can be applied to graphs that scale up to 1.6 million nodes and 11 million edges.

Posted Content
TL;DR: This paper designs SCP, a blockchain Byzantine consensus protocol whose throughput scales nearly linearly with computation: the more computing power available, the more blocks are selected per unit time.
Abstract: In this paper, we design a new blockchain Byzantine consensus protocol SCP where the throughput scales nearly linearly with the computation: the more computing power available, the more blocks selected per unit time. SCP is also efficient in that the number of messages it requires is nearly linear in the network size. The computational scalability property offers the flexibility to tune bandwidth consumption by adjusting computational parameters (e.g., proof-of-work difficulty). The key ideas lie in securely establishing identities for network participants, randomly placing them in several committees and running a classical consensus protocol within each committee to propose blocks in parallel. We further design a mechanism to allow reaching consensus on blocks without broadcasting actual block data, while still enabling efficient block verification. We prove that our protocol is secure, efficient and applicable to several case studies. We conduct scalability experiments on Amazon EC2 with up to 80 cores, and confirm that SCP matches its theoretical scaling properties.
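The committee-placement idea can be pictured as hashing an established identity to a committee index: placement is deterministic and publicly checkable, yet unpredictable before the identity (e.g., a proof-of-work solution) is fixed. The function below is an illustrative assumption, not SCP's actual assignment rule:

```python
import hashlib

def committee_of(identity: bytes, num_committees: int) -> int:
    """Map an established identity to one of num_committees
    committees via a hash, approximating a random placement."""
    h = hashlib.sha256(identity).digest()
    return int.from_bytes(h, "big") % num_committees
```

With identities spread roughly evenly, each committee runs a classical Byzantine consensus protocol in parallel, which is where the near-linear throughput scaling comes from.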

Posted Content
TL;DR: In this paper, a method for generating parameter sets and calculating security estimates for NTRUEncrypt is described, and an analysis is provided for the standardized product-form parameter sets from IEEE 1363.1-2008.
Abstract: We describe a method for generating parameter sets and calculating security estimates for NTRUEncrypt. Analyses are provided for the standardized product-form parameter sets from IEEE 1363.1-2008 and for the NTRU Challenge parameter sets.

Posted Content
TL;DR: Differential Fault Intensity Analysis (DFIA) combines the principles of Differential Power Analysis and fault injection: biased fault behavior is processed as side-channel information, revealing the secret key through hypothesis testing without requiring precise analysis of the fault propagation.
Abstract: Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis including noise averaging and hypothesis testing by correlation. This paper introduces Differential Fault Intensity Analysis, which combines the principles of Differential Power Analysis and fault injection. We observe that most faults are biased - such as single-bit, two-bit, or three-bit errors in a byte - and that this property can reveal the secret key through a hypothesis test. Unlike Differential Fault Analysis, we do not require precise analysis of the fault propagation. Unlike Fault Sensitivity Analysis, we do not require a fault sensitivity profile for the device under attack. We demonstrate our method on an FPGA implementation of AES with a fault injection model. We find that with an average of 7 fault injections, we can reconstruct a full 128-bit AES key.

Posted Content
TL;DR: A new generic framework for achieving fully secure attribute based encryption (ABE) in prime-order bilinear groups is proposed and the first generic implication from ABE for monotone span programs to ABE for branching programs is established.
Abstract: We propose a new generic framework for achieving fully secure attribute based encryption (ABE) in prime-order bilinear groups. It is generic in the sense that it can be applied to ABE for arbitrary predicates. All previously available frameworks that are generic in this sense are given only in composite-order bilinear groups, whose operations are known to be much less efficient than in prime-order ones for the same security level. These consist of the frameworks by Wee (TCC’14) and Attrapadung (Eurocrypt’14). Both provide abstractions of dual-system encryption techniques introduced by Waters (Crypto’09). Our framework can be considered as a prime-order version of Attrapadung’s framework and works in a similar manner: it relies on a main component called pair encodings, and it generically compiles any secure pair encoding scheme for a predicate in consideration to a fully secure ABE scheme for that predicate. One feature of our new compiler is that although the resulting ABE schemes will be newly defined in prime-order groups, we require essentially the same security notions of pair encodings as before. Besides the security of pair encodings, our framework assumes only the Matrix Diffie-Hellman assumption (Escala et al., Crypto’13), which is a weak assumption that includes the Decisional Linear assumption as a special case. As for its applications, we can plug in available pair encoding schemes and automatically obtain the first fully secure ABE realizations in prime-order groups for predicates for which only fully secure schemes in composite-order groups were known. These include ABE for regular languages, ABE for monotone span programs (and hence Boolean formulae) with short ciphertexts or keys, and completely unbounded ABE for monotone span programs. As a side result, we establish the first generic implication from ABE for monotone span programs to ABE for branching programs.
This implies fully-secure ABE for branching programs in some new variants, namely, unbounded, short-ciphertext, and short-key. Previous schemes are bounded and require linear-size ciphertexts and keys.
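The predicate such a scheme enforces is easiest to see with a toy example. The sketch below is plain Python with no cryptography, and the nested-tuple formula encoding is ours, not from the paper; it only illustrates the monotone Boolean formulae covered by ABE for monotone span programs, where decryption succeeds exactly when the ciphertext's attribute set satisfies the key's policy.

```python
# Toy illustration only (no cryptography): in key-policy ABE, a ciphertext
# carries a set of attributes and a key embeds a monotone Boolean formula;
# decryption succeeds iff the attributes satisfy the formula.
def satisfies(policy, attrs):
    """Evaluate a monotone formula given as nested ('and'|'or', sub, ...)
    tuples with attribute-name leaves, over a set of attribute strings."""
    if isinstance(policy, str):          # leaf: a single attribute
        return policy in attrs
    op, *subs = policy
    if op == "and":
        return all(satisfies(s, attrs) for s in subs)
    if op == "or":
        return any(satisfies(s, attrs) for s in subs)
    raise ValueError(f"unknown operator: {op}")

policy = ("and", "doctor", ("or", "cardiology", "oncology"))
print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"nurse", "cardiology"}))   # False
```

Monotonicity matters here: the formula has no negations, so adding attributes can never make a satisfied policy fail, which is the setting the span-program compilers above target.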

Posted Content
TL;DR: The quantum paradigm implies that all the deployed public key cryptography will be completely broken by a quantum computer [Sho94] and that brute force attacks of symmetric ciphers can also be sped up by roughly a quadratic factor.
Abstract: Cyber technologies are becoming an increasingly important part of all facets of our lives. Consequently, cybersecurity is a fundamental and growing part of what it means for us to be safe. One of the most fundamental pillars of cybersecurity is cryptography. Most of the cryptography tools we use today rely on computational assumptions, such as the hardness of factoring 2048-bit numbers. These computational problems are sometimes broken (e.g. [Tut00], [BFMV84], [FMS01], [WY05]) by algorithmic advances or increased computing power. Two decades ago, we learned that the quantum paradigm implies that essentially all the deployed public key cryptography will be completely broken by a quantum computer [Sho94] and that brute force attacks on symmetric ciphers can also be sped up by roughly a quadratic factor [Gro96, BBHT98]. Most new paradigms are not initially greeted with great acceptance, as described nicely in the Max Planck quote [Kuh70]: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” There has been much skepticism about the prospect of a large scale quantum computer in the foreseeable future (or ever) capable of implementing Shor’s algorithms or quantum searching. People naturally ask if they can continue to delay taking action, since there are many other urgent and serious matters at hand. Whether one can continue to delay roughly depends on three questions.
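The quadratic nature of the Grover speedup is simple arithmetic, sketched below (the helper name is ours): searching a keyspace of 2**n costs on the order of sqrt(2**n) = 2**(n/2) quantum queries, so a symmetric key's effective security is roughly halved in bits, while Shor's algorithm breaks factoring- and discrete-log-based schemes outright.

```python
import math

# Grover's search needs about sqrt(2**n) quantum queries against an
# n-bit keyspace, versus ~2**n classical guesses: a quadratic speedup,
# i.e. roughly n/2 bits of remaining security. Doubling the key length
# restores the classical margin.
def grover_security_bits(key_bits: int) -> float:
    classical_work = 2 ** key_bits
    quantum_queries = math.isqrt(classical_work)  # ~ sqrt(2**n), exact here
    return math.log2(quantum_queries)

print(grover_security_bits(128))  # 64.0
print(grover_security_bits(256))  # 128.0
```

This is why the common hedge against quantum search is simply moving from 128-bit to 256-bit symmetric keys, whereas the public-key schemes broken by Shor's algorithm need replacement rather than larger parameters.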

Posted Content
TL;DR: The Ed448-Goldilocks curve as discussed by the authors is a strong elliptic curve with a field of size around 2^448, offering a security level well above the roughly 128 bits of the NIST prime-order curves.
Abstract: Many papers have proposed elliptic curves which are faster and easier to implement than the NIST prime-order curves. Most of these curves have had fields of size around 2^256, and thus security estimates of around 128 bits. Recently there has been interest in a stronger curve, prompting designs such as Curve41417 and Microsoft’s pseudo-Mersenne-prime curves. Here I report on the design of another strong curve, called Ed448-Goldilocks. Implementations of this curve can perform very well for its security level on many architectures. As of this writing, this curve is favored by IRTF CFRG for inclusion in future versions of TLS along with Curve25519.
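Assuming the standard Ed448 parameters (as later fixed in RFC 7748), the Goldilocks field prime is the Solinas trinomial p = 2^448 - 2^224 - 1, whose 224-bit middle term is what enables fast reduction on both 32-bit and 64-bit architectures. A quick sanity check of its shape in plain Python:

```python
# Ed448-Goldilocks field prime: p = 2**448 - 2**224 - 1 (a Solinas
# trinomial prime). The exponent 224 = 448/2 sits exactly in the middle,
# which is what makes modular reduction cheap on common word sizes.
p = 2**448 - 2**224 - 1

print(p.bit_length())         # 448
print(p % 4)                  # 3 (p = 3 mod 4)
print(pow(3, p - 1, p) == 1)  # Fermat check, consistent with p prime
```

A 448-bit field gives a security estimate of roughly 224 bits, comfortably above the ~128 bits of the 2^256-sized fields mentioned in the abstract.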

Posted Content
TL;DR: In this paper, the authors recall why linear codes with complementary duals (LCD codes) play a role in countermeasures to passive and active side-channel analyses on embedded cryptosystems, and characterize conditions under which codes obtained by direct sum, direct product, puncturing, shortening, extending codes, or obtained by the Plotkin sum can be LCD.
Abstract: We recall why linear codes with complementary duals (LCD codes) play a role in countermeasures to passive and active side-channel analyses on embedded cryptosystems. The rate and the minimum distance of such LCD codes must be as large as possible. We recall the known primary construction of such codes from cyclic codes, and investigate other constructions, with expanded Reed-Solomon codes and generalized residue codes, for which we study the idempotents. These constructions do not allow all the desired parameters to be reached. We then study those secondary constructions which preserve the LCD property, and we characterize conditions under which codes obtained by direct sum, direct product, puncturing, shortening, extending codes, or obtained by the Plotkin sum, can be LCD.

Posted Content
TL;DR: In this article, the applicability of ORAM for cloud applications such as symmetric searchable encryption (SSE) has been investigated using the Enron email corpus and the complete English Wikipedia corpus.
Abstract: Oblivious RAM (ORAM) is a tool proposed to hide access-pattern leakage, and there has been a lot of progress in the efficiency of ORAM schemes; however, less attention has been paid to studying the applicability of ORAM to cloud applications such as symmetric searchable encryption (SSE). Although searchable encryption is one of the motivations for ORAM research, no in-depth study of the applicability of ORAM to searchable encryption exists as of June 2015. In this work, we initiate the formal study of using ORAM to reduce the access-pattern leakage in searchable encryption. We propose four new leakage classes and develop a systematic methodology to study the applicability of ORAM to SSE. We develop a worst-case communication baseline for SSE. We show that completely eliminating leakage in SSE is impossible. We propose single-keyword schemes for our leakage classes and show that either they perform worse than streaming the entire outsourced data (for a large fraction of queries) or they do not provide a meaningful reduction in leakage. We present a detailed evaluation using the Enron email corpus and the complete English Wikipedia corpus.