
Showing papers on "Lattice-based cryptography" published in 2014


Proceedings ArticleDOI
12 Jan 2014
TL;DR: Leveled FHE is built from three ideas: noise-bounded sequential evaluation of high fan-in operations; circuit sequentialization using Barrington's Theorem; and successive dimension-modulus reduction. The resulting hardness matches the best known hardness for "regular" lattice-based public-key encryption up to the ε factor.
Abstract: We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of O(n^{1.5+ε})-approximation for lattice problems (such as GapSVP) under quantum reductions for any ε > 0 (or O(n^{2+ε})-approximation under classical reductions). This matches the best known hardness for "regular" (non-homomorphic) lattice-based public-key encryption up to the ε factor. A number of previous methods had hit a roadblock at quasipolynomial approximation. (As usual, a circular security assumption can be used to achieve a non-leveled FHE scheme.) Our approach consists of three main ideas: Noise-bounded sequential evaluation of high fan-in operations; Circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.

219 citations


Journal ArticleDOI
TL;DR: The paper surveys algorithms to implement such sampling efficiently, with particular focus on the case of constrained devices with small on-board storage and without access to large numbers of external random bits.
Abstract: Modern lattice-based public-key cryptosystems require sampling from discrete Gaussian (normal) distributions. The paper surveys algorithms to implement such sampling efficiently, with particular focus on the case of constrained devices with small on-board storage and without access to large numbers of external random bits. We review lattice encryption schemes and signature schemes and their requirements for sampling from discrete Gaussians. Finally, we make some remarks on challenges and potential solutions for practical lattice-based cryptography.

128 citations


Book ChapterDOI
26 Mar 2014
TL;DR: This work introduces the first lattice-based VLR group signature, and thus, the first such scheme that is believed to be quantum-resistant; in the random oracle model, the scheme is proved to be secure based on the hardness of the $\mathsf{SIVP}_{\widetilde{\mathcal{O}}(n^{1.5})}$ problem in general lattices.
Abstract: Support of membership revocation is a desirable functionality for any group signature scheme. Among the known revocation approaches, verifier-local revocation (VLR) seems to be the most flexible one, because it only requires the verifiers to possess some up-to-date revocation information, but not the signers. All of the contemporary VLR group signatures operate in the bilinear map setting, and all of them will be insecure once quantum computers become a reality. In this work, we introduce the first lattice-based VLR group signature, and thus, the first such scheme that is believed to be quantum-resistant. In comparison with existing lattice-based group signatures, our scheme has several noticeable advantages: support of membership revocation, logarithmic-size signatures, and a weaker security assumption. In the random oracle model, our scheme is proved to be secure based on the hardness of the $\mathsf{SIVP}_{\widetilde{\mathcal{O}}(n^{1.5})}$ problem in general lattices - an assumption that is as weak as those of state-of-the-art lattice-based standard signatures. Moreover, our construction works without relying on encryption schemes, which is an intriguing feature for group signatures.

91 citations


Journal ArticleDOI
TL;DR: There is a strict relation between these two models of visual cryptography: to any random grid scheme corresponds a deterministic scheme and vice versa, which allows results known in one model to be used in the other.
Abstract: Visual cryptography is a special type of secret sharing. Two models of visual cryptography have been independently studied: 1) deterministic visual cryptography, introduced by Naor and Shamir, and 2) random grid visual cryptography, introduced by Kafri and Keren. In this paper, we show that there is a strict relation between these two models. In particular, we show that to any random grid scheme corresponds a deterministic scheme and vice versa. This allows us to use results known in one model also in the other model. By exploiting the (many) results known in the deterministic model, we are able to improve several schemes and to provide many upper bounds for the random grid model, and by exploiting some results known for the random grid model, we are also able to provide new schemes for the deterministic model. A side effect of this paper is that future new results for either of the two models should not ignore, and in fact should be compared with, the results known in the other model.
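As a hedged illustration of the random grid model mentioned above, the Kafri-Keren (2, 2) construction is simple enough to sketch directly; this toy version treats an image as a list of rows of bits (0 = white, 1 = black), and the details beyond that encoding are our own illustrative choices:

```python
import random

def random_grid_shares(image):
    # Kafri-Keren (2, 2) random grid scheme: share 1 is a uniformly
    # random grid; share 2 copies it on white pixels (0) and
    # complements it on black pixels (1).
    share1 = [[random.randint(0, 1) for p in row] for row in image]
    share2 = [[s if p == 0 else 1 - s
               for p, s in zip(row, srow)]
              for row, srow in zip(image, share1)]
    return share1, share2

def stack(share1, share2):
    # Physically stacking transparencies corresponds to pixel-wise OR.
    return [[a | b for a, b in zip(r1, r2)]
            for r1, r2 in zip(share1, share2)]

secret = [[0, 1], [1, 0]]
s1, s2 = random_grid_shares(secret)
recon = stack(s1, s2)
# Every black secret pixel is fully black in the reconstruction,
# while white pixels are black only about half the time - the
# contrast difference reveals the secret.
```

Each share on its own is a uniformly random grid and therefore reveals nothing about the secret.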

65 citations


Proceedings ArticleDOI
01 Jun 2014
TL;DR: This work presents an efficient implementation of BLISS, a recently proposed, post-quantum secure, and formally analyzed lattice-based signature scheme, achieving signing and verification times of 35.3 ms and 6 ms, respectively, at a 128-bit security level on an ARM Cortex-M4F microcontroller.
Abstract: All currently deployed asymmetric cryptography will be broken by the advent of powerful quantum computers. We thus have to consider alternative solutions for systems with long-term security requirements (e.g., for long-lasting vehicular and avionic communication infrastructures). In this work we present an efficient implementation of BLISS, a recently proposed, post-quantum secure, and formally analyzed lattice-based signature scheme. We show that we can achieve signing and verification times of 35.3 ms and 6 ms, respectively, at a 128-bit security level on an ARM Cortex-M4F microcontroller. This shows that lattice-based cryptography can be efficiently deployed on today's hardware and provides security solutions for many use cases that can even withstand future threats.

62 citations


Journal ArticleDOI
TL;DR: A new lattice-based key exchange (KE) protocol is constructed, analogous to the classic Diffie-Hellman KE protocol; it is proved secure and offers security based on the worst-case hardness of lattice problems, a relatively efficient implementation, and great simplicity.
Abstract: In this paper, we propose a new hard problem, called bilateral inhomogeneous small integer solution (Bi-ISIS), which can be seen as an extension of the small integer solution (SIS) problem on lattices. The main idea is that, instead of choosing a rectangular matrix, we choose a square matrix with small rank to generate the Bi-ISIS problem without affecting the hardness of the underlying SIS problem. Based on this new problem, we present two new hardness assumptions: the computational and decisional Bi-ISIS problems. As a direct application of these problems, we construct a new lattice-based key exchange (KE) protocol, which is analogous to the classic Diffie-Hellman KE protocol. We prove the security of this protocol and show that it offers security based on the worst-case hardness of lattice problems, a relatively efficient implementation, and great simplicity.
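The message flow of such a matrix-based key exchange can be sketched as a toy. This is only an illustration of the Diffie-Hellman-like structure described in the abstract: the parameters are far too small for security, and the sketch omits the small-rank structure of A and all security considerations.

```python
import random

q, n = 257, 8   # toy parameters, far too small for security

# Public square matrix A over Z_q (the paper uses a small-rank A;
# a random one suffices to illustrate the message flow).
A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]

def small_vec():
    # Secrets with small entries, as in SIS-style problems.
    return [random.randint(-2, 2) for _ in range(n)]

x, y = small_vec(), small_vec()   # Alice's and Bob's secrets

# Alice publishes x^T A; Bob publishes A y (both mod q).
pA = [sum(x[i] * A[i][j] for i in range(n)) % q for j in range(n)]
pB = [sum(A[i][j] * y[j] for j in range(n)) % q for i in range(n)]

# Both sides derive the same value x^T A y (mod q).
kA = sum(x[i] * pB[i] for i in range(n)) % q   # Alice: x^T (A y)
kB = sum(pA[j] * y[j] for j in range(n)) % q   # Bob:   (x^T A) y
```

Associativity of the matrix-vector products is what makes the two derived keys agree, mirroring how g^(xy) is reached from both sides in Diffie-Hellman.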

50 citations


Book ChapterDOI
26 Mar 2014
TL;DR: This paper solves the SVP Challenge over a 128-dimensional lattice in the Ideal Lattice Challenge from TU Darmstadt, currently the highest dimension in the challenge that has ever been solved, and proposes a more practical parallelized Gauss Sieve algorithm.
Abstract: In this paper, we report that we have solved the SVP Challenge over a 128-dimensional lattice in the Ideal Lattice Challenge from TU Darmstadt, which is currently the highest dimension in the challenge that has ever been solved. The security of lattice-based cryptography is based on the hardness of solving the shortest vector problem (SVP) in lattices. In 2010, Micciancio and Voulgaris proposed the Gauss Sieve algorithm for heuristically solving the SVP using a list L of Gauss-reduced vectors. Milde and Schneider proposed a parallel implementation method for the Gauss Sieve algorithm. However, the efficiency of their implementation decreased when using more than 10 threads, due to the large number of non-Gauss-reduced vectors appearing in the distributed list of each thread. In this paper, we propose a more practical parallelized Gauss Sieve algorithm. Our algorithm deploys an additional Gauss-reduced list V of sample vectors assigned to each thread, and all vectors in list L remain Gauss-reduced by mutually reducing them using all sample vectors in V. Therefore, our algorithm allows the Gauss Sieve algorithm to run for large dimensions with a small communication overhead. Finally, we succeeded in solving the SVP Challenge over a 128-dimensional ideal lattice generated by the cyclotomic polynomial x^128 + 1 using about 30,000 CPU hours.
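The pairwise reduction at the heart of the Gauss Sieve can be sketched for two integer vectors; this is a simplified sequential version of the core step (the inputs are assumed linearly independent), not the paper's parallel algorithm:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reduce_pair(u, v):
    # Core step of the Gauss Sieve: repeatedly shorten the longer
    # vector by an integer multiple of the shorter one, until the
    # pair is Gauss-reduced, i.e. neither vector can be shortened
    # by adding or subtracting a multiple of the other.
    changed = True
    while changed:
        changed = False
        if dot(u, u) < dot(v, v):
            u, v = v, u          # keep u as the longer vector
        m = round(dot(u, v) / dot(v, v))
        if m != 0:
            u = [a - m * b for a, b in zip(u, v)]
            changed = True
    return u, v

u, v = reduce_pair([7, 5], [4, 3])
```

On termination the defining property holds: both u + v and u - v are at least as long as the longer of u and v.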

44 citations


Book ChapterDOI
17 Aug 2014
TL;DR: This work presents the first algebraic construction of a traitor tracing scheme whose security relies on the worst-case hardness of standard lattice problems and introduces the notion of projective sampling family in which each sampling function is keyed and, with a projection of the key on a well chosen space, one can simulate the sampling function in a computationally indistinguishable way.
Abstract: We introduce the k-LWE problem, a Learning With Errors variant of the k-SIS problem. The Boneh-Freeman reduction from SIS to k-SIS suffers from an exponential loss in k. We improve and extend it to an LWE to k-LWE reduction with a polynomial loss in k, by relying on a new technique involving trapdoors for random integer kernel lattices. Based on this hardness result, we present the first algebraic construction of a traitor tracing scheme whose security relies on the worst-case hardness of standard lattice problems. The proposed LWE traitor tracing is almost as efficient as LWE encryption. Further, it achieves public traceability, i.e., it allows the authority to delegate the tracing capability to "untrusted" parties. To this aim, we introduce the notion of a projective sampling family, in which each sampling function is keyed and, with a projection of the key on a well chosen space, one can simulate the sampling function in a computationally indistinguishable way. The construction of a projective sampling family from k-LWE allows us to achieve public traceability by publishing the projected keys of the users. We believe that the new lattice tools and the projective sampling family are general enough that they may have applications in other areas.

43 citations


Proceedings ArticleDOI
01 Sep 2014
TL;DR: This paper studies a candidate for post-quantum cryptography: a new version of the McEliece cryptosystem based on polar codes, recently proposed error-correcting codes that show promise in many applications.
Abstract: It is known that the widely used public key cryptosystems such as RSA and elliptic curve cryptography can be broken by a specific computation on quantum computers. Currently, since quantum computers that can handle practical parameter lengths have not been realized yet, we may still use the well-known cryptographic algorithms. However, we need to prepare and deeply study alternatives to these algorithms before practical quantum computers are realized, and this line of research is called 'post-quantum cryptography' (PQC). In this paper, we study a candidate for post-quantum cryptography: a new version of the McEliece cryptosystem based on polar codes, recently proposed error-correcting codes that show promise in many applications.

28 citations


Proceedings ArticleDOI
22 Oct 2014
TL;DR: It is shown that it is possible to implement a highly scalable version of GaussSieve on multi-core CPU-chips, and the key features of the implementation are a lock-free singly linked list, and hand-tuned, vectorized code.
Abstract: Lattice-based cryptography became a hot topic in the past years because it seems to be quantum immune, i.e., resistant to attacks mounted with quantum computers. The security of lattice-based cryptosystems is determined by the hardness of certain lattice problems, such as the Shortest Vector Problem (SVP). Thus, it is of prime importance to study how efficiently SVP-solvers can be implemented. This paper presents a parallel shared-memory implementation of the GaussSieve algorithm, a well-known SVP-solver. Our implementation achieves almost linear and linear speedups with up to 64 cores, depending on the tested scenario, and delivers better sequential performance than any other disclosed GaussSieve implementation. In this paper, we show that it is possible to implement a highly scalable version of GaussSieve on multi-core CPU chips. The key features of our implementation are a lock-free singly linked list and hand-tuned, vectorized code. Additionally, we propose an algorithmic optimization that leads to faster convergence.

27 citations


Journal ArticleDOI
TL;DR: This thesis provides a comprehensive introduction to several concepts: quantum mechanics using the density operator formalism, quantum cryptography, and quantum key distribution, and proposes a framework that decomposes quantum-key-distribution protocols and their assumptions into several classes.
Abstract: Quantum cryptography uses techniques and ideas from physics and computer science. The combination of these ideas makes the security proofs of quantum cryptography a complicated task. To prove that a quantum-cryptography protocol is secure, assumptions are made about the protocol and its devices. If these assumptions are not justified in an implementation then an eavesdropper may break the security of the protocol. Therefore, security is crucially dependent on which assumptions are made and how justified the assumptions are in an implementation of the protocol. This thesis is primarily a review that analyzes and clarifies the connection between the security proofs of quantum-cryptography protocols and their experimental implementations. In particular, we focus on quantum key distribution: the task of distributing a secret random key between two parties. We provide a comprehensive introduction to several concepts: quantum mechanics using the density operator formalism, quantum cryptography, and quantum key distribution. We define security for quantum key distribution and outline several mathematical techniques that can either be used to prove security or simplify security proofs. In addition, we analyze the assumptions made in quantum cryptography and how they may or may not be justified in implementations. Along with the review, we propose a framework that decomposes quantum-key-distribution protocols and their assumptions into several classes. Protocol classes can be used to clarify which proof techniques apply to which kinds of protocols. Assumption classes can be used to specify which assumptions are justified in implementations and which could be exploited by an eavesdropper. Two contributions of the author are discussed: the security proofs of two two-way quantum-key-distribution protocols and an intuitive proof of the data-processing inequality.

Book ChapterDOI
17 Aug 2014
TL;DR: This work puts the Gentry-Szydlo algorithm into a mathematical framework, and shows that it is part of a general theory of “lattices with symmetry”, which should be applicable to a range of questions in cryptography.
Abstract: We put the Gentry-Szydlo algorithm into a mathematical framework, and show that it is part of a general theory of “lattices with symmetry”. For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we can construct a provably deterministic polynomial time algorithm to accomplish this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction. This sheds new light on the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography.

Proceedings ArticleDOI
03 Apr 2014
TL;DR: This paper presents a symmetric key cryptography technique that uses cellular automata (CA), implemented in C; different rule configurations are used to form group cellular automata for encryption and decryption.
Abstract: This paper presents a symmetric key cryptography technique that uses cellular automata (CA). The proposed cryptosystem has been implemented in C. State transitions of programmable cellular automata (PCA) are the basis for defining certain fundamental transformations to encrypt and decrypt in the cryptographic system. Different rule configurations are used to form group cellular automata that are used in encryption and decryption.
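As a loose illustration of driving a symmetric cipher from CA state transitions (a toy keystream construction of our own, not the paper's group-CA design), one can iterate an elementary cellular automaton from a key-derived state and XOR the tapped bits with the plaintext:

```python
def ca_step(state, rule=30):
    # One synchronous update of an elementary cellular automaton with
    # periodic boundary; `rule` is the Wolfram rule number, and the
    # neighborhood (left, centre, right) indexes a bit of `rule`.
    n = len(state)
    return [(rule >> (state[(i - 1) % n] * 4 +
                      state[i] * 2 +
                      state[(i + 1) % n])) & 1
            for i in range(n)]

def keystream(key_state, nbits, warmup=16):
    # Iterate the CA from a key-derived state and tap the centre cell.
    s, out = key_state[:], []
    for _ in range(warmup):
        s = ca_step(s)
    for _ in range(nbits):
        out.append(s[len(s) // 2])
        s = ca_step(s)
    return out

def xor_bits(bits, ks):
    return [b ^ k for b, k in zip(bits, ks)]

key = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1]
plaintext = [1, 0, 1, 1, 0, 0, 1, 0]
ct = xor_bits(plaintext, keystream(key, len(plaintext)))
pt = xor_bits(ct, keystream(key, len(plaintext)))   # decryption
```

Because both parties regenerate the same keystream from the shared key, XORing twice recovers the plaintext; this toy has none of the invertible group-CA structure the paper relies on.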

Wang Xiao1
01 Jan 2014
TL;DR: This paper gives a survey of the main progress on lattice-based cryptography in the last 30 years, covering computational complexity and searching algorithms relating to hard lattice problems, and the design and cryptanalysis of lattice-based cryptosystems, as well as the relationship of these four areas.
Abstract: Lattice-based cryptography is widely believed to resist quantum computer attacks; it involves many cryptographic mathematical problems and belongs to interdisciplinary study. The development of lattice-based cryptography follows two main lines: one is to study the computational complexity and searching algorithms for solving hard problems in high-dimensional lattices, based on the research of classical lattice problems. The other is to analyze the security of non-lattice-based public-key cryptosystems using the algorithms solving hard lattice problems, and further to design lattice-based cryptosystems. This paper gives a survey of the main progress on lattice-based cryptography in the last 30 years, which covers the following concerns: computational complexity and searching algorithms relating to hard lattice problems, and the design and cryptanalysis of lattice-based cryptosystems. The paper tries to reflect the relationship of these four areas. In addition, some classical lattice problems and related important results are described.

Book ChapterDOI
TL;DR: Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (like GGH, or NTRUSign); this approach leaked the shape of the parallelepiped, and although there has been noticeable interest in developing countermeasures to the attacks, it has met with little success.
Abstract: Initial attempts to obtain lattice-based signatures were closely related to reducing a vector modulo the fundamental parallelepiped of a secret basis (like GGH [9], or NTRUSign [12]). This approach leaked some information on the secret, namely the shape of the parallelepiped, which has been exploited in practical attacks [24]. NTRUSign was an extremely efficient scheme, and thus there has been noticeable interest in developing countermeasures to the attacks, but with little success [6].

Book ChapterDOI
28 May 2014
TL;DR: This paper proposes the first lattice-based sequential aggregate signature (SAS) scheme that is provably secure in the random oracle model, shows how to instantiate it with the provably secure NTRUSign signature scheme, and shows how to generate aggregate signatures resulting in one single signature.
Abstract: We propose the first lattice-based sequential aggregate signature (SAS) scheme that is provably secure in the random oracle model. As opposed to factoring and number theory based systems, the security of our construction relies on worst-case lattice problems. Generally speaking, SAS schemes enable any group of signers ordered in a chain to sequentially combine their signatures such that the size of the aggregate signature is much smaller than the total size of all individual signatures. This paper shows how to instantiate our construction with trapdoor function families and how to generate aggregate signatures resulting in one single signature. In particular, we instantiate our construction with the provably secure NTRUSign signature scheme presented by Stehle and Steinfeld at Eurocrypt 2011. This setting makes it possible to generate aggregate signatures that are asymptotically as large as individual ones and thus provides optimal compression rates, as known from RSA-based SAS schemes.

Dissertation
30 Jun 2014
TL;DR: A lattice-based digital signature, two fully homomorphic encryption schemes and cryptographic multilinear maps are designed and implemented, and a non-interactive key exchange between more than three parties has been realized for the first time.
Abstract: Today, lattice-based cryptography is a thriving scientific field. Its swift expansion is due, among other things, to the attractiveness of fully homomorphic encryption and cryptographic multilinear maps. Lattice-based cryptography has also been recognized for its thrilling properties: a security that can be reduced to worst-case instances of problems over lattices, a quasi-optimal asymptotic efficiency and an alleged resistance to quantum computers. However, its practical use in real-world products leaves a lot to be desired. This thesis accomplishes a step towards this goal by narrowing the gap between theoretical research and practical implementation of recent public key cryptosystems. In this thesis, we design and implement a lattice-based digital signature, two fully homomorphic encryption schemes and cryptographic multilinear maps. Our highly efficient signature scheme, BLISS, opened the way to implementing lattice-based cryptography on constrained devices and remains as of today a promising primitive for post-quantum cryptography. Our fully homomorphic encryption schemes enjoy competitive homomorphic evaluations of nontrivial circuits. Finally, we describe the first implementation of cryptographic multilinear maps. Based on our implementation, a non-interactive key exchange between more than three parties has been realized for the first time, and amounts to a few seconds per party.

Journal ArticleDOI
TL;DR: The paper surveys algorithms for sampling from discrete Gaussian distributions, as required by lattice-based cryptosystems, assesses their practical performance, and draws conclusions regarding the best candidates for implementation on different platforms in the typical parameter range.
Abstract: Modern lattice-based cryptosystems require sampling from discrete Gaussian distributions. We review lattice-based schemes and collect their requirements for sampling from discrete Gaussians. Then we survey the algorithms implementing such sampling and assess their practical performance. Finally we draw some conclusions regarding the best candidates for implementation on different platforms in the typical parameter range.

Journal ArticleDOI
TL;DR: This paper presents a lattice-based signcryption scheme which is secure under the standard model and proves that the scheme achieves indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA2) under the learning with errors (LWE) assumption and existential unforgeability against Adaptive chosen-message attacks (EUFCMA)under the small integer solution (SIS) assumption.
Abstract: In order to achieve secure signcryption schemes in the quantum era, Li Fagen et al. [Concurrency and Computation: Practice and Experience, 2012, 25(4): 2112-2122] and Wang Fenghe et al. [Applied Mathematics & Information Sciences, 2012, 6(1): 23-28] have independently extended the concept of signcryption to lattice-based cryptography. However, their schemes are only secure under the random oracle model. In this paper, we present a lattice-based signcryption scheme which is secure under the standard model. We prove that our scheme achieves indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA2) under the learning with errors (LWE) assumption and existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the small integer solution (SIS) assumption.

Proceedings ArticleDOI
23 Apr 2014
TL;DR: It is concluded that for polynomials of degree up to 2000, the fastest polynomial multiplication method is the iterative NTT.
Abstract: The demand for lattice-based cryptographic schemes has been increasing. As processing units now contain multiple processors, there is a need to implement such protocols on these platforms. Graphics processing units (GPUs) have attracted much attention in this respect. In this paper, polynomial multiplication algorithms, which play a very important role in lattice-based cryptographic schemes, are implemented on a GPU (NVIDIA Quadro 600) using the CUDA platform. FFT-based and schoolbook multiplication methods are implemented in serial and parallel ways, and a timing comparison for these techniques is given. It is concluded that for polynomials of degree up to 2000, the fastest polynomial multiplication method is the iterative NTT.
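A compact serial version of the iterative NTT approach can be sketched over a small prime modulus; the parameters q = 17 and the 8th root of unity w = 9 are illustrative toy choices, not the paper's, and the code computes products in Z_q[x]/(x^n - 1):

```python
def ntt(a, w, q):
    # Iterative radix-2 Cooley-Tukey NTT: bit-reversal permutation
    # followed by butterfly passes; w must be a primitive len(a)-th
    # root of unity modulo q.
    n, a = len(a), a[:]
    j = 0
    for i in range(1, n):                 # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                    # butterfly passes
        wl = pow(w, n // length, q)       # primitive length-th root
        for start in range(0, n, length):
            wn = 1
            for k in range(length // 2):
                u = a[start + k]
                v = a[start + k + length // 2] * wn % q
                a[start + k] = (u + v) % q
                a[start + k + length // 2] = (u - v) % q
                wn = wn * wl % q
        length <<= 1
    return a

def poly_mul(f, g, q=17, w=9):
    # Multiply in Z_q[x]/(x^n - 1): forward NTT, pointwise product,
    # inverse NTT (using w^{-1}, scaled by n^{-1} via Fermat).
    n = len(f)
    H = [x * y % q for x, y in zip(ntt(f, w, q), ntt(g, w, q))]
    winv, ninv = pow(w, q - 2, q), pow(n, q - 2, q)
    return [x * ninv % q for x in ntt(H, winv, q)]
```

For example, multiplying 1 + 2x by 3 + 4x (padded to length 8) yields 3 + 10x + 8x^2, matching the schoolbook result.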

Proceedings ArticleDOI
10 Dec 2014
TL;DR: This paper proposes to fill the gap with a new approach using Residue Number Systems (RNS) for one of the core arithmetic operations of lattice-based cryptography: solving the Closest Vector Problem (CVP).
Abstract: Lattice-based cryptography is claimed to be a serious candidate for post-quantum cryptography, and it has recently become an essential tool of modern cryptography. Nevertheless, while lattice-based cryptography has made theoretical progress, its chances of being adopted in practice are still low due to the cost of the computation. Whereas approaches like RSA and ECC have been strongly optimized, in particular their core arithmetic operations (the modular multiplication and/or the modular exponentiation), lattice-based cryptography has not been arithmetically improved. This paper proposes to fill the gap with a new approach using Residue Number Systems (RNS) for one of the core arithmetic operations of lattice-based cryptography: namely solving the Closest Vector Problem (CVP).

Journal ArticleDOI
TL;DR: A lattice-based threshold hierarchical ABE (lattice-based t-HABE) scheme without random oracles is constructed and proved to be secure against selective attribute set and chosen plaintext attacks under the standard hardness assumption of the learning with errors problem.
Abstract: Attribute-based encryption (ABE) has been considered a promising cryptographic primitive for realising information security and flexible access control. However, in most proposed schemes, all attributes are treated as being at the same level. Lattice-based cryptography has attracted much attention because it can resist quantum cryptanalysis. In this study, a lattice-based threshold hierarchical ABE (lattice-based t-HABE) scheme without random oracles is constructed and proved to be secure against selective attribute set and chosen plaintext attacks under the standard hardness assumption of the learning with errors problem. The notion of the HABE scheme can be considered as a generalisation of the traditional ABE scheme, where all attributes have the same level.

Proceedings ArticleDOI
03 Jun 2014
TL;DR: This paper proposes the first provably secure public key encryption scheme based on the Learning with Errors (LWE) problem, in which secrets and errors are sampled uniformly at random from a relatively small set rather than from the commonly used discrete Gaussian distribution.
Abstract: In this paper we propose the first provably secure public key encryption scheme based on the Learning with Errors (LWE) problem, in which secrets and errors are sampled uniformly at random from a relatively small set rather than from the commonly used discrete Gaussian distribution. Using a uniform distribution, instead of a Gaussian, has the potential of improving computational efficiency a great deal due to its simplicity, thus making the scheme attractive for use in practice. At the same time our scheme features the strong security guarantee of being based on the hardness of worst-case lattice problems. After presenting the construction of our scheme we prove its security and propose asymptotic parameters. Finally, we compare our scheme on several measures to one of the most efficient LWE-based encryption schemes with Gaussian noise. We show that the expected efficiency improvement is debunked, due to the large blow-up of the parameter sets involved.
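A toy Regev-style encryption with uniform noise in {-1, 0, 1} instead of a Gaussian can be sketched as follows; the parameters and the scheme details are our illustrative choices, not the paper's construction:

```python
import random

n, m, q = 8, 20, 257          # toy parameters, not secure

def uniform_err():
    # Uniform small noise instead of a discrete Gaussian.
    return random.randint(-1, 1)

# Key generation: secret s, public (A, b = A s + e mod q).
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [uniform_err() for _ in range(m)]
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
     for i in range(m)]

def encrypt(bit):
    # Combine a random subset of the samples; embed the bit at q/2.
    r = [random.randint(0, 1) for _ in range(m)]
    u = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    # v - <u, s> = bit * (q//2) + small error; round to 0 or q/2.
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0
```

With these bounds the accumulated error |e . r| is at most m = 20 < q/4, so decryption is always correct.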

Book ChapterDOI
Ron Steinfeld1
01 Jan 2014
TL;DR: A provable relation between the security of NTRU and the computational hardness of worst-case instances of certain lattice problems, and the construction of fully homomorphic and multilinear cryptographic algorithms are identified.
Abstract: The NTRU public-key cryptosystem, proposed in 1996 by Hoffstein, Pipher and Silverman, is a fast and practical alternative to classical schemes based on factorization or discrete logarithms. In contrast to the latter schemes, it offers quasi-optimal asymptotic efficiency and conjectured security against quantum computing attacks. The scheme is defined over finite polynomial rings, and its security analysis involves the study of natural statistical and computational problems defined over these rings. We survey several recent developments in both the security analysis and in the applications of NTRU and its variants, within the broader field of lattice-based cryptography. These developments include a provable relation between the security of NTRU and the computational hardness of worst-case instances of certain lattice problems, and the construction of fully homomorphic and multilinear cryptographic algorithms. In the process, we identify the underlying statistical and computational problems in finite rings.
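The polynomial-ring structure underlying NTRU can be illustrated with a schoolbook cyclic convolution in Z_q[x]/(x^N - 1); the parameters N = 5 and q = 7 are toy choices for illustration only:

```python
def ring_mul(f, g, N, q):
    # Multiplication in Z_q[x]/(x^N - 1): cyclic convolution of the
    # coefficient vectors, the core arithmetic operation of NTRU.
    h = [0] * N
    for i in range(N):
        for j in range(N):
            h[(i + j) % N] = (h[(i + j) % N] + f[i] * g[j]) % q
    return h

# (1 + x) * (1 + x + x^2) = 1 + 2x + 2x^2 + x^3 in Z_7[x]/(x^5 - 1)
h = ring_mul([1, 1, 0, 0, 0], [1, 1, 1, 0, 0], 5, 7)
```

Exponents wrap around modulo N, which is why the reduction by x^N - 1 is just the index arithmetic (i + j) % N.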

Proceedings ArticleDOI
22 Dec 2014
TL;DR: A threshold secret sharing scheme in which secret reconstruction is based on the celebrated Babai lattice algorithm; the scheme is shown to be asymptotically correct and is analyzed by giving a quantitative proof of security from the viewpoint of information theory.
Abstract: In this paper, we introduce a threshold secret sharing scheme in which secret reconstruction is based on the celebrated Babai lattice algorithm. In order to supply secure public channels for transmitting shares to parties, we need to ensure that there are no quantum threats to these channels. One solution to this problem is the use of lattice cryptosystems for these channels, which requires designing lattice-based secret sharing schemes. We show that our scheme is asymptotically correct. Moreover, we analyze the security of our scheme by giving a quantitative proof of security from the viewpoint of information theory.
Keywords: threshold secret sharing scheme; closest vector problem; lattice-based cryptography
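Babai's algorithm for approximating the closest lattice vector can be sketched for a rank-2 lattice; this is the simple rounding variant (not the nearest-plane variant, and only an approximation), with the basis and target chosen for illustration:

```python
def babai_round(basis, target):
    # Babai's rounding algorithm: express the target in basis
    # coordinates, round each coordinate to the nearest integer,
    # and map back to a lattice point. For a 2x2 basis the change
    # of coordinates is done by hand via Cramer's rule.
    (a, b), (c, d) = basis        # basis vectors u = (a, b), v = (c, d)
    t0, t1 = target
    det = a * d - b * c           # assumed nonzero (full-rank basis)
    x = round((t0 * d - t1 * c) / det)
    y = round((a * t1 - b * t0) / det)
    return (x * a + y * c, x * b + y * d)

# Approximate the closest point of the lattice spanned by (3, 1)
# and (1, 2) to the target (7, 6).
closest = babai_round([(3, 1), (1, 2)], (7, 6))
```

The returned point is close to the target but not guaranteed to be the true closest vector; the quality of the approximation degrades with a badly skewed basis, which is exactly what makes CVP hard.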

Proceedings Article
11 Dec 2014
TL;DR: This paper introduces pairing-based cryptography focusing on homomorphic cryptography; a pairing is a bilinear map from two rational point groups to a multiplicative group in an extension field, used for realizing recent innovative cryptographic schemes.
Abstract: This paper introduces pairing-based cryptography focusing on homomorphic cryptography. A pairing is a bilinear map from two rational point groups to a multiplicative group in an extension field. The bilinearity is used for realizing recent innovative cryptographic schemes. In recent years, it has been applied to some homomorphic encryption schemes, though it requires heavy computation. This paper introduces some related approaches from the viewpoint of pairing-based cryptography.

Posted Content
TL;DR: The radical use of quantum mechanics for cryptography is reviewed; quantum computing presents a challenge for researchers to develop new cryptographic techniques that can survive the quantum computing era.

Abstract: Cryptography is the art and science of secure communication. The sender and receiver are guaranteed security through encryption of their data with the help of a common key, on which both parties must agree prior to communication. The cryptographic systems that perform these tasks are designed to keep the key secret while assuming that the algorithm used for encryption and decryption is public. Thus key exchange is a very sensitive issue. In modern cryptographic algorithms this security is based on the mathematical complexity of the algorithm. But quantum computation is expected to revolutionize the computing paradigm in the near future. This presents a challenge for researchers to develop new cryptographic techniques that can survive the quantum computing era. This paper reviews the radical use of quantum mechanics for cryptography.

Proceedings ArticleDOI
19 Dec 2014
TL;DR: This paper introduces a third-party auditor (TPA) into its public auditing scheme and constructs the first identity-based public auditing scheme for secure cloud storage from lattice assumptions, an interesting stepping stone toward post-quantum cryptographic communication.
Abstract: In this paper, we propose a post-quantum secure cloud storage system supporting a privacy-preserving public auditing scheme from lattice assumptions. In our public auditing scheme, we introduce a third-party auditor (TPA), which can efficiently audit the cloud storage data, bringing no additional online burden to the users. We utilize preimage sampleable functions to realize our lattice-based signature, which serves as a random masking to make sure the TPA cannot recover the primitive data blocks of the users. Based on the inhomogeneous small integer solution (ISIS) assumption, our public auditing scheme is proved secure against data-loss attacks and tampering attacks from the cloud service providers. To the best of our knowledge, we construct the first identity-based public auditing scheme for secure cloud storage from lattice assumptions, which is an interesting stepping stone in post-quantum cryptographic communication.
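The (I)SIS relation underlying such schemes asks for a short vector x satisfying A·x ≡ y (mod q). A minimal verification sketch with toy parameters (far too small for any security; the parameter names are ours, not the paper's) could look like:

```python
import random

q, n, m, bound = 97, 4, 8, 5   # toy parameters: modulus, rows, cols, norm bound

def matvec_mod(A, x, q):
    """Matrix-vector product over Z_q."""
    return [sum(a * b for a, b in zip(row, x)) % q for row in A]

def isis_verify(A, x, y):
    """Accept x as an ISIS solution for (A, y): A*x = y (mod q) and x is
    short (infinity norm at most `bound`)."""
    return matvec_mod(A, x, q) == y and max(abs(c) for c in x) <= bound

random.seed(1)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
x = [random.choice([-1, 0, 1]) for _ in range(m)]  # short vector ("signature")
y = matvec_mod(A, x, q)                            # public syndrome/target
assert isis_verify(A, x, y)
```

Finding such a short x from (A, y) alone is the hard problem; a long preimage, even if it satisfies the linear relation, is rejected by the norm check.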

Proceedings ArticleDOI
01 Sep 2014
TL;DR: This paper analyzes one of the major lattice-based cryptographic systems, the Nth-degree truncated polynomial ring (NTRU), and accelerates its execution with Graphics Processing Units (GPUs) for acceptable processing performance.
Abstract: Lattice-based cryptography is attractive for its quantum-computing resistance and efficient encryption/decryption process. However, big-data workloads have perplexed lattice-based cryptographic systems because of their slow processing speed. This paper analyzes one of the major lattice-based cryptographic systems, the Nth-degree truncated polynomial ring (NTRU), and accelerates its execution with Graphics Processing Units (GPUs) for acceptable processing performance. Three strategies are proposed: single GPU with zero copy, single GPU with data transfer, and a multi-GPU version. GPU computing techniques such as streams and zero copy are applied to overlap computation and communication for possible speedup. Experimental results demonstrate the effectiveness of GPU acceleration of NTRU; as the number of involved devices increases, better NTRU performance is achieved.
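The operation such GPU strategies accelerate is the core NTRU primitive: multiplication in the truncated polynomial ring Z_q[x]/(x^N − 1), i.e. a cyclic convolution of coefficient vectors. A straightforward, unoptimized reference version:

```python
def ntru_convolve(f, g, N, q):
    """Cyclic convolution of coefficient lists f and g in Z_q[x]/(x^N - 1).
    Exponents wrap around modulo N; this O(N^2) double loop is exactly the
    hot spot a GPU kernel would parallelize."""
    h = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[(i + j) % N] = (h[(i + j) % N] + fi * gj) % q
    return h
```

For example, ntru_convolve([1, 1, 0], [0, 1, 0], 3, 7) gives [0, 1, 1], since (1 + x)·x = x + x² in the ring; each output coefficient can be computed independently, which is what makes the operation GPU-friendly.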

01 Jan 2014
TL;DR: This thesis focuses on the security of fully homomorphic encryption schemes and shows three different improvements of lattice reduction techniques that accelerate the reduction from O(dβ + dβ) when the algorithm is dedicated to those cryptosystems.
Abstract: Lattice-based cryptography plays an important role in modern cryptography. Apart from being a perfect alternative to classic public-key cryptosystems should quantum computers become available, lattice-based cryptography also enables many applications that conventional cryptosystems, such as the RSA encryption scheme, cannot deliver. One of the most significant aspects from this point of view is fully homomorphic encryption. A fully homomorphic encryption scheme allows one to operate arbitrarily on encrypted messages without decrypting them. This notion was raised in 1978 and remained a "holy grail" for cryptographers for 30 years, until in 2009 Craig Gentry presented a framework to construct a fully homomorphic encryption scheme using ideal lattices. Fully homomorphic encryption schemes, although they may lack efficiency at their current stage, enable many important applications, such as secure cloud searching and verifiable outsourced computing. Nevertheless, just like other cryptosystems, and perhaps all other inventions at the initial stage, fully homomorphic encryption is young and prospective, and hence requires more research. In this thesis, we focus on the security of fully homomorphic encryption schemes. The security of all known fully homomorphic encryption schemes can be reduced to some lattice problems. Therefore, our main tool, not surprisingly, is the lattice. Previous work has shown that some fully homomorphic encryption schemes can be broken using lattice reduction algorithms. Indeed, several lattice reduction algorithms, such as LLL and L, run in polynomial time and can break a homomorphic encryption scheme. However, the running time, even though the algorithm is polynomial, is still beyond tolerance. Hence, our first step is to optimize those algorithms. In this thesis, we show three different improvements.
To sum up, by combining those techniques we are able to accelerate the reduction from O(dβ + dβ) to O(dβ + dβ) when the algorithm is dedicated to those cryptosystems, where d is the dimension of the lattice and β is the maximum bit-length of the norm of the input.
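The lattice reduction algorithms discussed in the thesis (LLL and its variants) generalize the classical two-dimensional case. That two-dimensional case, Lagrange/Gauss reduction, conveys the core size-reduce-and-swap idea in a few lines (an illustrative sketch, not the thesis's improved algorithm):

```python
def lagrange_reduce(u, v):
    """Lagrange/Gauss reduction of a rank-2 lattice basis: returns a basis
    whose first vector is a shortest nonzero vector of the lattice.
    Alternates size reduction (subtracting an integer multiple of the
    shorter vector) with swapping, like LLL does in higher dimensions."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    u, v = list(u), list(v)
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # integer projection coefficient mu = round(<u, v> / <u, u>)
        mu = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = [v[0] - mu * u[0], v[1] - mu * u[1]]  # size-reduce v against u
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u  # swap and continue, as in the LLL exchange step
```

For the input basis {(1, 1), (1, 2)}, which spans Z², the algorithm returns two vectors of length 1, i.e. a shortest vector of the lattice.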