
Showing papers on "Concrete security published in 2018"


Proceedings Article
15 Feb 2018
TL;DR: This article studied the adversarial robustness of neural networks through the lens of robust optimization and identified methods for both training and attacking neural networks that are reliable and, in a certain sense, universal.
Abstract: Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples—inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. Code and pre-trained models are available at this https URL and this https URL.
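The robust-optimization view frames both attack and defense around maximizing the loss inside a small perturbation ball. A minimal sketch of such a first-order (PGD-style) attack, using a toy logistic-regression "network" with a hand-derived gradient instead of the paper's deep networks (all parameter values here are illustrative):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """Projected gradient descent (PGD) attack on a logistic-regression
    model: ascend the loss within an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the cross-entropy loss w.r.t. the input.
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # sigmoid
        grad = (p - y) * w                          # d(loss)/dx
        # Ascent step on the loss, then project back onto the eps-ball.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# A point correctly classified as class 1 ...
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1.0
x_adv = pgd_attack(x, y, w, b)
# ... is pushed toward the decision boundary while staying in the eps-ball.
print(float(x_adv @ w + b) < float(x @ w + b))
```

Adversarial training in the paper then minimizes the loss at these worst-case points rather than at the clean inputs.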

3,581 citations


Journal ArticleDOI
TL;DR: The concrete security analysis suggests that a distributed SDN architecture that supports fault tolerance and consistency checks is important for SDN control plane security, and provides a framework for comparative analysis which focuses on essential network properties required by typical production networks.
Abstract: Software defined networking implements the network control plane in an external entity, rather than in each individual device as in conventional networks. This architectural difference implies a different design for control functions necessary for essential network properties, e.g., loop prevention and link redundancy. We explore how such differences redefine the security weaknesses in the SDN control plane and provide a framework for comparative analysis which focuses on essential network properties required by typical production networks. This enables analysis of how these properties are delivered by the control planes of SDN and conventional networks, and to compare security threats and mitigations. Despite the architectural difference, we find similar, but not identical, exposures in control plane security if both network paradigms provide the same network properties and are analyzed under the same threat model. However, defenses vary; SDN cannot depend on edge based filtering to protect its control plane, while this is arguably the primary defense in conventional networks. Our concrete security analysis suggests that a distributed SDN architecture that supports fault tolerance and consistency checks is important for SDN control plane security. Our analysis methodology may be of independent interest for future security analysis of SDN and conventional networks.

61 citations


Book ChapterDOI
29 Apr 2018
TL;DR: This paper revisits the multi-user (mu) security of symmetric encryption through an analysis of the AES-GCM-SIV AEAD scheme, and shows that its mu security is comparable to that achieved in the single-user setting.
Abstract: This paper revisits the multi-user (mu) security of symmetric encryption, from the perspective of delivering an analysis of the AES-GCM-SIV AEAD scheme. Our end result shows that its mu security is comparable to that achieved in the single-user setting. In particular, even when instantiated with short keys (e.g., 128 bits), the security of AES-GCM-SIV is not impacted by the collisions of two user keys, as long as each individual nonce is not re-used by too many users. Our bounds also improve existing analyses in the single-user setting, in particular when messages of variable lengths are encrypted. We also validate security against a general class of key-derivation methods, including one that halves the complexity of the final proposal.
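The central multi-user concern is collisions among user keys. As a hedged illustration (this is the naive birthday bound, not the paper's refined bound), the collision probability for u independent k-bit keys can be computed directly:

```python
# Birthday-style bound on the probability that two of u independent
# k-bit user keys collide: p <= u*(u-1) / 2^(k+1).
def key_collision_bound(u, k):
    return u * (u - 1) / 2 ** (k + 1)

# With 2^32 users and 128-bit keys the bound is about 2^-65, i.e.
# negligible; an analysis that instead lost a multiplicative factor
# of u would cost 32 bits of security, which is what tight mu bounds avoid.
p = key_collision_bound(2 ** 32, 128)
print(p < 2 ** -64)
```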

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors improved the device-independent security proofs of Kaniewski and Wehner [New J. Phys. 18, 055004 (2016)] for two-party cryptography with memoryless devices and added a security proof for device-independent position verification under different physical constraints on the adversary.
Abstract: Quantum communication has demonstrated its usefulness for quantum cryptography far beyond quantum key distribution. One domain is two-party cryptography, whose goal is to allow two parties who may not trust each other to solve joint tasks. Another interesting application is position-based cryptography, whose goal is to use the geographical location of an entity as its only identifying credential. Unfortunately, security of these protocols is not possible against an all-powerful adversary. However, if we impose some realistic physical constraints on the adversary, there exist protocols for which security can be proven, but these have so far relied on knowledge of the quantum operations performed during the protocols. In this work we improve the device-independent security proofs of Kaniewski and Wehner [New J. Phys. 18, 055004 (2016)] for two-party cryptography (with memoryless devices) and we add a security proof for device-independent position verification (also with memoryless devices) under different physical constraints on the adversary. We assess the quality of the devices by observing a Bell violation and, as for Kaniewski and Wehner, security can be attained for any violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality.
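The device-independence claim rests on certifying the devices through a CHSH violation. A small sketch of the CHSH combination, evaluated on the ideal quantum correlators for the standard optimal measurement angles (illustrative values, not the paper's protocol):

```python
import math

def chsh_value(E):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
    for a table of correlators E[(x, y)] with settings x, y in {0, 1}."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Ideal quantum correlators for the singlet state with the standard
# optimal angles: E(x, y) = -cos(angle between measurement directions).
angles_a = [0.0, math.pi / 2]
angles_b = [math.pi / 4, -math.pi / 4]
E = {(x, y): -math.cos(angles_a[x] - angles_b[y])
     for x in range(2) for y in range(2)}
S = chsh_value(E)
print(abs(S) > 2)  # exceeds the classical bound |S| <= 2
```

Any |S| > 2 certifies non-classical behavior of the devices, which is the quantity the security proofs consume.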

25 citations


Book ChapterDOI
02 Dec 2018
TL;DR: This chapter discusses local pseudorandom generators, which allow one to expand a short random string into a long pseudorandom string such that each output bit depends on a constant number d of input bits.
Abstract: Local pseudorandom generators allow one to expand a short random string into a long pseudorandom string, such that each output bit depends on a constant number d of input bits. Due to its extreme efficiency features, this intriguing primitive enjoys a wide variety of applications in cryptography and complexity. In the polynomial regime, where the seed is of size n and the output of size n^s for s > 1, the only known solution, commonly known as Goldreich's PRG, proceeds by applying a simple d-ary predicate to public random size-d subsets of the bits of the seed.
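A rough sketch of the construction described above, with an example XOR-AND-style predicate chosen purely for illustration (the choice of predicate is security-critical and is not vouched for here):

```python
import random

def goldreich_prg(seed_bits, m, d=5, predicate=None, rng_seed=0):
    """Sketch of Goldreich's local PRG: each output bit applies a fixed
    d-ary predicate to a public random size-d subset of the seed bits."""
    if predicate is None:
        # Example XOR-AND predicate on d = 5 bits (illustrative only).
        predicate = lambda b: b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])
    rng = random.Random(rng_seed)  # the subsets are public randomness
    n = len(seed_bits)
    subsets = [rng.sample(range(n), d) for _ in range(m)]
    return [predicate([seed_bits[i] for i in S]) for S in subsets]

seed_rng = random.Random(1)
seed = [seed_rng.randrange(2) for _ in range(32)]
out = goldreich_prg(seed, m=64)  # stretch 32 -> 64 bits, locality d = 5
print(len(out), set(out) <= {0, 1})
```

Only the short seed is secret; the subsets (and the predicate) are public, which is what makes each output bit computable from just d input bits.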

20 citations


Proceedings ArticleDOI
15 Oct 2018
TL;DR: This paper revisits the mu security of GCM and provides new concrete security bounds which improve upon previous work by adopting a refined parameterization of adversarial resources that highlights the impact on security of (1) nonce re-use across users and of (2) re-keying.
Abstract: Multi-user (mu) security considers large-scale attackers (e.g., state actors) that given access to a number of sessions, attempt to compromise at least one of them. Mu security of authenticated encryption (AE) was explicitly considered in the development of TLS 1.3. This paper revisits the mu security of GCM, which remains to date the most widely used dedicated AE mode. We provide new concrete security bounds which improve upon previous work by adopting a refined parameterization of adversarial resources that highlights the impact on security of (1) nonce re-use across users and of (2) re-keying. As one of the main applications, we give tight security bounds for the nonce-randomization mechanism adopted in the record protocol of TLS 1.3 as a mitigation of large-scale multi-user attacks. We provide tight security bounds that yield the first validation of this method. In particular, we solve the main open question of Bellare and Tackmann (CRYPTO '16), who only considered restricted attackers which do not attempt to violate integrity, and only gave non-tight bounds.
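The nonce-randomization mechanism analyzed here is specified in RFC 8446, Section 5.3: each record's GCM nonce is the 64-bit record sequence number, left-padded to the IV length, XORed with a secret random per-connection write IV. A minimal sketch:

```python
def tls13_per_record_nonce(write_iv: bytes, seq_num: int) -> bytes:
    """Per-record nonce derivation from RFC 8446, Section 5.3. Because
    the write IV is random and never sent on the wire, the effective
    nonces are randomized across connections, which is the mitigation
    against large-scale multi-user attacks that this paper proves tight."""
    iv_len = len(write_iv)  # 12 bytes for AES-GCM
    padded = seq_num.to_bytes(iv_len, "big")
    return bytes(a ^ b for a, b in zip(write_iv, padded))

iv = bytes.fromhex("0123456789abcdef01234567")  # example IV, not from any spec
n0 = tls13_per_record_nonce(iv, 0)
n1 = tls13_per_record_nonce(iv, 1)
print(n0 == iv, n0 != n1)  # seq 0 yields the IV itself; nonces never repeat
```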

17 citations


Posted Content
TL;DR: In this article, the authors revisited the multi-user security of GCM and provided tight security bounds for the nonce-randomization mechanism adopted in the record protocol of TLS 1.3.
Abstract: Multi-user (mu) security considers large-scale attackers (e.g., state actors) that given access to a number of sessions, attempt to compromise at least one of them. Mu security of authenticated encryption (AE) was explicitly considered in the development of TLS 1.3. This paper revisits the mu security of GCM, which remains to date the most widely used dedicated AE mode. We provide new concrete security bounds which improve upon previous work by adopting a refined parameterization of adversarial resources that highlights the impact on security of (1) nonce re-use across users and of (2) re-keying. As one of the main applications, we give tight security bounds for the nonce-randomization mechanism adopted in the record protocol of TLS 1.3 as a mitigation of large-scale multi-user attacks. We provide tight security bounds that yield the first validation of this method. In particular, we solve the main open question of Bellare and Tackmann (CRYPTO '16), who only considered restricted attackers which do not attempt to violate integrity, and only gave non-tight bounds.

13 citations


Journal ArticleDOI
TL;DR: This work analyzes the behavior of attackers who exploit zero-day vulnerabilities and predicts their attack timing; based on this prediction, it proposes a security measurement method that uses a learning strategy to model the information-sharing mechanism among multiple attackers and a spatial structure to model the long-term process.
Abstract: Security measurement matters to every stakeholder in network security, as it gives security practitioners precise awareness of their security posture. However, most existing works are not applicable to unknown threats. Moreover, existing efforts on security metrics mainly focus on the ease of a given attack from a theoretical point of view, ignoring the "likelihood of exploitation." To give administrators a better understanding, we analyze the behavior of attackers who exploit zero-day vulnerabilities and predict their attack timing. Based on this prediction, we propose a method of security measurement. In detail, we compute the optimal attack timing from the perspective of the attacker, using a long-term game to estimate the risk of being found and then choosing the optimal timing based on that risk and the expected profit. We design a learning strategy to model the information-sharing mechanism among multiple attackers and use a spatial structure to model the long-term process. After calculating the Nash equilibrium of each subgame, we take the likelihood of each node being attacked as the security metric. The experimental results show the efficiency of our approach.

11 citations


Dissertation
01 Jan 2018
TL;DR: A theoretical and experimental validation of a success condition for BKZ when solving the uSVP, which can be used to determine the minimal required block size, and a quantum version of the hybrid attack which replaces the classical meet-in-the-middle search by a quantum search.
Abstract: Over the past decade, lattice-based cryptography has emerged as one of the most promising candidates for post-quantum public-key cryptography. For most current lattice-based schemes, one can recover the secret key by solving a corresponding instance of the unique Shortest Vector Problem (uSVP), the problem of finding a shortest non-zero vector in a lattice that contains an unusually short vector. This work is concerned with the concrete hardness of the uSVP. In particular, we study the uSVP in general as well as instances of the problem with particularly small or sparse short vectors, which are used in cryptographic constructions to increase their efficiency. We study solving the uSVP in general via lattice reduction, more precisely, the Block-wise Korkine-Zolotarev (BKZ) algorithm. In order to solve an instance of the uSVP via BKZ, the applied block size, which specifies the BKZ algorithm, needs to be sufficiently large. However, a larger block size results in higher runtimes of the algorithm. It is therefore of utmost interest to determine the minimal block size that guarantees the success of solving the uSVP via BKZ. In this thesis, we provide a theoretical and experimental validation of a success condition for BKZ when solving the uSVP, which can be used to determine the minimal required block size. We further study the practical implications of using so-called sparsification techniques in combination with the above approach. With respect to uSVP instances with particularly small or sparse short vectors, we investigate so-called hybrid attacks. We first adapt the "hybrid lattice reduction and meet-in-the-middle attack" (or short: the hybrid attack) by Howgrave-Graham on the NTRU encryption scheme to the uSVP. Due to this adaptation, the attack can be applied to a larger class of lattice-based cryptosystems. In addition, we enhance the runtime analysis of the attack, e.g., by an explicit calculation of the involved success probabilities.
As a next step, we improve the hybrid attack in two directions as described in the following. To reflect the potential of a modern attacker on classical computers, we show how to parallelize the attack. We show that our parallel version of the hybrid attack scales well within realistic parameter ranges. Our theoretical analysis is supported by practical experiments, using our implementation of the parallel hybrid attack which employs Open Multi-Processing and the Message Passing Interface. To reflect the power of a potential future attacker who has access to a large-scale quantum computer, we develop a quantum version of the hybrid attack which replaces the classical meet-in-the-middle search by a quantum search. Not only is the quantum hybrid attack faster than its classical counterpart, but also applicable to a wider range of uSVP instances (and hence to a larger number of lattice-based schemes) as it uses a quantum search which is sensitive to the distribution on the search space. Finally, we demonstrate the practical relevance of our results by using the techniques developed in this thesis to evaluate the concrete security levels of the lattice-based schemes submitted to the US National Institute of Standards and Technology’s process of standardizing post-quantum public-key cryptography.
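Success conditions of this kind are commonly instantiated as the so-called "2016 estimate": BKZ with block size b succeeds on a d-dimensional uSVP instance when the projection of the short vector is shorter than the expected Gram-Schmidt norm at position d - b. A hedged sketch (the formula is the standard estimate from the literature, and the example instance below uses illustrative parameters, not the thesis's):

```python
import math

def delta(b):
    """Root-Hermite factor commonly predicted for BKZ with block size b."""
    return ((b / (2 * math.pi * math.e)) * (math.pi * b) ** (1 / b)) \
        ** (1 / (2 * (b - 1)))

def bkz_succeeds(b, d, norm_v, log2_vol):
    """'2016 estimate' success condition for solving uSVP with BKZ-b:
    sqrt(b/d) * ||v||  <=  delta(b)^(2b - d - 1) * Vol(L)^(1/d)."""
    lhs = math.log2(math.sqrt(b / d) * norm_v)
    rhs = (2 * b - d - 1) * math.log2(delta(b)) + log2_vol / d
    return lhs <= rhs

def minimal_block_size(d, norm_v, log2_vol):
    for b in range(60, d + 1):
        if bkz_succeeds(b, d, norm_v, log2_vol):
            return b
    return None

# Hypothetical uSVP instance: dimension 400, q-ary lattice with
# log2(Vol) = 200 * log2(3329), short vector of norm ~ 2 * sqrt(d).
b = minimal_block_size(400, 2.0 * math.sqrt(400), 200 * math.log2(3329))
print(b is not None and 60 < b < 400)
```

Scanning for the smallest successful b mirrors how such conditions are used to derive concrete security levels from a scheme's parameters.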

8 citations


Proceedings ArticleDOI
23 Apr 2018
TL;DR: This paper explores the engineering of secure and resilient systems through a detailed examination of security strategies and principles as presented in Appendix F of the recently published NIST SP 800-160.
Abstract: This paper explores the engineering of secure and resilient systems through a detailed examination of security strategies and principles as presented in Appendix F of the recently published National Institute of Standards and Technology Special Publication (NIST SP) 800-160. First, a brief introduction to systems security engineering is provided, with recommended readings for those who desire to become more familiar with the specialty domain. Next, the NIST SP 800-160 Appendix F systems security strategies and principles are described, as well as examined for implementation considerations. This examination and mapping provides a linkage of abstract security strategies to concrete security principles which can be more directly implemented, traced, and tested.

6 citations


Journal ArticleDOI
TL;DR: This paper focuses on the security of input information in a class of system identification problems with noise and binary-valued observations, presents a cryptography-based security protocol, and improves it within the range of allowed errors.
Abstract: Traditional control theory pays little attention to information security problems in system identification, even though they are important in practical applications. This paper focuses on the security of input information in a class of system identification problems with noise and binary-valued observations, presents a cryptography-based security protocol, and improves it within the range of allowed errors. While the identification problem is being solved, the improved security protocol ensures that the input information is not leaked, and thus can deal with passive attacks effectively. Besides, a quantitative relationship among the input information, the public key used for encryption, and the number of participants in the improved protocol is shown. Finally, the simulation results show that the identification algorithm still achieves its estimation accuracy when the improved security protocol is added. However, compared with the original identification algorithm, the time complexity of the algorithm with the improved security protocol increases.
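The abstract does not specify which encryption scheme the protocol uses. As a hedged illustration of how public-key encryption can hide inputs while still permitting the aggregation an identification algorithm needs, here is a toy additively homomorphic (Paillier-style) sketch with deliberately tiny, insecure parameters:

```python
import math
import random

# Toy Paillier keypair (additively homomorphic). The primes are tiny and
# purely illustrative; real deployments need moduli of 2048+ bits.
p, q = 1789, 1579
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid decryption helper because g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: the product of two ciphertexts decrypts to the
# sum of the plaintexts, so inputs can be aggregated without being revealed.
c = (enc(1200) * enc(34)) % n2
print(dec(c))  # 1234
```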

Journal ArticleDOI
01 Jul 2018
TL;DR: This article presents a signature scheme that uses partially blind signatures and chameleon hash functions such that both the prover and verifier achieve efficient authentication.
Abstract: Since the concept of anonymous credential systems was introduced in 1985, a number of similar systems have been proposed. However, these systems use zero-knowledge protocols to authenticate users, resulting in inefficient authentication during the stage of proving credential possession. To overcome this drawback, this article presents a signature scheme that uses partially blind signatures and chameleon hash functions so that both the prover and the verifier achieve efficient authentication. In addition to a computational-cost comparison table showing that the proposed signature scheme achieves more efficient credential-possession proving than other schemes, concrete security proofs are provided in the random oracle model to demonstrate that the proposed scheme satisfies the properties of anonymous credentials.
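A chameleon hash lets the trapdoor holder find collisions at will, which is what enables efficient proofs of possession. A toy discrete-log sketch (tiny, illustrative parameters; this is the textbook construction, not necessarily the scheme proposed in the article):

```python
# Toy discrete-log chameleon hash CH(m, r) = g^m * h^r mod p, with
# trapdoor x such that h = g^x. Parameters are tiny and insecure.
p, q, g = 2039, 1019, 4   # p = 2q + 1; g generates the order-q subgroup
x = 123                   # trapdoor (held by the hash owner)
h = pow(g, x, p)

def ch(m, r):
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new):
    """With the trapdoor, find r_new such that CH(m_new, r_new) = CH(m, r):
    solve m + x*r = m_new + x*r_new (mod q) for r_new."""
    return ((m - m_new) * pow(x, -1, q) + r) % q

m, r = 777, 42
m_new = 555
r_new = collide(m, r, m_new)
print(ch(m, r) == ch(m_new, r_new))  # trapdoor holder collides freely
```

Without the trapdoor x, finding such collisions is as hard as computing discrete logarithms in the group, which is the property the security proofs rely on.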