
Showing papers on "Concrete security published in 2019"


Proceedings ArticleDOI
06 Nov 2019
TL;DR: This work introduces SPHINCS+, a stateless hash-based signature framework, along with a new few-time signature scheme called FORS; using the abstraction of tweakable hash functions, it gives a security reduction for SPHINCS+ and derives secure parameters in accordance with the resulting bound.
Abstract: We introduce SPHINCS+, a stateless hash-based signature framework. SPHINCS+ has significant advantages over the state of the art in terms of speed, signature size, and security, and is among the nine remaining signature schemes in the second round of the NIST PQC standardization project. One of our main contributions in this context is a new few-time signature scheme that we call FORS. Our second main contribution is the introduction of tweakable hash functions and a demonstration of how they allow for a unified security analysis of hash-based signature schemes. We give a security reduction for SPHINCS+ using this abstraction and derive secure parameters in accordance with the resulting bound. Finally, we present speed results for our optimized implementation of SPHINCS+ and compare it to SPHINCS-256, Gravity-SPHINCS, and Picnic.
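
As a rough illustration of the tweakable-hash abstraction (a toy SHA-256 instantiation invented for exposition, not the functions specified for SPHINCS+), each hash call is keyed by a public seed and a position-dependent tweak:

```python
import hashlib

def tweakable_hash(pub_seed: bytes, tweak: bytes, msg: bytes) -> bytes:
    """Toy tweakable hash Th(P, T, M): every call is domain-separated by a
    public seed and a per-position tweak (e.g., an address in the hash tree)."""
    return hashlib.sha256(pub_seed + tweak + msg).digest()

# Two calls on the same message at different tree positions give
# independent-looking outputs, which unified security analyses exploit.
seed = b"\x00" * 32
h1 = tweakable_hash(seed, b"layer0|node0", b"message")
h2 = tweakable_hash(seed, b"layer0|node1", b"message")
assert h1 != h2
```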

124 citations


Journal ArticleDOI
TL;DR: It is argued that masking with non-independent leakages may still provide improved security levels in certain scenarios, and it is shown that the tradeoff between measurement complexity and key enumeration time complexity in divide-and-conquer side-channel attacks can be lower-bounded via the mutual information metric.
Abstract: We investigate the relationship between theoretical studies of leaking cryptographic devices and concrete security evaluations with standard side-channel attacks. Our contributions are in four parts. First, we connect the formal analysis of the masking countermeasure proposed by Duc et al. (Eurocrypt 2014) with the Eurocrypt 2009 evaluation framework for side-channel key recovery attacks. In particular, we re-state their main proof for the masking countermeasure based on a mutual information metric, which is frequently used in concrete physical security evaluations. Second, we discuss the tightness of the Eurocrypt 2014 bounds based on experimental case studies. This allows us to conjecture a simplified link between the mutual information metric and the success rate of a side-channel adversary, ignoring technical parameters and proof artifacts. Third, we introduce heuristic (yet well-motivated) tools for the evaluation of the masking countermeasure when its independent leakage assumption is not perfectly fulfilled, as is frequently encountered in practice. Thanks to these tools, we argue that masking with non-independent leakages may provide improved security levels in certain scenarios. Finally, we consider the tradeoff between the measurement complexity and the key enumeration time complexity in divide-and-conquer side-channel attacks and show that these complexities can be lower bounded based on the mutual information metric, using simple and efficient algorithms. The combination of these observations enables significant reductions of the evaluation costs for certification bodies.
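
As a flavor of the mutual information metric such evaluations compute, here is a minimal numerical sketch for a single secret bit leaking through additive Gaussian noise (a toy leakage model with illustrative parameters, far simpler than a real masked implementation):

```python
import numpy as np

sigma = 1.0                       # assumed noise standard deviation
x = np.linspace(-8, 10, 20001)    # integration grid covering both leakage modes
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

# Secret bit s in {0, 1} leaks as L = s + N(0, sigma^2).
p0, p1 = gauss(x, 0.0, sigma), gauss(x, 1.0, sigma)
mix = 0.5 * p0 + 0.5 * p1

def diff_entropy(p):
    """Differential entropy in bits, integrated numerically."""
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz])) * dx

# MI(S; L) = H(L) - H(L | S); H(L | S) is the entropy of a single Gaussian.
mi = diff_entropy(mix) - diff_entropy(p0)
print(f"MI(S; L) = {mi:.4f} bits")   # shrinks toward 0 as sigma grows
```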

50 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: Noise Explorer is presented, an online engine for designing, reasoning about, formally verifying, and implementing arbitrary Noise Handshake Patterns; it can parse formal verification results to generate detailed-but-pedagogical reports on the exact security goals of each message of a Noise Handshake Pattern.
Abstract: The Noise Protocol Framework, introduced recently, allows for the design and construction of secure channel protocols by describing them through a simple, restricted language from which complex key derivation and local state transitions are automatically inferred. Noise "Handshake Patterns" can support mutual authentication, forward secrecy, zero round-trip encryption, identity hiding and other advanced features. Since the framework's release, Noise-based protocols have been adopted by WhatsApp, WireGuard and other high-profile applications. We present Noise Explorer, an online engine for designing, reasoning about, formally verifying and implementing arbitrary Noise Handshake Patterns. Based on our formal treatment of the Noise Protocol Framework, Noise Explorer can validate any Noise Handshake Pattern and then translate it into a model ready for automated verification and also into a production-ready software implementation written in Go or in Rust. We use Noise Explorer to analyze more than 57 handshake patterns. We confirm the stated security goals for 12 fundamental patterns and provide precise properties for the rest. We also analyze unsafe handshake patterns and document weaknesses that occur when validity rules are not followed. All of this work is consolidated into a usable online tool that presents a compendium of results and can parse formal verification results to generate detailed-but-pedagogical reports regarding the exact security goals of each message of a Noise Handshake Pattern with respect to each party, under an active attacker and including malicious principals. Noise Explorer evolves alongside the standard Noise Protocol Framework, having already contributed new security goal verification results and stronger definitions for pattern validation and security parameters.
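
To give a flavor of the key derivation the framework infers automatically, here is a simplified sketch of Noise-style key mixing, modeled on the specification's HKDF/MixKey construction (the protocol-name handling is simplified and the DH outputs are placeholder byte strings, not values from a real handshake):

```python
import hmac, hashlib

def hkdf2(chaining_key: bytes, ikm: bytes):
    """Two-output HKDF as in Noise's MixKey (an HMAC-SHA256 chain)."""
    temp = hmac.new(chaining_key, ikm, hashlib.sha256).digest()
    out1 = hmac.new(temp, b"\x01", hashlib.sha256).digest()
    out2 = hmac.new(temp, out1 + b"\x02", hashlib.sha256).digest()
    return out1, out2   # new chaining key, new encryption key

# Each DH output along the handshake (ee, es, se, ss tokens) is mixed in:
ck = hashlib.sha256(b"Noise_XX_25519_ChaChaPoly_SHA256").digest()
for dh_output in (b"ee-shared-secret", b"es-shared-secret"):  # placeholders
    ck, k = hkdf2(ck, dh_output)
print("transport key material:", k.hex()[:16], "...")
```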

29 citations


Proceedings ArticleDOI
31 Jan 2019
TL;DR: This work is the first to formally analyze and, importantly, verify an Open Banking security profile, based on an existing comprehensive model of the web infrastructure - the Web Infrastructure Model (WIM) proposed by Fett, Küsters, and Schmitz.
Abstract: Forced by regulations and industry demand, banks worldwide are working to open their customers' online banking accounts to third-party services via web-based APIs. By using these so-called Open Banking APIs, third-party companies, such as FinTechs, are able to read information about and initiate payments from their users' bank accounts. Such access to financial data and resources needs to meet particularly high security requirements to protect customers. One of the most promising standards in this segment is the OpenID Financial-grade API (FAPI), currently under development in an open process by the OpenID Foundation and backed by large industry partners. The FAPI is a profile of OAuth 2.0 designed for high-risk scenarios and aiming to be secure against very strong attackers. To achieve this level of security, the FAPI employs a range of mechanisms that have been developed to harden OAuth 2.0, such as Code and Token Binding (including mTLS and OAUTB), JWS Client Assertions, and Proof Key for Code Exchange. In this paper, we perform a rigorous, systematic formal analysis of the security of the FAPI, based on an existing comprehensive model of the web infrastructure - the Web Infrastructure Model (WIM) proposed by Fett, Küsters, and Schmitz. To this end, we first develop a precise model of the FAPI in the WIM, including different profiles for read-only and read-write access, different flows, different types of clients, and different combinations of security features, capturing the complex interactions in a web-based environment. We then use our model of the FAPI to precisely define central security properties. In an attempt to prove these properties, we uncover partly severe attacks, breaking authentication, authorization, and session integrity properties. We develop mitigations against these attacks and finally are able to formally prove the security of a fixed version of the FAPI. Although financial applications are high-stakes environments, this work is the first to formally analyze and, importantly, verify an Open Banking security profile. By itself, this analysis is an important contribution to the development of the FAPI since it helps to define exact security properties and attacker models, and to avoid severe security risks before the first implementations of the standard go live. Of independent interest, we also uncover weaknesses in the aforementioned security mechanisms for hardening OAuth 2.0. We illustrate that these mechanisms do not necessarily achieve the security properties they have been designed for.
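
Among the hardening mechanisms listed, Proof Key for Code Exchange is the easiest to sketch. The snippet below shows the standard S256 challenge computation from RFC 7636 (generic PKCE, not code from the FAPI analysis):

```python
import base64, hashlib, secrets

# The client creates a one-time verifier and sends only its hash (the
# challenge) with the authorization request; the verifier itself accompanies
# the later token request, binding the two requests together.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=")
digest = hashlib.sha256(code_verifier).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
print("code_challenge (S256):", code_challenge)
```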

22 citations


Posted Content
TL;DR: It is found that current instantiations using k-bit wire labels can be completely broken—in the sense that the circuit evaluator learns all the inputs of the circuit garbler in time \(O(2^k/C)\), where C is the total number of (non-free) gates that are garbled.
Abstract: We study the concrete security of high-performance implementations of half-gates garbling, which all rely on (hardware-accelerated) AES. We find that current instantiations using k-bit wire labels can be completely broken—in the sense that the circuit evaluator learns all the inputs of the circuit garbler—in time \(O(2^k/C)\), where C is the total number of (non-free) gates that are garbled, possibly across multiple independent executions. The attack can be applied to existing circuit-garbling libraries using \(k=80\) when \(C \approx 10^9\), and would require about 267 machine-months and cost about $3500 to implement on the Google Cloud Platform. Since the attack can be fully parallelized, it could be carried out in about a month using approximately 250 machines.
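
The \(O(2^k/C)\) figure is easy to sanity-check. The sketch below redoes the arithmetic; the per-machine guessing rate is back-derived from the quoted 267 machine-months, not a measured number:

```python
# Back-of-the-envelope for the half-gates attack cost O(2^k / C).
k = 80                         # wire-label length in bits
C = 10**9                      # number of (non-free) garbled gates observed
guesses = 2**k / C             # ~1.2e15 candidate label evaluations

# Effective per-machine rate implied by the paper's 267 machine-month figure;
# each candidate costs more than one raw AES call (derivations plus lookups).
rate = 1.75e6                  # guesses per second per machine (derived)
machine_months = guesses / rate / (30 * 24 * 3600)
print(f"{guesses:.3g} guesses -> {machine_months:.0f} machine-months")
```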

19 citations


Book ChapterDOI
19 May 2019
TL;DR: In this article, the authors observe that other resource limitations, most notably the attacker's memory, can make the achievable advantage smaller than time/query-based bounds suggest, and they propose a way to handle memory limitations in concrete security proofs.
Abstract: Concrete security proofs give upper bounds on the attacker’s advantage as a function of its time/query complexity. Cryptanalysis suggests however that other resource limitations – most notably, the attacker’s memory – could make the achievable advantage smaller, and thus these proven bounds too pessimistic. Yet, handling memory limitations has eluded existing security proofs.

16 citations


Book ChapterDOI
23 Sep 2019
TL;DR: Chow closed the gap between theory and practice by introducing a new entity called an identity-certifying authority (ICA) and proposing an anonymous key-issuing protocol, which allows the users, KGC, and ICA to interactively generate secret keys without users ever having to reveal their identities to the KGC.
Abstract: The key escrow problem is one of the main barriers to the widespread real-world use of identity-based encryption (IBE). Specifically, a key generation center (KGC), which generates secret keys for a given identity, has the power to decrypt all ciphertexts. At PKC 2009, Chow defined a notion of security against the KGC that relies on assuming that it cannot discover the underlying identities behind ciphertexts. However, this is not a realistic assumption since, in practice, the KGC manages an identity list and hence it can easily guess the identities corresponding to given ciphertexts. Chow later closed the gap between theory and practice by introducing a new entity called an identity-certifying authority (ICA) and proposing an anonymous key-issuing protocol. Essentially, this allows the users, KGC, and ICA to interactively generate secret keys without users ever having to reveal their identities to the KGC. Unfortunately, the proposed protocol did not include a concrete security definition, meaning that all of the subsequent works following Chow lack the formal proofs needed to determine whether or not the protocol delivers a secure solution to the key escrow problem.

15 citations


Book ChapterDOI
05 Jun 2019
TL;DR: A new provably secure ephemeral-only RLWE+Rounding-based key exchange protocol is introduced, along with an approach to more accurately estimate the security level of the RLWE problem given only one sample.
Abstract: In this paper, we introduce a new provably secure ephemeral-only RLWE+Rounding-based key exchange protocol and an approach to more accurately estimate the security level of the RLWE problem given only one sample. Since our scheme is an ephemeral-only key exchange, it generates only one RLWE sample per protocol execution. We carefully analyze how to estimate the practical security of the RLWE problem with only one sample, which we call the ONE-sample RLWE problem. Our approach differs from existing approaches, which are based on estimation with multiple RLWE samples. Though our analysis builds on some recently developed techniques from Darmstadt, this type of practical security estimate has not been done before, and it produces security estimates substantially different from earlier ones based on multiple RLWE samples. We show that the new design simultaneously improves the security and reduces the communication cost of the protocol by using the one RLWE+Rounding sample technique. We also present two parameter choices ensuring a \(2^{-60}\) key exchange failure probability, covering the security of AES-128/192/256, with concrete security analysis and implementation. We believe that our construction is secure, simple, efficient, and elegant with wide application prospects.
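
A minimal sketch of how a single RLWE+Rounding sample arises (toy ring degree, moduli, and secret distribution chosen for illustration; they bear no relation to the paper's parameter sets):

```python
import numpy as np

n, q, p = 8, 12289, 1024     # toy ring degree and moduli (illustrative only)
rng = np.random.default_rng(0)

def ring_mul(a, b):
    """Multiply in Z_q[x]/(x^n + 1): negacyclic convolution."""
    c = np.zeros(2 * n - 1, dtype=np.int64)
    for i in range(n):
        c[i:i + n] += a[i] * b
    return (c[:n] - np.concatenate([c[n:], [0]])) % q

a = rng.integers(0, q, n)    # public uniform ring element
s = rng.integers(-1, 2, n)   # small secret (ternary here for illustration)

# Rounding replaces the explicit RLWE error term: b = round((p/q)*(a*s)) mod p.
b = np.rint(ring_mul(a, s % q) * p / q).astype(np.int64) % p
print("the single published sample (a, b):")
print(a)
print(b)
```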

6 citations


Posted Content
TL;DR: Starting from a commit-and-open identification scheme, where the prover first commits to several strings and then, as a second message, opens a subset of them depending on the verifier's message, there is a tight quantum reduction for the Fiat-Shamir transform to special soundness notions.
Abstract: Applying the Fiat-Shamir transform to identification schemes is one of the main ways of constructing signature schemes. While the classical security of this transformation is well understood, it is only very recently that generic results for the quantum case have been proposed [DFMS19,LZ19]. These results are asymptotic and therefore cannot be used to derive the concrete security of these signature schemes without a significant loss in parameters. In this paper, we show that if we start from a commit-and-open identification scheme, where the prover first commits to several strings and then as a second message opens a subset of them depending on the verifier's message, then there is a tight quantum reduction for the Fiat-Shamir transform to special soundness notions. Our work applies to most 3-round schemes of this form and can be used immediately to derive the concrete quantum security of signature schemes. We apply our techniques to several identification schemes that lead to signature schemes: Stern's identification scheme based on coding problems, the [KTX08] identification scheme based on lattice problems, the [SSH11] identification schemes based on multivariate problems (closely related to the NIST candidate MQDSS), and the PICNIC scheme based on multiparty computation problems, which is also a NIST candidate.
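
A generic commit-and-open round with Fiat-Shamir applied can be sketched as follows; the hash-based commitments, set sizes, and challenge-to-subset mapping are illustrative template choices, not any specific scheme analyzed in the paper:

```python
import hashlib, secrets

T, OPEN = 8, 4      # commit to T strings, open OPEN of them (illustrative)

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

msg = b"message to sign"
strings_list = [secrets.token_bytes(16) for _ in range(T)]   # prover's strings
rand_list = [secrets.token_bytes(16) for _ in range(T)]      # commit randomness
commits = [H(r, s) for r, s in zip(rand_list, strings_list)]

# Fiat-Shamir: the verifier's challenge becomes a hash of the message and all
# commitments, mapped here to a subset of positions to open.
chal = H(msg, *commits)
subset = []
for byte in chal:
    idx = byte % T
    if idx not in subset:
        subset.append(idx)
    if len(subset) == OPEN:
        break
opened = [(i, rand_list[i], strings_list[i]) for i in subset]

# Verification: recompute the challenge and check each opened commitment.
assert H(msg, *commits) == chal
assert all(H(r, s) == commits[i] for i, r, s in opened)
print("opened positions:", subset)
```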

5 citations


Journal ArticleDOI
TL;DR: In this paper, a novel method for security-level classification (SLC) based on power system partitioning is proposed, in which the power system is partitioned into different subareas satisfying different N-k contingencies.
Abstract: Secure and reliable operation of power systems is a crucial factor in the security of power supply, and security assessment is an effective way to evaluate it. In order to evaluate the specific security status of a power system, a novel method for security-level classification (SLC) based on power system partitioning is proposed here. In this method, the power system is partitioned into different subareas satisfying different N-k contingencies. Then, the mutual power supply between subareas is coordinated to obtain the total supply capacity (TSC) under N-k contingencies. The security margin (SM) index, average system disequilibrium (ASD) index, and comprehensive safety index (CSI) are applied to assess the security of the power system. In addition, the threshold crossing (TC) index and the loss rate of load (LRL) index are applied to assess unsafe conditions. According to the above procedure, power system security states are classified into five levels, and a quantitative criterion to determine the exact security level is given. Finally, a practical power system and the IEEE 118-bus test system are used to validate the feasibility of security classification based on N-k contingency partitioning.
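
A skeleton of such a classification pipeline might look as follows; the security-margin formula and the level thresholds below are hypothetical stand-ins, since the abstract does not spell out the paper's exact index definitions:

```python
def security_margin(tsc_mw: float, load_mw: float) -> float:
    """HYPOTHETICAL security-margin index: spare supply capacity under
    N-k contingencies, as a fraction of total supply capacity (TSC)."""
    return (tsc_mw - load_mw) / tsc_mw

def security_level(sm: float) -> int:
    """Map the margin to one of five levels (thresholds are placeholders)."""
    thresholds = [0.30, 0.20, 0.10, 0.0]       # level 1 is most secure
    return next((lvl + 1 for lvl, t in enumerate(thresholds) if sm >= t), 5)

# Example: a subarea with 5200 MW of TSC serving 4500 MW of load.
sm = security_margin(5200.0, 4500.0)
print(f"SM = {sm:.2f} -> security level {security_level(sm)}")
```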

5 citations


Book ChapterDOI
14 Apr 2019
TL;DR: A new LAF family in which the tag size is only linear in n is presented, and it is shown how to modify the scheme so as to prove it (almost) tightly secure, meaning that the security reduction does not incur a loss proportional to the number of adversarial queries.
Abstract: Lossy algebraic filters (LAFs) are function families where each function is parametrized by a tag, which determines if the function is injective or lossy. While initially introduced by Hofheinz (Eurocrypt 2013) as a technical tool to build encryption schemes with key-dependent message chosen-ciphertext (KDM-CCA) security, they also find applications in the design of robustly reusable fuzzy extractors. So far, the only known LAF family requires tags comprised of \(\varTheta (n^2)\) group elements for functions with input space \(\mathbb {Z}_p^n\), where p is the group order. In this paper, we describe a new LAF family where the tag size is only linear in n and prove it secure under simple assumptions in asymmetric bilinear groups. Our construction can be used as a drop-in replacement in all applications of the initial LAF system. In particular, it can shorten the ciphertexts of Hofheinz’s KDM-CCA-secure public-key encryption scheme by 19 group elements. It also allows substantial space improvements in a recent fuzzy extractor proposed by Wen and Liu (Asiacrypt 2018). As a second contribution, we show how to modify our scheme so as to prove it (almost) tightly secure, meaning that security reductions are not affected by a concrete security loss proportional to the number of adversarial queries.
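
The injective-versus-lossy dichotomy behind LAFs can be illustrated with a deliberately simple linear-algebra example over \(\mathbb{Z}_p\) (this shows only the lossiness concept; the paper's construction is pairing-based with tag-dependent algebra):

```python
import numpy as np

p, n = 101, 4
rng = np.random.default_rng(7)

# "Injective tag": a full-rank matrix, so f(x) = Mx mod p loses nothing.
# Unit upper-triangular matrices have determinant 1, hence are invertible.
M_inj = np.eye(n, dtype=np.int64) + np.triu(rng.integers(0, p, (n, n)), 1)

# "Lossy tag": a rank-1 matrix, so many inputs collide and information is lost.
u = rng.integers(1, p, (n, 1))
v = rng.integers(1, p, (1, n))
M_lossy = (u @ v) % p

x1 = rng.integers(0, p, n)
delta = np.array([v[0, 1], -v[0, 0], 0, 0])   # nonzero vector with v . delta = 0
x2 = (x1 + delta) % p

assert not np.array_equal(x1, x2)
assert np.array_equal((M_lossy @ x1) % p, (M_lossy @ x2) % p)   # collision
assert not np.array_equal((M_inj @ x1) % p, (M_inj @ x2) % p)   # kept apart
print("lossy tag collides; injective tag separates")
```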

Journal ArticleDOI
TL;DR: This paper proposes a method, based upon the notion of a knowledge base, for helping developers devise more secure applications from the threat-modelling step up to the testing step; an evaluation with 24 participants gives very encouraging results in terms of two criteria, comprehensibility and effectiveness.
Abstract: This paper tackles the problems of choosing security solutions and writing concrete security test cases for software, two tasks of the software life cycle requiring time, expertise, and experience. We propose a method, based upon the notion of a knowledge base, for helping developers devise more secure applications from the threat-modelling step up to the testing step. The first stage of the approach consists of the acquisition and integration of publicly available security data into a data store. This data store is used to assist developers in the design of attack-defense trees (ADTrees) expressing an attacker's possibilities to compromise an application and the defenses that may be implemented. These defenses are given in the form of security pattern combinations, a security pattern being a generic and reusable solution for designing more secure applications. In the second stage, these trees are used to guide developers in test case generation. Test verdicts show whether an application is vulnerable to the threats modelled by an ADTree and whether the consequences of the chosen security patterns are observed in the application (a consequence leading to observable events that partly show a pattern is correctly implemented). We applied this approach to web applications and evaluated it with 24 participants. The results are very encouraging in terms of two criteria: comprehensibility and effectiveness.
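
A minimal sketch of the attack-defense tree structure the method is built around (node semantics deliberately simplified, with an attack succeeding when uncountered; the labels are illustrative, not entries from the paper's knowledge base):

```python
from dataclasses import dataclass, field

@dataclass
class ADNode:
    label: str
    kind: str                       # "attack" or "defense"
    gate: str = "OR"                # how attack children combine: "OR"/"AND"
    children: list = field(default_factory=list)
    countered: bool = False         # set when a defense pattern is implemented

def attack_succeeds(node: ADNode) -> bool:
    """An attack node succeeds if it is not countered and its gate over
    child attacks holds; an uncountered leaf succeeds by default."""
    if node.countered:
        return False
    kids = [c for c in node.children if c.kind == "attack"]
    if not kids:
        return True
    combine = any if node.gate == "OR" else all
    return combine(attack_succeeds(c) for c in kids)

root = ADNode("compromise web app", "attack", "OR", [
    ADNode("SQL injection", "attack", countered=True),   # e.g., input-validation pattern
    ADNode("session hijacking", "attack"),
])
print("application still vulnerable:", attack_succeeds(root))   # True
```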

Book ChapterDOI
06 Dec 2019
TL;DR: A combined signature scheme is proposed that both reduces the public key size of the UOV signature scheme and provides tighter security against chosen-message attacks in the random oracle model.
Abstract: Multivariate public key cryptography, which relies on the multivariate quadratic (MQ) problem, is one of the main approaches to guaranteeing the security of communication in the post-quantum world. In this paper, we focus mainly on the yet unbroken (under proper parameter choices) Unbalanced Oil and Vinegar (UOV) scheme and discuss its exact security. We then propose a combined signature scheme that (1) reduces the public key size of the UOV signature scheme and (2) provides tighter security against chosen-message attacks in the random oracle model. In addition, we propose a novel aggregate signature scheme based on the UOV signature scheme. Additionally, we give a security proof for our aggregate signature scheme under the security of our proposed signature scheme.
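
The oil-and-vinegar trapdoor that both proposed schemes build on can be sketched with toy parameters (a schematic of plain UOV signing and verification only, with no security at these sizes, and not the paper's combined or aggregate construction):

```python
import numpy as np

p = 31            # small prime field, toy size only (no security)
v, m = 6, 3       # vinegar / oil variable counts
n = v + m
rng = np.random.default_rng(1)

def solve_mod_p(A, b):
    """Solve A x = b over GF(p) by Gauss-Jordan elimination; None if singular."""
    A, b = A.copy() % p, b.copy() % p
    d = len(b)
    for col in range(d):
        piv = next((r for r in range(col, d) if A[r, col] != 0), None)
        if piv is None:
            return None
        A[[col, piv]], b[[col, piv]] = A[[piv, col]], b[[piv, col]]
        inv = pow(int(A[col, col]), p - 2, p)
        A[col], b[col] = (A[col] * inv) % p, (b[col] * inv) % p
        for r in range(d):
            if r != col and A[r, col]:
                f = A[r, col]
                A[r], b[r] = (A[r] - f * A[col]) % p, (b[r] - f * b[col]) % p
    return b

# Central map: m random quadratic forms with NO oil-times-oil monomials.
M = rng.integers(0, p, size=(m, n, n))
M[:, v:, v:] = 0

# Secret invertible change of variables T; public key Q_k = T^t M_k T.
while True:
    T = rng.integers(0, p, size=(n, n))
    if solve_mod_p(T, np.zeros(n, dtype=np.int64)) is not None:
        break
Q = np.array([(T.T @ Mk @ T) % p for Mk in M])

def sign(t):
    """Fix random vinegar values; the central map becomes linear in the oil
    variables, so invert it with one linear solve (retry if singular)."""
    while True:
        xv = rng.integers(0, p, size=v)
        L = np.zeros((m, m), dtype=np.int64)
        c = np.zeros(m, dtype=np.int64)
        for k in range(m):
            B, C = M[k, :v, v:], M[k, v:, :v]
            L[k] = (xv @ B + xv @ C.T) % p
            c[k] = (xv @ M[k, :v, :v] @ xv) % p
        xo = solve_mod_p(L, (t - c) % p)
        if xo is not None:
            return solve_mod_p(T, np.concatenate([xv, xo]))

def verify(s, t):
    return all((s @ Qk @ s) % p == tk for Qk, tk in zip(Q, t))

t = rng.integers(0, p, size=m)   # stands in for the hashed message
s = sign(t)
print("signature verifies:", verify(s, t))
```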

Book ChapterDOI
22 Nov 2019
TL;DR: The security of NCASH, a newly proposed non-linear cellular-automata-based hash function, is analyzed; its security bounds, examined using the random oracle model, indicate better security compared with most other acclaimed existing schemes.
Abstract: In this work, we study the security of NCASH, a newly proposed non-linear cellular-automata-based hash function. The uncomplicated structure of this double-block-length hash function motivates us to scrutinize its construction by analyzing the security of the design. Here, we perform a security analysis with respect to the standard model of concrete security. In addition, structural security has been investigated through correlation analysis. We examine the security bound of this scheme using the random oracle model. The preimage or second-preimage resistance and collision resistance of NCASH-256 are \(2^{256}\) and \(2^{128}\), respectively. To the best of our knowledge, these bounds provide better security compared with most of the other acclaimed existing schemes.
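
Note that the quoted bounds match the generic ideal ones for a 256-bit digest: \(2^{256}\) for (second) preimages and \(2^{128}\) for collisions via the birthday bound. For intuition about the non-linear CA primitive, here is one step of the classic non-linear elementary rule 30 (chosen for illustration; NCASH's actual rules and structure may differ):

```python
def rule30_step(cells: list[int]) -> list[int]:
    """One synchronous step of the non-linear elementary CA rule 30:
    new[i] = left XOR (center OR right), with circular boundary."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

state = [0] * 15 + [1] + [0] * 16        # a single seeded cell
for _ in range(5):
    state = rule30_step(state)
    print("".join(".#"[c] for c in state))
```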