Book ChapterDOI

Rig: A Simple, Secure and Flexible Design for Password Hashing

TL;DR: Rig is a secure password hashing framework based on secure cryptographic hash functions. It provides the flexibility to choose different functions for different phases of the construction, and its memory parameter is independent of the time parameter (no actual time-memory trade-off).
Abstract: Password hashing is a technique commonly implemented by a server to protect clients' passwords by performing a one-way transformation on the password, turning it into another string called the hashed password. In this paper, we introduce Rig, a secure password hashing framework based on secure cryptographic hash functions. It provides the flexibility to choose different functions for different phases of the construction. The design of the scheme is very simple to implement in software and is flexible, as the memory parameter is independent of the time parameter (no actual time-memory trade-off). It is strictly sequential (difficult to parallelize) with comparatively large memory consumption, which provides strong resistance against attackers using multiple processing units. It supports client-independent updates, i.e., the server can increase the security parameters by updating the existing password hashes without knowing the password. Rig can also support the server-relief protocol, where the client bears the maximum effort to compute the password hash while there is minimal effort at the server side. We analyze Rig and show that our proposal provides exponential time complexity against the low-memory attack.
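The client-independent update property described above (raising the work factor without knowing the password) can be illustrated with a toy iterated SHA-256 chain. This is a simplified sketch of the general idea, not Rig's actual construction:

```python
import hashlib

def hash_chain(data: bytes, rounds: int) -> bytes:
    """Apply SHA-256 iteratively `rounds` times."""
    for _ in range(rounds):
        data = hashlib.sha256(data).digest()
    return data

# Server stores the digest together with the round count.
pwd = b"correct horse battery staple"
stored = hash_chain(pwd, 1000)

# Client-independent update: raise the cost to 1500 rounds by
# hashing the *stored* digest 500 more times; no password needed.
upgraded = hash_chain(stored, 500)
assert upgraded == hash_chain(pwd, 1500)
```

Because iterated hashing composes, the server can always extend the chain from the stored value, which is exactly the property that lets security parameters grow without re-enrolling users.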


Citations
01 Jan 2014
TL;DR: In this paper, the candidates of the Password Hashing Competition (PHC) are surveyed with regard to their functionality (e.g., client-independent update and server relief), their security (e.g., memory-hardness and side-channel resistance), and their general properties (e.g., memory usage and flexibility of the underlying primitives).
Abstract: In this work we provide an overview of the candidates of the Password Hashing Competition (PHC) with regard to their functionality, e.g., client-independent update and server relief; their security, e.g., memory-hardness and side-channel resistance; and their general properties, e.g., memory usage and flexibility of the underlying primitives. Furthermore, we formally introduce two kinds of attacks, called the Garbage-Collector and Weak Garbage-Collector attacks, which exploit the memory management of a candidate. Note that we consider all candidates which have not yet been withdrawn from the competition.

7 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a simple technique to analyze TMTO for a password-hashing scheme which can be represented as a directed acyclic graph (DAG).
Abstract: Increasing threat of password leakage from compromised password hashes demands a resource consuming password-hashing algorithm to prevent the precomputation of the password hashes. A class of password-hashing schemes (PHS) provides such a defense by making the design memory hard. This ensures that any reduction in the memory consumed by the algorithm leads to an exponential increase in its runtime. The security offered by a memory-hard PHS design is measured in terms of its time-memory trade-off (TMTO) defense. Another important measure for a good PHS is its efficiency in utilizing all the available memory as quickly as possible, and fast running time when more than the required memory is available. In this work, we present a simple technique to analyze TMTO for a password-hashing scheme which can be represented as a directed acyclic graph (DAG). The nodes of the DAG correspond to the storage required by the algorithm and the edges correspond to the flow of the execution. Our proposed technique provides expected runtimes at varied levels of available storage utilizing the DAG representation of the algorithm. We show the effectiveness of our proposed technique by applying it on three designs from the "Password Hashing Competition" (PHC): Argon2-Version 1.2.1 (the PHC winner), Catena-Version 3.2 and Rig-Version 2. Our analysis shows that Argon2i is not providing expected memory hardness, which is also highlighted in a recent work by Corrigan-Gibbs et al. We analyze these PHS for performance under various settings of time and memory complexities. Our experimental results show (i) simple DAGs for PHS are efficient but not memory hard, (ii) complex DAGs for PHS are memory hard but less efficient, and (iii) combination of two simple graphs in the representation of a DAG for PHS achieves both memory hardness and efficiency.
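The recomputation penalty that this DAG-based analysis measures can be sketched on a toy graph. This is an illustrative model, not the paper's exact technique: each node depends on its predecessor and one earlier node, and any node not kept in memory must be recomputed recursively from whatever is stored.

```python
def preds(v):
    """Predecessors in a toy DAG: node v depends on v-1 and v//2."""
    return [] if v == 0 else sorted({v - 1, v // 2})

def recompute_cost(v, stored):
    """Hash evaluations needed to produce node v when only nodes in
    `stored` are kept in memory; everything else is recomputed from
    scratch (deliberately no memoisation, mirroring an attacker's cost)."""
    if v in stored:
        return 0
    return 1 + sum(recompute_cost(u, stored) for u in preds(v))

n = 16
full = recompute_cost(n - 1, set(range(n - 1)))        # all earlier nodes stored
half = recompute_cost(n - 1, set(range(0, n - 1, 2)))  # every other node stored
none = recompute_cost(n - 1, {0})                      # only the input stored

print(full, half, none)  # the cost explodes as storage shrinks
```

Shrinking storage from all nodes to just the input takes the cost from a single evaluation to well over a hundred on even this 16-node graph, which is the memory-hardness effect the abstract describes in the small.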

4 citations

Proceedings ArticleDOI
29 May 2018
TL;DR: This work captures the evaluation of an iMHF as a directed acyclic graph (DAG) and investigates a combinatorial property of each underlying DAG, called its depth-robustness, which is a measure of the hardware cost of evaluating the iMHF on an ASIC.
Abstract: We show attacks on five data-independent memory-hard functions (iMHF) that were submitted to the password hashing competition (PHC). Informally, an MHF is a function which cannot be evaluated on dedicated hardware, like ASICs, at significantly lower hardware and/or energy cost than evaluating a single instance on a standard single-core architecture. Data-independent means the memory access pattern of the function is independent of the input; this makes iMHFs harder to construct than data-dependent ones, but the latter can be attacked by various side-channel attacks. Following [Alwen-Blocki'16], we capture the evaluation of an iMHF as a directed acyclic graph (DAG). The cumulative parallel pebbling complexity of this DAG is a measure for the hardware cost of evaluating the iMHF on an ASIC. Ideally, one would like the complexity of a DAG underlying an iMHF to be as close to quadratic in the number of nodes of the graph as possible. Instead, we show that (the DAGs underlying) the following iMHFs are far from this bound: Rig.v2, TwoCats and Gambit each having an exponent no more than 1.75. Moreover, we show that the complexity of the iMHF modes of the PHC finalists Pomelo and Lyra2 have exponents at most 1.83 and 1.67 respectively. To show this we investigate a combinatorial property of each underlying DAG (called its depth-robustness). By establishing upper bounds on this property we are then able to apply the general technique of [Alwen-Blocki'16] for analyzing the hardware costs of an iMHF.
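Depth-robustness can be checked by brute force on tiny graphs. The sketch below is illustrative only (`path_preds` and all parameters are invented for the example); it shows why a plain path graph makes a poor iMHF: deleting even a single well-chosen node sharply shortens the longest surviving path.

```python
from itertools import combinations

def longest_chain(n, preds, removed):
    """Nodes on the longest directed path of the DAG (nodes 0..n-1
    in topological order) after deleting the nodes in `removed`."""
    depth, best = {}, 0
    for v in range(n):
        if v in removed:
            continue
        depth[v] = 1 + max((depth[u] for u in preds(v) if u in depth),
                           default=0)
        best = max(best, depth[v])
    return best

def is_depth_robust(n, preds, e, d):
    """Brute-force (e, d)-depth-robustness: every deletion of e nodes
    must leave a directed path containing at least d nodes."""
    return all(longest_chain(n, preds, set(s)) >= d
               for s in combinations(range(n), e))

# A plain path graph: node v depends only on v-1.
path_preds = lambda v: [v - 1] if v > 0 else []

print(is_depth_robust(8, path_preds, 1, 4))  # a 4-node path always survives
print(is_depth_robust(8, path_preds, 1, 5))  # deleting node 3 leaves only 4 nodes
```

Upper-bounding depth-robustness this way (find one small deletion set that collapses the depth) is, at a very small scale, the shape of the argument the paper uses against the five iMHF designs.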

3 citations

Book ChapterDOI
08 Dec 2014
TL;DR: This paper presents a password hashing scheme (PHS) implemented inside a cryptographic module, as suggested by NIST in a set of standards (FIPS 140), with the aim of providing defense against hardware attacks.
Abstract: Password hashing is the technique of performing a one-way transformation of the password. One of the requirements of password hashing algorithms is to be memory demanding, to provide defense against hardware attacks. In practice, most cryptographic designs are implemented inside a cryptographic module, as suggested by NIST in a set of standards (FIPS 140). A cryptographic module has limited memory, and this makes it challenging to implement a password hashing scheme (PHS) inside it.

1 citation

Posted Content
TL;DR: This work has analyzed some password hashing schemes for performance under various settings of time and memory complexities and attempted to benchmark the said algorithms at similar levels of memory consumption.
Abstract: In this work we have analyzed some password hashing schemes for performance under various settings of time and memory complexities. We have attempted to benchmark the said algorithms at similar levels of memory consumption. Given the wide variations in security margins of the algorithms and incompatibility of memory and time cost settings, we have attempted to be as fair as possible in choosing the various parameters while executing the benchmarks.
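Benchmarking at matched memory levels can be sketched with the standard library's scrypt binding (available when Python is built against a suitable OpenSSL; the parameters here are illustrative, not the paper's settings). scrypt's memory use is roughly 128 * r * n bytes, so doubling n doubles the memory footprint:

```python
import hashlib
import time

def bench_scrypt(n, r=8, p=1):
    """Time one scrypt call; memory use is roughly 128 * r * n bytes."""
    start = time.perf_counter()
    hashlib.scrypt(b"password", salt=b"\x00" * 16, n=n, r=r, p=p,
                   maxmem=256 * 1024 * 1024, dklen=32)
    return time.perf_counter() - start

# Double the memory-cost parameter and compare wall-clock times.
for n in (2 ** 12, 2 ** 13, 2 ** 14):        # roughly 4, 8 and 16 MiB
    print(f"n = 2^{n.bit_length() - 1}: {bench_scrypt(n) * 1000:.1f} ms")
```

Holding the memory footprint fixed across different algorithms while timing them is the kind of like-for-like comparison the abstract describes; for a real benchmark one would repeat each measurement and report medians.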
References
Journal ArticleDOI
TL;DR: Moore's Law has become the central driving force of one of the world's most dynamic industries; it is viewed as a reliable method of predicting future trends, setting the pace of innovation, and defining the rules and the very nature of competition.
Abstract: A simple observation, made over 30 years ago, on the growth in the number of devices per silicon die has become the central driving force of one of the most dynamic of the world's industries. Because of the accuracy with which Moore's Law has predicted past growth in IC complexity, it is viewed as a reliable method of calculating future trends as well, setting the pace of innovation, and defining the rules and the very nature of competition. And since the semiconductor portion of electronic consumer products keeps growing by leaps and bounds, the Law has aroused in users and consumers an expectation of a continuous stream of faster, better, and cheaper high-technology products. Even the policy implications of Moore's Law are significant: it is used as the baseline assumption in the industry's strategic road map for the next decade and a half.

1,649 citations

Proceedings ArticleDOI
01 Feb 2000
TL;DR: The rules of thumb for the design of data storage systems are reexamined with a particular focus on performance and price/performance; the 5-minute rule for disk caching becomes a cache-everything rule for Web caching.
Abstract: This paper reexamines the rules of thumb for the design of data storage systems. Briefly, it looks at storage, processing, and networking costs, ratios, and trends with a particular focus on performance and price/performance. Amdahl's ratio laws for system design need only slight revision after 35 years, the major change being the increased use of RAM. An analysis also indicates storage should be used to cache both database and Web data to save disk bandwidth, network bandwidth, and people's time. Surprisingly, the 5-minute rule for disk caching becomes a cache-everything rule for Web caching.
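The break-even interval behind the 5-minute rule follows from a simple price ratio: a page is worth caching in RAM if it is re-referenced more often than the interval at which the RAM cost equals the amortized cost of the disk accesses it saves. A sketch with illustrative late-1990s figures (the exact numbers are assumptions for the example):

```python
def break_even_seconds(pages_per_mb_ram, accesses_per_sec_per_disk,
                       price_per_disk, price_per_mb_ram):
    """Gray's break-even interval: keep a page cached in RAM if it is
    re-referenced more often than once per this many seconds."""
    return (pages_per_mb_ram / accesses_per_sec_per_disk) \
         * (price_per_disk / price_per_mb_ram)

# Illustrative figures: 8 KB pages (128 per MB), 64 random accesses/s
# per disk, a $2000 disk drive, $15 per MB of DRAM.
interval = break_even_seconds(128, 64, 2000, 15)
print(f"{interval / 60:.1f} minutes")   # on the order of five minutes
```

As Web objects became cheap to store relative to the bandwidth and latency of refetching them, the same arithmetic pushes the break-even interval so high that caching everything wins, which is the paper's "cache-everything rule".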

232 citations

Proceedings Article
06 Jun 1999
TL;DR: It is shown that the computational cost of any secure password scheme must increase as hardware improves, and two algorithms with adaptable cost are presented: eksblowfish, a block cipher with a purposefully expensive key schedule, and bcrypt, a related hash function.
Abstract: Many authentication schemes depend on secret passwords. Unfortunately, the length and randomness of user-chosen passwords remain fixed over time. In contrast, hardware improvements constantly give attackers increasing computational power. As a result, password schemes such as the traditional UNIX user-authentication system are failing with time. This paper discusses ways of building systems in which password security keeps up with hardware speeds. We formalize the properties desirable in a good password system, and show that the computational cost of any secure password scheme must increase as hardware improves. We present two algorithms with adaptable cost: eksblowfish, a block cipher with a purposefully expensive key schedule, and bcrypt, a related hash function. Failing a major breakthrough in complexity theory, these algorithms should allow password-based systems to adapt to hardware improvements and remain secure well into the future.
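bcrypt itself is not in the Python standard library, but the adaptable-cost idea the paper introduces (store the cost parameter next to the salt and digest so it can be raised as hardware improves) can be sketched with the standard library's PBKDF2; this substitutes PBKDF2 for eksblowfish purely for illustration:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes, iterations: int):
    """Adaptable-cost hash: the iteration count is stored alongside
    the salt and digest so it can be raised as hardware improves."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return (iterations, salt, digest)

def verify(password: bytes, record) -> bool:
    """Recompute with the stored cost and compare in constant time."""
    iterations, salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return hmac.compare_digest(candidate, digest)

record = hash_password(b"hunter2", 100_000)
assert verify(b"hunter2", record)
assert not verify(b"wrong guess", record)
```

Because the cost parameter travels with the record, individual hashes can be re-hashed at a higher iteration count on next login, which is how deployed systems apply the paper's "cost must grow with hardware" principle.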

212 citations

Book ChapterDOI
25 Jun 2013
TL;DR: BLAKE2 is presented, an improved version of the SHA-3 finalist BLAKE optimized for speed in software; it provides comprehensive support for tree hashing as well as keyed hashing (be it in sequential or tree mode).
Abstract: We present the hash function BLAKE2, an improved version of the SHA-3 finalist BLAKE optimized for speed in software. Target applications include cloud storage, intrusion detection, or version control systems. BLAKE2 comes in two main flavors: BLAKE2b is optimized for 64-bit platforms, and BLAKE2s for smaller architectures. On 64-bit platforms, BLAKE2 is often faster than MD5, yet provides security similar to that of SHA-3: up to 256-bit collision resistance, immunity to length extension, indifferentiability from a random oracle, etc. We specify parallel versions BLAKE2bp and BLAKE2sp that are up to 4 and 8 times faster, by taking advantage of SIMD and/or multiple cores. BLAKE2 reduces the RAM requirements of BLAKE down to 168 bytes, making it smaller than any of the five SHA-3 finalists, and 32% smaller than BLAKE. Finally, BLAKE2 provides comprehensive support for tree hashing as well as keyed hashing (be it in sequential or tree mode).
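BLAKE2's keyed mode, which the abstract highlights, is available directly in Python's hashlib; a minimal sketch of using it in place of an HMAC construction:

```python
import hashlib

key = b"\x00" * 32              # blake2b accepts keys up to 64 bytes

# Keyed mode replaces HMAC: the key is mixed in at initialisation
# rather than via the two-pass HMAC construction.
mac = hashlib.blake2b(b"message", key=key, digest_size=32)
print(mac.hexdigest())

# blake2s is the variant for smaller platforms (key up to 32 bytes).
mac_s = hashlib.blake2s(b"message", key=key, digest_size=16)
print(mac_s.hexdigest())
```

Note the all-zero key and the digest sizes are arbitrary choices for the example; the single-pass keyed mode is one reason BLAKE2 is attractive as a building block for password-hashing designs like the ones surveyed above.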

189 citations

Book ChapterDOI
21 Feb 2005
TL;DR: This work presents a new way to construct a MAC function based on a block cipher, applied to AES to yield a MAC function that is a factor of 2.5 more efficient than CBC-MAC with AES, while providing a comparable claimed security level.
Abstract: We present a new way to construct a MAC function based on a block cipher. We apply this construction to AES, resulting in a MAC function that is a factor of 2.5 more efficient than CBC-MAC with AES, while providing a comparable claimed security level.

89 citations
