Author

Matthieu Finiasz

Bio: Matthieu Finiasz is an academic researcher at the French Institute for Research in Computer Science and Automation (Inria). His research focuses on cryptosystems and cryptography. He has an h-index of 19 and has co-authored 35 publications receiving 1267 citations. His previous affiliations include École Polytechnique Fédérale de Lausanne and École Normale Supérieure.

Papers
Book ChapterDOI
09 Dec 2001
TL;DR: This paper disproves the belief that code-based cryptosystems like McEliece do not allow practical digital signatures, and shows a way to build a practical signature scheme based on coding theory.
Abstract: McEliece is one of the oldest known public key cryptosystems. Though it has been less widely studied than RSA, it is remarkable that all known attacks remain exponential. It is widely believed that code-based cryptosystems like McEliece do not allow practical digital signatures. In the present paper we disprove this belief and show a way to build a practical signature scheme based on coding theory. Its security can be reduced, in the random oracle model, to the well-known syndrome decoding problem and the distinguishability of permuted binary Goppa codes from a random code. For example, we propose a scheme with 81-bit signatures and a binary security work factor of 2^83.
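The hardness assumption underlying this scheme is the syndrome decoding problem: given a binary parity-check matrix H and a syndrome s, find an error vector e of small Hamming weight with H·e = s. A minimal brute-force illustration in Python (toy parameters and a hypothetical matrix; real Goppa-code parameters are vastly larger, which is precisely why exhaustive search is infeasible):

```python
from itertools import combinations

# Toy 3 x 6 parity-check matrix H over GF(2), rows as bit tuples.
H = [
    (1, 0, 1, 1, 0, 0),
    (0, 1, 1, 0, 1, 0),
    (1, 1, 0, 0, 0, 1),
]
n = 6  # code length
w = 2  # target error weight

def syndrome(e):
    """Compute H * e over GF(2)."""
    return tuple(sum(h[i] & e[i] for i in range(n)) % 2 for h in H)

def decode(s, weight):
    """Brute-force search for a weight-`weight` error vector whose
    syndrome equals s. Exponential in general, which is what the
    scheme's security rests on."""
    for support in combinations(range(n), weight):
        e = tuple(1 if i in support else 0 for i in range(n))
        if syndrome(e) == s:
            return e
    return None

target = syndrome((1, 0, 0, 0, 1, 0))
print(decode(target, w))  # → (1, 0, 0, 0, 1, 0)
```

Signing in the paper's scheme amounts to hashing the message to a syndrome and producing a decodable error vector; verification just recomputes the syndrome.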

338 citations

Book ChapterDOI
02 Dec 2009
TL;DR: Lower bounds are given on the work factor of idealized versions of code-based cryptography algorithms, taking into account all possible tweaks which could improve their practical complexity.
Abstract: Code-based cryptography is often viewed as an interesting "post-quantum" alternative to classical number-theoretic cryptography. Unlike many other such alternatives, it has the convenient advantage of having only a few, well-identified attack algorithms. However, improvements to these algorithms have made their effective complexity difficult to compute. We give here some lower bounds on the work factor of idealized versions of these algorithms, taking into account all possible tweaks which could improve their practical complexity. The aim of this article is to help designers select durably secure parameters.
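The attack algorithms in question are variants of information-set decoding. As a rough illustration of how such work factors are estimated (this is the classical count for Prange's original algorithm, not the refined idealized bounds of the paper), the expected number of iterations to decode w errors in an [n, k] code is about C(n, w)/C(n-k, w):

```python
from math import comb, log2

def prange_work_factor(n, k, w):
    """Expected number of information-set iterations for Prange's
    algorithm: each random information set succeeds with probability
    C(n-k, w)/C(n, w), ignoring the polynomial per-iteration cost
    of the Gaussian elimination."""
    return comb(n, w) / comb(n - k, w)

# Original McEliece parameters [1024, 524] with 50 errors (illustrative).
wf = prange_work_factor(1024, 524, 50)
print(f"~2^{log2(wf):.1f} iterations")
```

Parameter selection then consists of choosing n, k, w so that even the best idealized variant of such algorithms stays above the target security level.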

251 citations

Book ChapterDOI
28 Sep 2005
TL;DR: This article presents a family of secure hash functions whose security is directly related to the syndrome decoding problem from the theory of error-correcting codes, and proposes a few sets of parameters offering good security together with either faster hashing or a shorter function description.
Abstract: Recently, collisions have been exposed for a variety of cryptographic hash functions [20,21], including some of the most widely used today. Many other hash functions using similar constructions can, however, still be considered secure. Nevertheless, this has drawn attention to the need for new hash function designs. This article presents a family of secure hash functions whose security is directly related to the syndrome decoding problem from the theory of error-correcting codes. Taking into account the analysis by Coron and Joux [4] based on Wagner’s generalized birthday algorithm [19], we study the asymptotic security of our functions. We demonstrate that this attack is always exponential in the length of the hash value. We also study the work factor of this attack, along with other attacks from coding theory, in the non-asymptotic range, i.e., for practical parameter values. Accordingly, we propose a few sets of parameters offering good security together with either faster hashing or a shorter description of the function.
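The basic idea of such syndrome-based hashing is to let the input bits select columns of a fixed random binary matrix, one column per block, and XOR the selected columns together; finding a collision then amounts to finding two distinct regular-weight words with the same syndrome. A toy sketch under assumed parameters (the matrix here is derived from SHA-256 purely so the toy is reproducible; it stands in for the truly random matrix of the real construction):

```python
import hashlib

R = 16      # output length in bits (toy value)
W = 8       # number of blocks, i.e. columns XORed per input
BLOCK = 16  # columns per block, so each block consumes 4 input bits

def column(j):
    """Deterministic pseudorandom column j of the matrix,
    R = 16 bits wide, packed into an int."""
    h = hashlib.sha256(j.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:2], "big")

def syndrome_hash_toy(bits):
    """Split the input into W chunks of 4 bits; chunk i selects one
    column inside block i; the digest is the XOR (syndrome) of the
    W selected columns."""
    assert len(bits) == 4 * W
    s = 0
    for i in range(W):
        idx = int("".join(map(str, bits[4 * i:4 * i + 4])), 2)
        s ^= column(i * BLOCK + idx)
    return s

digest = syndrome_hash_toy([0, 1] * 16)  # 32 input bits -> 16-bit digest
print(f"{digest:04x}")
```

At these toy sizes collisions are easy to find by brute force; the paper's analysis is about choosing R, W, and the block size so that generalized-birthday attacks stay exponential.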

103 citations

Proceedings ArticleDOI
28 Jun 2009
TL;DR: Two algorithms are presented, one due to Valembois and one brand new; each is useful in different contexts, can verify whether a given length/synchronization hypothesis is correct, and together they allowed the synchronization of several codes to be recovered in practice.
Abstract: We focus on the problem of recovering the length and synchronization of a linear block code from an intercepted bitstream. We place ourselves in an operational context where the intercepted bitstream contains a realistic noise level. We present two algorithms: one due to Valembois and one brand new. Each is useful in different contexts, and both can verify whether a given length/synchronization hypothesis is correct. Using them, we were able to recover the synchronization of several codes in practice.
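The core observation behind such detectors is linear-algebraic: if the stream is cut into blocks at the correct length and offset, the blocks are (noisy) codewords and nearly lie in a k-dimensional subspace, so the resulting GF(2) matrix is rank-deficient; at a wrong length the blocks look random and the matrix has full rank. A noiseless toy sketch with a hypothetical [4, 2] code (the paper's algorithms handle realistic noise, which this deliberately does not):

```python
def gf2_rank(rows):
    """Rank of integer-encoded GF(2) row vectors."""
    pivots = []
    for row in rows:
        for p in pivots:
            row = min(row, row ^ p)  # clears p's leading bit if set
        if row:
            pivots.append(row)
    return len(pivots)

def rank_deficiency(stream, n):
    """Cut the bit string into n-bit rows and return n - rank.
    At the true code length the rows span only a k-dim subspace,
    so the deficiency is n - k > 0; wrong lengths give ~0."""
    rows = [int(stream[i:i + n], 2) for i in range(0, len(stream) - n + 1, n)]
    return n - gf2_rank(rows)

# Noiseless toy stream: concatenated codewords of a [4, 2] code
# spanned by 1011 and 0101.
stream = "101101011011111001011110"
candidates = [n for n in range(2, 9) if len(stream) // n >= n]
best = max(candidates, key=lambda n: rank_deficiency(stream, n))
print(best, rank_deficiency(stream, best))  # → 4 2
```

With noise, the clean rank test degrades, which is exactly why the paper needs statistical detectors rather than exact linear algebra.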

55 citations

Proceedings ArticleDOI
11 Dec 2009
TL;DR: Compared to existing techniques, this new technique to reconstruct punctured convolutional codes from a noisy intercepted bit-stream has two major advantages: it can tolerate much higher noise levels in the bitstream and it is able to recover the best possible decoder.
Abstract: We present here a new technique to reconstruct punctured convolutional codes from a noisy intercepted bitstream. Compared to existing techniques, our algorithm has two major advantages: it can tolerate much higher noise levels in the bitstream, and it is able to recover the best possible decoder (in terms of decoding complexity). This is achieved by identifying the exact puncturing pattern that was used and recovering the parent convolutional code.
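For context on what must be reconstructed: puncturing derives a higher-rate code from a low-rate "parent" convolutional code by periodically deleting output bits, so recovering the code means identifying both the parent encoder and the deletion pattern. A toy sketch (hypothetical pattern turning a rate-1/2 parent into rate 2/3; not the paper's reconstruction algorithm):

```python
def puncture(bits, pattern):
    """Keep bit i of the encoded stream iff pattern[i % len(pattern)]
    is 1. With a rate-1/2 parent and pattern [1, 1, 1, 0], every 4
    output bits become 3, turning rate 1/2 into rate 2/3."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)]]

def depuncture(bits, pattern, erasure=None):
    """Re-insert erasures where bits were deleted, recovering the
    parent code's frame structure so the parent's (Viterbi) decoder
    can be applied -- the 'best possible decoder' of the abstract."""
    out, it = [], iter(bits)
    while True:
        for p in pattern:
            if p:
                try:
                    out.append(next(it))
                except StopIteration:
                    return out
            else:
                out.append(erasure)

encoded = [1, 0, 1, 1, 0, 0, 1, 1]      # 8 parent-code output bits
sent = puncture(encoded, [1, 1, 1, 0])  # 6 bits actually transmitted
print(sent)                              # → [1, 0, 1, 0, 0, 1]
print(depuncture(sent, [1, 1, 1, 0]))    # erasures at deleted positions
```

Identifying the wrong pattern misaligns the erasures, which is why getting the exact pattern matters for decoding complexity.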

49 citations


Cited by
Journal ArticleDOI
TL;DR: This review covers protocols of quantum key distribution based on discrete-variable systems, then considers device independence, satellite challenges, and high-rate protocols based on continuous-variable systems.
Abstract: Quantum cryptography is arguably the fastest growing area in quantum information science. Novel theoretical protocols are designed on a regular basis, security proofs are constantly improving, and experiments are gradually moving from proof-of-principle lab demonstrations to in-field implementations and technological prototypes. In this paper, we provide both a general introduction and a state-of-the-art description of the recent advances in the field, both theoretical and experimental. We start by reviewing protocols of quantum key distribution based on discrete variable systems. Next we consider aspects of device independence, satellite challenges, and protocols based on continuous-variable systems. We will then discuss the ultimate limits of point-to-point private communications and how quantum repeaters and networks may overcome these restrictions. Finally, we will discuss some aspects of quantum cryptography beyond standard quantum key distribution, including quantum random number generators and quantum digital signatures.

769 citations

Book ChapterDOI
14 Aug 2005
TL;DR: This paper analyzes a particular human-to-computer authentication protocol designed by Hopper and Blum (HB), and shows it to be practical for low-cost pervasive devices, and proves the security of the HB+ protocol against active adversaries based on the hardness of the Learning Parity with Noise (LPN) problem.
Abstract: Forgery and counterfeiting are emerging as serious security risks in low-cost pervasive computing devices. These devices lack the computational, storage, power, and communication resources necessary for most cryptographic authentication schemes. Surprisingly, low-cost pervasive devices like Radio Frequency Identification (RFID) tags share similar capabilities with another weak computing device: people. These similarities motivate the adoption of techniques from human-computer security to the pervasive computing setting. This paper analyzes a particular human-to-computer authentication protocol designed by Hopper and Blum (HB), and shows it to be practical for low-cost pervasive devices. We offer an improved, concrete proof of security for the HB protocol against passive adversaries. This paper also offers a new, augmented version of the HB protocol, named HB+, that is secure against active adversaries. The HB+ protocol is a novel, symmetric authentication protocol with a simple, low-cost implementation. We prove the security of the HB+ protocol against active adversaries based on the hardness of the Learning Parity with Noise (LPN) problem.
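In one round of the basic HB protocol, the reader sends a random challenge a and the tag, holding a shared secret x, replies with the inner product ⟨a, x⟩ over GF(2), deliberately flipped with probability η; the reader accepts when the fraction of wrong answers over many rounds stays near η, while a passive eavesdropper trying to recover x faces exactly the LPN problem. A toy sketch with assumed parameters (real proposals use much longer secrets):

```python
import random

K = 32       # secret length in bits (toy; real proposals use hundreds)
ETA = 0.125  # tag's deliberate noise rate
ROUNDS = 200

rng = random.Random(1)
x = [rng.randrange(2) for _ in range(K)]  # shared secret

def tag_response(a):
    """Noisy inner product <a, x> mod 2: one LPN sample."""
    clean = sum(ai & xi for ai, xi in zip(a, x)) % 2
    return clean ^ (rng.random() < ETA)

errors = 0
for _ in range(ROUNDS):
    a = [rng.randrange(2) for _ in range(K)]
    if tag_response(a) != sum(ai & xi for ai, xi in zip(a, x)) % 2:
        errors += 1

# The reader accepts iff the error fraction is consistent with ETA.
accept = errors / ROUNDS < 2 * ETA
print(accept, errors / ROUNDS)
```

HB+ adds a tag-chosen blinding vector to each round precisely to defeat the active adversary who can choose challenges adaptively, which the plain protocol above does not resist.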

767 citations

Book ChapterDOI
01 Mar 2004
TL;DR: This paper proposes a new short signature scheme from bilinear pairings that, unlike BLS, uses general cryptographic hash functions such as SHA-1 or MD5 rather than requiring special hash functions.
Abstract: In Asiacrypt 2001, Boneh, Lynn, and Shacham [8] proposed a short signature scheme (the BLS scheme) using bilinear pairings on certain elliptic and hyperelliptic curves. Subsequently, numerous cryptographic schemes based on the BLS signature scheme were proposed. The BLS short signature needs a special hash function [6,1,8], which is probabilistic and generally inefficient. In this paper, we propose a new short signature scheme from bilinear pairings that, unlike BLS, uses general cryptographic hash functions such as SHA-1 or MD5 and does not require special hash functions. Furthermore, the scheme requires fewer pairing operations than the BLS scheme and is thus more efficient. We use this signature scheme to construct a ring signature scheme and a new method for delegation. We give security proofs for the new signature scheme and the ring signature scheme in the random oracle model.

540 citations

Book ChapterDOI
02 Dec 2007
TL;DR: This model captures the notion of a powerful adversary who can monitor all communications, trace tags within a limited period of time, corrupt tags, and get side channel information on the reader output.
Abstract: We provide a formal model for identification schemes. Under this model, we give strong definitions for security and privacy. Our model captures the notion of a powerful adversary who can monitor all communications, trace tags within a limited period of time, corrupt tags, and get side channel information on the reader output. Adversaries who do not have access to this side channel are called narrow adversaries. Depending on restrictions on corruption, adversaries are called strong, destructive, forward, or weak adversaries. We derive some separation results: strong privacy is impossible. Narrow-strong privacy implies key agreement. We also prove some constructions: narrow-strong and forward privacy based on a public-key cryptosystem, narrow-destructive privacy based on a random oracle, and weak privacy based on a pseudorandom function.

429 citations