
Showing papers by "Takeshi Shimoyama published in 2007"


Book ChapterDOI
02 Jul 2007
TL;DR: This paper proposes a new search algorithm that consists of three sub-searches, named the forward search, the backward search, and the joint search, so that a differential path can be found by computer.
Abstract: In this paper, we propose a new construction algorithm for finding differential paths of Round 1 of SHA-1 for use in the collision search attack. Generally, the differential path of Round 1 is very complex, and it takes much time to find one by hand. Therefore, we propose a new search algorithm that consists of three sub-searches, named the forward search, the backward search, and the joint search, so that we can find a differential path by computer. By implementing our new algorithm and running experiments on a computer, we actually found 383 differential paths in the joint search that are different from Wang's. Since it is designed under quite a new policy, our algorithm can search a range of the space that was not examined by existing algorithms.
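The division into forward, backward, and joint sub-searches can be illustrated with a hedged toy sketch (this is not the authors' algorithm; the 4-bit state space and the `step`/`unstep` functions below are stand-ins for propagating a difference one step of the compression function):

```python
def step(s, b):
    """Toy forward propagation of a 'difference' over a 4-bit state."""
    return ((s << 1) ^ b) & 0xF

def unstep(s, b):
    """Toy backward propagation: enumerates preimages of step."""
    return (s >> 1) | (b << 3)

def forward_search(start, step, depth):
    """Enumerate states reachable from `start` going forward."""
    frontier = {start}
    for _ in range(depth):
        frontier = {step(s, b) for s in frontier for b in (0, 1)}
    return frontier

def backward_search(target, unstep, depth):
    """Enumerate states from which `target` is reachable, going backward."""
    frontier = {target}
    for _ in range(depth):
        frontier = {unstep(s, b) for s in frontier for b in (0, 1)}
    return frontier

def joint_search(fwd, bwd):
    """Keep only states where a forward and a backward path can meet."""
    return fwd & bwd

meet = joint_search(forward_search(0, step, 2), backward_search(0, unstep, 2))
```

The real search works over far richer objects (signed bit differences and round conditions), but the meet-in-the-middle shape is the same: two cheap one-sided enumerations, then a joining step over their intersection.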

16 citations


Book ChapterDOI
29 Oct 2007
TL;DR: An efficient algorithm is developed for the linear algebra step of the number field sieve that shares the sum of vectors among the nodes and requires only that the network structure among the nodes include a ring.
Abstract: This paper shows experimental results of the linear algebra step of the number field sieve in a parallel environment, together with implementation techniques. We developed an efficient algorithm that shares the sum of vectors among the nodes, and the network structure among the nodes is only required to include a ring. We also investigated the construction of a network for the linear algebra step. The construction can be realized with switches and network interface cards, which are inexpensive. Moreover, we investigated the implementation of the linear algebra step using various parameters. The implementation described in this paper was used for the integer factoring of a 176-digit number by GNFS and a 274-digit number by SNFS.
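The ring-only requirement can be illustrated with a small simulation. The sketch below is an assumption-laden stand-in for the authors' algorithm: each node forwards the message it holds to its ring neighbour once per round and folds the incoming message into its accumulator, so after n-1 rounds every node holds the GF(2) sum (XOR) of all vectors while having communicated only with its neighbours:

```python
def ring_share_sum(vectors):
    """Simulate n nodes on a ring, node i starting with vectors[i].
    Each round, every node passes its current message one hop and
    XORs the message it receives into its accumulator.  After n-1
    rounds every accumulator holds the GF(2) sum of all vectors."""
    n = len(vectors)
    acc = [list(v) for v in vectors]   # per-node running sum
    msg = [list(v) for v in vectors]   # message each node currently holds
    for _ in range(n - 1):
        msg = [msg[(i - 1) % n] for i in range(n)]            # one hop around the ring
        acc = [[a ^ m for a, m in zip(acc[i], msg[i])]        # fold incoming message in
               for i in range(n)]
    return acc
```

Over GF(2), as in the NFS matrix step, vector addition is XOR, which is why the fold uses `^`; the same pattern works for integer sums with `+`.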

15 citations


Book ChapterDOI
10 Sep 2007
TL;DR: This paper reports implementation and experimental results of a dedicated sieving device, "CAIRN 2", built with a Xilinx FPGA, which is designed to handle up to 768-bit integers and adopts a new implementation method (pipelined sieving) for NFS sieving.
Abstract: The hardness of the integer factorization problem assures the security of some public-key cryptosystems including RSA, and the number field sieve method (NFS), currently the most efficient algorithm for factoring large integers, is a threat to such cryptosystems. Recently, dedicated factoring devices have attracted much attention since they might reduce the computing cost of the number field sieve method. In this paper, we report implementation and experimental results of a dedicated sieving device, "CAIRN 2", built with a Xilinx FPGA, which is designed to handle up to 768-bit integers. The algorithm used is based on line sieving; however, in order to optimize efficiency, we adopted a new implementation method (pipelined sieving). In addition, we actually factored a 423-bit integer in about 30 days, using the developed device CAIRN 2 for the sieving step and ordinary PCs for the other steps. As far as the authors know, this is the first FPGA implementation and experiment of the sieving step in NFS.
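For orientation, line sieving itself can be sketched in software (a toy rational-side sieve, not CAIRN 2's pipelined hardware design; `b`, `m`, and the prime list are illustrative parameters):

```python
import math

def line_sieve(b, m, primes, lo, hi):
    """Toy rational-side line sieve: for a fixed b, accumulate log(p)
    at every position a in [lo, hi) where p divides a + b*m.
    Positions whose score is large are likely smooth candidates,
    worth confirming by trial division."""
    scores = [0.0] * (hi - lo)
    for p in primes:
        r = (-b * m) % p            # root of a + b*m ≡ 0 (mod p)
        a = lo + ((r - lo) % p)     # first a >= lo hitting that root
        while a < hi:
            scores[a - lo] += math.log(p)
            a += p
    return scores
```

The hardware version pipelines exactly this inner loop: each factor-base prime contributes a fixed stride of log additions, which maps naturally onto parallel accumulator updates.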

14 citations


Journal ArticleDOI
TL;DR: This paper improves the low-density attack by incorporating an idea that integral lattice points can be covered with polynomially many spheres of shorter radius and lower dimension, and shows that the success probability of the attack can be higher than that of Coster et al.'s attack.
Abstract: The low-density attack proposed by Lagarias and Odlyzko is a powerful algorithm against the subset sum problem. The improved algorithm due to Coster et al. solves almost all problems of density <0.9408... in the asymptotic sense. On the other hand, the subset sum problem itself is known to be NP-hard, and much effort has been devoted to establishing public-key cryptosystems based on the problem. In these cryptosystems, the densities of the subset sum problems should be higher than 0.9408... in order to avoid the low-density attack. For example, the Chor-Rivest cryptosystem adopted subset sum problems with relatively high densities. In this paper, we further improve the low-density attack by incorporating the idea that integral lattice points can be covered with polynomially many spheres of shorter radius and lower dimension. As a result, the success probability of our attack can be higher than that of Coster et al.'s attack for fixed dimensions. The density bound is also improved for fixed dimensions. Moreover, we numerically show that our improved low-density attack achieves a higher success probability in the case of low-Hamming-weight solutions, such as in the Chor-Rivest cryptosystem, if we assume SVP oracle calls.
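The lattice at the heart of the low-density attack can be written down concretely. The sketch below builds the classic Lagarias-Odlyzko basis (not the authors' improved covering-spheres variant); in practice one would then run LLL/BKZ or an SVP oracle on it, looking for a short vector whose last coordinate is zero:

```python
def lo_basis(weights, target, scale=None):
    """Lagarias-Odlyzko lattice basis for a subset-sum instance.
    Basis vector i is the unit vector e_i with scale*weights[i]
    appended; the final row is all zeros with -scale*target appended.
    A solution x of the subset sum yields the short lattice vector
    (x_1, ..., x_n, 0), since the scaled sums cancel."""
    n = len(weights)
    if scale is None:
        scale = n + 1          # any scale large enough to dominate; illustrative choice
    basis = [[1 if j == i else 0 for j in range(n)] + [scale * weights[i]]
             for i in range(n)]
    basis.append([0] * n + [-scale * target])
    return basis

# Tiny instance: 3 + 5 = 8, so summing rows 0, 1 and the target row
# gives the short solution vector (1, 1, 0, 0).
B = lo_basis([3, 5, 7], 8)
```

Coster et al.'s refinement shifts the solution coordinates toward ±1/2 to shorten the target vector further; the covering-spheres improvement of this paper goes one step beyond that, but the basis above is the common starting point.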

14 citations


Posted Content
TL;DR: In this paper, the authors further improved the low-density attack by incorporating an idea that integral lattice points can be covered with polynomially many spheres of shorter radius and of lower dimension.
Abstract: The low-density attack proposed by Lagarias and Odlyzko is a powerful algorithm against the subset sum problem. The improved algorithm due to Coster et al. solves almost all problems of density < 0.9408... in the asymptotic sense. On the other hand, the subset sum problem itself is known to be NP-hard, and much effort has been devoted to establishing public-key cryptosystems based on the problem. In these cryptosystems, the densities of the subset sum problems should be higher than 0.9408... in order to avoid the low-density attack. For example, the Chor-Rivest cryptosystem adopted subset sum problems with relatively high densities. In this paper, we further improve the low-density attack by incorporating the idea that integral lattice points can be covered with polynomially many spheres of shorter radius and lower dimension. As a result, the success probability of our attack can be higher than that of Coster et al.'s attack for fixed dimensions. The density bound is also improved for fixed dimensions. Moreover, we numerically show that our improved low-density attack achieves a higher success probability in the case of low-Hamming-weight solutions, such as in the Chor-Rivest cryptosystem, if we assume SVP oracle calls.

12 citations


Proceedings ArticleDOI
10 Apr 2007
TL;DR: This paper analyzes Bleichenbacher's forgery attack against the signature scheme RSASSA-PKCS1-v1_5 and shows applicable composite sizes for given exponents.
Abstract: In 2006, Bleichenbacher presented a new forgery attack against the signature scheme RSASSA-PKCS1-v1_5. The attack allows an adversary to forge a signature on almost arbitrary messages if an implementation is not proper. Since the example was limited to the case where the public exponent is 3 and the bit-length of the public composite is 3072, the extent of the threat was not known. This paper analyzes Bleichenbacher's forgery attack and shows the applicable composite sizes for given exponents. We also propose two extended attacks with numerical examples.
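The core of the e = 3 attack is that a forger can take an integer cube root instead of knowing the private key: if the verifier does not check the whole padded block, any perfect cube whose leading bytes look like a valid encoding passes. The following is a simplified sketch against a hypothetical verifier that checks only the leading bytes (the real attack hides the slack after the digest inside the padding; `prefix`, the stand-in "digest", and the block length are illustrative):

```python
def icbrt(n):
    """Integer cube root (floor) via binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def forge_e3(prefix, k):
    """Toy Bleichenbacher-style forgery for e = 3: find s such that s^3,
    viewed as a k-byte block, begins with the desired padding bytes.
    The cube's low-order slack lands in the region the broken verifier
    ignores.  No private key is involved anywhere."""
    target = int.from_bytes(prefix + b"\x00" * (k - len(prefix)), "big")
    s = icbrt(target)
    if s ** 3 < target:
        s += 1                 # ceiling cube root: s^3 >= target, gap is tiny
    return s

# 3072-bit block, fake PKCS#1 v1.5-style prefix with a 20-byte stand-in digest.
prefix = b"\x00\x01" + b"\xff" * 8 + b"\x00" + b"A" * 20
s = forge_e3(prefix, 384)
```

The forgery works because the gap between `s**3` and `target` is on the order of `s**2`, roughly 2048 bits here, far smaller than the ignored trailing region, so the leading bytes of the cube are untouched.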

6 citations


Book ChapterDOI
18 Dec 2007
TL;DR: This paper shows how to forge a time-stamp which the latest version of Adobe's Acrobat and Acrobat Reader accepts improperly, based on Bleichenbacher's forgery attack presented at CRYPTO 2006.
Abstract: This paper shows how to forge a time-stamp which the latest version of Adobe's Acrobat and Acrobat Reader accepts improperly. The target signature algorithm is RSASSA-PKCS1-v1_5 with a 1024-bit public composite and the public exponent e = 3, and our construction is based on Bleichenbacher's forgery attack presented at CRYPTO 2006. Since the original attack is not able to forge with these parameters, we used an extended attack described in this paper. Numerical examples of the forged signatures and time-stamps are also provided.

1 citation


Book ChapterDOI
28 Jul 2007
TL;DR: The “combinatorics proliferation model”, based on discrete mathematics, developed in this study derives a threshold on the number of packets sent by a victim that must not be exceeded in order to suppress the number of infected hosts to less than a few.
Abstract: One of the worst threats present in an enterprise network is the propagation of “scanning malware” (e.g., scanning worms and bots). It is important to prevent such scanning malware from spreading within an enterprise network. It is especially important to suppress scanning malware infection to less than a few infected hosts. We estimated the timing with which containment software must block “scanning malware” in a homogeneous enterprise network. The “combinatorics proliferation model”, based on discrete mathematics, developed in this study derives a threshold on the number of packets sent by a victim that must not be exceeded in order to suppress the number of infected hosts to less than a few. This model can appropriately express the early state in which an infection starts. The result from our model agrees very well with the results of computer simulations using a typical existing scanning malware and an actual network.
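The paper's model is analytic, but the threshold question it answers can be illustrated with a hedged Monte-Carlo stand-in (this is not the combinatorics proliferation model itself, and the parameters below are illustrative): cap the number of scan packets each infected host may send before containment blocks it, and observe the final infection count.

```python
import random

def mean_outbreak(n_hosts, n_vuln, packet_limit, trials=300, seed=7):
    """Monte-Carlo stand-in for the containment question: each infected
    host sends at most `packet_limit` scan packets to uniformly random
    hosts before containment blocks it.  Returns the average final
    number of infected hosts over `trials` independent runs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        vulnerable = set(range(n_vuln))    # hosts 0..n_vuln-1 are vulnerable
        infected = {0}                     # patient zero is vulnerable host 0
        queue = [0]
        while queue:
            queue.pop()                    # process one newly infected host
            for _ in range(packet_limit):  # its scan budget before being blocked
                t = rng.randrange(n_hosts)
                if t in vulnerable and t not in infected:
                    infected.add(t)
                    queue.append(t)
        total += len(infected)
    return total / trials
```

Sweeping `packet_limit` in such a simulation exhibits the threshold behaviour the analytic model captures exactly: below some packet budget the outbreak stays at a handful of hosts, above it the infection cascades.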

1 citation


Proceedings Article
01 Jan 2007
TL;DR: The “combinatorics proliferation model”, based on discrete mathematics, developed in this study derives a threshold on the number of packets sent by a victim that must not be exceeded in order to suppress the number of infected hosts to less than a few.
Abstract: One of the worst threats present in an enterprise network is the propagation of “scanning malware” (e.g., scanning worms and bots). It is important to prevent such scanning malware from spreading within an enterprise network. It is especially important to suppress scanning malware infection to less than a few infected hosts. We estimated the timing with which containment software must block “scanning malware” in a homogeneous enterprise network. The “combinatorics proliferation model”, based on discrete mathematics, developed in this study derives a threshold on the number of packets sent by a victim that must not be exceeded in order to suppress the number of infected hosts to less than a few. This model can appropriately express the early state in which an infection starts. The result from our model agrees very well with the results of computer simulations using a typical existing scanning malware and an actual network.

1 citation