
Showing papers in "IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences in 2011"


Journal ArticleDOI
TL;DR: A novel random number generation method based on chaotic sampling of a regular waveform, which passes the four basic tests of FIPS 140-2; a high-speed IC truly random number generator based on this method is also presented.
Abstract: A novel random number generation method based on chaotic sampling of a regular waveform is proposed. A high-speed IC truly random number generator based on this method is also presented. Simulation and experimental results verifying the feasibility of the circuit are given. Numerical binary data obtained according to the proposed method pass the four basic tests of FIPS 140-2, while experimental data pass the full NIST 800-22 random number test suite without post-processing.
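The sampling principle in the abstract can be mimicked in software. The sketch below is purely illustrative (the paper's generator is an analog IC): a logistic map produces irregular sampling instants, each bit is the level of a fast square wave at that instant, and `monobit_ok` applies the FIPS 140-2 monobit bounds (the count of ones in 20,000 bits must lie strictly between 9,725 and 10,275). The map parameters and waveform frequency are arbitrary choices, not taken from the paper.

```python
def chaotic_sample_bits(n_bits, x0=0.37, r=3.99, freq=17.0):
    """Toy software analogue of chaotic sampling of a regular waveform:
    a logistic map drives irregular sampling instants, and each output
    bit is the level of a fast square wave at that instant."""
    bits = []
    x = x0
    t = 0.0
    for _ in range(n_bits):
        x = r * x * (1.0 - x)        # chaotic map iteration
        t += 0.5 + x                 # irregular inter-sample interval
        phase = (t * freq) % 1.0     # position within one square-wave period
        bits.append(1 if phase < 0.5 else 0)
    return bits

def monobit_ok(bits):
    """FIPS 140-2 monobit test: in a 20,000-bit stream the number of
    ones must lie strictly between 9,725 and 10,275."""
    ones = sum(bits)
    return 9725 < ones < 10275
```

On this toy generator the count of ones should land near 10,000; the paper's hardware data additionally pass the full NIST 800-22 suite, about which this sketch makes no claim.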

50 citations


Journal ArticleDOI
TL;DR: All functions of this type with an odd number of variables can be obtained in this way, and a lower bound on the number of Boolean functions with optimal algebraic immunity is given.
Abstract: In this note, we go further on the “basis exchange” idea presented in [2] by using Möbius inversion. We show that the matrix S_1(f)S_0(f)^-1 has a nice form when f is chosen to be the majority function, where S_1(f) is the matrix with row vectors v_k(α) for all α in the on-set 1_f, and S_0(f) = S_1(f ⊕ 1). An exact count of the Boolean functions with maximum algebraic immunity obtained by exchanging one point in the on-set with one point in the off-set of the majority function is also given. Furthermore, we present a necessary condition, in terms of the weight distribution, for Boolean functions to achieve algebraic immunity not less than a given number.

49 citations


Journal ArticleDOI
TL;DR: In this article, the authors give a security proof for ABREAST-DM in terms of collision resistance and preimage resistance, based on a novel technique using query-response cycles, which is simpler than those for MDC-2 and TANDEM-DM.
Abstract: As old as TANDEM-DM, the compression function ABREAST-DM is one of the most well-known constructions for double block length compression functions. In this paper, we give a security proof for ABREAST-DM in terms of collision resistance and preimage resistance. The bounds on the number of queries for collision resistance and preimage resistance are both Ω(2^n). Based on a novel technique using query-response cycles, our security proof is simpler than those for MDC-2 and TANDEM-DM. We also present a wide class of ABREAST-DM variants that enjoy a birthday-type security guarantee with a simple proof.

37 citations


Journal ArticleDOI
TL;DR: This paper shows the existence of efficient conversion matrices whose row vectors all have Hamming weights less than or equal to 4, and proposes another mixture of bases that contributes to the reduction of the critical path delay of SubBytes.
Abstract: Many improvements and optimizations for the hardware implementation of SubBytes of Rijndael, specifically inversion in F_2^8, have been reported. Instead of the original Rijndael field F_2^8, its isomorphic tower field F_((2^2)^2)^2 is known to admit a more efficient inversion. Conversion matrices are then needed to connect these isomorphic binary fields. According to previous works, the number of 1's in the conversion matrices should be small; however, those works did not consider the Hamming weights of the row vectors of the matrices, which play an important role in the calculation architecture, specifically the critical path delay. This paper shows the existence of efficient conversion matrices whose row vectors all have Hamming weight at most 4. Such matrices turn out to be quite rare. It is then pointed out that these efficient conversion matrices can connect the original Rijndael F_2^8 to some less efficient inversions in F_((2^2)^2)^2, but not to the most efficient ones. To overcome this inconvenience, this paper proposes a technique called mixed bases. For the towerings, most previous works have used several kinds of bases, such as polynomial and normal bases, in mixture. In contrast, this paper proposes another mixture of bases that reduces the critical path delay of SubBytes. It is shown that the proposed mixture improves the efficiency of not only the inversion in F_((2^2)^2)^2 but also the conversion matrices between the isomorphic fields F_2^8 and F_((2^2)^2)^2.
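For context, inversion in the Rijndael field F_2^8 (modulo the AES polynomial x^8 + x^4 + x^3 + x + 1) can be sketched directly in software; the paper's contribution concerns the tower-field F_((2^2)^2)^2 hardware realization and conversion matrices, which the sketch below does not implement. Inversion is computed as a^254, since a^255 = 1 for every nonzero a.

```python
AES_POLY = 0x1B   # x^8 + x^4 + x^3 + x + 1 (the x^8 term is handled implicitly)

def gf_mul(a, b):
    """Multiplication in the Rijndael field GF(2^8)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= AES_POLY        # reduce modulo the AES polynomial
    return p

def gf_inv(a):
    """Inversion via a^254 (square-and-multiply); gf_inv(0) returns 0."""
    r = 1
    e = 254
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r
```

A quick sanity check: in this field the inverse of 0x02 is 0x8D, since x · (x^7 + x^3 + x^2 + 1) reduces to 1.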

37 citations


Journal ArticleDOI
TL;DR: Kudekar et al. recently proved that for transmission over the binary erasure channel (BEC), spatial coupling of LDPC codes increases the BP threshold of the coupled ensemble to the MAP threshold of the underlying LDPC codes.
Abstract: Kudekar et al. recently proved that for transmission over the binary erasure channel (BEC), spatial coupling of LDPC codes increases the BP threshold of the coupled ensemble to the MAP threshold of the underlying LDPC codes. One major drawback of capacity-achieving spatially-coupled LDPC codes is that one needs to increase the column and row weights of the parity-check matrices of the underlying LDPC codes. It has been proved that Hsu-Anastasopoulos (HA) codes and MacKay-Neal (MN) codes achieve the capacity of memoryless binary-input symmetric-output channels under MAP decoding with bounded column and row weights of the parity-check matrices. The HA codes and the MN codes are duals of each other. The aim of this paper is to present empirical evidence that spatially-coupled MN (resp. HA) codes with bounded column and row weights achieve the capacity of the BEC. To this end, we introduce a spatial coupling scheme of MN (resp. HA) codes. By density evolution analysis, we show that the resulting spatially-coupled MN (resp. HA) codes have BP thresholds close to the Shannon limit.
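As a minimal illustration of the density evolution analysis mentioned above, for a plain (3,6)-regular LDPC ensemble on the BEC rather than the spatially-coupled MN/HA codes of the paper, the erasure-probability recursion and a bisection search for the BP threshold (known to be about 0.4294, against the Shannon limit of 0.5) can be written as:

```python
def de_converges(eps, dv=3, dc=6, iters=20000, tol=1e-9):
    """Density evolution for a (dv,dc)-regular LDPC ensemble on BEC(eps):
    x is the erasure probability of a variable-to-check message;
    BP decoding succeeds iff x converges to 0."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def bp_threshold(dv=3, dc=6, steps=12):
    """Bisect for the largest channel erasure rate at which density
    evolution still converges to zero."""
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if de_converges(mid, dv, dc):
            lo = mid
        else:
            hi = mid
    return lo
```

The recursion and threshold value here are standard for the uncoupled (3,6) ensemble; the paper's point is that coupled ensembles push this BP threshold up toward capacity while keeping the weights bounded.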

34 citations


Journal ArticleDOI
TL;DR: This paper proposes a new recovery mechanism, called Recovery Critical Misprediction (RCM), that employs critical path prediction to identify the branches that will be most harmful if mispredicted; it improves the IPC by 10.05% compared with a conventional processor.
Abstract: Current trends in modern out-of-order processors involve implementing deeper pipelines and large instruction windows to achieve high performance, which makes the penalty of branch misprediction recovery a critical factor in overall processor performance. Multi-path execution has been proposed to reduce this penalty by simultaneously executing both paths following a branch. However, this mechanism has some drawbacks, such as design complexity caused by processing both paths after a branch and performance degradation due to hardware resource competition between the two paths. In this paper, we propose a new recovery mechanism, called Recovery Critical Misprediction (RCM), to reduce the penalty of branch misprediction recovery. The mechanism uses a small trace cache to save the decoded instructions from the alternative path following a branch. During subsequent predictions, the trace cache is accessed; if there is a hit, the processor forks the second path of the branch at the rename stage, so that the design complexity in the fetch and decode stages is alleviated. The main contribution of this paper is that our proposed mechanism employs critical path prediction to identify the branches that will be most harmful if mispredicted. Only critical branches save their alternative paths into the trace cache, which not only increases the usefulness of the limited-size trace cache but also avoids the performance degradation caused by forking non-critical branches. Experimental results on the SPECint 2000 benchmark show that a processor with our proposed RCM improves the IPC by 10.05% compared with a conventional processor.

27 citations



Journal ArticleDOI
TL;DR: A novel and fundamental algorithm for cancelable biometrics called correlation-invariant random filtering (CIRF) with provable security is proposed and constructed and a method for generating cancelable fingerprint templates based on the chip matching algorithm and the CIRF is constructed.
Abstract: Biometric authentication has attracted attention because of its high security and convenience. However, biometric features such as fingerprints cannot be revoked like passwords. Thus, once the biometric data of a user stored in the system have been compromised, they can never again be used for secure authentication during that user's lifetime. To address this issue, an authentication scheme called cancelable biometrics has been studied. However, achieving both strong security and practical accuracy remains a major challenge. In this paper, we propose a novel and fundamental algorithm for cancelable biometrics called correlation-invariant random filtering (CIRF) with provable security. We then construct a method for generating cancelable fingerprint templates based on the chip matching algorithm and the CIRF. Experimental evaluation shows that our method has almost the same accuracy as conventional fingerprint verification based on the chip matching algorithm.

26 citations


Journal ArticleDOI
TL;DR: A cryptosystem in which, even if a proxy transformation is applied to a TRE ciphertext, the release time remains effective; this cryptosystem, called Timed-Release PRE (TR-PRE), can be applied to efficient multicast communication with a release time indication.
Abstract: Timed-Release Encryption (TRE) is a kind of time-dependent encryption, where the time of decryption can be controlled. More precisely, TRE prevents even a legitimate recipient from decrypting a ciphertext before a semi-trusted Time Server (TS) sends the trapdoor s_T assigned to a release time T of the encryptor's choice. Cathalo et al. (ICICS 2005) and Chalkias et al. (ESORICS 2007) have already considered encrypting a message intended for multiple recipients with the same release time. One drawback of these schemes is that the ciphertext size and computational complexity depend on the number of recipients N. Ideally, no cost factor (ciphertext size, computational complexity of encryption/decryption, or public/secret key size) should depend on N. In this paper, to achieve TRE with such fully constant costs from the encryptor's/decryptor's point of view, we borrow the technique of Proxy Re-Encryption (PRE) and propose a cryptosystem in which, even if the proxy transformation is applied to a TRE ciphertext, the release time remains effective. By sending a TRE ciphertext to the proxy, an encryptor can offload the N-dependent computation costs onto the proxy. We call this cryptosystem Timed-Release PRE (TR-PRE). This function can be applied to efficient multicast communication with a release time indication.

25 citations


Journal ArticleDOI
TL;DR: In this article, a family of quaternary sequences of period 2p using generalized cyclotomic classes over the residue class ring modulo 2p was defined, and exact values of the linear complexity were computed.
Abstract: Let p be an odd prime number. We define a family of quaternary sequences of period 2p using generalized cyclotomic classes over the residue class ring modulo 2p. We compute exact values of the linear complexity, which are larger than half of the period. Such sequences are 'good' enough from the viewpoint of linear complexity.

25 citations


Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed ANC system (head-mounted structure) can significantly reduce MR noise by approximately 30 dB in a high field in an actual MRI room even if the imaging mode changes frequently.
Abstract: We propose an active noise control (ANC) system for reducing periodic noise generated in a high magnetic field such as noise generated from magnetic resonance imaging (MRI) devices (MR noise). The proposed ANC system utilizes optical microphones and piezoelectric loudspeakers, because specific acoustic equipment is required to overcome the high-field problem, and consists of a head-mounted structure to control noise near the user's ears and to compensate for the low output of the piezoelectric loudspeaker. Moreover, internal model control (IMC)-based feedback ANC is employed because the MR noise includes some periodic components and is predictable. Our experimental results demonstrate that the proposed ANC system (head-mounted structure) can significantly reduce MR noise by approximately 30 dB in a high field in an actual MRI room even if the imaging mode changes frequently.

Journal ArticleDOI
TL;DR: A detection-type watermarking scheme by which a watermark is visible to anyone but unremovable without a secret trapdoor, so that both correctness and security of the cryptographic data remain satisfied even if the trapdoor is published.
Abstract: This paper introduces a novel type of digital watermarking, which is mainly designed for embedding information into cryptographic data such as keys, ciphertexts, and signatures. We focus on a mathematical structure of the recent major cryptosystems called pairing-based schemes. We present a detection-type watermarking scheme by which a watermark is visible to anyone but unremovable without a secret trapdoor. The important feature is that both correctness and security of the cryptographic data remain satisfied even if the trapdoor is published.

Journal ArticleDOI
TL;DR: The asymptotic exponential growth rate of the weight distributions in the limit of large codelength is derived and it is shown that the normalized typical minimum distance does not monotonically increase with the size of the field.
Abstract: In this paper, we study the average symbol- and bit-weight distributions for ensembles of non-binary low-density parity-check codes defined over GF(2^p). Moreover, we derive the asymptotic exponential growth rate of the weight distributions in the limit of large codelength. Interestingly, we show that the normalized typical minimum distance does not monotonically increase with the size of the field.

Journal ArticleDOI
TL;DR: For this specific original instantiation of the NTRU encryption system with parameters (N, p, q), the attack succeeds with probability ≈ 1 - 1/p, and when the number of faulted coefficients is upper bounded by t, it requires O((pN)^t) polynomial inversions in Z/pZ[x]/(x^N - 1).
Abstract: In this paper, we present a fault analysis of the original NTRU public key cryptosystem. The fault model in which we analyze the cipher is one in which the attacker is assumed to be able to fault a small number of coefficients of the polynomial input to (or output from) the second step of the decryption process, but cannot control the exact location of the injected faults. For this specific original instantiation of the NTRU encryption system with parameters (N, p, q), our attack succeeds with probability ≈ 1 - 1/p, and when the number of faulted coefficients is upper bounded by t, it requires O((pN)^t) polynomial inversions in Z/pZ[x]/(x^N - 1).

Journal ArticleDOI
TL;DR: This paper proposes another type of gradient descent learning, based on both the phases and the amplitudes, that improves the noise robustness and accelerates the learning speed of Complex-valued Associative Memory.
Abstract: Complex-valued Associative Memory (CAM) is an advanced model of Hopfield Associative Memory. The CAM is based on multi-state neurons and has high representational ability. Lee proposed gradient descent learning for the CAM to improve the storage capacity; it is based only on the phases of the input signals. In this paper, we propose another type of gradient descent learning based on both the phases and the amplitudes. The proposed learning method improves the noise robustness and accelerates the learning speed.

Journal ArticleDOI
TL;DR: This letter shows that an identity-based signcryption scheme in the standard model provides neither indistinguishability against adaptive chosen-ciphertext attacks nor existential unforgeability against adaptive chosen-message attacks.
Abstract: Recently, Jin, Wen, and Du proposed an identity-based signcryption scheme in the standard model. In this letter, we show that their scheme provides neither indistinguishability against adaptive chosen-ciphertext attacks nor existential unforgeability against adaptive chosen-message attacks.

Journal ArticleDOI
TL;DR: A novel construction of ternary sequences having a zero-correlation zone for periodic, aperiodic, and odd correlation functions; a wide inter-subset zero-correlation zone enables performance improvement in applications of the proposed sequence set.
Abstract: The present paper introduces a novel construction of ternary sequences having a zero-correlation zone. The cross-correlation function and the side-lobe of the auto-correlation function of the proposed sequence set are zero for phase shifts within the zero-correlation zone. The proposed sequence set consists of more than one subset, each having the same member size. The correlation function of sequences from a pair of different subsets, referred to as the inter-subset correlation function, has a wider zero-correlation zone than the correlation function of sequences from the same subset (the intra-subset correlation function). The wide inter-subset zero-correlation zone enables performance improvement in applications of the proposed sequence set. The proposed sequence set has a zero-correlation zone for periodic, aperiodic, and odd correlation functions.
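The correlation notions in this abstract are easy to state in code. The sketch below defines the periodic correlation function and a checker for the width of a zero-correlation zone; the tiny binary demo sequences in the usage example are hypothetical stand-ins, not the ternary construction of the paper.

```python
def periodic_corr(s, t, shift):
    """Periodic cross-correlation of two equal-length real sequences."""
    n = len(s)
    return sum(s[k] * t[(k + shift) % n] for k in range(n))

def zcz_width(seqs):
    """Width of the zero-correlation zone of a sequence set: the largest
    z such that every pairwise periodic correlation vanishes for shifts
    1..z, with cross-correlations also required to vanish at shift 0."""
    n = len(seqs[0])
    # cross-correlations must already vanish at zero shift
    if any(periodic_corr(a, b, 0) != 0
           for i, a in enumerate(seqs)
           for j, b in enumerate(seqs) if i != j):
        return 0
    z = 0
    for shift in range(1, n):
        if all(periodic_corr(a, b, shift) == 0 for a in seqs for b in seqs):
            z = shift
        else:
            break
    return z
```

For example, the length-4 sequence (1, 1, 1, -1) is perfect (all off-peak periodic autocorrelations vanish), so as a one-element set its zone covers every nonzero shift.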

Journal ArticleDOI
TL;DR: In the proposed scheme, the server can detect forged login messages from users who hold only expired smart cards and their passwords, without storing user information on the server.
Abstract: We propose a user authentication scheme with user anonymity for wireless communications. Previous works have weaknesses such as: (1) the user identity can be revealed from the login message, and (2) after a smart card is no longer valid or has expired, users holding the expired smart cards can still generate valid login messages under the assumption that the server does not maintain user information. In this letter, we propose a new user authentication scheme that provides user anonymity. In the proposed scheme, the server can detect forged login messages from users who hold only expired smart cards and their passwords, without storing user information on the server.

Journal ArticleDOI
TL;DR: Simulation results show that the derived low-rate and high-rate non-binary LDPC convolutional codes exhibit good decoding performance without a large gap to the Shannon limits.
Abstract: In this paper, we present a construction method for non-binary low-density parity-check (LDPC) convolutional codes. Our construction method is an extension of the Felstroem and Zigangirov construction to non-binary LDPC convolutional codes. The rate-compatibility of the non-binary convolutional codes is also discussed. The proposed rate-compatible codes are designed from a single mother (2,4)-regular non-binary LDPC convolutional code of rate 1/2. Higher-rate codes are produced by puncturing the mother code, and lower-rate codes are produced by multiplicatively repeating it. Simulation results show that non-binary LDPC convolutional codes of rate 1/2 outperform state-of-the-art binary LDPC convolutional codes with comparable constraint bit length. The derived low-rate and high-rate non-binary LDPC convolutional codes also exhibit good decoding performance without a large gap to the Shannon limits.

Journal ArticleDOI
TL;DR: A possible way to regularize the recursive least-squares (RLS) algorithm is proposed, based on a condition that makes intuitive sense.
Abstract: SUMMARY Regularization plays a fundamental role in adaptive filtering. There are, very likely, many different ways to regularize an adaptive filter. In this letter, we propose one possible way to do it based on a condition that makes intuitive sense. From this condition, we show how to regularize the recursive least-squares (RLS) algorithm. key words: echo cancellation, adaptive filters, regularization, recursive least-squares (RLS) algorithm 1. Introduction It is well known that regularization is a must in all problems where a noisy linear system of equations needs to be solved [1]. Any adaptive filter has a linear system of equations to solve, explicitly or implicitly, so regularization is required in order that the algorithm converges smoothly and consistently to the optimal Wiener solution, especially in the presence of additive noise. In many adaptive filters [2],[3], the regularization parameter is chosen as δ = βσ_x^2, where σ_x^2 = E[x^2(n)] is the variance of the zero-mean input signal.
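The choice δ = βσ_x^2 enters the RLS algorithm through the initialization of the inverse correlation matrix, P(0) = δ^-1 I. The sketch below is a generic exponentially-weighted RLS identifying a short FIR system, with that regularized initialization; the forgetting factor, β, and the test system are illustrative choices, not values from the letter.

```python
import numpy as np

def rls_identify(x, d, order, beta=100.0, lam=0.999):
    """Exponentially-weighted RLS with regularization delta = beta * var(x),
    used to initialize the inverse correlation matrix P(0) = I / delta."""
    delta = beta * np.var(x)
    P = np.eye(order) / delta              # regularized initialization
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)      # gain vector
        e = d[n] - w @ u                   # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])             # hypothetical FIR system to identify
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w = rls_identify(x, d, order=3)            # w should approach h
```

With a clean, persistently exciting input, the estimate converges to the true taps regardless of the exact β; the letter's contribution is a principled way to pick δ.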

Journal ArticleDOI
TL;DR: The concepts and properties of RDOEL are introduced, a complete solution to RDOEL with one integrator is provided, and it is shown to also be a complete solution to a simple case of DOEL.
Abstract: In this letter, we propose a new observer error linearization approach that is called reduced-order dynamic observer error linearization (RDOEL), which is a modified version of dynamic observer error linearization (DOEL). We introduce the concepts and properties of RDOEL, and provide a complete solution to RDOEL with one integrator. Moreover, we show that it is also a complete solution to a simple case of DOEL.

Journal ArticleDOI
TL;DR: A modular exponentiation processing method and circuit architecture that can exhibit the maximum performance of FPGA resources and can perform fast operations using small-scale resources is described.
Abstract: This paper describes a modular exponentiation processing method and circuit architecture that can exhibit the maximum performance of FPGA resources. The modular exponentiation architecture proposed by us comprises three main techniques. The first is to improve the Montgomery multiplication algorithm in order to maximize the performance of the multiplication unit in an FPGA. The second is to balance and improve the circuit delay. The third is to ensure the scalability of the circuit. Our architecture can perform fast operations using small-scale resources; in particular, it can complete a 512-bit modular exponentiation in as little as 0.26 ms with the smallest Virtex-4 FPGA, the XC4VF12-10SF363. In fact, the number of SLICEs used is approximately 4,200, which proves the compactness of our design. Moreover, the scalability of our design allows 1024-, 1536-, and 2048-bit modular exponentiations to be processed in the same circuit.
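For reference, the sketch below shows the textbook Montgomery reduction (REDC) and square-and-multiply exponentiation that the paper's circuit accelerates; it is a plain software rendition and reflects none of the FPGA-specific pipelining, delay balancing, or scalability techniques described above.

```python
def montgomery_exp(base, exp, mod):
    """Square-and-multiply modular exponentiation using Montgomery
    reduction (REDC). mod must be odd. Textbook algorithm only."""
    rbits = mod.bit_length()
    R = 1 << rbits                       # R = 2^rbits > mod, gcd(R, mod) = 1
    mask = R - 1
    m_inv = (-pow(mod, -1, R)) % R       # m' = -mod^{-1} mod R

    def redc(t):
        # one Montgomery reduction: returns t * R^{-1} mod `mod`
        m = ((t & mask) * m_inv) & mask
        t = (t + m * mod) >> rbits
        return t - mod if t >= mod else t

    x = R % mod                          # Montgomery form of 1
    b = (base % mod) * R % mod           # Montgomery form of base
    e = exp
    while e:
        if e & 1:
            x = redc(x * b)
        b = redc(b * b)
        e >>= 1
    return redc(x)                       # convert back out of Montgomery form
```

The appeal in hardware is that REDC replaces trial division by shifts, masks, and multiplications, which map naturally onto FPGA multiplier blocks.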

Journal ArticleDOI
TL;DR: In this paper, a non-interactive verifiable secret sharing scheme (VSS) tolerating a dishonest majority based on data pre-distributed by a trusted authority is presented.
Abstract: This paper presents a non-interactive verifiable secret sharing scheme (VSS) tolerating a dishonest majority based on data pre-distributed by a trusted authority. As an application of this VSS scheme we present very efficient unconditionally secure protocols for performing multiplication of shares based on pre-distributed data which generalize two-party computations based on linear pre-distributed bit commitments. The main results of this paper are a non-interactive VSS, a simplified multiplication protocol for shared values based on pre-distributed random products, and non-interactive zero knowledge proofs for arbitrary polynomial relations. The security of the schemes is proved using the UC framework.

Journal ArticleDOI
TL;DR: Two stochastic models based on working schemes of a generational garbage collector are proposed, using the techniques of cumulative processes and reliability theory, and optimal policies of major collection times which minimize them are discussed analytically and computed numerically.
Abstract: It is an important problem to determine major collection times to meet the pause time goal for a generational garbage collector. From such a viewpoint, this paper proposes two stochastic models based on working schemes of a generational garbage collector: Garbage collections occur in a nonhomogeneous Poisson process, tenuring collection is made at a threshold level K, and major collection is made at time T or at Nth collection including minor and tenuring collections for the first model and at time T or at Nth collection including tenuring collections for the second model. Using the techniques of cumulative processes and reliability theory, expected cost rates are obtained, and optimal policies of major collection times which minimize them are discussed analytically and computed numerically.

Journal ArticleDOI
TL;DR: By introducing adaptive directional filtering into the generalized structure, the 2D non-separable ADL structure is realized and applied to image coding, and is shown to be efficient for both lossy and lossless image coding.
Abstract: In this paper, we propose a two-dimensional (2D) non-separable adaptive directional lifting (ADL) structure for the discrete wavelet transform (DWT) and its image coding application. Although a 2D non-separable lifting structure for the 9/7 DWT has been proposed by interchanging some lifting steps, we generalize the polyphase representation of 2D non-separable lifting structures of the DWT. Furthermore, by introducing adaptive directional filtering into the generalized structure, the 2D non-separable ADL structure is realized and applied to image coding. Our proposed method is simpler than 1D ADL and can select transform directions different from those of 1D ADL. Simulations show that the proposed method is efficient for both lossy and lossless image coding.
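As background on lifting structures, the sketch below implements one level of the 1D integer 5/3 lifting DWT (the JPEG 2000 reversible transform) with a predict step and an update step; the paper's 2D non-separable ADL structure for the 9/7 DWT is considerably more involved and is not reproduced here. Boundary samples are handled by simple clamping, an illustrative choice.

```python
def lift_53_forward(x):
    """One level of the integer 5/3 lifting DWT on an even-length list:
    split into even/odd samples, predict the odd ones, update the even."""
    even, odd = x[0::2], x[1::2]
    h = len(odd)
    d = [0] * h
    for i in range(h):
        right = even[i + 1] if i + 1 < len(even) else even[-1]
        d[i] = odd[i] - (even[i] + right) // 2        # predict step
    s = [0] * len(even)
    for i in range(len(even)):
        left = d[i - 1] if i > 0 else d[0]
        s[i] = even[i] + (left + d[i] + 2) // 4       # update step
    return s, d

def lift_53_inverse(s, d):
    """Invert the lifting steps in reverse order for exact reconstruction."""
    h = len(d)
    even = [0] * len(s)
    for i in range(len(s)):
        left = d[i - 1] if i > 0 else d[0]
        even[i] = s[i] - (left + d[i] + 2) // 4       # undo update
    odd = [0] * h
    for i in range(h):
        right = even[i + 1] if i + 1 < len(even) else even[-1]
        odd[i] = d[i] + (even[i] + right) // 2        # undo predict
    x = [0] * (len(s) + h)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is undone exactly with the same integer arithmetic, the transform is perfectly reversible, which is the property that makes lifting attractive for lossless coding.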


Journal ArticleDOI
TL;DR: From the viewpoint of stored energy, a new regulation method to conserve and share the stored energy can be found, and the Hamiltonian formulation, in particular, is important for grasping the dynamics of the energy in converter circuits.
Abstract: SUMMARY A regulation of converters connected in parallel is discussed considering their stored energy and passivity characteristics. From the viewpoint of stored energy, a new regulation method to conserve and share the stored energy can be found. The energy stored in inductors and capacitors is transferred to loads so that the load keeps the energy dissipation constant. Through numerical simulation, the method is validated for a parallel converter system. key words: parallel converter, energy, passivity, PWM, transient 1. Introduction Recently, power converters have been used as key devices in dispersed power sources. In a dispersed power system, a parallel connection seems an appropriate structure to achieve these goals reliably and to adjust the variations of input voltages for constant output voltage. However, the conventional design method related to converters is inefficient. In this decade, a control theory based on energy stored in a system was proposed; this scheme targets the passivity characteristics and is applied to various systems [1]-[4]. The converter models based on this theory are physically rational and their dynamics are easy to depict [2]. On the other hand, in the design of a power converter's control system, the role of power and energy was discussed from the standpoint of power-factor control [5]. It was shown that the energy in the converter circuits is a substantial state variable related to power flow and a strategy was proposed. The general formulation was given by Sira-Ramirez et al. [6] and Escobar et al. [7] on the basis of control and dynamics. The Hamiltonian formulation, in particular, is important for grasping the dynamics of the energy flow in converter circuits. The concept of passivity is applicable to both linear and nonlinear systems. Passivity physically corresponds to the energy of the system. The converter is an electric circuit with active and passive elements. Consider a one-port circuit consisting of passive elements with the energy stored in the one-port circuit

Journal ArticleDOI
TL;DR: Two watermarking schemes targeting HDR images are proposed, based on µ-Law and bilateral filtering, respectively; both the subjective and objective qualities of watermarked images are greatly improved by the two methods.
Abstract: High Dynamic Range (HDR) images have been widely applied in daily applications. However, HDR is a special format that must be pre-processed for display by tone mapping operators. Since the visual quality of HDR images is very sensitive to luminance variations, conventional watermarking methods for low dynamic range (LDR) images are not suitable and may even cause catastrophic visible distortion. Currently, few methods for HDR image watermarking have been proposed. In this paper, two watermarking schemes targeting HDR images are proposed, based on µ-Law and bilateral filtering, respectively. Both the subjective and objective qualities of watermarked images are greatly improved by the two methods. Moreover, the proposed methods also show higher robustness against tone mapping operations.

Journal ArticleDOI
TL;DR: The proposed sequences can potentially be applied to communication systems using the 16-QAM constellation as spreading sequences, so that multiple access interference (MAI) and multi-path interference (MPI) are removed synchronously.
Abstract: Based on the known quadriphase zero correlation zone (ZCZ) sequences ZCZ4(N,M,T), four families of 16-QAM sequences with ZCZ are presented, where the term “QAM sequences” means sequences over the quadrature amplitude modulation (QAM) constellation. When the quadriphase ZCZ sequences employed in this letter meet the theoretical bound on ZCZ sequences and have either an even family size M or an odd ZCZ width T, two of the resulting four 16-QAM sequence sets satisfy the bound referred to above. The proposed sequences can potentially be applied to communication systems using the 16-QAM constellation as spreading sequences, so that multiple access interference (MAI) and multi-path interference (MPI) are removed synchronously.

Journal ArticleDOI
TL;DR: This work illustrates how Kim and Chung’s password-based user authentication scheme can be compromised and proposes an improved scheme to overcome the weaknesses, based on the Rabin cryptosystem.
Abstract: SUMMARY Kim and Chung previously proposed a password-based user authentication scheme to improve Yoon and Yoo’s scheme. However, Kim and Chung’s scheme is still vulnerable to an offline password guessing attack, an unlimited online password guessing attack, and server impersonation. We illustrate how their scheme can be compromised and then propose an improved scheme to overcome the weaknesses. Our improvement is based on the Rabin cryptosystem. We verify the correctness