
Showing papers in "Electronics and Communications in Japan Part Iii-fundamental Electronic Science in 2005"


Journal ArticleDOI
TL;DR: Numerical simulation indicates that it is possible to share the private key in a realistic setting with an SN ratio of 15 dB by means of various measures for prevention of key disagreement, and an evaluation confirms that independence is sufficiently maintained.
Abstract: As a measure for preventing eavesdropping on land-mobile communication, this paper studies the application of private key agreement based on the propagation characteristics of the channel to a practical system without key delivery. The paper proposes a private key agreement scheme based on the time-varying frequency characteristics, suitable for OFDM. In the proposed method, the time-varying frequency characteristics are alternately measured on the basis of channel reciprocity, and key generation and key agreement are attempted. Also, in the proposed method, for prevention of errors in key generation, measurement time difference compensation of the channel, noise reduction by a synchronous addition process, and key disagreement correction by an algebraic decoding method are applied. Further, in order to evaluate the safety of the key, a method for evaluating the independence of the key is discussed. As a system model for evaluating the performance of the proposed method, a model is constructed based on the specification of the IEEE 802.11a wireless LAN using OFDM, so that verification of operation can be carried out in a more realistic environment. Numerical simulation indicates that it is possible to share the private key in a realistic setting with an SN ratio of 15 dB by means of various measures for prevention of key disagreement. Also, an evaluation confirms that independence is sufficiently maintained. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(9): 1–10, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20167
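The core idea above — both ends independently quantize their own measurements of the reciprocal channel, so no key is ever transmitted — can be sketched in a few lines. This is not the paper's OFDM procedure; it is a minimal illustration assuming median thresholding of simulated channel gains, with hypothetical noise levels standing in for the high-SNR regime:

```python
import random

def channel_key_bits(gains, threshold):
    """Quantize channel-gain measurements into key bits (1 above threshold)."""
    return [1 if g > threshold else 0 for g in gains]

random.seed(1)
true_gain = [random.gauss(0.0, 1.0) for _ in range(128)]  # reciprocal channel
noise = 0.05                                              # high-SNR measurement noise
alice = [g + random.gauss(0.0, noise) for g in true_gain]
bob = [g + random.gauss(0.0, noise) for g in true_gain]

# Each side thresholds its own measurements at its own median (no exchange).
def med(xs):
    return sorted(xs)[len(xs) // 2]

key_a = channel_key_bits(alice, med(alice))
key_b = channel_key_bits(bob, med(bob))
disagreement = sum(a != b for a, b in zip(key_a, key_b)) / len(key_a)
```

The residual `disagreement` is small but typically nonzero, which is exactly why the paper adds key-disagreement correction by algebraic decoding.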

36 citations


Journal ArticleDOI
TL;DR: The major purpose of this study is to realize the function that can manage the diversified learning objects with various information granularities and representation formats, using the learning object metadata, so that each learner can utilize the learning object based on the learning scenario, which is matched to the individual learner.
Abstract: In this study, an e-learning system is developed to handle the e-learning environment based on the learning ecological model. In the learning ecological model, which represents the comprehensive e-learning environment, not only the contents of learning, but also the learning environment are managed and provided, based on the content, the goal, and the configuration of the learning. The major purpose of this study is to realize the function that can manage the diversified learning objects with various information granularities and representation formats, using the learning object metadata, so that each learner can utilize the learning object based on the learning scenario, which is matched to the individual learner. The learning scenario is constructed by sequencing the learning objects based on the learning necessity, the learning history information, and the curriculum information of the object of learning, according to the characteristics of the learning object. As the sequencing procedure, the sequencing of the learning objects is considered, by applying the optimization technique of the multi-objective optimization problem, so that multiple evaluation viewpoints are simultaneously satisfied. The genetic algorithm is used as the optimization procedure. The learning object metadata and the sequencing of the learning objects are discussed in detail in this paper. The evaluation of the developed e-learning system is also described. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(3): 54–71, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20163
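The sequencing step described above — ordering learning objects by a genetic algorithm so that several objectives are satisfied at once — can be illustrated with a deliberately simplified, mutation-only genetic search. The objects, difficulties, prerequisite pairs, and the scalarization of the two objectives below are all hypothetical:

```python
import random

# Hypothetical learning objects: ids 0..5 with difficulty levels, plus
# prerequisite pairs (a must come before b) from curriculum information.
objects = list(range(6))
difficulty = [1, 3, 2, 5, 4, 6]
prereqs = [(0, 2), (1, 3), (2, 4), (3, 5)]

def cost(order):
    pos = {o: i for i, o in enumerate(order)}
    violations = sum(pos[a] > pos[b] for a, b in prereqs)        # objective 1
    jumps = sum(abs(difficulty[order[i + 1]] - difficulty[order[i]])
                for i in range(len(order) - 1))                  # objective 2
    return 10 * violations + jumps   # crude scalarized multi-objective fitness

def mutate(order):
    a, b = random.sample(range(len(order)), 2)
    child = order[:]
    child[a], child[b] = child[b], child[a]   # swap two learning objects
    return child

random.seed(0)
pop = [random.sample(objects, len(objects)) for _ in range(20)]
for _ in range(200):                          # generations, elitist survival
    pop.sort(key=cost)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
best = min(pop, key=cost)
```

The paper's method is richer (full GA operators and true multi-objective evaluation); this sketch only shows how a permutation-encoded evolutionary search can trade off prerequisite order against difficulty smoothness.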

26 citations


Journal ArticleDOI
TL;DR: Time-varying pen-position signals of the on-line signature are decomposed into subband signals by using the DWT, and verification is achieved with adaptive signal processing; the verification rate was 95% even if the writer was not permitted to refer to his/her own signature or if a forger traced the genuine signature.
Abstract: This paper presents an on-line signature verification method based on the Discrete Wavelet Transform (DWT) and adaptive signal processing. Time-varying pen-position signals of the on-line signature are decomposed into subband signals by using the DWT. Individual features are extracted as high-frequency signals in subbands. Verification is achieved in each subband by determining whether the adaptive weight converges to unity. The overall verification decision is made by averaging the verification results in lower subbands. Experimental results show that the verification rate was 95% even if the writer was not permitted to refer to his/her own signature or if a forger traced the genuine signature. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(6): 1–11, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20143
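The decision rule above — an adaptive weight that converges to unity when the test signature matches the reference in a subband — can be illustrated with a one-level Haar DWT and a scalar LMS update. This is a simplified stand-in for the paper's method; the signals, step size, and forgery model below are hypothetical:

```python
import math

def haar_dwt(x):
    """One-level Haar DWT: (approximation, detail) subband pair."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def adaptive_weight(ref, test, mu=0.2, steps=1000):
    """Scalar LMS weight mapping ref -> test; converges to 1.0 when they match."""
    w, n = 0.0, len(ref)
    for k in range(steps):
        x = ref[k % n]
        e = test[k % n] - w * x
        w += mu * e * x
    return w

# Hypothetical pen-position traces (the real system uses recorded signatures).
genuine = [math.sin(0.3 * t) + 0.5 * math.sin(2.1 * t) for t in range(64)]
_, d_ref = haar_dwt(genuine)
_, d_same = haar_dwt(genuine)                      # same writer
_, d_forge = haar_dwt([0.4 * v for v in genuine])  # forgery: scaled pen dynamics

w_same = adaptive_weight(d_ref, d_same)    # near 1.0 -> accept
w_forge = adaptive_weight(d_ref, d_forge)  # far from 1.0 -> reject
```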

17 citations


Journal ArticleDOI
TL;DR: In this article, the boundary surface control principle is used in such a way that a boundary surface is formed only in the direction not intended for radiation of the sound, so that a directional speaker system can be realized.
Abstract: According to the boundary surface control principle, it is possible to reflect the flow of acoustic energy at a boundary if a boundary surface with impedance 0 can be formed by controlling the sound pressure and the particle velocity at an arbitrary position in space. If this principle is used in such a way that a boundary surface is formed only in the direction not intended for radiation of the sound, then a directional speaker system can be realized. In the conventional design method of the directional speaker array system, in order to achieve sharp directivity in the low frequencies, it is necessary to enlarge the array length. On the other hand, in our proposed system, it is easy to achieve sharp directivity in the low frequencies, since the boundary surface control principle can achieve high performance in the low frequencies. Moreover, by changing the control sound source spacing for each bandwidth, rather flat characteristics can be obtained over the entire control range. In this paper, the basic characteristics constituting guidelines for system design and the effect of control are confirmed by numerical calculations and experiments. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(2): 1–9, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20122

13 citations


Journal ArticleDOI
TL;DR: In this paper, a method for multi-beam forming of wideband signals by means of delay processors and two-dimensional fan filters is proposed, which is carried out by multiplexing the delay processors for two-dimensional signals for each beam direction.
Abstract: In this paper, a method is proposed for multi-beam forming of wideband signals by means of delay processors and two-dimensional fan filters. Only one axially symmetric two-dimensional fan filter is used. Multi-beam forming is carried out by multiplexing the delay processors for two-dimensional signals for each beam direction. The delay processing here corresponds to a rotation of the two-dimensional spectra in the two-dimensional frequency domain and provides a delay proportional to the sensor location to each sensor output. Since each delay is a fractional delay, a method of digital signal processing based on high-speed sampling and thinning is used for realization. A design method for multi-beam forming is given and a study of expanding the receiving angle range is carried out. It is shown that excellent multi-beam forming is possible if the method is used within an appropriate receiving signal range, although the beam width is slightly different depending on the beam direction. Finally, a difference of the method from multi-beam forming by the use of several two-dimensional fan filters is discussed. The positioning of this method and its effectiveness are clarified. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(12): 1–12, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20175
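The fractional-delay step described above — realized in the paper by high-speed sampling and thinning — is equivalent to interpolating the signal at shifted sample instants. A minimal sketch, with a hypothetical upsampling factor and linear interpolation standing in for the paper's digital filters:

```python
import numpy as np

def fractional_delay(x, delay, upsample=16):
    """Delay x by a fractional number of samples via upsample-shift-decimate."""
    n = np.arange(len(x))
    fine = np.arange(0, len(x), 1 / upsample)     # "high-speed sampling" grid
    x_up = np.interp(fine, n, x)                  # interpolate to the fine grid
    shift = int(round(delay * upsample))          # integer shift at the fine rate
    x_shift = np.concatenate([np.zeros(shift), x_up])[:len(x_up)]
    return x_shift[::upsample]                    # "thinning" back to the original rate

x = np.arange(16, dtype=float)    # ramp test signal
y = fractional_delay(x, 0.5)      # for a ramp, y[n] = n - 0.5 after the edge
```

In the beamformer, one such delay per sensor (proportional to sensor position) rotates the two-dimensional spectrum so that a single fan filter can select each beam.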

11 citations


Journal ArticleDOI
TL;DR: In this paper, an autonomous decentralized ATC system (D-ATC) is proposed, which works as follows: each train detects its own position, and only the position at which to stop is communicated from the ground to the train, and the velocity of the train is determined and controlled autonomously on the basis of this instruction.
Abstract: High transport capacity, high safety, and high reliability are required of train control systems in the high-density operational sections. The conventional systems operate as follows. The presence of a train is detected at each fixed section. Based on a decision as to whether or not the train should progress into the section, the velocity of the train is calculated by a centralized control system on the ground, and the result is communicated to the automatic train control device (ATC) of the train. It is difficult to realize high-density transport by this mechanism. This paper proposes an autonomous decentralized ATC system (D-ATC), which works as follows. Each train detects its own position. Only the position at which to stop is communicated from the ground to the train, and the velocity of the train is determined and controlled autonomously on the basis of this instruction. Then, an assurance technology is proposed in which trains with D-ATC are introduced stepwise without disturbing the operation of the whole system, and their testing in on-line operation is assured while they coexist with trains with the existing ATC. The on-train integration and the on-train separation technologies are presented as two methods of stepwise introduction and are evaluated from the viewpoint of assurance. Based on the results obtained in this study, a practical application was performed in the train control system of the JR East Yamanote/Keihin Tohoku line. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(10): 46–56, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20194
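The on-board computation at the heart of D-ATC — the train receives only a stop position and derives its own speed limit from a continuous braking curve — reduces, in idealized form, to v² = 2ad. A minimal sketch; the deceleration rate and safety margin below are hypothetical values, not JR East parameters:

```python
import math

def speed_limit(distance_to_stop_m, decel=0.6, margin_m=50.0):
    """Braking-curve speed limit (m/s) given the distance (m) to the
    transmitted stop position, with a protective margin before it."""
    d = max(distance_to_stop_m - margin_m, 0.0)
    return math.sqrt(2.0 * decel * d)   # from v^2 = 2*a*d

# The train brakes whenever its own measured speed exceeds the curve.
v_cmd = speed_limit(1250.0)   # stop point 1250 m ahead
```

Because the curve is computed on board from the train's own position, the ground system no longer needs fixed track sections or centralized speed calculation.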

9 citations


Journal ArticleDOI
TL;DR: An efficient rollback algorithm is proposed, which is based on the snapshots taken by the subsnapshot algorithm, in which the snapshot is taken among the agents who are in the causal relation, through message exchange and agent creation.
Abstract: This paper considers an Internet agent system in which a tremendous number of agents operate, frequently appearing and disappearing, and discusses the fault-tolerant algorithm. Application of the snapshot algorithm to the agent system is considered. The snapshot algorithm is used to view the whole situation (snapshot) of the distributed system. The snapshot algorithm of Chandy and Lamport [2] is considered as a representative snapshot algorithm, in terms of the high efficiency and the simplicity of the procedure. It is not practical, however, to apply their snapshot algorithm to the distributed agent system in which a tremendous number of agents operate. From such a viewpoint, this paper extends the idea of Chandy and Lamport's algorithm and proposes a subsnapshot algorithm, in which the snapshot is taken among the agents who are in the causal relation, through message exchange and agent creation. Then, an efficient rollback algorithm is proposed, which is based on the snapshots taken by the subsnapshot algorithm. In the general rollback algorithm utilizing the snapshot, all agents must roll back. In contrast, in the rollback algorithm proposed in this paper, it suffices that only some agents should roll back. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(12): 43–57, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20208
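The key property of the proposed rollback — only agents causally downstream of the failure must roll back, not the whole system — amounts to a reachability computation over the causal edges (messages and agent creations) recorded since the last subsnapshot. A minimal sketch with hypothetical agent names:

```python
def rollback_set(failed, deps):
    """Agents that must roll back: the failed agent plus everything causally
    downstream of it via recorded message-send / agent-creation edges."""
    stack, seen = [failed], {failed}
    while stack:
        a = stack.pop()
        for b in deps.get(a, []):
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# Hypothetical causal edges recorded since the last subsnapshot:
deps = {"A": ["B"], "B": ["C"], "D": ["E"]}   # A -> B -> C,  D -> E
affected = rollback_set("A", deps)            # D and E keep running
```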

9 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined what type of bifurcation develops when an extremely small periodic force drives a van der Pol oscillator that generates the duck solution, and found that two duck solutions can exist when the oscillator is driven by such an extremely small force.
Abstract: In this paper, we examine what type of bifurcation develops when an extremely small periodic force drives a van der Pol oscillator that generates the duck solution. We create the bifurcation diagrams and explain the structures of bifurcation in the fundamental harmonic entrainment region, 1/2 subharmonic entrainment region, and 1/3 subharmonic entrainment region. In each region, we observed the cascade generation of the period-doubling bifurcation in the extremely small periodic force range, and the occurrence of chaos. We also discovered the very interesting phenomenon of the existence of two duck solutions when driven by an extremely small periodic force. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(4): 51–59, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20093
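The system studied above, a van der Pol oscillator under a very small periodic drive, is easy to set up numerically. The sketch below uses RK4 with illustrative parameters; it shows the forced-oscillator setup only, and is not tuned to the slow-fast regime where the duck (canard) solutions and the reported bifurcation cascades appear:

```python
import math

def vdp_forced(mu=5.0, eps=1e-3, omega=1.0, dt=0.001, steps=60000):
    """RK4 integration of x'' = mu*(1 - x^2)*x' - x + eps*cos(omega*t)."""
    def f(t, x, v):
        return v, mu * (1 - x * x) * v - x + eps * math.cos(omega * t)
    x, v, t = 0.1, 0.0, 0.0
    xs = []
    for _ in range(steps):
        k1 = f(t, x, v)
        k2 = f(t + dt / 2, x + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = f(t + dt / 2, x + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = f(t + dt, x + dt * k3[0], v + dt * k3[1])
        x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
        xs.append(x)
    return xs

xs = vdp_forced()
peak = max(abs(x) for x in xs)   # relaxation-oscillation amplitude, roughly 2
```

Bifurcation diagrams like the paper's are built by sweeping `eps` (or `omega`) and recording stroboscopic samples of such trajectories.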

8 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive estimation of the direction of arrival is proposed by using the training network called the RBF (Radial Basis Function), in which the problem is treated as one of deriving the mapping from the autocorrelation matrix to the angle of arrival, and the DOA estimation problem is then solved.
Abstract: DOA (direction-of-arrival) estimation problems have considerable importance in the fields of radar, sonar, and high-resolution spectral analysis. Many methods have been proposed for solution of DOA estimation by using array antennas. In general, the computational complexity is significant. In the present paper, adaptive estimation of the direction of arrival is proposed by using the training network called the RBF (Radial Basis Function). In this method, the problem is treated as one of deriving the mapping from the autocorrelation matrix to the angle of arrival and then the DOA estimation problem is solved. Specifically, the angle of arrival and the signal power are independently discretized. The autocorrelation matrix corresponding to these discretized data is used as the training data for the RBF network. In this processing, the basis functions are reduced by means of clustering. Also, by sensitivity analysis, a network that is robust to the estimation error of the autocorrelation matrix is constructed. In this way, the DOA can be estimated at high speed. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(9): 11–20, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20161
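The mapping idea above — train an RBF network from autocorrelation-derived features to arrival angles, then estimate at low cost — can be sketched with Gaussian basis functions and least squares. The two-element-array feature, angle grid, and kernel width below are all illustrative assumptions, far simpler than the paper's clustered, sensitivity-analyzed network:

```python
import numpy as np

def rbf_fit(X, y, centers, gamma=10.0):
    """Solve for RBF output weights by least squares (Gaussian basis)."""
    Phi = np.exp(-gamma * np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                         axis=2) ** 2)
    return np.linalg.lstsq(Phi, y, rcond=None)[0]

def rbf_predict(X, centers, w, gamma=10.0):
    Phi = np.exp(-gamma * np.linalg.norm(X[:, None, :] - centers[None, :, :],
                                         axis=2) ** 2)
    return Phi @ w

angles = np.linspace(-60, 60, 25)               # training angles (degrees)
phase = np.pi * np.sin(np.deg2rad(angles))      # 2-element-array correlation phase
X = np.stack([np.cos(phase), np.sin(phase)], axis=1)
w = rbf_fit(X, angles, X)                       # centers at the training points

test_phase = np.pi * np.sin(np.deg2rad(17.0))   # unseen 17-degree source
est = rbf_predict(np.array([[np.cos(test_phase), np.sin(test_phase)]]), X, w)
```

Once trained, each estimate is a single small matrix-vector product, which is the source of the claimed speed advantage over eigen-decomposition methods.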

8 citations


Journal ArticleDOI
TL;DR: It is shown that the global optimum solution of problems solved iteratively by an interactive process can be derived by means of an extended Dinkelbach-type algorithm.
Abstract: This paper treats a multiobjective linear programming problem in which the coefficients contained in the objective function of the problem are fuzzy random variables. First, in order to take into account ambiguities of judgment by a human decision maker, fuzzy objectives are introduced. Subsequently, we consider a problem of maximizing the possibility and necessity of the objective function value to satisfy the fuzzy objectives. Since these degrees vary stochastically, a formulation is based on the fractile optimization model in a stochastic programming method. A process is presented for equivalent transformation to a deterministic multiobjective nonlinear fractional programming method. For the transformed multiobjective programming problem, an interactive fuzzy satisficing method that derives a satisfactory solution of the decision maker through interactions with the decision maker is proposed. It is shown that the global optimum solution of problems solved iteratively by an interactive process can be derived by means of an extended Dinkelbach-type algorithm. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(5): 20–28, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20136
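The abstract's "extended Dinkelbach-type algorithm" builds on the classical Dinkelbach iteration for fractional programming. A minimal sketch of the classical version (not the paper's fuzzy-random extension), on a toy discrete feasible set:

```python
def dinkelbach(N, D, feasible, tol=1e-9):
    """Maximize N(x)/D(x) over a feasible set with D(x) > 0.
    Iterate: given q, find x* maximizing N(x) - q*D(x); update q = N(x*)/D(x*).
    Stop when the parametric optimum reaches zero."""
    q = 0.0
    while True:
        x = max(feasible, key=lambda x: N(x) - q * D(x))
        if abs(N(x) - q * D(x)) < tol:
            return x, q
        q = N(x) / D(x)

# Toy instance: maximize (3x + 1) / (x + 2) over x in {0, 1, ..., 10}.
x_opt, ratio = dinkelbach(lambda x: 3 * x + 1, lambda x: x + 2, list(range(11)))
```

In the paper, the inner maximization is a (transformed) nonlinear program rather than a search over a finite set, but the outer parametric update is the same.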

8 citations


Journal ArticleDOI
TL;DR: A technique that is used to recognize a material from the reflected ultrasonic waves is employed and a category recognition system using two ultrasonic sensors is proposed; a recognition rate of nearly 100% was obtained with five materials as categories, and high recognition rates were obtained for almost all of the materials when the categories were extended to combinations of the material, angle, and distance.
Abstract: In this paper, we employ a technique that is used to recognize a material from the reflected ultrasonic waves and propose a category recognition system using two ultrasonic sensors. In this system, the reflected waveforms are handled as two-dimensional images, and which category an object belongs to is instantaneously recognized by pattern matching to reference data by a combinatorial logic circuit. In this system, combinations of information such as the material, angle, and distance of the object are set as the recognition categories. The pattern matching is carried out by a combinatorial logic circuit created directly from the reference data. The recognition experiments with measured waveform data performed according to the proposed method demonstrated the features of the proposed system and its effectiveness when applied to recognition under specific conditions. With five materials set as the categories, the results show that a recognition rate of nearly 100% was obtained, and high recognition results were obtained for almost all of the materials when the categories were extended to combinations of the material, angle, and distance. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(7): 33–42, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20147
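The matching step above — compare a binarized echo "image" against stored reference patterns and pick the closest category — is what the combinatorial logic circuit implements in hardware. A software stand-in using nearest-reference Hamming distance, with hypothetical bit patterns and category names:

```python
def hamming(a, b):
    """Number of differing bits between two equal-length patterns."""
    return sum(x != y for x, y in zip(a, b))

def classify(pattern, references):
    """Nearest-reference category by Hamming distance on binarized echoes."""
    return min(references, key=lambda cat: hamming(pattern, references[cat]))

# Hypothetical binarized reflected-waveform patterns (flattened 2-D images):
refs = {"wood":  [1, 1, 0, 0, 1, 0, 1, 0],
        "metal": [1, 0, 1, 1, 0, 1, 0, 0],
        "cloth": [0, 0, 1, 0, 0, 1, 1, 1]}
noisy = [1, 1, 0, 0, 1, 0, 0, 0]   # a wood echo with one flipped bit
```

In the paper this comparison is compiled directly into logic gates from the reference data, so recognition is effectively instantaneous.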

Journal ArticleDOI
TL;DR: Results suggest that a correspondence strategy based on the survey map is probably used in the experiments under both the route map cue and survey map cue conditions, and the correspondence strategy using the survey map seems to become more effective in successive trials.
Abstract: This study is concerned with route learning and its verification. It is investigated how accurately information can be utilized from a cognitive map of the same format or from cognitive maps with different formats. An experiment based on cognitive psychology is performed in which the subject learns beforehand a route in a virtual space. A cue to the location of the decision point is given. Then, the subject judges the orientation from that point. Two kinds of cue, either conditions in route map format or conditions in survey map format, are provided. The subject learns the virtual environment based on the route map, but it is inferred from the experiments that the cognitive map is formed as a survey map. In the test session, when a cue to the survey map concerning the start point and the present point, or a cue to the route map for automatic progress is given, the score achieved in the judgment of the orientation toward the goal point is better than in a control experiment without any cues. These results suggest that a correspondence strategy based on the survey map is probably used in the experiments with both the route map cue and survey map cue conditions. It should be noted, however, that the percentage correct for the route map condition is low in the initial stage of the experiment block, which seems to indicate that the correspondence strategy using the survey map becomes more effective in successive trials. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(4): 43–50, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20096

Journal ArticleDOI
TL;DR: A road representation model that can detect and track road areas from road scenes photographed by monocular cameras and reconstruct 3D shapes of roads and reveal stable road area extraction at a level of 96%.
Abstract: This paper proposes a road representation model that can detect and track road areas from road scenes photographed by monocular cameras and reconstruct 3D shapes of roads. This model can reconstruct road shapes smoothly and accurately while tracking the road boundaries stably with the parallel property of roads as the active or dynamic contour model constraints. Results of tests on 2500 real road scenes using the model reveal stable road area extraction at a level of 96%. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(9): 42–52, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20189

Journal ArticleDOI
TL;DR: This paper proposes a method which generates a prime degree irreducible polynomial with a Type II ONB as its zeros, and shows that an m-th degree irreducible polynomial can always be generated from a 2m-th degree self-reciprocal irreducible polynomial by the self-reciprocal inverse transformation.
Abstract: In most of the methods of public key cryptography devised in recent years, a finite field of a large order is used as the field of definition. In contrast, there are many studies in which a higher-degree extension field of characteristic 2 is implemented fast for easier hardware realization. There are also many reports of the generation of the required higher-degree irreducible polynomial, and of the construction of a basis suited to fast implementation, such as an optimal normal basis (ONB). For generating higher-degree irreducible polynomials, there is a method in which a 2m-th degree self-reciprocal irreducible polynomial is generated from an m-th degree irreducible polynomial by a simple polynomial transformation (called the self-reciprocal transformation). This paper considers this transformation and shows that when the set of zeros of the m-th degree irreducible polynomial forms a normal basis, the set of zeros of the generated 2m-th degree self-reciprocal irreducible polynomial also forms a normal basis. Then it is clearly shown that there is a one-to-one correspondence between the transformed irreducible polynomial and the generated self-reciprocal irreducible polynomial. Consequently, the inverse transformation of the self-reciprocal transformation (self-reciprocal inverse transformation) can be applied to a self-reciprocal irreducible polynomial. It is shown that an m-th degree irreducible polynomial can always be generated from a 2m-th degree self-reciprocal irreducible polynomial by the self-reciprocal inverse transformation. We can use this fact for generating 1/2-degree irreducible polynomials. As an application of 1/2-degree irreducible polynomial generation, this paper proposes a method which generates a prime degree irreducible polynomial with a Type II ONB as its zeros. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(7): 23–32, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20151
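The self-reciprocal transformation maps f(x) of degree m to F(x) = x^m f(x + 1/x) of degree 2m. Over GF(2), using the identity x^m (x + 1/x)^i = (x^2 + 1)^i x^(m-i), this is directly computable on coefficient lists. A small sketch (coefficient lists are low-degree-first):

```python
def pmul(a, b):
    """Multiply polynomials over GF(2) (coefficient lists, low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def padd(a, b):
    """Add polynomials over GF(2) (XOR of coefficients)."""
    if len(a) < len(b):
        a, b = b, a
    return [a[i] ^ (b[i] if i < len(b) else 0) for i in range(len(a))]

def self_reciprocal(f):
    """F(x) = x^m f(x + 1/x), via x^m (x + 1/x)^i = (x^2 + 1)^i x^(m - i)."""
    m = len(f) - 1
    F = [0]
    for i, c in enumerate(f):
        if c:
            term = [1]
            for _ in range(i):
                term = pmul(term, [1, 0, 1])         # multiply by x^2 + 1
            term = pmul(term, [0] * (m - i) + [1])   # multiply by x^(m - i)
            F = padd(F, term)
    return F

f = [1, 1, 1]            # x^2 + x + 1, irreducible over GF(2)
F = self_reciprocal(f)   # x^4 + x^3 + x^2 + x + 1, self-reciprocal
```

The palindromic coefficient list of `F` is exactly the self-reciprocal property; the paper's inverse transformation recovers an m-th degree polynomial from such a 2m-th degree one.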

Journal ArticleDOI
TL;DR: A packet-forwarding scheme based on the UWB system, where the spreading factor is adjusted according to the distance between the transmitter and the receiver, is considered; the objective is to realize higher throughput than in single-hop transmission by using multihop transmission which controls the maximum transmission distance.
Abstract: This paper discusses a routing scheme for an autonomous distributed wireless network (mobile ad-hoc networking) using an ultra-wideband (UWB) system. The UWB system is a kind of spread-spectrum communication system which has the function of measuring the distance between the transmitter and the receiver with high accuracy. Using this distance measurement function, the received energy per bit is kept constant by adjusting the spreading factor according to the transmission distance. This paper considers a packet-forwarding scheme based on the above feature of the UWB system, where the spreading factor is adjusted according to the distance between the transmitter and the receiver. The objective is to realize higher throughput than in single-hop transmission by using multihop transmission which controls the maximum transmission distance. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(2): 22–30, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20082
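The throughput argument above follows from one relation: to keep received energy per bit constant against path loss growing as d^alpha, the spreading factor must grow as (d/d0)^alpha, so shorter hops spend fewer chips per bit. A back-of-envelope sketch with hypothetical parameters (alpha, chip rate, reference spreading factor):

```python
def spreading_factor(d, d0=1.0, sf0=4, alpha=2.0):
    """Spreading factor keeping received energy per bit constant:
    path loss ~ d**alpha, so SF scales as (d/d0)**alpha."""
    return sf0 * (d / d0) ** alpha

def airtime(d, bits=1000, chip_rate=1e9):
    """Time on air for one packet: bits * SF chips at a fixed chip rate."""
    return bits * spreading_factor(d) / chip_rate

single = airtime(10.0)        # one 10 m hop
two_hop = 2 * airtime(5.0)    # relay at the midpoint: half the total airtime
```

With alpha = 2, splitting a 10 m link into two 5 m hops halves the total airtime even though the packet is transmitted twice, which is the multihop throughput gain the paper exploits.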

Journal ArticleDOI
TL;DR: A survey of previous research into speaker adaptation techniques focusing particularly on maximum a posteriori (MAP) parameter estimation, maximum likelihood linear regression (MLLR), and eigenvoices is presented.
Abstract: In speech recognition, speaker adaptation refers to the range of techniques whereby a speech recognition system is adapted to the acoustic features of a specific user using a small sample of utterances from that user. In recent years the practical development of speaker-independent speech recognition systems using continuous density hidden Markov models has seen significant progress; however, the recognition performance of these systems has not yet reached that of speaker-dependent speech recognition systems in which a user's speech is registered beforehand. Much hope has therefore been placed on the establishment of speaker adaptation techniques that can bring performance of a speaker-independent system up to that of a speaker-dependent one using the smallest amounts of data. In this paper we present a survey of previous research into speaker adaptation techniques focusing particularly on three important approaches in this area: maximum a posteriori (MAP) parameter estimation, maximum likelihood linear regression (MLLR), and eigenvoices. We also discuss approaches that combine these techniques in a lateral fashion. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(12): 25–42, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20207
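Of the three approaches surveyed, MAP estimation is the simplest to sketch: for a single Gaussian mean with a conjugate prior, the adapted mean is the prior (speaker-independent) mean shrunk toward the adaptation-data mean. The values below are illustrative, not from the survey:

```python
def map_adapt_mean(prior_mean, tau, samples):
    """MAP estimate of a Gaussian mean with prior weight tau:
    (tau * prior + n * sample_mean) / (tau + n)."""
    n = len(samples)
    sample_mean = sum(samples) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)

mu_si = 0.0                         # speaker-independent model mean
adaptation = [1.0, 1.2, 0.8, 1.0]   # small sample from the target speaker
mu_map = map_adapt_mean(mu_si, 10.0, adaptation)
```

With little data the estimate stays near the speaker-independent model; as the sample grows it converges to the speaker's own statistics, which is exactly the data-efficiency trade-off the survey contrasts with MLLR and eigenvoices.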

Journal ArticleDOI
TL;DR: The authors used a normal camera and computer for the experiments and assume that the technique will be implemented in the future on a device created by using simple electronic circuits and light-receiving elements such as photodiode arrays.
Abstract: To aid an automobile driver, image processing is used to detect another vehicle approaching from the left or right rear, which may easily enter a blind spot. The basic information that is used for detection is image movement (optical flow). However, calculating image movement by traditional means is operationally intensive and requires expensive equipment. Noting that an insect uses information from simple sensors such as compound eyes and processing performed by a simple nervous system to obtain sufficient information, the authors instead tried to reach their objective with simple processing and low-resolution image information. Although they used a normal camera and computer for the experiments, the authors assume that the technique will be implemented in the future on a device created by using simple electronic circuits and light-receiving elements such as photodiode arrays. A major feature of the proposed technique is that its computational complexity is low. The ratio of the computation time to that of the average conventional image processing was on the order of 10⁻⁵. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(10): 57–65, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20195

Journal ArticleDOI
Fumio Teraoka1
TL;DR: In this paper, the various protocols used to achieve node mobility on the Internet are compared and evaluated, and the methods used to satisfy the above factors are classified.
Abstract: In this paper, the various protocols used to achieve node mobility on the Internet are compared and evaluated. Node mobility can be defined as hiding from the application or user changes in the connection point to the Internet of the terminal. Node mobility can be broken down into two forms of mobility: wide area mobility and local mobility. Wide area mobility can be achieved by satisfying two factors: mobile node reachability and communication continuity. Local mobility can be achieved by satisfying two factors: packet loss avoidance and control traffic suppression. In this paper, the various proposed protocols for achieving node mobility are analyzed, and the methods used to satisfy the above factors are classified. Each classified method is then analyzed and evaluated from the standpoint of communication efficiency, scalability, security, fault tolerance, and ease of implementation. In addition, specific examples are given for the representative protocols. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(6): 39–59, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20173


Journal ArticleDOI
TL;DR: This paper considers the multi-objective linear programming problem and discusses the case in which the coefficients of the objective function are fuzzy random variables, and proposes a model in which the possibility or the necessity that the objective function value achieves the fuzzy goal is maximized on the basis of fuzzy programming or possibility programming.
Abstract: This paper considers the multi-objective linear programming problem and discusses the case in which the coefficients of the objective function are fuzzy random variables. First, the fuzzy goal is introduced into the objective function, considering the fuzzy decision of the human decision-maker. We consider a model in which the possibility or the necessity that the objective function value achieves the fuzzy goal is maximized on the basis of fuzzy programming or possibility programming. It is noted that the possibility or necessity fluctuates stochastically, and a decision-making process based on the stochastic programming model is proposed. After modifying the problem with constraint into an equivalent deterministic problem, the convex programming problem with a parameter is introduced. Then, an algorithm is proposed in which the optimal solution is derived, combining nonlinear programming and the bisection method. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(1): 68–75, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.10151
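The outer loop of the solution method above combines an inner (convex) subproblem with bisection on a level parameter: a level h is kept if the subproblem is feasible at h, and the largest attainable level is located by halving the interval. A generic sketch with a toy feasibility test standing in for the convex subproblem:

```python
def bisect_level(feasible, lo=0.0, hi=2.0, tol=1e-6):
    """Largest level h in [lo, hi] with feasible(h) true, assuming feasibility
    is monotone: true up to some threshold, false above it."""
    assert feasible(lo) and not feasible(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Toy stand-in for the parametric subproblem: level h is attainable iff some
# x on a grid over [0, 2] achieves 1.25 - (x - 1)**2 >= h (maximum is 1.25).
h_star = bisect_level(lambda h: any(1.25 - (x / 100 - 1) ** 2 >= h
                                    for x in range(201)))
```

Each bisection step needs only a yes/no answer from the subproblem, which is why the combination with nonlinear programming is tractable.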

Journal ArticleDOI
TL;DR: A new class of discriminant measures defined on the set of all positive finite measures using the invariance of the f-divergence is derived, which includes measures that facilitate statistical data processing and are suitable for the explicit formulation and analysis of the ensemble learning method.
Abstract: The f-divergence, which is defined by using a class of convex functions, is used as a generalized phenotype of discriminant measures between two probability distributions. In this paper, we derive a new class of discriminant measures defined on the set of all positive finite measures using the invariance of the f-divergence. This is a class of discriminant measures different from ones which have often been used. We mention some aspects of its effectiveness concerning the extension of discriminant measures by showing that the proposed class includes measures that facilitate statistical data processing and are suitable for the explicit formulation and analysis of the ensemble learning method. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(4): 35–42, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20128
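The f-divergence named above has the standard form D_f(P‖Q) = Σᵢ qᵢ f(pᵢ/qᵢ) for a convex f with f(1) = 0; different choices of f yield familiar discriminant measures. A small sketch on probability vectors (the paper's extension to general positive measures drops the normalization constraint):

```python
import math

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_i q_i * f(p_i / q_i), for strictly positive q."""
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

def f_kl(t):           # f for Kullback-Leibler divergence
    return t * math.log(t) if t > 0 else 0.0

def f_tv(t):           # f for total variation distance
    return 0.5 * abs(t - 1)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
d_kl = f_divergence(p, q, f_kl)
d_tv = f_divergence(p, q, f_tv)   # equals 0.5 * sum |p_i - q_i|
```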

Journal ArticleDOI
TL;DR: It is shown that generation of random numbers is possible with an LC oscillator to which shot noise is applied, and an evaluation of statistical randomness (FIPS 140-2) is carried out for the generated number sequence as cryptographic random numbers.
Abstract: The authors previously published a physical method for the generation of random numbers by a variable capacitance parametron, which can generate uniform random numbers without periodicity. It is considered possible to generate random numbers with an ordinary oscillator if the oscillation frequency is made unstable. As a method of introducing instability, frequency modulation of an LC oscillator with shot noise is considered. The present paper describes experiments with an LC oscillator to which shot noise is applied. An evaluation of statistical randomness (FIPS 140-2) is carried out on the generated number sequence as cryptographic random numbers, and it is shown that generation of random numbers is possible in this way. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(5): 12–19, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20149
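A loose software analogue of this generator can be sketched by sampling a frequency-jittered oscillator and screening the output with the FIPS 140-2 monobit bounds; the hardware itself cannot be reproduced in code, and the jitter model, constants, and seed below are illustrative assumptions:

```python
import random

def jittered_osc_bits(n, cycles=77.0, jitter=0.05, seed=1):
    """One bit per sampling instant: the sign of a free-running
    oscillator whose per-sample phase advance performs a random walk,
    a crude stand-in for shot-noise frequency modulation of an LC tank."""
    rng = random.Random(seed)
    phase, bits = 0.0, []
    for _ in range(n):
        # about 77 oscillator cycles elapse per sample, with ~5% jitter
        phase = (phase + cycles * (1.0 + jitter * rng.gauss(0.0, 1.0))) % 1.0
        bits.append(1 if phase < 0.5 else 0)
    return bits

def monobit_ok(bits):
    """FIPS 140-2 monobit test: among 20,000 bits, the count of ones
    must satisfy 9725 < ones < 10275."""
    return len(bits) == 20000 and 9725 < sum(bits) < 10275

bits = jittered_osc_bits(20000)
```

Because the per-sample jitter spans several oscillator cycles, the sampled phase is essentially uniform and the sign bits come out balanced; the full FIPS 140-2 suite also includes poker, runs, and long-run tests.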

Journal ArticleDOI
TL;DR: In this paper, an adjacent solution is proposed for effective search of only the general configuration floor plan without empty rooms by using the simulated annealing method, and the diameter of this solution space is proven to be of polynomial size.
Abstract: In VLSI layout design, a “floor plan” is often derived prior to solving the placement problem. The floor plan is obtained by dividing the square representing the chip into several rectangular rooms, to which modules are assigned, and then the locations of the modules and interconnects are approximately determined. In floor plan study, the use of slice structures is common. Recently, however, a method of converting the sequence-pair, as a representation of the rectangular packing, to the floor plan has been proposed. Hence, it is now possible to list all floor plans with general configurations including the nonslice structure. However, in such a method there is the deficiency that an “empty room” without an assigned module may be generated. In the present paper, an adjacent solution is proposed for efficiently searching only the general-configuration floor plans without empty rooms by using the simulated annealing method. An efficient technique for deriving this adjacent solution is described. The diameter of this solution space is proven to be of polynomial size. Also, the proposed solution space is experimentally compared with the solution space of slice structures. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(6): 28–38, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.10003
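The outer search loop is plain simulated annealing; a generic skeleton is shown below, where the toy integer move stands in for the paper's adjacent-solution operation on empty-room-free floor plans:

```python
import math, random

def simulated_annealing(initial, neighbor, cost, t0=1.0, alpha=0.95,
                        iters=2000, seed=0):
    """Generic SA loop: `neighbor` draws an adjacent solution; worse
    moves are accepted with probability exp(-delta/t) at temperature t."""
    rng = random.Random(seed)
    cur, cur_c = initial, cost(initial)
    best, best_c = cur, cur_c
    t = t0
    for _ in range(iters):
        cand = neighbor(cur, rng)
        c = cost(cand)
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha                 # geometric cooling schedule
    return best, best_c

# toy search space: minimize (x - 7)^2 over integers with +/-1 moves
best, best_c = simulated_annealing(
    0, lambda x, rng: x + rng.choice((-1, 1)), lambda x: (x - 7) ** 2)
```

The paper's contribution lives entirely in the `neighbor` function: constructing moves that stay inside the set of general-configuration floor plans with no empty rooms, with a solution-space diameter of polynomial size.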

Journal ArticleDOI
TL;DR: In this paper, a friction drive model was proposed to analyze the effect of the changes in these parameters on the drive performance and to discuss the motor design and the drive conditions based on evaluations from the perspectives of the drive power, output, and efficiency.
Abstract: The contact states of the slider and stator substrate greatly affect the drive performance of a surface acoustic wave motor. The contact states are determined by factors such as the friction coefficient, vibration amplitude, rigidity of the contact surface, contact time with the Rayleigh wave, and slider speed. A model was proposed that represented the relationship between these factors and the drive performance. However, guidance has not been provided on how these values should be changed to improve the drive performance. In this research, we use the friction drive model that we previously proposed to analyze the effect of changes in these parameters on the drive performance and to discuss the motor design and the drive conditions based on evaluations from the perspectives of the drive power, output, and efficiency. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(1): 37–47, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20079

Journal ArticleDOI
TL;DR: A new method is proposed for effective detection of alteration of digital image data by applying coding theory and cryptographic theory at the same time.
Abstract: It is conceivable to embed additional information in certificates, existing as paper media or digital image data, in order to judge whether they have been altered. The authors have already proposed and discussed a method for configuration of image information enabling reconstruction of the original image information from contaminated image information on paper by means of coding theory. Based on this proposal, a new method is proposed for effective detection of alteration of digital image data by applying coding theory and cryptographic theory at the same time. In the present paper, this method is critically discussed and a new method is proposed. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(8): 9–17, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20150
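On the cryptographic side, the detection principle can be illustrated with a standard HMAC check; this is a generic sketch, not the paper's construction, which additionally uses error-correcting codes so that contaminated data can still be reconstructed:

```python
import hashlib, hmac

def sign_image(data, key):
    """Append an HMAC-SHA256 tag to the image bytes; any later change
    to the data invalidates the tag."""
    return data + hmac.new(key, data, hashlib.sha256).digest()

def altered(blob, key):
    """Recompute the tag over the received data and compare it with
    the embedded tag using a constant-time comparison."""
    data, tag = blob[:-32], blob[-32:]
    expect = hmac.new(key, data, hashlib.sha256).digest()
    return not hmac.compare_digest(tag, expect)

key = b"secret-key"                        # illustrative key
blob = sign_image(b"pixels...", key)       # placeholder image bytes
tampered = blob[:-33] + b"X" + blob[-32:]  # flip one data byte, keep the tag
```

An HMAC alone only answers "altered or not"; combining it with coding theory, as the abstract describes, is what allows localized alterations to be detected and corrected.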

Journal ArticleDOI
TL;DR: Simulation shows that packet recovery by (n, k, m) convolutional codes is effective in the area where the packet loss rate is rather low.
Abstract: A lost packet recovery method is proposed by using (n, k, m) convolutional codes. The amount of computation in encoding and decoding as well as the lost packet recovery capability are evaluated. In the proposed method, a packet with a length of q bits is considered as a symbol on GF(2q) for application of the convolutional codes. It is recognized that on the Internet the packet loss position can be identified. Then, decoding is applied to a lossy communications channel on the packet level. First, the procedures for coding and decoding are presented. Subsequently, the packet loss recovery characteristic in the proposed method is analyzed. The condition for recovery of all lost packets is discussed. In the proposed method, linear simultaneous equations can be developed from the generator matrix and the packet loss positions. If these equations have a unique solution, all packet losses can be recovered. Also, by means of the lost packet recovery simulation, the recovery capability and computational complexity of the proposed method are evaluated. Under the model in which packets are independently lost, the method is compared with the packet recovery method using (n, k) Reed–Solomon codes with identical n and k. Simulation shows that packet recovery by (n, k, m) convolutional codes is effective in the region where the packet loss rate is relatively low. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(7): 1–13, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20155
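At the packet level, known-position loss recovery reduces to solving linear equations over GF(2^q); the single-parity code below is the simplest such instance, an illustrative block-code analogue rather than the paper's (n, k, m) convolutional construction:

```python
def add_parity(packets):
    """Append one redundancy packet: the bytewise XOR of the k data
    packets, each packet viewed as a symbol over GF(2^q)."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover_one(received):
    """Erasure decoding with a known loss position (marked None): the
    XOR of all surviving packets solves the one linear equation and
    restores the missing packet."""
    lost = received.index(None)
    size = len(next(p for p in received if p is not None))
    out = bytes(size)
    for p in received:
        if p is not None:
            out = bytes(a ^ b for a, b in zip(out, p))
    fixed = list(received)
    fixed[lost] = out
    return fixed[:-1]                      # strip the parity packet

data = [b"abcd", b"wxyz", b"1234"]
rx = add_parity(data)
rx[1] = None                               # the loss position is known
assert recover_one(rx) == data
```

A single parity packet recovers exactly one erasure; the paper's convolutional structure spreads redundancy across time so that loss patterns satisfying its recoverability condition yield a uniquely solvable system.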

Journal ArticleDOI
TL;DR: An interpolation scheme involving the warping and biasing that selectively moves the coordinate points and the signal amplitudes is explained, and digital image interpolation with improved resolution is implemented.
Abstract: Since the dimensions for digital images are set by the acquisition system, digital image enlargement is indispensable when the dimensions that can be displayed are larger in the display system than in the acquisition system. Conventionally, this enlargement process was performed by interpolation. The digital image enlargement corresponds to a larger Nyquist frequency, but the frequency band that is expanded during the enlargement cannot be recovered by the interpolation scheme, and blur is produced in the enlarged image. The parts where the image blur stands out subjectively are primarily the signals forming the image contours having the step edge shape and the details of the image consisting of mountain- and valley-shaped signals when viewed microscopically. Therefore, the proposed method moves the coordinate points interpolated to preserve the discontinuities of step edge signals and decides whether the interpolated points are the vertices of the mountain- or valley-shaped signals, and then moves the amplitudes of the interpolated points for creating these peak signals up or down. In other words, we explain an interpolation scheme involving the warping and biasing that selectively moves the coordinate points and the signal amplitudes, and implement digital image interpolation with improved resolution. Furthermore, the parameter settings for the proposed method are both objectively and subjectively evaluated. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(2): 10–21, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20095

Journal ArticleDOI
TL;DR: Using the aftereffect function method, the point-spread function of an imaging system, which is difficult to measure, can be derived, and the feasibility of the biological imaging by CP-MCT is discussed.
Abstract: This paper considers chirp pulse microwave CT (CP-MCT), in which the chirp pulse and signal processing technology are used to estimate the internal structure and temperature variation of biological objects. A method is developed in which the CT image to be acquired by actual measurement can be derived by numerical computation. The aftereffect function between the transmitting and receiving antennas is calculated by a Gaussian pulse and the FDTD method. After convolution of the function with the input chirp pulse signal on the time axis, the measured signal at a point is constructed by the same signal processing as in real measurement. This computation is repeated for each point on the translation-scanning axis. The projection data are acquired and the CT image is generated. The validity of this computation procedure, which is called the aftereffect function method, is demonstrated by comparing the resolution and the measured temperature variation in actual measurements with the results of a simulation computation. Using the aftereffect function method, the point-spread function of an imaging system, which is difficult to measure, can be derived. An analytical model for the human head is constructed. Simulations are performed for the attenuation distribution and the temperature variation distribution. Based on the results, the feasibility of the biological imaging by CP-MCT is discussed. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(9): 53–63, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20190
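Two standard operations underlie the measurement chain: convolving the channel response with the transmitted chirp, and compressing the received signal by correlation. A minimal sketch follows, in which the waveform parameters are arbitrary and a pure delay stands in for the FDTD-computed aftereffect function:

```python
import math

def chirp(n, f0=0.02, f1=0.20):
    """Linear chirp of n samples (frequencies normalized to the
    sampling rate)."""
    return [math.cos(2 * math.pi * (f0 + (f1 - f0) * i / (2 * n)) * i)
            for i in range(n)]

def corr_peak_lag(rx, ref):
    """Pulse compression: the lag maximizing the cross-correlation
    with the transmitted chirp estimates the propagation delay."""
    lags = range(len(rx) - len(ref) + 1)
    return max(lags, key=lambda lag: sum(rx[lag + i] * ref[i]
                                         for i in range(len(ref))))

ref = chirp(64)
rx = [0.0] * 10 + ref + [0.0] * 10     # "channel" = a 10-sample delay
delay = corr_peak_lag(rx, ref)
```

In CP-MCT the received signal is instead the chirp convolved with the computed aftereffect function, and the compressed output feeds the projection data for CT reconstruction.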

Journal ArticleDOI
TL;DR: A method of efficiently clustering large-scale data using the likelihoods of a cluster model that was created from small- scale data as the criteria to obtain a high-precision adapted acoustic model is proposed.
Abstract: In speech recognition systems where the speaker and utterance environment cannot be designated, the drop in recognition precision due to the incompatibility of the input speech and the acoustic model's training data is a problem. Although this problem is normally solved by speaker adaptation, sufficient precision cannot be achieved unless good-quality adaptation data can be obtained. In this paper, the authors propose a method of efficiently clustering large-scale data using the likelihoods of a cluster model that was created from small-scale data as the criteria to obtain a high-precision adapted acoustic model. They also propose a method of using the cluster model to automatically determine the adapted acoustic model during recognition from only the beginnings of the sentences of the input speech. The results of applying the proposed technique to news speech recognition experiments show that the precision of adapted acoustic model selection can be ensured using only 0.5 second of data from the beginnings of sentences of the input speech, and that the proposed technique reduces erroneous recognitions by 20% and the time required for recognition by 23% compared with the case in which the adapted acoustic model for each cluster is not used. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(2): 41–51, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20157
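The selection step can be sketched with toy one-dimensional Gaussian cluster models; a real system would score GMM or HMM likelihoods on acoustic feature vectors, and all names and numbers below are invented for illustration:

```python
import math

def gauss_loglik(xs, mean, var):
    """Log-likelihood of samples xs under a 1-D Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
               for x in xs)

def select_cluster(head_frames, models):
    """Pick the cluster whose model scores the sentence-head frames
    highest; models maps cluster name to (mean, var)."""
    return max(models, key=lambda m: gauss_loglik(head_frames, *models[m]))

models = {"clusterA": (0.0, 1.0), "clusterB": (3.0, 1.0)}
head = [2.8, 3.1, 2.9, 3.3]            # features from a sentence head
```

Because only the short sentence head is scored, the adapted acoustic model for the winning cluster can be switched in before the bulk of the utterance is decoded.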

Journal ArticleDOI
TL;DR: This paper proposes an algorithm that does not transmit preprocessed reference signals to an unknown system, but only uses them to estimate the adaptive filter coefficients, and explains the design conditions needed to implement this algorithm.
Abstract: The reference signals used in a multichannel system are usually strongly cross-correlated. Preprocessing is normally added to reduce the cross-correlation between the reference signals because this correlation makes estimating the adaptive filter coefficients difficult. However, this preprocessing is equivalent to warping the reference signals. The transmission of these warped reference signals to an unknown system hinders the essential system operation. On the other hand, it is necessary for uncorrelated components between the reference signals to exist when estimating the adaptive filter coefficients. Preprocessing that increases the ratio of these components is important in improving the convergence characteristics. In this paper, we propose an algorithm that does not transmit preprocessed reference signals to an unknown system, but only uses them to estimate the adaptive filter coefficients, and explain the design conditions needed to implement this algorithm. In other words, with the condition of not increasing the estimation errors, we derive the optimum values of the coefficients that reduce the correlation between the reference signals, which is a feature of this algorithm, and verify its effectiveness in simulations. By employing the proposed algorithm, improvements in the convergence characteristics can be designed without affecting the essential operation of the system. © 2004 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 88(3): 32–41, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjc.20092
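As a single-channel baseline for the adaptation loop, the sketch below shows plain NLMS system identification; it does not include the paper's multichannel decorrelating preprocessing, and the tap count, step size, and test system are arbitrary choices:

```python
import random

def nlms_identify(x, d, taps=4, mu=0.5, eps=1e-8):
    """NLMS adaptation: the weight vector w tracks the unknown FIR
    system from the reference x and the observed system output d."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        frame = x[n - taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - sum(wi * xi for wi, xi in zip(w, frame))
        norm = sum(xi * xi for xi in frame) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, frame)]
    return w

rng = random.Random(0)
h = [0.5, -0.3, 0.2, 0.1]                     # "unknown" FIR system
x = [rng.gauss(0.0, 1.0) for _ in range(4000)]
d = [sum(h[k] * x[n - k] for k in range(4) if n >= k)
     for n in range(len(x))]
w = nlms_identify(x, d)                       # w converges toward h
```

With a white reference this converges quickly; the multichannel difficulty the paper addresses arises precisely because cross-correlated references make the underlying normal equations ill-conditioned, which is what the decorrelating preprocessing, applied only inside the update, mitigates.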