
Showing papers in "IEICE Transactions on Information and Systems in 2002"


Journal Article
TL;DR: A new kind of hidden Markov model (HMM) based on multi-space probability distribution is proposed, together with a parameter estimation algorithm for the extended HMM; the model can handle sequences which consist of observation vectors with variable dimensionality and discrete symbols.
Abstract: This paper proposes a new kind of hidden Markov model (HMM) based on multi-space probability distribution, and derives a parameter estimation algorithm for the extended HMM. HMMs are widely used statistical models for characterizing sequences of speech spectra, and have been successfully applied to speech recognition systems. HMMs are categorized into discrete HMMs and continuous HMMs, which can model sequences of discrete symbols and continuous vectors, respectively. However, neither the conventional discrete HMM nor the continuous HMM can be applied to observation sequences which consist of continuous values and discrete symbols: F0 pattern modeling of speech is a good illustration. The proposed HMM includes the discrete HMM and the continuous HMM as special cases, and furthermore, can model sequences which consist of observation vectors with variable dimensionality and discrete symbols. key words: hidden Markov model, text-to-speech, F0, multi-space probability distribution
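To make the multi-space idea concrete, here is a minimal sketch (an illustration under assumed parameter names, not the authors' formulation or code) of one state's observation probability when each observation names a set of spaces and may carry a vector; a zero-dimensional space plays the role of a discrete symbol, such as the unvoiced mark in F0 modeling.

```python
# Hedged sketch: emission probability of one multi-space-distribution HMM state.
# An observation o = (X, x) carries a set of space indices X and, possibly, a
# vector x of varying dimensionality; zero-dimensional spaces model discrete symbols.
import numpy as np

def gaussian_pdf(x, mean, var):
    """Diagonal-covariance Gaussian density."""
    d = len(mean)
    diff = x - mean
    return np.exp(-0.5 * np.sum(diff**2 / var)) / np.sqrt((2 * np.pi)**d * np.prod(var))

def msd_emission(obs_spaces, x, weights, densities):
    """b(o) = sum over spaces g named by the observation of w_g * N_g(x).
    densities[g] is (mean, var) for a continuous space, or None for a
    zero-dimensional (discrete) space, whose density is defined as 1."""
    prob = 0.0
    for g in obs_spaces:
        if densities[g] is None:          # 0-dimensional space: discrete symbol
            prob += weights[g]
        else:
            mean, var = densities[g]
            prob += weights[g] * gaussian_pdf(x, mean, var)
    return prob

# Toy F0 example: space 0 = voiced (1-D log-F0 Gaussian), space 1 = unvoiced.
weights = [0.7, 0.3]
densities = [(np.array([5.0]), np.array([0.05])), None]
print(msd_emission([0], np.array([5.1]), weights, densities))  # voiced frame
print(msd_emission([1], None, weights, densities))             # unvoiced frame
```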

262 citations



Journal Article
TL;DR: A new, effective method of managing flash memory space for flash-memory-specific file systems based on a log-structured file system is proposed, with a focus on lowering cleaning cost and evenly utilizing flash memory cells while maintaining a balance between these two often-conflicting goals.
Abstract: We propose a new effective method of managing flash memory space for flash memory-specific file systems based on a log-structured file system. Flash memory has attractive features such as non-volatility and fast I/O speed, but it also suffers from inability to update in situ and from limited usage (erase) cycles. These drawbacks necessitate a number of changes to conventional storage (file) management techniques. Our focus is on lowering cleaning cost and evenly utilizing flash memory cells while maintaining a balance between these two often-conflicting goals. The proposed cleaning method performs well especially when storage utilization and the degree of locality are high. The cleaning efficiency is enhanced by dynamically separating cold data and non-cold data, which is called ‘collection operation.’ The second goal, that of cycle-leveling, is achieved to the degree that the maximum difference between erase cycles is below the error range of the hardware. Experimental results show that the proposed technique provides sufficient performance for reliable flash storage systems. key words: flash memory, logging, cleaning algorithm, cycle leveling, data collection
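As a rough illustration of balancing cleaning cost against cycle leveling, the sketch below selects a victim segment greedily; the scoring rule, the `max_erase_gap` bound, and the segment fields are assumptions made for the example, not the paper's cleaning or collection algorithm.

```python
# Hedged sketch: victim-segment selection trading cleaning cost (valid data that
# must be copied) against cycle leveling (keeping erase counts close together).
from dataclasses import dataclass

@dataclass
class Segment:
    utilization: float   # fraction of still-valid data, 0..1
    age: float           # time since last write; larger = colder
    erase_count: int     # how many times this segment has been erased

def pick_victim(segments, max_erase_gap=100):
    """Greedy cost-benefit choice: prefer segments with little valid data and
    cold data, while excluding segments whose erase count is already far above
    the minimum, so that wear stays even."""
    min_erase = min(s.erase_count for s in segments)
    def score(s):
        benefit = (1.0 - s.utilization) * s.age   # space reclaimed, weighted by coldness
        cost = 1.0 + s.utilization                # copying valid pages is the cost
        wear_penalty = 1e9 if s.erase_count - min_erase > max_erase_gap else 0.0
        return benefit / cost - wear_penalty
    return max(segments, key=score)

segs = [Segment(0.9, 10, 3), Segment(0.2, 50, 5), Segment(0.1, 5, 200)]
print(pick_victim(segs))  # the low-utilization, cold, not-over-erased segment wins
```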

67 citations


Journal Article
TL;DR: Experimental results show that the homomorphic filtering method improves the contrast in breast tumor images such that the contrast improvement index is increased twofold compared to the conventional wavelet-based enhancement technique.
Abstract: SUMMARY The use of the homomorphic filter technique, adapted to the dyadic wavelet transform, is described for enhancing the contrast of mammographic images. The proposed method employs nonlinear enhancement in homomorphic filtering as well as a denoising method in the wavelet domain. Experimental results show that the homomorphic filtering method improves the contrast in breast tumor images such that the contrast improvement index is increased twofold compared to the conventional wavelet-based enhancement technique.
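The processing chain can be pictured with the following sketch, which uses PyWavelets' decimated DWT as a stand-in for the dyadic wavelet transform and a plain gain plus soft threshold in place of the paper's nonlinear enhancement function; all parameter values are illustrative assumptions.

```python
# Hedged sketch: homomorphic (log-domain) processing combined with wavelet-domain
# enhancement and denoising; not the authors' implementation or parameter values.
import numpy as np
import pywt

def enhance_mammogram(img, wavelet="db4", level=3, gain=2.0, thresh=0.02):
    log_img = np.log1p(img.astype(np.float64))            # homomorphic step: work on log intensities
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    enhanced = [approx]
    for (cH, cV, cD) in details:
        bands = []
        for c in (cH, cV, cD):
            c = pywt.threshold(c, thresh, mode="soft")     # denoise small coefficients
            bands.append(gain * c)                         # paper uses a nonlinear gain; plain gain here
        enhanced.append(tuple(bands))
    out = pywt.waverec2(enhanced, wavelet)
    return np.expm1(out)                                   # back from the log domain

img = np.random.rand(128, 128)   # stand-in for a mammographic region of interest
print(enhance_mammogram(img).shape)
```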

47 citations


Journal Article
TL;DR: The experimental results demonstrated that the performance on edge-preserving smoothing of the NFF, obtained by the proposed framework, is superior to that of the conventional neural and dynamic filters.
Abstract: SUMMARY In this paper, we propose a new neural filter to which the features related to a given task are input, called a neural filter with features (NFF), to further improve the performance of the conventional neural filter. In order to handle the issue of optimal selection of input features, we propose a framework composed of 1) manual selection of candidates for input features related to a given task and 2) training with automatic selection of the optimal input features required for achieving the given task. Experiments on the proposed framework were performed with an application to improving the image quality of medical X-ray image sequences. The experimental results demonstrated that the performance on edge-preserving smoothing of the NFF, obtained by the proposed framework, is superior to that of the conventional neural and dynamic filters.
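A loose stand-in for this two-step framework is sketched below: candidate per-pixel features are pruned with an L1-regularized fit, and a small neural regressor then plays the role of the filter. The feature columns, penalty, and network size are assumptions for illustration; the paper selects features during training of the neural filter itself.

```python
# Hedged, loose stand-in (not the authors' method): L1-based selection of candidate
# features followed by a small neural regressor acting as the filter.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
# Candidate per-pixel input features (e.g. local mean, local variance, gradient
# magnitude, frame difference) -- here just synthetic columns.
X = rng.normal(size=(n, 6))
y = 0.8 * X[:, 0] + 0.5 * X[:, 2] + 0.05 * rng.normal(size=n)  # target: clean pixel value

# 1) automatic selection of candidate features via an L1 penalty
sel = Lasso(alpha=0.05).fit(X, y)
keep = np.flatnonzero(np.abs(sel.coef_) > 1e-3)
print("selected feature indices:", keep)

# 2) train the neural filter on the selected features only
nff = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X[:, keep], y)
print("training R^2:", round(nff.score(X[:, keep], y), 3))
```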

45 citations


Journal Article
TL;DR: A digest is provided of work carried out at The Institute of Cancer Research to develop freehand elastography and apply it to breast investigations, including a comparison of the physical performance of freehand and mechanical elastography.
Abstract: Progress in freehand elastography of the breast. A digest is provided of work carried out at The Institute of Cancer Research to develop freehand elastography and apply it to breast investigations. Topics covered include the development of freehand elastography and its relationship to other methods, a description of the system for off-line clinical evaluation of the freehand method, comparison of the physical performance of freehand and mechanical elastography, early clinical results on 70 breast tumours, real-time imaging, quantitative elastography, and psychophysical aspects of the detection and assessment of elastic lesions. Progress in developing this new medical imaging modality is occurring rapidly throughout the world and its future looks promising.

42 citations



Journal Article
TL;DR: In Vivo Diffusion Tensor Magnetic Resonance Imaging (DT-MRI), as discussed in this paper, reconstructs nerve pathways by calculating a continuous diffusion tensor field from the discrete, noisy, measured DT-MRI data and solving an evolution equation in which the local direction vector of the trajectory is identified with the direction of maximum apparent diffusivity.
Abstract: In Vivo Diffusion Tensor Magnetic Resonance Imaging (DT-MRI) can now be used to elucidate and investigate major nerve pathways in the brain. Nerve pathways are constructed by a) calculating a continuous diffusion tensor field from the discrete, noisy, measured DT-MRI data and then b) solving an equation describing the evolution of a fiber tract, in which the local direction vector of the trajectory is identified with the direction of maximum apparent diffusivity. This approach has been validated previously using synthesized, noisy DT-MRI data. Presently, it is possible to reconstruct large white matter structures in the brain, such as the corpus callosum and the pyramidal tracts. Several problems, however, still affect the method’s reliability. Its accuracy degrades where the fiber-tract directional distribution is non-uniform, and background noise in diffusion weighted MRIs can cause computed trajectories to jump to different tracts. Nonetheless, this method can provide quantitative information with which to visualize and study connectivity and continuity of neural pathways in the central and peripheral nervous systems in vivo, and holds promise for elucidating architectural features in other fibrous tissues and ordered media. key words: MRI, DTI, DT, diffusion, tensor, human, brain, white matter, fiber, tract, trajectory, artifact, noise
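The tract-following step can be pictured with the sketch below (a toy under stated assumptions, not the authors' integration scheme): from a seed point, repeatedly step along the principal eigenvector of the local diffusion tensor, with nearest-neighbour lookup standing in for the continuous tensor field of the paper.

```python
# Hedged sketch: follow the direction of maximum apparent diffusivity
# (principal eigenvector of the local diffusion tensor) from a seed point.
import numpy as np

def principal_direction(D):
    """Unit eigenvector of the 3x3 tensor D with the largest eigenvalue."""
    w, v = np.linalg.eigh(D)
    return v[:, np.argmax(w)]

def track_fiber(tensor_field, seed, step=0.5, n_steps=200):
    """tensor_field: array of shape (X, Y, Z, 3, 3); seed: xyz in voxel units."""
    path = [np.asarray(seed, dtype=float)]
    prev_dir = None
    for _ in range(n_steps):
        p = path[-1]
        idx = tuple(np.clip(np.round(p).astype(int), 0, np.array(tensor_field.shape[:3]) - 1))
        d = principal_direction(tensor_field[idx])
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d                      # keep a consistent orientation along the tract
        path.append(p + step * d)
        prev_dir = d
    return np.array(path)

# Toy field: uniform tensor with a dominant x-axis diffusivity.
field = np.zeros((16, 16, 16, 3, 3)) + np.diag([3.0, 1.0, 1.0])
print(track_fiber(field, seed=(2, 8, 8))[:3])
```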

39 citations


Journal Article
TL;DR: The results of this experiment show that fault tolerance can be achieved at the cost of a small degree of time redundancy, and that performance in the case of a processor failure is improved compared to a previous algorithm.
Abstract: In this paper, we propose a new scheduling algorithm to achieve fault tolerance in multiprocessor systems. This algorithm first partitions a parallel program into subsets of tasks, based on the notion of the height of a task graph. For each subset, the algorithm then duplicates and schedules the tasks in the subset successively. We prove that schedules obtained by the proposed algorithm can tolerate a single processor failure and show that the computational complexity of the algorithm is O(|V|^4), where V is the set of nodes of a task graph. We conduct simulations by applying the algorithm to two kinds of practical task graphs (Gaussian elimination and LU-decomposition). The results of this experiment show that fault tolerance can be achieved at the cost of a small degree of time redundancy, and that performance in the case of a processor failure is improved compared to a previous algorithm. key words: multiprocessors, fault-tolerant scheduling, task graph, heights, task groups
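A much-simplified sketch of the height-and-duplication idea follows; it ignores communication costs and precedence timing within a group, so it illustrates the structure rather than the paper's algorithm.

```python
# Hedged sketch: group tasks by height in the task graph and schedule each task
# on two different processors, so a single processor failure leaves one copy of
# every task intact.  Communication delays are ignored in this toy.
from collections import defaultdict

def heights(tasks, succ):
    """Height = longest path (in edges) from the task to a sink."""
    memo = {}
    def h(t):
        if t not in memo:
            memo[t] = 0 if not succ.get(t) else 1 + max(h(s) for s in succ[t])
        return memo[t]
    return {t: h(t) for t in tasks}

def duplicate_schedule(tasks, succ, cost, n_proc):
    hgt = heights(tasks, succ)
    groups = defaultdict(list)
    for t in tasks:
        groups[hgt[t]].append(t)
    ready = [0.0] * n_proc                       # next free time on each processor
    schedule = []                                # (task, processor, start time)
    for height in sorted(groups, reverse=True):  # from source tasks toward sinks
        for t in groups[height]:
            # place the primary and the duplicate on the two earliest-free processors
            for p in sorted(range(n_proc), key=lambda p: ready[p])[:2]:
                schedule.append((t, p, ready[p]))
                ready[p] += cost[t]
    return schedule

tasks = ["a", "b", "c", "d"]
succ = {"a": ["c"], "b": ["c"], "c": ["d"]}
cost = {"a": 2, "b": 1, "c": 3, "d": 1}
for entry in duplicate_schedule(tasks, succ, cost, n_proc=3):
    print(entry)
```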

38 citations



Journal Article
TL;DR: Experimental results show that CG models can be used to recognize vehicles in real images, and confirm that the eigen-window method can classify vehicles.
Abstract: This paper describes our vehicle classification system, which is based on local-feature configuration. We have already demonstrated that our system works very well for vehicle recognition in outdoor environments. The algorithm is based on our previous work, which is a generalization of the eigen-window method. This method has the following three advantages: (1) It can detect vehicles even if parts of them are occluded. (2) It can detect vehicles even if they are translated in the image because they veer out of their lanes. (3) It does not require segmentation of vehicle areas from input images. However, this method does have a problem. Because it is view-based, our system requires model images of the target vehicles, and collecting real images of the target vehicles is generally a time-consuming and difficult task. To ease the task of collecting images of all target vehicles, we apply our system to computer graphics (CG) models to recognize vehicles in real images. Through outdoor experiments, we have confirmed that using CG models is more effective than collecting real images of vehicles for our system. Experimental results show that CG models can be used to recognize vehicles in real images, and confirm that our system can classify vehicles. key words: eigen-window method, vector quantization, CG image, vehicle recognition, vehicle classification.
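The view-based matching step can be illustrated with the sketch below, in which local windows from CG model images define an eigenspace and a window from a real image is classified by its nearest model window in that space; the window size, dimensionality, and labels are assumed for the example and are not the authors' implementation.

```python
# Hedged sketch: local windows from (CG) model images are projected into an
# eigenspace; a real-image window is classified by its nearest model window.
import numpy as np

def build_eigenspace(model_windows, k=8):
    """model_windows: (N, w*w) array of flattened local windows from model images."""
    mean = model_windows.mean(axis=0)
    centered = model_windows - mean
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]                                   # top-k eigen-windows
    coords = centered @ basis.T                      # model windows in the eigenspace
    return mean, basis, coords

def match_window(window, mean, basis, coords, labels):
    q = (window - mean) @ basis.T
    d = np.linalg.norm(coords - q, axis=1)
    return labels[int(np.argmin(d))]                 # label of the closest model window

rng = np.random.default_rng(1)
model = rng.normal(size=(100, 64))                   # stand-in for 8x8 windows cut from CG images
labels = np.array(["sedan", "truck"] * 50)
mean, basis, coords = build_eigenspace(model)
print(match_window(model[3] + 0.01 * rng.normal(size=64), mean, basis, coords, labels))
```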

Journal Article
TL;DR: Multichannel closed-loop FES control of the wrist joint could be realized with this method, and the newly proposed method was found to provide a solution to the ill-posed problem.

Journal Article
TL;DR: This paper presents an active vision system for on-line traffic sign recognition; the recognition algorithm is designed by intensively using built-in functions of an off-the-shelf image processing board to realize both easy implementation and fast recognition.
Abstract: This paper presents an active vision system for on-line traffic sign recognition. The system is composed of two cameras, one equipped with a wide-angle lens and the other with a telephoto lens, and a PC with an image processing board. The system first detects candidates for traffic signs in the wide-angle image using color, intensity, and shape information. For each candidate, the telephoto camera is directed to its predicted position to capture the candidate at a larger size in the image. The recognition algorithm is designed by intensively using built-in functions of an off-the-shelf image processing board to realize both easy implementation and fast recognition. The results of on-road experiments show the feasibility of the system. key words: Traffic sign recognition, Active vision, Real-time vision, ITS
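The candidate-detection stage might look like the following sketch (the actual system uses the built-in functions of its image processing board): red regions are thresholded in HSV and roughly circular blobs are kept. All threshold values are illustrative assumptions, and the sketch assumes the OpenCV 4.x Python API.

```python
# Hedged sketch: find roughly circular red blobs as traffic-sign candidates in the
# wide-angle image; their bounding boxes would then aim the telephoto camera.
import cv2
import numpy as np

def detect_sign_candidates(bgr, min_area=200):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0, so combine two hue ranges
    mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        perim = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perim * perim + 1e-9)
        if circularity > 0.6:                       # roughly circular -> likely a sign
            candidates.append(cv2.boundingRect(c))  # (x, y, w, h)
    return candidates

frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 240), 40, (0, 0, 255), -1)  # synthetic red disc
print(detect_sign_candidates(frame))
```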






Journal Article
TL;DR: This paper constructs the feed-forward neural network, which is widely used in practice, based on the qubit neuron model, and shows, using a learning diagram composed of the convergence rate and the number of learning iterations, that the proposed model has a higher performance than the conventional model.

Journal Article
TL;DR: A method is proposed for solving problems posed by imbalanced training sets by applying the modified large-scale neural network “CombNET-II,” which consists of two types of neural networks and could achieve a high prediction rate compared to the k-nearest neighbor classifier and the three-layered multilayer perceptron trained with BP.
Abstract: SUMMARY Studies on artificial neural networks have been conducted for a long time, and their contribution has been shown in many fields. However, the application of neural networks in real-world domains is still a challenge, since nature does not always provide the required satisfactory conditions. One example is the class-size-imbalanced condition, in which one class is heavily under-represented compared to another. This condition is often found in real-world domains and presents several difficulties for algorithms that assume balanced classes. In this paper, we propose a method for solving problems posed by imbalanced training sets by applying the modified large-scale neural network “CombNET-II.” CombNET-II consists of two types of neural networks. The first is a one-layer vector quantization neural network that turns the problem into a more balanced condition. The second consists of several modules of three-layered multilayer perceptrons trained by backpropagation for finer classification. CombNET-II combines the two types of neural networks to solve the problem effectively within a reasonable time. The performance is then evaluated by turning the model into a practical application for a fog forecasting problem. Fog forecasting is an imbalanced-training-set problem, since the probability of fog appearance at the observation location is very low. Fog events should be predicted every 30 minutes based on observations of meteorological conditions. Our experiments showed that CombNET-II could achieve a high prediction rate compared to the k-nearest neighbor classifier and the three-layered multilayer perceptron trained with BP. Part of this research was presented in the 1999 Fog Forecasting Contest sponsored by the Neurocomputing Technical Group of IEICE, Japan, and CombNET-II achieved the highest accuracy among the participants.
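A simplified stand-in for the stem-and-branch structure is sketched below, with k-means approximating the vector quantization stem and a small MLP per branch; the data, class imbalance, and network sizes are synthetic assumptions, not the CombNET-II implementation or the fog data.

```python
# Hedged sketch of a stem-and-branch classifier: a vector-quantization "stem"
# (approximated by k-means) splits the data into more balanced subsets, and a
# small MLP "branch" is trained on each subset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

class StemBranchClassifier:
    def __init__(self, n_branches=3):
        self.stem = KMeans(n_clusters=n_branches, n_init=10, random_state=0)
        self.branches = {}

    def fit(self, X, y):
        cluster = self.stem.fit_predict(X)
        for c in np.unique(cluster):
            idx = cluster == c
            if len(np.unique(y[idx])) < 2:
                self.branches[c] = int(y[idx][0])      # single-class branch: constant output
                continue
            clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
            self.branches[c] = clf.fit(X[idx], y[idx])
        return self

    def predict(self, X):
        cluster = self.stem.predict(X)
        out = np.empty(len(X), dtype=int)
        for c, branch in self.branches.items():
            idx = cluster == c
            if idx.any():
                out[idx] = branch if isinstance(branch, int) else branch.predict(X[idx])
        return out

# Imbalanced toy data: few positives, like rare fog events.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + X[:, 1] > 2.5).astype(int)
model = StemBranchClassifier().fit(X, y)
print("positive rate:", y.mean(), "predicted rate:", model.predict(X).mean())
```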


Journal Article
TL;DR: The authors propose a method to eliminate characters such as telops or other text from a single image through an image interpolation method that uses the eigenspace method.
Abstract: In this paper the authors propose a method that uses interpolation to eliminate characters such as telops or other text from a single image, based on an image interpolation method that uses the eigenspace method. Background scenes and other images have a fractal character, and the self-correlation in the image can often be assumed to be high. The authors focus on this point and derive rules for describing the image based on an eigenspace constructed from the single defective image itself. The eigenspace generated in this manner reflects the features of the image, and by using this eigenspace, image interpolation can be achieved. Although this interpolation method does not restore the original image, the authors confirmed through experimental results that it can provide interpolation without a feeling of oddness for images which have a high level of self-correlation. © 2006 Wiley Periodicals, Inc. Syst Comp Jpn, 38(1): 87–96, 2007; Published online in Wiley InterScience. DOI 10.1002/scj.10319.
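The single-image eigenspace interpolation can be pictured with the sketch below: patches from intact regions define an eigenspace, and a patch containing text pixels is completed by the eigenspace reconstruction that best agrees with its known pixels. The patch size, rank, and data are assumptions for illustration, not the authors' code.

```python
# Hedged sketch: build an eigenspace from intact patches of one image, then fill
# the missing pixels of a defective patch from its eigenspace reconstruction.
import numpy as np

def eigen_basis(patches, k=10):
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:k]

def complete_patch(patch, known_mask, mean, basis):
    """Least-squares fit of eigenspace coefficients using only the known pixels,
    then reconstruct the full patch so the missing pixels are filled in."""
    A = basis[:, known_mask].T                       # (n_known, k)
    b = patch[known_mask] - mean[known_mask]
    coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
    filled = mean + coeff @ basis
    out = patch.copy()
    out[~known_mask] = filled[~known_mask]
    return out

rng = np.random.default_rng(2)
patches = rng.normal(size=(500, 49))                 # stand-in for 7x7 patches from intact regions
mean, basis = eigen_basis(patches)
defective = patches[0].copy()
mask = np.ones(49, dtype=bool); mask[20:25] = False  # pretend a telop covers five pixels
print(np.round(complete_patch(defective, mask, mean, basis)[20:25], 2))
```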





Journal Article
TL;DR: Robovie, the robot the authors developed, is equipped with hardware mechanisms suitable for communication, an appearance that is readily personified, and a gesture expression ability comparable to that of humans, which makes it suitable for use as a platform for research on humanoid robots.
Abstract: For everyday robots that operate while interacting with people in daily life, the robot must be designed with an emphasis on interaction and relationships in order to achieve natural and smooth communication. Robovie, which we developed for this purpose, has hardware mechanisms suited to communication: rich sensors, an appearance that is easily personified, and a gesture expression ability similar to that of humans. Furthermore, its behaviors are programmed on a software architecture devised to make interaction functions easy to implement. These features make Robovie well suited as a platform for research with humanoid robots, such as studies of robot-human communication and robot embodiment. This paper reports on Robovie's hardware and software architecture and on demonstration experiments of its interaction functions, showing Robovie's usefulness as a research platform. key words: everyday robot, research platform, human-robot communication

Journal Article
TL;DR: Fast stereo matching algorithms based on the proposed matching strategy and hierarchical multiresolution techniques perform very well on real, complex scenes, and are therefore well suited to fast edge-based stereo applications in real scenes, such as robotics.
Abstract: SUMMARY Finding corresponding edges is considered to be the most difficult part of edge-based stereo matching algorithms. Usually, the correspondence for a feature point in the first image is obtained by searching in a predefined region of the second image, based on the epipolar line and the maximum disparity. Reducing the search region can improve the performance of the matching process in terms of both execution time and accuracy. Traditionally, hierarchical multiresolution techniques, as the fastest methods, are used to decrease the search space and therefore increase the processing speed. Considering the maximum directional derivative of disparity in real scenes, we formulated relations between the maximum search space in the second image and the relative displacement of connected edges (as the feature points) in successive scan lines of the first image. We then proposed a new matching strategy to reduce the search space for edge-based stereo matching algorithms, and developed fast stereo matching algorithms based on the proposed matching strategy and hierarchical multiresolution techniques. The proposed algorithms have two stages: feature extraction and feature matching. We applied these new algorithms to stereo images and compared their results with those of some hierarchical multiresolution algorithms. The execution times of our proposed methods are decreased by 30% to 55% in the feature matching stage. Moreover, the execution time of the overall algorithms (including feature extraction and feature matching) is decreased by 15% to 40% in real scenes, and in some cases the accuracy is increased as well. Theoretical investigation and experimental results show that our algorithms perform very well on real, complex scenes, and are therefore very suitable for fast edge-based stereo applications in real scenes, such as robotic applications.
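The search-space reduction can be illustrated with the toy sketch below: when an edge on the current scan line is connected to an edge on the previous line whose disparity is known, the search is restricted to a small window around that disparity. The matching cost here is only a disparity-continuity term and the slope limit is an assumed value; a real matcher would also compare edge attributes.

```python
# Hedged sketch: restrict the disparity search for an edge to a window around the
# disparity of the connected edge on the previous scan line, instead of [0, d_max].
def match_edge(left_edge_x, right_edges_x, d_max, prev_disparity=None, slope_limit=2):
    """Return the disparity of the best match for one left-image edge.
    right_edges_x: x positions of candidate edges on the corresponding scan line
    of the right image; prev_disparity: disparity of the connected edge one line above."""
    if prev_disparity is None:
        lo, hi = 0, d_max                              # no constraint: full search
    else:                                              # bounded directional derivative of disparity
        lo, hi = prev_disparity - slope_limit, prev_disparity + slope_limit
    best, best_cost = None, float("inf")
    for xr in right_edges_x:
        d = left_edge_x - xr
        if lo <= d <= hi:
            # toy cost: disparity continuity only (a real matcher compares edge attributes too)
            cost = abs(d - (prev_disparity if prev_disparity is not None else 0))
            if cost < best_cost:
                best, best_cost = d, cost
    return best

print(match_edge(100, [60, 88, 95], d_max=50))                     # full search
print(match_edge(100, [60, 88, 95], d_max=50, prev_disparity=11))  # reduced search
```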


Journal Article
TL;DR: In this article, the authors proposed a new method for designing predictors suitable for lossless image coding, where the image is divided into blocks, and these blocks are classified into multiple classes; multiple predictors for adaptive prediction are optimized at the same time by repeatedly executing the design of the predictor for each class and updating the class for each block, based on the cost representing the amount of information.
Abstract: In this paper, the authors propose a new method for designing predictors suitable for lossless image coding. In recent years, lossless coding systems based on the optimal design of predictors for each image have been studied. In these systems, the linear prediction coefficients are determined so as to minimize the mean squared prediction error. In lossless image coding, however, where the ultimate goal is to reduce the coding rate, minimizing the mean squared prediction error does not necessarily yield the best results. Therefore, in order to reduce the coding rate directly, the authors formulate the amount of information in the prediction errors and design the predictors so as to minimize that value. Moreover, the image is divided into blocks, and these blocks are classified into multiple classes; multiple predictors for adaptive prediction are optimized at the same time by repeatedly executing the design of the predictor for each class and updating the class for each block, based on a cost representing the amount of information. In a coding simulation, this system demonstrated superior coding efficiency compared to the conventional method of minimizing the mean squared prediction error, and was confirmed to achieve a coding rate 0.37 bits/pel lower than JPEG-LS, which is an international standard for lossless image coding. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(5): 90–98, 2007; Published online in Wiley InterScience. DOI 10.1002/scj.10318
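The iterative design loop can be sketched roughly as below: blocks are assigned to the predictor class with the shortest estimated code length under a Laplacian residual model, and each class's predictor is then refit on its blocks. Least squares and the Laplacian cost are stand-ins for the paper's exact information-based cost, and the causal context and data are assumptions for illustration.

```python
# Hedged sketch: alternate (1) assigning blocks to the predictor class with the
# smallest estimated code length and (2) refitting each class's linear predictor.
import numpy as np

def block_features(block):
    """Per-pixel causal context (left and upper neighbours) and target values."""
    ctx = np.stack([block[1:, :-1].ravel(), block[:-1, 1:].ravel()], axis=1)
    tgt = block[1:, 1:].ravel()
    return ctx, tgt

def code_length(residuals, scale):
    """Negative log-likelihood (bits) of the residuals under a Laplacian model."""
    scale = max(scale, 1e-3)
    return len(residuals) * np.log2(2 * scale) + np.abs(residuals).sum() / (scale * np.log(2))

def design_predictors(blocks, n_classes=2, n_iters=5):
    rng = np.random.default_rng(0)
    assign = rng.integers(n_classes, size=len(blocks))
    coefs = [np.array([0.5, 0.5])] * n_classes
    scales = [1.0] * n_classes
    for _ in range(n_iters):
        # (2) refit each class's predictor (least squares as a stand-in) and residual scale
        for g in range(n_classes):
            data = [block_features(b) for b, a in zip(blocks, assign) if a == g]
            if not data:
                continue
            X = np.vstack([c for c, _ in data]); y = np.concatenate([t for _, t in data])
            coefs[g], *_ = np.linalg.lstsq(X, y, rcond=None)
            scales[g] = np.mean(np.abs(y - X @ coefs[g]))
        # (1) reassign each block to the class with the cheapest estimated code length
        for i, b in enumerate(blocks):
            ctx, tgt = block_features(b)
            costs = [code_length(tgt - ctx @ coefs[g], scales[g]) for g in range(n_classes)]
            assign[i] = int(np.argmin(costs))
    return coefs, assign

# Synthetic 9x9 blocks: half with vertical structure, half with horizontal structure.
blocks = [np.cumsum(np.random.rand(9, 9), axis=ax) for ax in (0, 1) for _ in range(8)]
coefs, assign = design_predictors(blocks)
print(assign)
```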