Author

Ryozo Kitajima

Bio: Ryozo Kitajima is an academic researcher from Tokai University. The author has contributed to research in topics: Artificial neural network & Competitive learning. The author has an h-index of 4 and has co-authored 13 publications receiving 32 citations.

Papers
Journal ArticleDOI
01 Jul 2015
TL;DR: A new information-theoretic method based on the information enhancement method is proposed to extract important input variables; applied to mission statements, only one main factor could be extracted, namely “contribution to the society”.
Abstract: This paper proposes a new information-theoretic method based on the information enhancement method to extract important input variables. The information enhancement method was developed to detect important components in neural systems. Previous methods have focused on the detection of only the most important components, and therefore have failed to fully incorporate the information contained in the components into learning processes. In addition, it has been observed that the information enhancement method cannot always extract input information from input patterns. Thus, in this paper a computational method is developed to accumulate information content in the process of information enhancement. The method was applied to an artificial data set and to the analysis of mission statements. The results demonstrate that while the symmetric properties of the artificial data set could be extracted explicitly, only one main factor could be extracted from the mission statements, namely “contribution to the society”. Companies with higher profits tended to have mission statements concerning society. The results can be considered a first step toward fully clarifying the importance of mission statements in actual business activities.

13 citations
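
The computational details of accumulating information content are not given in the abstract above. The following is a minimal, hypothetical numpy sketch of the general idea only: a competitive layer fires according to a softmax over negative weighted distances, each input variable is "enhanced" in turn by an emphasis factor, and the resulting mutual information between competitive units and input patterns is taken as that variable's importance. The softmax temperature, the enhancement factor alpha, and the random toy data are assumptions for illustration, not values from the paper.

import numpy as np

def firing_probs(X, W, emphasis, beta=2.0):
    """Softmax firing probabilities of competitive units for each pattern."""
    # emphasis-weighted squared distances between patterns and unit weights
    d = ((X[:, None, :] - W[None, :, :]) ** 2 * emphasis).sum(axis=2)
    p = np.exp(-beta * d)
    return p / p.sum(axis=1, keepdims=True)          # shape (N patterns, M units)

def mutual_information(P):
    """Mutual information (nats) between competitive units and input patterns."""
    N = P.shape[0]
    p_unit = P.mean(axis=0)                          # marginal firing rates
    return (P / N * np.log(P / p_unit[None, :])).sum()

def accumulated_enhancement(X, W, alpha=4.0):
    """Score each input variable by the information obtained when it is enhanced."""
    D = X.shape[1]
    scores = np.zeros(D)
    for k in range(D):
        emphasis = np.ones(D)
        emphasis[k] = alpha                          # enhance (emphasize) variable k
        scores[k] = mutual_information(firing_probs(X, W, emphasis))
    return scores / scores.sum()                     # relative importance per variable

rng = np.random.default_rng(0)                       # toy data and codebook
X = rng.normal(size=(50, 6))
W = rng.normal(size=(8, 6))
print(accumulated_enhancement(X, W))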

Proceedings ArticleDOI
12 Jul 2015
TL;DR: A new type of information-theoretic method is proposed to enhance the potentiality of input neurons for improving the class structure of self-organizing maps (SOM); results showed that the method could be used to enhance a smaller number of input neurons.
Abstract: The present paper proposes a new type of information-theoretic method to enhance the potentiality of input neurons for improving the class structure of the self-organizing map (SOM). The SOM has received much attention in neural networks because it can be used to visualize input patterns, in particular, to clarify class structure. However, it has been observed that good visualization performance is limited to relatively simple data sets. To visualize more complex data sets, a method is needed to extract the main characteristics of input patterns more explicitly. For this purpose, several information-theoretic methods have been developed, but with some problems. One of the main problems is that these methods need heavy computation to obtain the main features, because the computational procedures for obtaining information content must be repeated many times. To simplify the procedures, a new measure called the “potentiality” of input neurons is proposed. The potentiality is based on the variance of connection weights for input neurons, and it can be computed without the complex computation of information content. The method was applied to an artificial symmetric data set and to the biodegradation data set from the machine learning database. Experimental results showed that the method could be used to enhance a smaller number of input neurons. Those neurons were effective in intensifying class boundaries for clearer class structures. The present results show the effectiveness of the new potentiality measure for improved visualization and class structure.

8 citations
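
The abstract above defines the potentiality of an input neuron through the variance of its connection weights over the competitive neurons, which avoids repeated information-theoretic computation. A minimal sketch of that measure follows, using a tiny plain-numpy 1-D SOM trainer as a stand-in for the paper's SOM; the lattice size, learning schedule, and max-normalization of the variances are assumptions for illustration.

import numpy as np

def train_som(X, n_units=10, epochs=30, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny 1-D SOM trainer (plain numpy, for illustration only)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))
    grid = np.arange(n_units)
    T = epochs * len(X)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
            lr = lr0 * (1 - t / T)
            sigma = sigma0 * (1 - t / T) + 0.5
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)                 # neighborhood update
            t += 1
    return W

def potentiality(W):
    """Variance of each input neuron's weights over competitive units,
    normalized so the largest value is 1 (assumed normalization)."""
    v = W.var(axis=0)
    return v / v.max()

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X[:, 0] *= 3.0                       # make the first variable more spread out
W = train_som(X)
print(potentiality(W))               # higher value = more 'potential' input neuron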

Proceedings ArticleDOI
01 Nov 2015
TL;DR: The method was applied to real tweet data collected during the earthquake, and it was found that the method could classify tweets as important or unimportant more accurately than conventional machine learning methods.
Abstract: The present paper aims to apply a new neural learning method called "Neural Potential Learning" (NPL) to the classification and interpretation of tweets. It is well known that social media such as Twitter play crucial roles in transmitting important information at the time of natural disasters. In particular, since the Great East Japan Earthquake in 2011, Twitter has been considered one of the most efficient and convenient communication tools. However, because much redundant information is contained in tweets, it is usually difficult to obtain important information from the flow of tweets. Thus, methods are urgently needed to extract important and useful information from redundant tweets. To cope with complex and redundant data, a new neural potential learning method has been developed to extract the important information. The method aims to find neurons with high potentiality and enhance those neurons as much as possible, so as to reduce redundant information and focus on important information. The method was applied to real tweet data collected during the earthquake, and it was found that the method could classify tweets as important or unimportant more accurately than other conventional machine learning methods. In addition, the method made it possible to interpret how the tweets were classified, based on the examination of the highly potential neurons.

4 citations

Proceedings Article
06 Feb 2008
TL;DR: This paper uses several information-theoretic measures, such as conditional information and information losses, to extract main features in input patterns, and applies the method to an artificial data set, the Iris problem, and a student survey.
Abstract: In this paper, we propose a new information-theoretic approach to competitive learning and self-organizing maps. We use several information-theoretic measures, such as conditional information and information losses, to extract main features in input patterns. For each competitive unit, the conditional information content is used to show how much information on input patterns is contained. In addition, for detecting the importance of each variable, information losses are introduced. The information loss is defined as the difference between the information with all input units and the information without an input unit. We applied the method to an artificial data set, the Iris problem, and a student survey. In all cases, experimental results showed that the main features in input patterns were clearly detected.

2 citations
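
The abstract above defines the information loss of a variable as the difference between the information computed with all input units and the information computed with that input unit removed. A small numpy sketch of that definition is given below; the Gaussian-style firing function, the uniform pattern prior, and the toy data are assumptions made for illustration rather than the paper's exact formulation.

import numpy as np

def firing_probs(X, W, keep=None, beta=1.0):
    """p(competitive unit | pattern) using only the input units listed in `keep`."""
    D = X.shape[1]
    keep = np.arange(D) if keep is None else np.asarray(keep)
    d = ((X[:, None, keep] - W[None, :, keep]) ** 2).sum(axis=2)
    p = np.exp(-beta * d)
    return p / p.sum(axis=1, keepdims=True)

def information(P):
    """Mutual information between competitive units and input patterns (nats)."""
    N = P.shape[0]
    p_unit = P.mean(axis=0)
    return (P / N * np.log(P / p_unit[None, :])).sum()

def information_losses(X, W):
    """Information with all inputs minus information without each input unit."""
    D = X.shape[1]
    full = information(firing_probs(X, W))
    losses = np.empty(D)
    for k in range(D):
        keep = [j for j in range(D) if j != k]
        losses[k] = full - information(firing_probs(X, W, keep))
    return losses

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
W = rng.normal(size=(6, 4))
print(information_losses(X, W))   # larger loss = more important input variable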


Cited by
Proceedings ArticleDOI
01 Oct 2015
TL;DR: A new type of method based on self-organizing maps is proposed to enhance the potentiality of input neurons, which can be applied to the extraction of important features and to improving generalization performance.
Abstract: The present paper proposes a new type of method based on the self-organizing map to enhance the potentiality of input neurons, which can be applied to the extraction of important features and to improved generalization performance. The importance of input neurons plays an important role in self-organizing maps. However, few attempts have been made to determine the importance of input neurons, because it has been difficult to measure the importance of neurons in unsupervised learning such as the SOM. Though some information-theoretic methods have been developed to estimate the importance, they need heavy computation to reach the final state. In this context, a new and very simple method is proposed to estimate the importance of input neurons, and its performance is experimentally evaluated. The new method is based on the concept of "potentiality". The potentiality is the variance of an input neuron's connection weights toward the competitive neurons: the higher the potentiality, the larger the variance for that input neuron. The self-organizing map with this potentiality was applied to two well-known data sets. In both cases, a smaller number of important neurons could be extracted, and generalization performance in the supervised mode was much improved compared with that of conventional methods. However, the map quality in terms of quantization and topographic errors may be degraded. This implies that, in actual applications, a compromise is needed between improved generalization and map quality. Though some problems remain to be solved, the present method of potentiality is simple and strong enough for extracting the importance of input neurons.

14 citations
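
The cited paper reports that a small number of high-potentiality input neurons could be extracted and that generalization in a supervised mode improved. A hypothetical end-to-end illustration of such a workflow is sketched below, using scikit-learn's wine data, k-means centroids as a stand-in for the SOM codebook, and logistic regression as the supervised learner; all of these choices, including keeping the top five inputs, are assumptions for illustration, and the printed accuracies depend on the data.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)

# codebook of "competitive neurons" (k-means centroids as a SOM stand-in)
centroids = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_

# potentiality: normalized variance of each input's weights over the codebook
pot = centroids.var(axis=0)
pot = pot / pot.max()

# keep only the inputs with the highest potentiality
keep = np.argsort(pot)[::-1][:5]

clf = LogisticRegression(max_iter=1000)
acc_all = cross_val_score(clf, X, y, cv=5).mean()
acc_sel = cross_val_score(clf, X[:, keep], y, cv=5).mean()
print(f"all inputs: {acc_all:.3f}  selected high-potentiality inputs: {acc_sel:.3f}")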

Book ChapterDOI
12 Jun 2016
TL;DR: This paper presents the application of Givens rotations, based on QR decomposition, to the process of learning a feedforward artificial neural network, and describes the mathematical background that needs to be considered during the application.
Abstract: This paper presents the application of Givens rotations in the process of learning a feedforward artificial neural network. The approach is based on QR decomposition. The paper describes the mathematical background that needs to be considered during the application of the Givens rotations. The paper concludes with the results of example simulations.

14 citations
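
The chapter's training scheme is not reproduced here; the sketch below only illustrates the core numerical ingredient it names, a QR factorization built from Givens rotations, and applies it to a deliberately simple setting: a feedforward network with a fixed random hidden layer whose linear output weights are obtained by least squares from the QR factors. The network shape and the toy regression data are assumptions for illustration.

import numpy as np

def givens_qr(A):
    """QR factorization of A (m >= n) built from 2-by-2 Givens rotations."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            if b == 0.0:
                continue
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])        # rotation that zeroes R[i, j]
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

# toy "feedforward network": fixed random hidden layer, linear output layer
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # inputs
T = np.sin(X.sum(axis=1, keepdims=True))           # targets
W_hid = rng.normal(size=(4, 20))
H = np.tanh(X @ W_hid)                             # hidden activations (200, 20)

# least-squares output weights from the QR factors: R W_out = Q^T T
Q, R = givens_qr(H)
n = H.shape[1]
W_out = np.linalg.solve(R[:n, :n], (Q.T @ T)[:n])

pred = H @ W_out
print("training MSE:", np.mean((pred - T) ** 2))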

Book ChapterDOI
02 Sep 2016
TL;DR: The present paper aims to interpret the final representations obtained by neural networks by maximizing the mutual information between neurons and data sets; the method is applied to restaurant data for which ordinary regression analysis could not show good performance.
Abstract: The present paper aims to interpret the final representations obtained by neural networks by maximizing the mutual information between neurons and data sets. Because complex procedures are needed to maximize information, the computational procedures are simplified as much as possible in the present method. The simplification lies in realizing mutual information maximization indirectly by focusing on the potentiality of neurons. The method was applied to restaurant data for which ordinary regression analysis could not show good performance. For this problem, we tried to interpret the final representations and to obtain improved generalization performance. The results revealed a simple configuration in which just a single important feature was extracted to explicitly explain the motivation to visit the restaurant.

9 citations

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The paper presents a new feedforward neural network architecture that is able to process imperfect input data, i.e. in the form of intervals or with missing values, and gives an imprecise answer as a result of input data imperfection.
Abstract: The paper presents a new feedforward neural network architecture. Thanks to incorporating rough set theory, the new network is able to process imperfect input data, i.e. in the form of intervals or with missing values. The paper focuses on the latter case. In contrast to imputation, marginalisation and similar solutions, the proposed architecture is able to give an imprecise answer as a result of input data imperfection. In the extreme case, the answer can be indefinite, in contrast to the confabulation typical of the aforementioned methods. Experiments performed on three classification benchmark datasets, for every possible combination of missing attribute values, showed that the proposed solution works well with missing data, with accuracy dependent on the level of missing data.

8 citations
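
The rough-set machinery of the cited architecture is not reproduced here. The sketch below only illustrates the imprecise-answer idea in a generic way: a missing attribute is replaced by its feasible interval, the interval is propagated through an ordinary two-layer network with exact interval arithmetic for the linear parts and monotone tanh activations, and the answer is reported as indefinite when the class-score intervals overlap. The weights, the feasible range, and the decision rule are placeholders, not the paper's construction.

import numpy as np

def interval_linear(lo, hi, W, b):
    """Exact interval bounds of W @ x + b when x is known only as [lo, hi]."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_forward(lo, hi, layers):
    """Propagate an input interval through linear layers with tanh activations."""
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_linear(lo, hi, W, b)
        if i < len(layers) - 1:                 # tanh is monotone, so apply to bounds
            lo, hi = np.tanh(lo), np.tanh(hi)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(2, 8)), rng.normal(size=2))]   # two class scores

x = np.array([0.2, -1.0, 0.5, 0.3])
lo, hi = x.copy(), x.copy()
lo[1], hi[1] = -3.0, 3.0          # attribute 1 is missing: use its feasible range

score_lo, score_hi = interval_forward(lo, hi, layers)
print("class 0 score in", (score_lo[0], score_hi[0]))
print("class 1 score in", (score_lo[1], score_hi[1]))

# overlapping score intervals -> the answer is indefinite
if score_lo[0] > score_hi[1]:
    print("answer: class 0")
elif score_lo[1] > score_hi[0]:
    print("answer: class 1")
else:
    print("answer: indefinite (intervals overlap)")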

Book ChapterDOI
18 Nov 2016
TL;DR: A new type of information-theoretic method called “potential joint information maximization” is proposed, which has the effect of reducing the number of jointly fired neurons and thereby stabilizing the production of final representations.
Abstract: The present paper aims to propose a new type of information-theoretic method called “potential joint information maximization”. The joint information maximization has the effect of reducing the number of jointly fired neurons and thereby stabilizing the production of final representations. The final connection weights are then collectively interpreted by averaging the weights produced by different data sets. The method was applied to a data set on rebel participation among youths. The results show that the final weights could be collectively interpreted and that only one feature could be extracted. In addition, generalization performance could be improved.

8 citations