
Showing papers in "Transactions of The Japanese Society for Artificial Intelligence in 2004"


Journal ArticleDOI
TL;DR: Two new efficient algorithms to compute reducts in information systems are proposed, based on propositions about reducts and the relation between the reduct and the discernibility matrix; they improve execution time when compared with other methods.
Abstract: In the process of data mining a decision table using Rough Sets methodology, the main computational effort is associated with the determination of reducts. Computing all reducts is a combinatorial NP-hard problem. Therefore, the only way to achieve faster execution is to provide an algorithm, with a better constant factor, which may solve this problem in reasonable time for real-life data sets. The purpose of this paper is to propose two new efficient algorithms to compute reducts in information systems. The proposed algorithms are based on propositions about reducts and the relation between the reduct and the discernibility matrix. Experiments measuring execution time have been conducted on several real-world domains. The results show that the algorithms improve execution time when compared with other methods. In real applications, the two proposed algorithms can be combined.
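As a toy illustration of the objects involved (not the paper's two algorithms, which are not reproduced here), the hypothetical sketch below builds the discernibility entries of a small decision table and extracts one reduct with a simple greedy heuristic:

```python
from itertools import combinations

def discernibility_matrix(objects, attrs, decision):
    """For each pair of objects with different decisions, record the
    set of attributes on which their values differ."""
    entries = []
    for i, j in combinations(range(len(objects)), 2):
        if decision[i] != decision[j]:
            diff = {a for a in attrs if objects[i][a] != objects[j][a]}
            if diff:
                entries.append(diff)
    return entries

def greedy_reduct(objects, attrs, decision):
    """Greedy heuristic: repeatedly pick the attribute that discerns
    the most remaining object pairs (not guaranteed minimal)."""
    remaining = discernibility_matrix(objects, attrs, decision)
    reduct = set()
    while remaining:
        best = max(attrs, key=lambda a: sum(a in e for e in remaining))
        reduct.add(best)
        remaining = [e for e in remaining if best not in e]
    return reduct

# Hypothetical toy decision table: attributes a, b, c and a binary decision.
table = [
    {"a": 1, "b": 0, "c": 0},
    {"a": 1, "b": 1, "c": 0},
    {"a": 0, "b": 0, "c": 1},
]
dec = [0, 1, 1]
red = greedy_reduct(table, ["a", "b", "c"], dec)
```

A greedy cover like this is fast but not guaranteed to yield a minimal reduct; the paper's contribution lies in computing reducts efficiently on real-life data sets.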

18 citations



Journal ArticleDOI
TL;DR: The resulting system demonstrates that the number of interactions needed to find a common reference is reduced as the user model is refined, and a Belief Network based probabilistic reasoning system is presented to determine the object reference.

16 citations



Journal ArticleDOI
TL;DR: A method of unknown word processing, which connects an inputted unknown word to a word registered in the judgment knowledge base, is proposed; the study aims to retrieve the meaning concerning sense and deepen semantic understanding.
Abstract: When we humans receive uncertain information, we interpret it properly, so we can expand the conversation and take proper actions. This is possible because we have "commonsense" concerning words, built up from knowledge stored through long experience. Among the commonsense we use in our everyday lives, there is commonsense concerning quantity, such as size, weight, speed, time, or place; sense or feeling, such as hot, beautiful, or loud; and moreover emotion, such as happy or sad. In order for computers to understand meaning and become closer to human beings, the construction of a "Commonsense Judgment System" which deals with this commonsense is thought to be necessary. One subsystem needed for the "Commonsense Judgment System" is a system that judges commonsense concerning the characteristics of words, namely the "Commonsense Feeling Judgment System." This paper proposes a mechanism to associate the characteristics of a word based on our five senses, such as "an apple is red," with a knowledge base consisting of basic words. When aiming to realize this "Commonsense Feeling Judgment System" and trying to give a computer the same commonsense and judgment ability as human beings, a very important factor is the handling of unknown words. Judgment concerning words that are given to the computer as knowledge beforehand is not a problem, since the system can refer to that knowledge. But when an unknown word, i.e. non-registered knowledge, is inputted, how to process that word is a very difficult problem. In this paper, a method of unknown word processing, which connects an inputted unknown word to a word that is registered in the judgment knowledge base, is proposed. By using a concept base, which is made from several electronic dictionaries, the closeness of meaning is taken into consideration. With this process, it is possible to understand a word that does not exist in the knowledge base.
This study aims to retrieve the meaning concerning sense and deepen semantic understanding.

9 citations


Journal ArticleDOI
TL;DR: The effectiveness of the proposed method is tested with several benchmark functions including k-tablet structures, and it is shown that the proposed LUNDX-m performs better than traditional crossovers, especially when the dimensionality n is higher than 100.
Abstract: This paper presents a Real-coded Genetic Algorithm (RCGA) which can handle high-dimensional ill-scaled structures, the so-called k-tablet structure. The k-tablet structure is a landscape in which the scale of the fitness function differs between a k-dimensional subspace and the orthogonal (n-k)-dimensional subspace. The search speed of traditional RCGAs degrades when high-dimensional k-tablet structures are included in the landscape of the fitness function. In this structure, offspring generated by crossovers are likely to spread over a wider region than the one covered by the parental population. This phenomenon causes stagnation of the search. To resolve this problem, we propose a new crossover, LUNDX-m, which uses only m-dimensional latent variables. The effectiveness of the proposed method is tested with several benchmark functions including k-tablet structures, and we show that it performs better than traditional crossovers, especially when the dimensionality n is higher than 100.
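As a rough illustration of the idea of confining offspring to a low-dimensional latent subspace (this is a hypothetical sketch, not the actual LUNDX-m operator), the code below generates a child inside the subspace spanned by parent difference vectors, so coordinates on which all parents agree are left untouched:

```python
import random

def subspace_crossover(parents, m, sigma=0.5):
    """Toy sketch: offspring = centroid + random combination of the
    first m parent difference vectors, so the child stays in the
    m-dimensional subspace spanned by those differences."""
    n = len(parents[0])
    center = [sum(p[i] for p in parents) / len(parents) for i in range(n)]
    # difference vectors from the centroid define the search subspace
    basis = [[p[i] - center[i] for i in range(n)] for p in parents[:m]]
    child = center[:]
    for vec in basis:
        w = random.gauss(0.0, sigma)
        for i in range(n):
            child[i] += w * vec[i]
    return child

random.seed(0)
# all parents agree on the third coordinate, so the child keeps it
ps = [[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0]]
kid = subspace_crossover(ps, m=2)
```

Restricting offspring to the region covered by the parental population is exactly what prevents the spreading phenomenon the abstract describes.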

8 citations



Journal ArticleDOI
TL;DR: A support system to interactively search for articles of users' interest on the World Wide Web without their hesitating over query choices is introduced, with a focus on a method to extract event keywords and event information from the periodic appearance of keywords.
Abstract: We have developed a support system to interactively search for articles of users' interest on the World Wide Web (WWW) without their hesitating over query choices. In particular, we have been implementing an effective application system to enable tourists to easily find special event information of interest and to enjoy their own tours. This system also provides developers with the means of easily constructing an initial database and automatically updating it. As events are generally held cyclically, we assume that events, or keywords related to them, will appear in each period. If we can extract keywords that appear cyclically in a corpus including date information, we can obtain event keywords easily. The system can extract event information using the event keywords as queries for WWW information retrieval systems, and update the database automatically. In this paper, we introduce our support system, with a focus on a method to extract event keywords and event information from the periodic appearance of keywords. Experiments confirmed the effectiveness of our approach.
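The cyclic-appearance assumption can be reduced to a very simple sketch; the function and toy dated corpus below are hypothetical, and only illustrate "a keyword that appears in many distinct periods is an event keyword candidate":

```python
from collections import defaultdict

def periodic_keywords(dated_docs, min_periods=3):
    """Toy sketch: flag a keyword as an 'event keyword' candidate if it
    appears in at least min_periods distinct periods (e.g. years)."""
    seen = defaultdict(set)
    for period, words in dated_docs:
        for w in words:
            seen[w].add(period)
    return {w for w, periods in seen.items() if len(periods) >= min_periods}

# Hypothetical dated corpus: (year, keywords found in that year's pages)
docs = [
    (2001, ["fireworks", "festival", "rain"]),
    (2002, ["fireworks", "festival"]),
    (2003, ["fireworks", "festival", "election"]),
]
events = periodic_keywords(docs, min_periods=3)
```

The extracted candidates would then be used as queries to a WWW retrieval system, as the abstract describes.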

7 citations


Journal ArticleDOI
TL;DR: The proposed mechanism for bootstrap learning of joint attention consists of the robot's embedded mechanisms: visual attention and learning with self-evaluation and it is suggested that the proposed mechanism could explain the developmental mechanism of infants' joint attention.
Abstract: This study examines how human infants acquire the ability of joint attention through interactions with their caregivers, from the viewpoint of cognitive developmental robotics. In this paper, a mechanism by which a robot acquires sensorimotor coordination for joint attention through bootstrap learning is described. Bootstrap learning is a process by which a learner acquires higher capabilities through interactions with its environment based on embedded lower capabilities, even if the learner does not receive any external evaluation and the environment is not controlled. The proposed mechanism for bootstrap learning of joint attention consists of the robot's embedded mechanisms: visual attention and learning with self-evaluation. The former finds and attends to a salient object in the robot's field of view; the latter evaluates the success of visual attention, not joint attention, and then learns the sensorimotor coordination. Since the object which the robot looks at based on visual attention does not always correspond to the object which the caregiver is looking at in an environment including multiple objects, the robot may encounter incorrect learning situations for joint attention as well as correct ones. However, the robot is expected to statistically discard the learning data of the incorrect situations as outliers, because their correlation between sensor input and motor output is weaker than that of the correct ones, and consequently to acquire appropriate sensorimotor coordination for joint attention even if the caregiver does not provide any task evaluation to the robot. The experimental results show the validity of the proposed mechanism. It is suggested that the proposed mechanism could explain the developmental mechanism of infants' joint attention, because the learning process of the robot's joint attention can be regarded as equivalent to the developmental process of infants'.

6 citations


Journal ArticleDOI
TL;DR: This study focuses on the tables contained in the WWW pages and proposes a method to integrate them according to the category of objects presented in each table, which enables us to easily compare the objects of different locations and styles of expressions.
Abstract: The World Wide Web (WWW) allows a person to access a great amount of data provided by a wide variety of entities. However, the content varies widely in expression, which makes it difficult to browse many pages effectively, even if their contents are quite similar. This study is a first step toward reducing such variety in WWW contents. The method proposed in this paper enables us to easily obtain information about similar objects scattered over the WWW. We focus on the tables contained in WWW pages and propose a method to integrate them according to the category of objects presented in each table. The tables, integrated into a uniform format, enable us to easily compare objects of different locations and styles of expression.

6 citations


Journal ArticleDOI
TL;DR: This research proposes a new method called Extended On-line Profit Sharing with Judgement (EOPSwJ) to detect important incomplete perceptions without large computation cost or numerous samples, using two criteria: the rate of transitions to each state and the deterministic rate of actions.
Abstract: To apply reinforcement learning to difficult classes such as real-environment learning, we need a method robust to the perceptual aliasing problem. Exploitation-oriented methods such as Profit Sharing can deal with the perceptual aliasing problem to a certain extent. However, when the agent needs to select different actions at the same sensory input, the learning efficiency worsens. To overcome this problem, several state partition methods using history information of state-action pairs have been proposed. These methods try to convert a POMDP environment into an MDP environment, and thus they are sometimes very useful. However, their computation cost is very high, especially in large state spaces. In contrast, memory-less approaches try to escape from aliased states by outputting actions stochastically. However, these methods output actions stochastically even in unaliased states, and thus their learning efficiency is poor. If we wish to guarantee rationality in POMDPs, it is efficient to output actions stochastically only in the aliased states and to output one action deterministically in the other, unaliased states. Hence, to discriminate between aliased and unaliased states, the use of the χ²-goodness-of-fit test was proposed by Miyazaki et al. They point out that, in aliased states, the distributions of state transitions under random search and under a particular policy differ, and that this difference does not arise from non-deterministic actions. Hence, if the agent can collect enough samples to implement the test, it can distinguish aliased from unaliased states well. However, such a test needs a large amount of data, and how the agent collects samples without worsening learning efficiency is a problem. If the agent uses random search in the course of learning, the learning efficiency worsens, especially in unaliased states.
Therefore, in this research, we propose a new method called Extended On-line Profit Sharing with Judgement (EOPSwJ) to detect important incomplete perceptions, which needs neither large computation cost nor numerous samples. We use two criteria for detecting important incomplete perceptions to attain a task: one is the rate of transitions to each state, and the other is the deterministic rate of actions. We confirm the effectiveness of EOPSwJ through two simulations.
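The χ²-goodness-of-fit test mentioned above can be illustrated with a minimal sketch. The counts here are invented, and a proper test would compare the statistic against a critical value, but the contrast between an aliased and an unaliased state is already visible in the statistic alone:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic between observed counts and
    expected counts (expected scaled to the observed total)."""
    total_o = sum(observed)
    total_e = sum(expected)
    stat = 0.0
    for o, e in zip(observed, expected):
        exp = e * total_o / total_e
        if exp > 0:
            stat += (o - exp) ** 2 / exp
    return stat

# Hypothetical transition counts from one state: under random search
# vs. under the learned policy.  A large statistic means the two
# distributions differ, suggesting the state is perceptually aliased.
random_counts = [50, 50]   # roughly uniform under random search
unaliased = [48, 52]       # similar distribution -> small statistic
aliased = [90, 10]         # very different -> large statistic
s_un = chi_square_stat(unaliased, random_counts)
s_al = chi_square_stat(aliased, random_counts)
```

Collecting the counts needed to make such a comparison reliable is exactly the sampling-cost problem the abstract raises.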

Journal ArticleDOI
TL;DR: This work presents a system that tries to automatically collect and monitor Japanese blog collections that include not only ones made with blog software but also ones written as normal web pages, based on extraction of date expressions and analysis of HTML documents.

Journal ArticleDOI
TL;DR: A generic image classification system with an automatic knowledge acquisition mechanism from the World Wide Web is described; utilizing Web images as learning images is shown to be effective and promising for generic image classification.
Abstract: In this paper, we describe a generic image classification system with an automatic knowledge acquisition mechanism from the World Wide Web. Due to the recent spread of digital imaging devices, the demand for generic image classification/recognition of various kinds of real-world scenes is growing. To realize it, visual knowledge on various kinds of scenes is required, and a large number of learning images have to be prepared. Therefore, commercial image collections such as the Corel Image Gallery are widely used in conventional studies on generic image classification. However, they are not suitable as learning images for generic image classification, since they do not include sufficiently diverse images. Instead of commercial image collections, we therefore propose utilizing a large number of images gathered from the World Wide Web by a Web image-gathering system as learning images. Images on the Web generally have huge diversity, and we take advantage of this diversity for a generic image classification task, which is the first attempt of this kind. Through experiments, we show that utilizing Web images as learning images is effective and promising for generic image classification.

Journal ArticleDOI
TL;DR: An agent community based information retrieval method, which uses agent communities to manage and look up information related to users, and the hypothesis is that a virtual agent community reduces communication loads to perform a search.
Abstract: This paper proposes an agent community based information retrieval method, which uses agent communities to manage and look up information related to users. An agent works as a delegate of its user and searches for information that the user wants by communicating with other agents. The communication between agents is carried out in a peer-to-peer computing architecture. In order to retrieve information related to a user query, an agent uses two histories: a query/retrieved document history (Q/RDH) and a query/sender agent history (Q/SAH). The former is a list of pairs of a query and the retrieved documents, where the queries were sent by the agent itself. The latter is a list of pairs of a query and the sender agent, and shows ``who sent what query to the agent''; this is useful for finding new information sources. Making use of the Q/SAH is expected to produce a collaborative filtering effect, which gradually creates virtual agent communities, where agents with the same interests stay together. Our hypothesis is that a virtual agent community reduces the communication load required to perform a search. As an agent receives more queries, more links to new knowledge are acquired; from this behavior, a ``give and take'' (or positive feedback) effect among agents seems to emerge. We implemented this method with Multi-Agents Kodama, which has been developed in our laboratory, and conducted preliminary experiments to test the hypothesis. The empirical results showed that the method was much more efficient than a naive method employing only 'broadcast' techniques to look up a target agent.

Journal ArticleDOI
TL;DR: In this article, a new ascending-price multi-unit auction protocol is presented, which is robust against false-name bids and does not require the auctioneer to set a reservation price.
Abstract: This paper presents a new ascending-price multi-unit auction protocol. As far as the authors are aware, this is the first protocol that has an open format and in which sincere bidding is an equilibrium strategy, even if the marginal utilities of each agent can increase and agents can submit false-name bids. As ever-increasing numbers of companies and consumers trade on Internet auctions, a new type of cheating called "false-name bids" has been noticed: there may be agents with fictitious names, such as multiple e-mail addresses. The VCG mechanism is not an open format, and truth-telling is no longer a dominant strategy if agents can submit false-name bids and the marginal utilities of each agent can increase. The Iterative Reducing (IR) protocol with a sealed-bid format is robust against false-name bids, although it requires the auctioneer to carefully pre-determine a reservation price for one unit. Open-format protocols, such as the Ausubel auction, outperform sealed-bid protocols in terms of simplicity and privacy preservation; these two advantages are said to encourage more agents to bid sincerely and to provide the seller with higher revenue. We extend the Ausubel auction to our proposed protocol, which can handle the cases where the marginal utilities of each agent can increase. Moreover, it is robust against false-name bids and does not require the auctioneer to set a reservation price. Our simulation results indicate that our protocol obtains a social surplus close to Pareto efficiency and that it outperforms the IR protocol with respect to the social surplus and the seller's revenue.

Journal ArticleDOI
TL;DR: A query equalization scheme based on a relevance feedback method for collaborative information retrieval between personalized concept-bases and an implementation of the method and its user interface on a personal agent framework are described.
Abstract: In this paper, we describe a collaborative information retrieval method among personal repositories and an implementation of the method on a personal agent framework. We propose a framework for personal agents that aims to enable the sharing and exchange of information resources that are distributed unevenly among individuals. The kernel of the personal agent framework is an RDF (Resource Description Framework) based information repository for storing, retrieving, and manipulating privately collected information, such as documents the user read and/or wrote, email he/she exchanged, web pages he/she browsed, etc. The repository also collects annotations to information resources that describe relationships among them, and records of interaction between the user and information resources. Since the information resources in a personal repository and their structure are personalized, information retrieval from other users' repositories is an important application of the personal agent. A vector space model with a personalized concept-base is employed as the information retrieval mechanism in a personal repository. Since a personalized concept-base is constructed from the information resources in a personal repository, it reflects its user's knowledge and interests. On the other hand, this leads to a problem when querying other users' personal repositories: simply transferring query requests does not provide desirable results. To solve this problem, we propose a query equalization scheme based on a relevance feedback method for collaborative information retrieval between personalized concept-bases. In this paper, we describe an implementation of the collaborative information retrieval method and its user interface on the personal agent framework.

Journal ArticleDOI
TL;DR: In this article, the authors describe an approach for the development of application systems for creative knowledge work, particularly for early stages of information design tasks, and demonstrate that the resulting systems encourage users to follow a certain cognitive path through graceful user experience.
Abstract: This paper describes our approach for the development of application systems for creative knowledge work, particularly for early stages of information design tasks. Being a cognitive tool serving as a means of externalization, an application system affects how the user is engaged in the creative process through its visual interaction design. Knowledge interaction design described in this paper is a framework where a set of application systems for different information design domains are developed based on an interaction model, which is designed for a particular model of a thinking process. We have developed two sets of application systems using the knowledge interaction design framework: one includes systems for linear information design, such as writing, movie-editing, and video-analysis; the other includes systems for network information design, such as file-system navigation and hypertext authoring. Our experience shows that the resulting systems encourage users to follow a certain cognitive path through graceful user experience.

Journal ArticleDOI
TL;DR: A learning environment is designed that enables participants to experience some common creative activities, and its effectiveness in a university class is evaluated.
Abstract: Recently, educational courses focusing on creativity, hereafter called ``creativity education,'' have been conducted in engineering education. We believe that such creativity education is crucial not only in engineering education, but also in general education. In this study, we designed a learning environment that enables participants to experience some common creative activities, and evaluated its effectiveness in a university class. Our educational program consists of the following three phases: (1) introduction (the participants learned the basics of Mindstorms using the instructional manuscript, and subsequently constructed and modified a moving four-wheeled car using Mindstorms), (2) creative activities (they produced creative playground equipment that can move, using Mindstorms), and (3) self-reflective activities on the creative processes (they reflected on their creative processes, added the information to a diagram, and discussed advantages and disadvantages while referring to the diagram). We evaluated the effectiveness of our educational program based on comparisons of the pre- and post-tests and the contents of the participants' discussions. In particular, we confirmed the following three learning activities: (1) the participants discussed their creative activities from various viewpoints, (2) they also discussed the viewpoints considered to be important for creative activities, and (3) they realized the importance of idea generation, idea embodiment, and collaboration in creative activities.


Journal ArticleDOI
TL;DR: In this article, the authors discuss the importance and utilization of personal networks in a community system through the management and analysis of a scheduling support system for academic conferences, finding that most participants were willing to help form personal networks and that personal networks promoted information exchange by making participants visible to one another.
Abstract: In this paper, we discuss the importance and utilization of personal networks in a community system, through the results of operating and analyzing a scheduling support system for academic conferences. The important feature of the system is the generation and utilization of personal networks to support information exchange and information discovery among participants. We applied this system to the academic conference JSAI2003, obtaining 276 users and their personal networks. We found that (1) most participants were willing to contribute to forming personal networks, (2) personal networks can promote information exchange among participants, since they make the presence of participants visible to others, and (3) the formed networks were useful for information recommendation.


Journal ArticleDOI
TL;DR: The right-hand sides of ODEs are inferred by Genetic Programming (GP), and the least mean square (LMS) method is used along with ordinary GP; the approach is extended to infer differential equation systems including transcendental functions.
Abstract: Ordinary differential equations (ODEs) are used as a mathematical method for modeling complicated nonlinear systems. This approach is well known to be useful in practical applications, e.g., bioinformatics, chemical reaction models, control theory, etc. In this paper, we propose a new evolutionary method by which to infer a system of ODEs. To explore the search space more effectively in the course of evolution, the right-hand sides of ODEs are inferred by Genetic Programming (GP), and the least mean square (LMS) method is used along with ordinary GP. We apply our method to several target tasks and empirically show how successfully GP infers the systems of ODEs. We also describe how our approach is extended to infer a differential equation system including transcendental functions.
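The least-squares step of such a method can be sketched as an ordinary least-squares fit of coefficients for candidate right-hand-side terms (which, in the paper's approach, GP would propose). The basis functions and synthetic derivative data below are hypothetical:

```python
def fit_rhs(samples, derivs, basis):
    """Least-squares fit of coefficients c for dx/dt = sum(c_k * basis_k(x)),
    solving the normal equations (A^T A) c = A^T d by Gaussian elimination."""
    A = [[f(x) for f in basis] for x in samples]
    k = len(basis)
    ata = [[sum(A[r][i] * A[r][j] for r in range(len(A))) for j in range(k)]
           for i in range(k)]
    atd = [sum(A[r][i] * derivs[r] for r in range(len(A))) for i in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atd[col], atd[piv] = atd[piv], atd[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for cidx in range(col, k):
                ata[r][cidx] -= f * ata[col][cidx]
            atd[r] -= f * atd[col]
    # back substitution
    coeffs = [0.0] * k
    for i in range(k - 1, -1, -1):
        s = atd[i] - sum(ata[i][j] * coeffs[j] for j in range(i + 1, k))
        coeffs[i] = s / ata[i][i]
    return coeffs

# Synthetic data drawn from dx/dt = 2x + 1; the fit should recover (2, 1).
xs = [0.0, 1.0, 2.0, 3.0]
dxdt = [2 * x + 1 for x in xs]
c = fit_rhs(xs, dxdt, [lambda x: x, lambda x: 1.0])
```

In the evolutionary setting, GP searches over which basis terms appear while a fit like this assigns their numeric coefficients.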

Journal ArticleDOI
TL;DR: A novel method for generating accurate and simple decision trees based on symbiotic evolution is proposed, and a system called SESAT is developed that can generate simpler trees than C5.0 without sacrificing predictive accuracy.
Abstract: In representing classification rules by decision trees, simplicity of tree structure is as important as predictive accuracy, especially in consideration of comprehensibility to a human, memory capacity, and the time required to classify. Trees tend to become complex when they attain high accuracy. This paper proposes a novel method for generating accurate and simple decision trees based on symbiotic evolution. It is distinctive of symbiotic evolution that two different populations are evolved in parallel through genetic algorithms. In our method, the individuals of one population are partial trees of height 1, and the individuals of the other are whole trees represented by combinations of the former individuals. Generally, overfitting to training examples prevents high predictive accuracy. In order to circumvent this difficulty, individuals are evaluated not only with the accuracy on training examples but also with the correct answer biased rate, which indicates the dispersion of correct answers in the terminal nodes. Based on our method, we developed a system called SESAT for generating decision trees. Our experimental results show that SESAT compares favorably with other systems on several datasets in the UCI repository. SESAT is able to generate simpler trees than C5.0 without sacrificing predictive accuracy.

Journal ArticleDOI
TL;DR: The Dual-Schemata model is a self-organizational machine learning method for an autonomous robot interacting with an unknown dynamical environment, based on Piaget's Schema model, a classical psychological model explaining memory and cognitive development of human beings.
Abstract: In this paper, a new machine-learning method, called the Dual-Schemata model, is presented. The Dual-Schemata model is a self-organizational machine learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's Schema model, a classical psychological model that explains memory and cognitive development of human beings. Our Dual-Schemata model is developed as a computational model of Piaget's Schema model, focusing especially on the sensorimotor developmental period. This developmental process is characterized by two mutually interacting dynamics: one formed by assimilation and accommodation, and the other formed by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema's differentiation process corresponds to a symbol formation process occurring within an autonomous agent when it interacts with an unknown, dynamically changing environment. Experimental results obtained from an autonomous facial robot in which our model is embedded are presented; the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.

Journal ArticleDOI
TL;DR: A psychologically-motivated selection method that adopts word familiarity as the selection criterion for fundamental vocabulary selection is proposed, and it is concluded that the proposed method is superior to conventional methods for fundamental vocabulary selection.
Abstract: This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese, which contains integrated information on syntax, semantics, and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, yet should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts, the traditional method for making dictionaries; its weak point is that the selection strongly depends on personal intuition. The second is corpus-based selection, which is superior in objectivity to intuition-based selection; however, it is difficult to compile a sufficiently balanced corpus. We propose a psychologically-motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically, based on psychological experiments with 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with that of conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.
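The selection criterion itself is a simple threshold over familiarity ratings; the sketch below uses made-up ratings (the actual ratings in the paper come from psychological experiments):

```python
def select_fundamental(familiarity, threshold=5.0):
    """Select fundamental vocabulary as the words whose familiarity
    rating (on the paper's 1-7 scale) meets the threshold."""
    return {w for w, f in familiarity.items() if f >= threshold}

# Hypothetical familiarity ratings for a few words.
ratings = {"water": 6.6, "dog": 6.4, "ontology": 2.1, "train": 5.9}
fundamental = select_fundamental(ratings)
```

With the paper's threshold of 5 applied to the full rating database, this criterion yields a vocabulary of about 30,000 words.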

Journal ArticleDOI
TL;DR: This paper proposes a new placement method of nouns into a multi-dimensional space based on word cooccurrence in a corpus, founded on the idea that vectors corresponding to nouns which cooccur with a word w in a relation f constitute a group in the multi-dimensional space.
Abstract: The semantic similarity (or distance) between words is a piece of basic knowledge in Natural Language Processing. There have been several previous studies on measuring similarity (or distance) based on word vectors in a multi-dimensional space. In those studies, high-dimensional feature vectors of words are made from word cooccurrence in a corpus or from reference relations in a dictionary, and the word vectors are then derived from the feature vectors through a method such as principal component analysis. This paper proposes a new method for placing nouns in a multi-dimensional space based on word cooccurrence in a corpus. The proposed method does not use high-dimensional feature vectors of words, but is based on the idea that ``vectors corresponding to nouns which cooccur with a word w in a relation f constitute a group in the multi-dimensional space.'' Although the whole meaning of nouns is not reflected in the word vectors obtained by the proposed method, the semantic similarity (or distance) between nouns defined with these word vectors is suitable for an example-based disambiguation method.
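Once nouns are placed in such a space, similarity between them is typically read off the vectors, e.g. by cosine similarity; the low-dimensional placements below are invented purely for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two word vectors: nouns placed close
    together in the space (high cosine) are treated as semantically similar."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical 2-dimensional placements of three nouns.
vec = {"cat": [0.9, 0.1], "dog": [0.8, 0.2], "bond": [0.1, 0.95]}
sim_cat_dog = cosine(vec["cat"], vec["dog"])
sim_cat_bond = cosine(vec["cat"], vec["bond"])
```

A similarity defined this way is exactly the kind of measure an example-based disambiguation method can consume.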

Journal ArticleDOI
TL;DR: This paper proposes a Memory-Based Reasoning (MBR) classification method applicable to business problems, featuring self-determination of the proper number of neighbors and proper feature weights, a normalized distance metric between categorical values, high accuracy despite dependent features, and high-speed prediction.
Abstract: Recently, data mining has drawn attention as a practical solution for huge accumulated data. Classification, the goal of which is to assign new data to one of several given groups, is one of the most widely used data mining techniques. In this paper, we discuss the advantages of Memory-Based Reasoning (MBR), one such classification method, and point out some problems in using it practically. To solve them, we propose an MBR method applicable to business problems, with self-determination of the proper number of neighbors and proper feature weights, a normalized distance metric between categorical values, high accuracy despite dependent features, and high-speed prediction. We experimentally compare our MBR with conventional MBR and with C5.0, one of the most popular classification methods. We also discuss the fitness of our MBR to business problems through an application study in financial credit management.
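The core of MBR is weighted nearest-neighbor classification over stored examples. A minimal sketch follows; the fixed feature weights and the simple mismatch metric are placeholders — the paper's contribution is precisely that it determines the number of neighbors, the weights, and a normalized categorical distance automatically.

```python
from collections import Counter

# Stored examples: (categorical features, class label). Data is invented
# to resemble the paper's credit-management application.
train = [
    ({"income": "high", "history": "good"}, "approve"),
    ({"income": "high", "history": "bad"},  "reject"),
    ({"income": "low",  "history": "good"}, "approve"),
    ({"income": "low",  "history": "bad"},  "reject"),
]

weights = {"income": 0.3, "history": 0.7}  # assumed, not learned here

def distance(a, b):
    # Weighted overlap metric: sum of weights of mismatched features.
    return sum(w for f, w in weights.items() if a[f] != b[f])

def classify(query, k=3):
    # Majority vote among the k nearest stored examples.
    neighbors = sorted(train, key=lambda ex: distance(query, ex[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

print(classify({"income": "low", "history": "good"}))  # approve
```

Because prediction scans the stored examples, naive MBR is slow on large data; the abstract's "high-speed prediction" refers to the paper's remedy for exactly this cost.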

Journal ArticleDOI
TL;DR: A system to support the transmission of human focusing skill by visualizing users' gaze behavior in Kansei interaction, built on a VR space called the ``Mirror Agent System'' where several users can work together while their gazes are visualized simultaneously.
Abstract: This paper describes a system that supports the transmission of human focusing skill by visualizing gaze behavior in Kansei interaction. In order to share personal Kansei information with other people, we need to transform it into explicit information. If this information is expressed visually and transmitted from human to human, Kansei interaction becomes more creative than ever before. Our research group focuses on human gaze behavior, which naturally reflects human actions, intentions and knowledge. Generally, it is difficult for us to perceive other people's gaze behavior. Therefore, we constructed a VR space, called the ``Mirror Agent System'', in which several users can work together while their gazes are visualized simultaneously. By using this system, a user can become aware not only of his own gaze history but also of other users' histories while looking at the scenes in the VR space. When he can perceive others' gaze behavior, he can guess their actions, intentions and knowledge. In this way, we expected to promote human-human Kansei interaction and to improve users' focusing skill. However, our previous system could not restrict visualization to only the useful parts of a gaze history, because of the following two problems. The first is a quantity problem: as the gaze history in the VR space grows, the background scenes become covered by the visualized history and the user feels that it interferes with his work. The second is a quality problem: it is very difficult for a user to interpret another user's gaze history.

Journal ArticleDOI
TL;DR: The MKBpo improves its performance by restricting the class of reduction orderings to precedence-based path orderings, representing them by logical functions in which a logical variable xfg represents the precedence f > g.
Abstract: In this paper, we propose a completion procedure (called MKBpo) for term rewriting systems. Based on the existing procedure MKB, which works with multiple reduction orderings and ATMS nodes, MKBpo improves performance by restricting the class of reduction orderings to precedence-based path orderings, representing them by logical functions in which a logical variable xfg represents the precedence f > g. By using BDDs (binary decision diagrams) as a representation of these logical functions, the procedure can be implemented efficiently. This makes it possible to effectively reduce the number of quasi-parallel processes and to suppress the rapid growth in computation time.
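A key property behind the encoding above is that a conjunction of precedence literals x_fg (meaning f > g) is satisfiable by some precedence iff the constraints contain no cycle. The paper checks this via BDD operations; as a stand-in, the sketch below (not the paper's implementation) tests the same condition with a plain graph cycle check.

```python
def consistent(constraints):
    """True iff the precedence constraints (pairs (f, g) meaning f > g)
    admit a strict partial order, i.e. the constraint graph is acyclic."""
    graph = {}
    for f, g in constraints:
        graph.setdefault(f, set()).add(g)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = {}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY:                      # back edge: cycle found
                return False
            if c == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True

    return all(dfs(u) for u in graph if color.get(u, WHITE) == WHITE)

print(consistent({("f", "g"), ("g", "h")}))  # True:  f > g > h is fine
print(consistent({("f", "g"), ("g", "f")}))  # False: f > g and g > f conflict
```

In MKBpo the same question — "is this set of precedence constraints still satisfiable?" — is answered by testing whether the corresponding BDD is the constant-false function.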

Journal ArticleDOI
TL;DR: The proposed catalog integration method is shown to be an effective combination of the instance classification approach and the category alignment approach that improves upon or is competitive with the integration method based only on category alignment or instance classification.
Abstract: With the rapid advance of information technology, we are able to easily and quickly obtain a great deal of information on almost any topic. One method of managing such large amounts of information is to utilize catalogs, which organize information within concept hierarchies. However, the concept hierarchy for each catalog is different, because one concept hierarchy is not sufficient for all purposes. In the present paper, we address the problem of integrating multiple catalogs for ease of use. The primary problem lies in finding a suitable category in one catalog for each information instance in another catalog. Three approaches can be used to solve this problem: the ontology integration approach, the instance classification approach, and the category alignment approach based on categorization similarity. The main idea of this paper is a multiple-strategy approach that combines the instance classification approach and the category alignment approach. In order to evaluate the proposed method, we conducted experiments using two actual Internet directories, Yahoo! and Google. The obtained results show that the proposed method improves upon or is competitive with integration based only on category alignment or instance classification. Therefore, the proposed catalog integration method is shown to be an effective combination of the instance classification approach and the category alignment approach.
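One simple way to realize the multiple-strategy idea is to combine an instance classifier's score for each candidate target category with a category-alignment similarity score. The sketch below uses a plain linear interpolation with made-up scores and an assumed mixing weight `alpha` — it illustrates the combination principle, not the paper's actual scoring.

```python
# Hypothetical scores for assigning one instance to a target category.
instance_score = {            # classifier's P(category | instance text)
    "Sports/Baseball": 0.55,
    "Sports/Soccer":   0.45,
}
alignment_score = {           # similarity of source category to target category
    "Sports/Baseball": 0.9,
    "Sports/Soccer":   0.2,
}

def combined(cat, alpha=0.5):
    # Linear interpolation of the two strategies (alpha is an assumption).
    return alpha * instance_score[cat] + (1 - alpha) * alignment_score[cat]

best = max(instance_score, key=combined)
print(best)  # Sports/Baseball
```

Here the instance classifier alone barely prefers Baseball, but the alignment evidence from the source hierarchy reinforces that choice — the kind of mutual support the abstract credits for the method's improved accuracy.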