
Showing papers in "International Journal on Artificial Intelligence Tools in 1996"



Journal ArticleDOI
TL;DR: A new path consistency algorithm, PC-5, is presented, which has an O(n³a²) space complexity while retaining the worst-case time complexity of PC-4 and exhibits a much better average-case time complexity.
Abstract: One of the main factors limiting the use of path consistency algorithms in real life applications is their high space complexity. Han and Lee proposed a path consistency algorithm, PC-4, with O(n³a³) space complexity, which makes it practicable only for small problems. I present a new path consistency algorithm, PC-5, which has an O(n³a²) space complexity while retaining the worst-case time complexity of PC-4. Moreover, the new algorithm exhibits a much better average-case time complexity. The new algorithm is based on the idea (due to Bessiere) that, at any time, only a minimal amount of support has to be found and recorded for a labeling to establish its viability; one has to look for a new support only if the current support is eliminated. I also show that PC-5 can be improved further to yield an algorithm, PC-5++, with even better average-case performance and the same space complexity.
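
To make the lazy-support idea concrete, here is a minimal Python sketch (illustrative only, assuming a list-based domain and an `allowed` relation; these are not the paper's actual data structures). Each supported value pair stores a single witness and the index where the scan stopped, so a replacement search resumes from there instead of rescanning the domain:

```python
# Lazy support in the style of Bessiere's idea behind PC-5: for the pair of
# values (a, b) on variables (i, j), find one value c at a third variable k
# with (a, c) and (c, b) both allowed, resuming from where we last stopped.
def find_support(allowed, a, b, domain_k, start=0):
    for idx in range(start, len(domain_k)):
        c = domain_k[idx]
        if allowed(a, c) and allowed(c, b):
            return idx      # store idx; if c is later deleted, resume at idx + 1
    return None             # (a, b) has lost all support at k -> delete the pair
```

Storing one witness index per (value pair, third variable) instead of a full support list is what cuts the space by a factor of a, matching the gap between O(n³a³) and O(n³a²).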

27 citations


Journal ArticleDOI
TL;DR: A new approach to a fundamental problem in route planning, important for robotic vehicles and for planning off-road military maneuvers, is defined and investigated; algorithms sufficient for a small grid prove totally inadequate for a large one.
Abstract: This paper defines a new approach and investigates a fundamental problem in route planners. This capability is important for robotic vehicles (Martian rovers, etc.) and for planning off-road military maneuvers. The emphasis throughout this paper is on the design, analysis, and hierarchical implementation of our route planner. This work was motivated by anticipation of the need to search a grid of a trillion points for optimum routes. This cannot be done simply by scaling upward from the algorithms used to search a grid of 10,000 points: algorithms sufficient for the small grid are totally inadequate for the large grid. Soon, the challenge will be to compute off-road routes more than 100 km long on a one- or two-meter grid. Previous efforts are reviewed; the data structures, decomposition methods, and search algorithms are analyzed, and their limitations are discussed. A detailed discussion of a hierarchical implementation is provided and the experimental results are analyzed.
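
The paper's implementation is not reproduced here, but the coarse-to-fine principle can be sketched in Python (a minimal sketch with assumed details such as 4-connectivity and a block-sampled coarse grid): plan on a downsampled cost grid first, then confine the fine-grid search to a corridor around that coarse route, so the work no longer scales with the full fine grid.

```python
import heapq

def dijkstra(cost, passable):
    """Cheapest path on a 4-connected grid from (0, 0) to the far corner.
    Returns the cells of the path (in goal-to-start order)."""
    n, m = len(cost), len(cost[0])
    dist, prev = {(0, 0): cost[0][0]}, {}
    pq = [(cost[0][0], (0, 0))]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == (n - 1, m - 1):
            break
        if d > dist[(r, c)]:
            continue                        # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < m and passable(nr, nc):
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], (n - 1, m - 1)
    while node in prev:
        path.append(node)
        node = prev[node]
    return path + [(0, 0)]

def hierarchical_route(fine, block=8):
    n, m = len(fine), len(fine[0])
    # Coarse grid: one representative (cheapest) cost per block of cells.
    coarse = [[min(fine[r][c] for r in range(br * block, min((br + 1) * block, n))
                              for c in range(bc * block, min((bc + 1) * block, m)))
               for bc in range((m + block - 1) // block)]
              for br in range((n + block - 1) // block)]
    corridor = set(dijkstra(coarse, lambda r, c: True))
    # Fine search is allowed only in blocks on or next to the coarse route.
    near = lambda r, c: any((r // block + dr, c // block + dc) in corridor
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return dijkstra(fine, near)
```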

19 citations


Journal ArticleDOI
TL;DR: It is shown that generalization can lead to new and robust heuristics that perform better than the original heuristics across test cases of different characteristics.
Abstract: In this paper, we present new results on the automated generalization of performance-related heuristics learned for knowledge-lean applications. By first applying genetics-based learning to learn new heuristics for some small subsets of test cases in a problem space, we study methods to generalize these heuristics to unlearned subdomains of test cases. Our method uses a new statistical metric called probability of win. By assessing the performance of heuristics in a range-independent and distribution-independent manner, we can compare heuristics across problem subdomains in a consistent manner. To illustrate our approach, we show experimental results on generalizing heuristics learned for sequential circuit testing, VLSI cell placement and routing, branch-and-bound search, and blind equalization. We show that generalization can lead to new and robust heuristics that perform better than the original heuristics across test cases of different characteristics.
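
The abstract does not spell out the metric, so the following Python sketch is only one plausible formalization of a range- and distribution-independent "probability of win": the probability that one heuristic's true mean performance exceeds another's, under a normal approximation of the sample means.

```python
import math

def prob_win(mean_i, var_i, n_i, mean_j, var_j, n_j):
    """P(true mean of heuristic i exceeds that of j), higher-is-better.
    A hypothetical formalization, not necessarily the paper's exact one."""
    z = (mean_i - mean_j) / math.sqrt(var_i / n_i + var_j / n_j)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
```

Averaged over all competing heuristics in a subdomain, this yields a score in [0, 1] that can be compared across subdomains regardless of the raw performance scale, which is what makes cross-subdomain comparison consistent.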

15 citations


Journal ArticleDOI
TL;DR: The hypothesis that CDM’s performance will exceed that of two non-domain-specific algorithms, Bayesian classification and decision tree learners, is empirically tested.
Abstract: The Category Discrimination Method (CDM) is a new machine learning algorithm designed specifically for text categorization. The motivation is that there are statistical problems associated with natural language text when it is applied as input to existing machine learning algorithms (too much noise, too many features, skewed distribution). The bases of the CDM are research results about the way that humans learn categories and concepts vis-à-vis contrasting concepts. The essential formula is cue validity, borrowed from cognitive psychology and used to select, from all possible single-word-based features, the best predictors of a given category. The hypothesis that CDM’s performance will exceed that of two non-domain-specific algorithms, Bayesian classification and decision tree learners, is empirically tested.
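
Cue validity has a standard definition in the cognitive-psychology literature: P(category | feature). A minimal sketch, assuming documents are given as (word set, label) pairs (this input format is an assumption for illustration):

```python
from collections import Counter

def cue_validity(docs, word, category):
    """Estimate P(category | word occurs) from (word_set, label) pairs."""
    labels = [label for words, label in docs if word in words]
    if not labels:
        return 0.0
    return Counter(labels)[category] / len(labels)
```

Words with high cue validity for a category, relative to contrasting categories, are selected as its predictors.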

13 citations


Journal ArticleDOI
TL;DR: A generalized fuzzy learning machine, a generalized and modified type of the neo-fuzzy-neuron, which has a very high nonlinear mapping ability compared with conventional neural networks and guarantees the global minimum.
Abstract: This paper describes a generalized fuzzy learning machine, a generalized and modified type of the neo-fuzzy-neuron presented by the authors in 1992. This machine can grasp the nonlinear correlation of each input well. It has a very high nonlinear mapping ability compared with conventional neural networks, and it guarantees the global minimum. Furthermore, the learning speed and accuracy are improved drastically. It was successfully applied to the identification of nonlinear dynamical systems, e.g. the two-dimensional Lorenz chaotic model, and to the automatic detection of landmark locations in roentgenographic cephalograms for orthodontic treatment. The results were promising.
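
A minimal sketch of a neo-fuzzy-neuron-style model (following the general 1992 design the paper builds on; details such as evenly spaced triangular memberships are assumptions). The key point is that the output is linear in the weights, so squared-error training is a convex least-squares problem, which is why a global minimum is guaranteed:

```python
import numpy as np

def tri_memberships(x, centers):
    """Triangular membership degrees of scalar x over evenly spaced centers."""
    width = centers[1] - centers[0]
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, None)

def features(X, centers):
    # Per-input membership activations, concatenated across input dimensions.
    return np.hstack([np.array([tri_memberships(x, centers) for x in X[:, i]])
                      for i in range(X.shape[1])])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]            # toy nonlinear target
Phi = features(X, np.linspace(-1, 1, 7))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # convex fit: the global minimum
print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```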

9 citations


Journal ArticleDOI
TL;DR: This paper introduces a method that combines different artificial intelligence techniques to perform requirements analysis, and a prototype knowledge-based requirements analysis system, RAKES, is presented to explain the approach.
Abstract: Requirements analysis is a knowledge-intensive task, and it requires an expert to understand what clients need. In this paper, we introduce a method that combines different artificial intelligence techniques to perform this task, and a prototype knowledge-based requirements analysis system, RAKES, is presented to explain our approach. In this approach, not only the ordinary functional requirements are collected, but also some non-traditional information, such as non-functional requirements like the quality of operations or the background information for constructing the requirements, is gathered through knowledge-based support. The different kinds of information collected are stored and organized in a knowledge base and can be used as the source of user input in the later phases of software development. Algorithms and procedures have been developed for constructing the interface language, organizing the knowledge base, and applying the knowledge base to different tasks. RAKES is integrated with an ongoing research effort, the FRORL methodology, to offer a systematic path through requirements analysis, specification production, prototype generation, specification debugging, and code transformation.

5 citations


Journal ArticleDOI
TL;DR: The three-step GRG approach for learning decision rules from large relational databases is presented, and a set of maximally general rules is derived directly from the reduct; these rules can be used to interpret and understand the active mechanisms underlying the database.
Abstract: We present the three-step GRG approach for learning decision rules from large relational databases. In the first step, an attribute-oriented concept tree ascension technique is applied to generalize an information system. This step loses some information but substantially improves the efficiency of the following steps. In the second step, a reduction technique is applied to generate a minimized information system called a reduct, which contains a minimal subset of the generalized attributes and the smallest number of distinct tuples for those attributes. Finally, a set of maximally general rules is derived directly from the reduct. These rules can be used to interpret and understand the active mechanisms underlying the database.
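
In rough set terms, a reduct keeps just enough attributes to preserve the discernibility of tuples with different decisions. A hedged greedy sketch (not the paper's exact GRG reduction procedure), with rows represented as attribute-to-value dicts:

```python
def consistent(rows, attrs, decision):
    """True if no two rows agree on attrs yet disagree on the decision."""
    seen = {}
    for r in rows:
        key = tuple(r[a] for a in attrs)
        if seen.setdefault(key, r[decision]) != r[decision]:
            return False
    return True

def greedy_reduct(rows, attrs, decision):
    reduct = list(attrs)
    for a in attrs:                  # try dropping each attribute in turn
        trial = [x for x in reduct if x != a]
        if trial and consistent(rows, trial, decision):
            reduct = trial
    return reduct
```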

5 citations


Journal ArticleDOI
TL;DR: The difficulty of selecting a standardized set of reuse metrics is illustrated: the “best” reuse metrics are determined by the unique characteristics of each reuse application.
Abstract: This paper presents a discussion of significant issues in the selection of a standardized set of the “best” software metrics to support a software reuse program. The discussion illustrates the difficulty of selecting a standardized set of reuse metrics, because the “best” reuse metrics are determined by the unique characteristics of each reuse application. An example of the selection of a single set of reuse metrics for a specific management situation is also presented.

5 citations


Journal ArticleDOI
TL;DR: Aspects of knowledge representation and reasoning in SILO, a system integrating logic in objects, are presented, and it is shown that SILO gives pre-eminence to objects.
Abstract: There have been a large number of systems that integrate logic and objects (frames or classes) for knowledge representation and reasoning. Most of those systems give pre-eminence to logic, and their objects lack the structure of frames. These choices imply a number of disadvantages, such as the inability to represent exceptions and perform default reasoning, and a reduction in the naturalness of representation. In this paper, aspects of knowledge representation and reasoning in SILO, a system integrating logic in objects, are presented. SILO gives pre-eminence to objects. A SILO object comprises elements from both frames and classes. A kind of many-sorted logic is used to express an object's internal knowledge. Message passing, alongside inheritance, plays a significant role in the reasoning process. Control knowledge, concerning both deduction and inheritance, is separately and explicitly represented via definitions of certain functions, called meta-functions.

4 citations


Journal ArticleDOI
TL;DR: Noise handling techniques developed and implemented in HCV (Version 2.0), a noise tolerant version of the HCV algorithm, are presented and a performance comparison of HCV with other inductive algorithms C4.5 and NewID in noisy and continuous domains is provided.
Abstract: HCV is a heuristic attribute-based induction algorithm based on the newly-developed extension matrix approach. By dividing the positive examples (PE) of a specific class in a given example set into intersecting groups and adopting a set of strategies to find a heuristic conjunctive formula in each group which covers all the group’s positive examples and none of the negative examples (NE), it can find a covering formula in the form of variable-valued logic for PE against NE in low-order polynomial time. The original algorithm performs quite well with those data sets where noise and continuous data are not of major concern. However, its performance decreases when the data sets are noisy and contain continuous attributes. This paper presents noise handling techniques developed and implemented in HCV (Version 2.0), a noise tolerant version of the HCV algorithm, and provides a performance comparison of HCV with other inductive algorithms C4.5 and NewID in noisy and continuous domains.
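
The extension matrix machinery itself is beyond a short example, but the covers-all-PE-and-no-NE objective can be illustrated with a plain sequential-covering sketch over nominal attributes (examples as dicts; all names are hypothetical, and this is not HCV's actual procedure):

```python
def learn_rules(pos, neg, attrs):
    rules, uncovered = [], list(pos)
    while uncovered:
        seed = uncovered[0]
        conj = {a: seed[a] for a in attrs}   # most specific rule for the seed
        # Generalize: drop tests while the rule still rejects every negative.
        for a in attrs:
            trial = {k: v for k, v in conj.items() if k != a}
            if trial and not any(all(n[k] == v for k, v in trial.items())
                                 for n in neg):
                conj = trial
        rules.append(conj)
        uncovered = [p for p in uncovered
                     if not all(p[k] == v for k, v in conj.items())]
    return rules
```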

Journal ArticleDOI
TL;DR: A design for a specific-to-general learning component is proposed, as well as two methods for combining the two components, resulting in a two-way induction system that achieves higher accuracy than C4.5 and CN2 across the entire generality spectrum.
Abstract: General-to-specific learners like ID3 and CN2 perform well when the target concept descriptions are general, but often have difficulties when they are specific or mixed. This problem can be alleviated by combining them with a specific-to-general learning component, resulting in a two-way induction system. In this paper one design for such a component is proposed, as well as two methods for combining the two components. Experiments on artificial domains show the combined learner to match or outperform “pure” versions of C4.5 and CN2 across the entire generality spectrum, with the advantage increasing for greater concept specificity. Experiments on 24 real-world domains from the UCI repository confirm the utility of two-way induction: the combined learner achieves higher accuracy than C4.5 in 17 domains (at the 5% significance level in 12), and similar results are obtained with CN2. Closer observation of the system’s behavior leads to a better understanding of its ability to correct overly-general rules with specific ones, and shows that there is still room for improvement.
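
One possible shape for the specific-to-general component (a sketch under assumed nominal attributes, not necessarily the paper's design): start from maximally specific rules, one per positive example, and apply minimal generalizations that stay clear of the negatives.

```python
def covers(rule, x):
    return all(x[a] == v for a, v in rule.items())

def min_generalization(rule, x):
    """Drop exactly the conditions on which the rule and the example disagree."""
    return {a: v for a, v in rule.items() if x[a] == v}

def specific_to_general(pos, neg):
    rules = [dict(p) for p in pos]          # one maximally specific rule each
    changed = True
    while changed:
        changed = False
        for i, rule in enumerate(rules):
            for p in pos:
                g = min_generalization(rule, p)
                if g and g != rule and not any(covers(g, n) for n in neg):
                    rules[i] = rule = g     # keep the safe generalization
                    changed = True
    return rules
```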

Journal ArticleDOI
TL;DR: A formal approach for reasoning about the relative priority by analyzing the customer’s trade-off preference among imprecise conflicting requirements is presented and a possibilistic reasoning framework for inferring the lower bound of relative priority from case analysis under uncertainty is developed.
Abstract: Priority analysis is one of the most important issues in the trade-off analysis of imprecise conflicting requirements whose elasticity is captured using fuzzy logic. Requirement analysts need to know not only the relative ordering of requirements based on their importance but also how much a requirement is more important than another requirement in order to achieve an effective trade-off. This paper presents a formal approach for reasoning about the relative priority by analyzing the customer’s trade-off preference among imprecise conflicting requirements. A possibilistic reasoning framework for inferring the lower bound of relative priority from case analysis under uncertainty is also developed. Consistency and nonredundancy criteria are established to facilitate the conversion of a possibilistic statement on a lower bound of relative priority into a relative priority. Finally, relative priorities are transformed into weights of importance so that they can be used in the aggregation of conflicting requirements to resolve conflicts.

Journal ArticleDOI
TL;DR: The paper presents the design of a VLSI fuzzy processor capable of supporting complex fuzzy reasoning, based on an appropriate computational model whose main features are the capability to cope with rule chaining and the pre-processing of inferences.
Abstract: The paper presents the design of a VLSI fuzzy processor capable of supporting complex fuzzy reasoning. The architecture of the processor is based on an appropriate computational model whose main features are: the capability to cope with rule chaining; pre-processing of inferences to reduce the number of rules to be processed; parallel computation of the degree of activation of the rules; and an optimized representation of membership functions. The processor performance is on the order of 1.5 MFLIPS (256 rules, 8 fuzzy inputs, 4 outputs).

Journal ArticleDOI
TL;DR: This work describes one type, the DP envelope, that draws its decisions from a look-up table computed off-line by dynamic programming, and discusses the application of DP envelopes to a small transportation planning simulation.
Abstract: Envelopes are a form of decision rule for monitoring plan execution. We describe one type, the DP envelope, that draws its decisions from a look-up table computed off-line by dynamic programming. Based on an abstract model of agent progress, DP envelopes let a developer approach execution monitoring as a problem independent of issues in agent design. We discuss the application of DP envelopes to a small transportation planning simulation, and discuss the issues that arise in an empirical analysis of the results.
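
A minimal sketch of the idea, assuming a deliberately crude progress model (one unit of progress with probability p per step); the paper's abstract model of agent progress is richer, but the offline DP table plus runtime lookup is the essence:

```python
def build_envelope(G, T, p, threshold=0.5):
    """Table P[t][g] = probability of reaching goal G within t remaining
    steps from progress g; the envelope flags states worth continuing."""
    P = [[0.0] * (G + 1) for _ in range(T + 1)]
    for t in range(T + 1):
        P[t][G] = 1.0                            # goal already reached
    for t in range(1, T + 1):
        for g in range(G):
            P[t][g] = p * P[t - 1][g + 1] + (1 - p) * P[t - 1][g]
    return [[P[t][g] >= threshold for g in range(G + 1)] for t in range(T + 1)]

envelope = build_envelope(G=10, T=20, p=0.7)
# At runtime, monitoring is a lookup:
# if not envelope[steps_left][progress]: trigger replanning.
```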

Journal ArticleDOI
TL;DR: An AI tool called the Quantitative Problem Solver (QPS), developed for building knowledge-based systems that can solve quantitative problems in science and engineering, employs the familiar problem decomposition strategy for selecting the correct sequence of equations needed for solving problems.
Abstract: An AI tool called the Quantitative Problem Solver (QPS) has been developed for building knowledge-based systems that can solve quantitative problems in science and engineering. QPS can store and manipulate quantitative knowledge comprising numerical data and scientific laws represented by formulas. The human interface is based on the symbols commonly used by scientists and engineers. All knowledge is represented as objects and classes in an object-oriented knowledge base. QPS employs the familiar problem decomposition strategy for selecting the correct sequence of equations needed to solve a problem, and it has been tested by building knowledge-based systems to solve several simple problems in engineering and physics.
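
A hedged sketch of the problem decomposition strategy (the equation format and function names are assumptions for illustration, not QPS's actual interface): backward-chain from the target quantity through the formulas until everything bottoms out in known data.

```python
def solve(target, known, equations, depth=0):
    """equations: list of (output_var, input_vars, fn) triples."""
    if target in known:
        return known[target]
    for out, inputs, fn in equations:
        if out == target and depth < 10:         # depth guard against cycles
            try:
                args = [solve(v, known, equations, depth + 1) for v in inputs]
            except ValueError:
                continue                         # try another equation
            known[target] = fn(*args)
            return known[target]
    raise ValueError(f"cannot derive {target}")

# Example: distance from speed and time, speed from acceleration and time.
eqs = [("v", ["a", "t"], lambda a, t: a * t),
       ("d", ["v", "t"], lambda v, t: 0.5 * v * t)]
print(solve("d", {"a": 9.8, "t": 2.0}, eqs))     # d = 0.5*a*t^2 = 19.6
```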

Journal ArticleDOI
TL;DR: A multi-agent modal logic of knowledge and belief that can be used in the design of telepresence with semi-reactive robots is introduced, and possible worlds (“states of nature”) are described by fuzzy cognitive maps.
Abstract: Telepresence consists of a robotic system controlled by a human operator at a remote control station. In these systems the human operator is immersed in a virtual reality, and the robot is controlled at a distance. Often the operator has to repeat tasks through the robot. In this article we propose that telepresence can use semi-autonomous (semi-reactive) robots that execute the tasks the operator repeats often. However, to create a relationship between the human operator and the semi-autonomous (semi-reactive) robot, it is necessary to develop a logic that combines the knowledge of the reactive robot with the knowledge of the human operator. On the other hand, in recent years it has become possible to structure virtual worlds with Fuzzy Cognitive Maps (FCMs). These maps can model virtual worlds with numerous actors, and different FCMs can be combined to merge different virtual worlds. In this article we introduce a multi-agent modal logic of knowledge and belief that can be used in the design of telepresence with semi-reactive robots. In this logic we describe possible worlds (“states of nature”) by fuzzy cognitive maps.
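
To make the FCM part concrete, here is a minimal sketch in the standard Kosko-style formulation (the three-concept map and its weights are invented for illustration): concept activations are repeatedly multiplied by the edge-weight matrix and squashed until the map settles.

```python
import numpy as np

def fcm_step(state, W):
    return 1.0 / (1.0 + np.exp(-(W @ state)))   # sigmoid squashing

# Hypothetical map: obstacle -> slow down -> operator alerted.
W = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],                  # obstacle raises "slow down"
              [0.0, 0.6, 0.0]])                 # slowing alerts the operator
state = np.array([1.0, 0.0, 0.0])
for _ in range(20):
    state = fcm_step(state, W)
print(state)                                    # settled activation levels
```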