
Showing papers in "Applied Intelligence in 1994"


Journal ArticleDOI
TL;DR: It is shown that the MBI selection process can be based upon 64 different fuzzy associative memory (FAM) rules, and the same rules are used to generate 64 training patterns for a feedforward neural network.
Abstract: To make reasonable estimates of resources, costs, and schedules, software project managers need to be provided with models that furnish the essential framework for software project planning and control by supplying important “management numbers” concerning the state and parameters of the project that are critical for resource allocation. Understanding that software development is not a “mechanistic” process brings about the realization that parameters that characterize the development of software possess an inherent “fuzziness,” thus providing the rationale for the development of realistic models based on fuzzy set or neural theories.
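The fuzziness argument can be made concrete with a small sketch: a hypothetical project parameter is mapped to overlapping linguistic terms via triangular membership functions. The parameter name, scale, terms, and breakpoints below are illustrative assumptions, not taken from the paper.

```python
# Sketch: fuzzify a hypothetical project parameter with triangular
# membership functions.  Terms and breakpoints are invented.

def triangular(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for "team experience" on a 0-10 scale.
terms = {
    "low":    (0.0, 2.0, 5.0),
    "medium": (2.5, 5.0, 7.5),
    "high":   (5.0, 8.0, 10.0),
}

def fuzzify(x):
    """Degree of membership of x in each linguistic term."""
    return {name: triangular(x, *abc) for name, abc in terms.items()}
```

A rule base such as the 64 FAM rules mentioned above would then map combinations of these term memberships to an output estimate.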

96 citations


Journal ArticleDOI
TL;DR: A graph-based induction algorithm is described that extracts typical patterns from colored digraphs; the uniform treatment of two learning tasks enables it to solve complex learning problems such as the construction of hierarchical knowledge bases.
Abstract: We describe a graph-based induction algorithm that extracts typical patterns from colored digraphs. The method is shown to be capable of solving a variety of learning problems by mapping the different learning problems into colored digraphs. The generality and scope of this method can be attributed to the expressiveness of the colored digraph representation, which allows a number of different learning problems to be solved by a single algorithm. We demonstrate the application of our method to two seemingly different learning tasks: inductive learning of classification rules, and learning macro rules for speeding up inference. We also show that the uniform treatment of these two learning tasks enables our method to solve complex learning problems such as the construction of hierarchical knowledge bases.
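As a minimal sketch of the typical-pattern idea, the toy code below counts colored edge patterns in a small digraph and picks the most frequent one; the graph, colors, and restriction to single-edge patterns are illustrative simplifications, not the paper's algorithm.

```python
from collections import Counter

# Toy colored digraph: a node-color map plus directed edges.  The graph
# and colors are invented; real graph-based induction works on far
# larger inputs.
color = {1: "A", 2: "B", 3: "A", 4: "B", 5: "A"}
edges = [(1, 2), (3, 4), (5, 2), (2, 3), (4, 5)]

# A "typical pattern" at the smallest granularity: the most frequent
# (source color, destination color) pair over all edges.
patterns = Counter((color[s], color[d]) for s, d in edges)
best, count = patterns.most_common(1)[0]
```

In full graph-based induction, the extracted typical pattern would then be contracted into a single new node and extraction repeated, which is how larger patterns emerge.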

94 citations


Journal ArticleDOI
TL;DR: Different methods of optimizing the classification process of terminological representation systems are considered and their effect on three different types of test data is evaluated.
Abstract: We consider different methods of optimizing the classification process of terminological representation systems and evaluate their effect on three different types of test data. Though these techniques can probably be found in many existing systems, until now there has been no coherent description of these techniques and their impact on the performance of a system. One goal of this article is to make such a description available for future implementors of terminological systems. Building the optimizations that came off best into the KRIS system greatly enhanced its efficiency.

94 citations


Journal ArticleDOI
TL;DR: An intelligent tool for the acquisition of object-oriented schemata supporting multiple inheritance is presented; it preserves taxonomy coherence, performs taxonomic inferences, and includes an algorithm for detecting incoherent types.
Abstract: We present an intelligent tool for the acquisition of object-oriented schemata supporting multiple inheritance, which preserves taxonomy coherence and performs taxonomic inferences. Its theoretical framework is based on terminological logics, which have been developed in the area of artificial intelligence. The framework includes a rigorous formalization of complex objects, which is able to express cyclic references on the schema and instance level; a subsumption algorithm, which computes all implied specialization relationships between types; and an algorithm to detect incoherent types, i.e., necessarily empty types. Using results from formal analyses of knowledge representation languages, we show that subsumption and incoherence detection are computationally intractable from a theoretical point of view. However, the problems appear to be feasible in almost all practical cases.
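A drastically simplified sketch of structural subsumption and incoherence detection, with a type reduced to a set of attribute intervals; the paper's terminological language is far richer, and all names below are invented for illustration.

```python
# Drastically simplified structural view: a "type" is a dict mapping an
# attribute to a closed interval of admissible values.  Names invented.

def subsumes(general, specific):
    """general subsumes specific iff every constraint of general is at
    least as wide as the corresponding constraint of specific."""
    for attr, (lo, hi) in general.items():
        if attr not in specific:
            return False          # specific leaves the attribute free
        s_lo, s_hi = specific[attr]
        if s_lo < lo or s_hi > hi:
            return False
    return True

def incoherent(t):
    """A type is incoherent (necessarily empty) if some interval is empty."""
    return any(lo > hi for lo, hi in t.values())

vehicle = {"wheels": (1, 18)}
car     = {"wheels": (4, 4), "doors": (2, 5)}
broken  = {"wheels": (4, 2)}     # empty interval: no possible instance
```

In the full terminological setting, with cyclic references and richer constructors, this check becomes the computationally intractable subsumption problem the abstract refers to.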

32 citations


Journal ArticleDOI
TL;DR: A new approach is presented to the problem of modelling and simulating the control mechanisms underlying planned arm movements, adopting a synergetic view in which movement patterns are not explicitly programmed but rather are emergent properties of a dynamic system constrained by physical laws in space and time.
Abstract: A new approach is presented to deal with the problem of modelling and simulating the control mechanisms underlying planned arm movements. We adopt a synergetic view in which we assume that the movement patterns are not explicitly programmed but rather are emergent properties of a dynamic system constrained by physical laws in space and time. The model automatically translates a high-level command specification into a complete movement trajectory. This is an inverse problem, since the dynamic variables controlling the current state of the system have to be calculated from movement outcomes such as the position of the arm endpoint. The proposed method is based on an optimization strategy: the dynamic system evolves towards a stable equilibrium position according to the minimization of a potential function. This system, which could well be described as a feedback control loop, obeys a set of non-linear differential equations. The gradient descent provides a solution to the problem which proves to be both numerically stable and computationally efficient. Moreover, the addition into the control loop of elements whose structure and parameters have a pertinent biological meaning allows for the synthesis of gestural signals whose global patterns keep the main invariants of human gestures. The model can be exploited to handle more complex gestures involving planning strategies of movement. Finally, the extension of the approach to the learning and control of non-linear biological systems is discussed.
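The optimization strategy can be illustrated in one dimension: gradient descent on a quadratic potential drives the endpoint state to a stable equilibrium at the target. The 1-D state, the potential, and all constants are illustrative simplifications, not the paper's model.

```python
# Sketch: the endpoint state x relaxes to a stable equilibrium (the
# target) by gradient descent on the potential V(x) = k/2 * (x - t)^2.
# The 1-D state and all constants are illustrative simplifications.

def plan_trajectory(x0, target, k=1.0, dt=0.05, steps=200):
    """Integrate dx/dt = -dV/dx with explicit Euler steps."""
    x, traj = x0, [x0]
    for _ in range(steps):
        grad = k * (x - target)  # dV/dx
        x = x - dt * grad        # one gradient-descent step
        traj.append(x)
    return traj

traj = plan_trajectory(0.0, 1.0)
```

Each iterate is one point of the movement trajectory; the descent converges because the step size dt is well below the stability limit 2/k.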

32 citations


Journal ArticleDOI
TL;DR: This article approaches the problem with a query language with three faces, one of which presents queries as classes whose instances are the materialized answer (view) to the query.
Abstract: The ideal query language for a knowledge base will probably never be found: easy formulation and easy evaluation of queries are two conflicting goals. Easy formulation asks for a flexible, expressive language near to human language or gestures. Easy evaluation of queries requires an effective mapping to machine code, which computes the correct answer in a finite number of steps. This article approaches the problem with a query language with three faces. The first projects queries to concepts of the knowledge representation language KL-One for easy formulation and readability. The second presents queries as rules of a deductive database with fixpoint semantics. The third presents queries as classes whose instances are the materialized answer (view) to the query. The methods for maintaining and updating the views are compiled from their deductive interpretation.

28 citations


Journal ArticleDOI
TL;DR: This article gives a detailed presentation of constraint satisfaction in the hybrid LAURE language, describing the syntax and the various modes in which constraints may be used, as well as the tools that are proposed by LAURE to extend constraint resolution.
Abstract: This article gives a detailed presentation of constraint satisfaction in the hybrid LAURE language. LAURE is an object-oriented language for Artificial Intelligence (AI) applications that allows the user to combine rules, constraints, and methods that cooperate on the same objects in the same program. We illustrate why this extensibility is necessary to solve some large and difficult problems by presenting a real-life application of LAURE. We describe the syntax and the various modes in which constraints may be used, as well as the tools that are proposed by LAURE to extend constraint resolution. The resolution strategy as well as some implementation details are given to explain how we obtain good performance.

20 citations


Journal ArticleDOI
TL;DR: This article lays the groundwork for the Probabilistic multi-knowledge-base system (PMKBS), a new decision aid specifically tailored to the needs of a decision-maker faced with the derivation of a consensus diagnosis.
Abstract: This article lays the groundwork for the probabilistic multi-knowledge-base system (PMKBS), a new decision aid specifically tailored to the needs of a decision-maker faced with the derivation of a consensus diagnosis. In this article, we develop the PMKBS architecture in several ways. First, we define the basic problem that it addresses, and review the fundamental tools upon which it is based. Next, we describe its underlying theory, and explain how some general elicitation and modeling procedures form a viable design paradigm. Finally, we describe a small family of prototype PMKBSs that address problems related to pathologies of the lymph system, and evaluate their performance. Taken together, these discussions and prototypes demonstrate that the PMKBS architecture appears to be flexible, practical, and powerful.

16 citations


Journal ArticleDOI
TL;DR: A scheme is proposed for the on-line adjustment of three-mode controller settings based on experimental measurements of closed-loop performance; it uses a recently developed heuristic tuning procedure to identify estimated process parameters.
Abstract: This article proposes a scheme for the on-line adjustment of three-mode controller settings based on experimental measurements of closed-loop performance. It uses a recently developed heuristic tuning procedure to identify estimated process parameters. This method may give rise to conflicting estimates. Fuzzy set theory is applied to manage this situation, using a fuzzy conjunction to combine the various estimates. PID control was chosen because of its wide use in the industrial environment, owing to its simplicity of operation and robustness. The article covers design, development, and computer simulation aspects.
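The fuzzy conjunction of conflicting estimates can be sketched as a pointwise minimum of two membership functions over a grid of candidate parameter values; the triangular shapes and all numbers below are illustrative assumptions, not the paper's tuning rules.

```python
# Sketch: combine two conflicting fuzzy estimates of a process gain with
# a fuzzy conjunction (pointwise minimum).  Shapes and numbers invented.

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

grid = [i / 100 for i in range(301)]            # candidate gains 0.00..3.00
est1 = [tri(x, 0.5, 1.0, 1.5) for x in grid]    # first procedure: gain ~1.0
est2 = [tri(x, 0.8, 1.6, 2.4) for x in grid]    # second procedure: gain ~1.6
conj = [min(u, v) for u, v in zip(est1, est2)]  # fuzzy conjunction
combined = grid[conj.index(max(conj))]          # peak of the conjunction
```

The peak of the conjunction lies between the two individual peaks, at the gain value both estimates support most.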

13 citations


Journal ArticleDOI
TL;DR: The recursive partitioning approach to classifier learning is extended to use more complex types of split at each decision node and these new split types are bivariate and can thus be interpreted visually in plots and tables.
Abstract: We extend the recursive partitioning approach to classifier learning to use more complex types of split at each decision node. The new split types we permit are bivariate and can thus be interpreted visually in plots and tables. In order to find optimal splits of these new types, a new split criterion is introduced that allows the development of divide-and-conquer type algorithms. Two experiments are presented in which the bivariate trees—both with the Gini split criterion and with the new split criterion—are compared to a traditional tree-growing procedure. With the Gini criterion, the bivariate trees show a slight improvement in predictive accuracy and a considerable improvement in tree size over univariate trees. Under the new split criterion, accuracy is also improved, but there is no consistent improvement in tree size.
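For reference, the Gini criterion against which the new split criterion is compared scores a split by the weighted impurity reduction it achieves. A minimal univariate sketch on synthetic data (the article's new splits are bivariate; names here are illustrative):

```python
# Sketch of the Gini impurity that drives split selection in recursive
# partitioning; data and names are synthetic, and the split shown is
# univariate for simplicity.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gain(xs, ys, threshold):
    """Impurity reduction achieved by the split x <= threshold."""
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    n = len(ys)
    weighted = len(left) / n * gini(left) + len(right) / n * gini(right)
    return gini(ys) - weighted

xs = [1, 2, 3, 4, 5, 6]
ys = ["a", "a", "a", "b", "b", "b"]
```

Here split_gain(xs, ys, 3) attains the maximum possible gain of 0.5, since both children become pure; a bivariate split would instead threshold a function of two features at once.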

10 citations


Journal ArticleDOI
TL;DR: This paper investigates a method for dealing with the problem of appropriate perception, called the Lion algorithm, and shows that it can be used to reduce complexity by decomposing perception.
Abstract: Reinforcement learning allows an agent to be both reactive and adaptive, but it requires a simple yet consistent representation of the task environment. In robotics this representation is the product of perception. Perception is a powerful simplifying mechanism because it ignores much of the complexity of the world by mapping multiple world states to each of a few representational states. The constraint of consistency conflicts with simplicity, however. A consistent representation distinguishes world states that have distinct utilities, but perception systems with sufficient acuity to do this tend to also make many unnecessary distinctions.

Journal ArticleDOI
TL;DR: An expert system is proposed that is able to design local area networks meeting the requirements specified by the user; the system is built on an object-oriented paradigm, and the object-oriented approach and the hierarchical rule structure paradigm are discussed.
Abstract: Computer networks are an essential tool for people in business, industry, government, and schools. With the rapid rate of change in network technology and products, and the emergence of highly sophisticated network users, network design has become an increasingly complex task. Although the computing community aims at agreeing on a series of international standards for describing network architectures, the design of a computer network remains an ill-structured problem that lends itself well to expert system solutions. We propose an expert system that is able to design local area networks meeting the requirements specified by the user. Rules and guidelines pertaining to local area network design are formulated and incorporated into the knowledge base. The system is built on an object-oriented paradigm. The object-oriented approach and the hierarchical rule structure paradigm are discussed. We also employ the blackboard technique, through which rules can access dynamic objects conveniently.
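The flavor of such a rule base can be sketched with a toy forward-chaining loop; every rule, requirement, and design attribute below is invented for illustration and not taken from the system described.

```python
# Toy forward-chaining sketch of a design rule base.  Every rule,
# requirement, and design attribute here is invented for illustration.

rules = [
    # (condition on current facts, (design attribute, derived value))
    (lambda f: f.get("nodes", 0) > 100,     ("topology", "backbone")),
    (lambda f: f.get("traffic") == "heavy", ("medium", "fiber")),
    (lambda f: f.get("traffic") == "light", ("medium", "twisted-pair")),
]

def design(requirements):
    """Fire rules until no rule adds or changes a design fact."""
    facts = dict(requirements)
    changed = True
    while changed:
        changed = False
        for cond, (key, value) in rules:
            if cond(facts) and facts.get(key) != value:
                facts[key] = value
                changed = True
    return facts

plan = design({"nodes": 150, "traffic": "heavy"})
```

A blackboard architecture, as in the system described, would replace the flat facts dict with shared dynamic objects that groups of rules read from and post to.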

Journal ArticleDOI
TL;DR: It is demonstrated that decision (production) rule induction is practical in high dimensions, providing strong results and insightful explanations in both disk drive manufacturing quality control and the prediction of chronic problems in large-scale communication networks.
Abstract: We consider the application of several compute-intensive classification techniques to two significant real-world applications: disk drive manufacturing quality control and the prediction of chronic problems in large-scale communication networks. These applications are characterized by very high dimensions, with hundreds of features or tens of thousands of cases. The results of several learning techniques are compared, including linear discriminants, nearest-neighbor methods, decision rules, decision trees, and neural nets. Both applications described in this article are good candidates for rule-based solutions because humans currently resolve these problems, and explanations are critical to determining the causes of faults. While several learning techniques achieved competitive results, machine learning with decision rule induction was most effective for these applications. It is demonstrated that decision (production) rule induction is practical in high dimensions, providing strong results and insightful explanations.

Journal ArticleDOI
TL;DR: A recent, promising approach for minimizing the error rate of a classifier is reviewed and a particular application to a simple, prototype-based speech recognizer is described in which a relatively simple distance-based classifier is trained to minimize errors in speech recognition tasks.
Abstract: A key concept in pattern recognition is that a pattern recognizer should be designed so as to minimize the errors it makes in classifying patterns. In this article, we review a recent, promising approach for minimizing the error rate of a classifier and describe a particular application to a simple, prototype-based speech recognizer. The key idea is to define a smooth, differentiable loss function that incorporates all adaptable classifier parameters and that approximates the actual performance error rate. Gradient descent can then be used to minimize this loss. This approach allows but does not require the use of explicitly probabilistic models. Furthermore, minimum error training does not involve the estimation of probability distributions that are difficult to obtain reliably. This new method has been applied to a variety of pattern recognition problems, with good results. Here we describe a particular application in which a relatively simple distance-based classifier is trained to minimize errors in speech recognition tasks. The loss function is defined so as to reflect errors at the level of the final, grammar-driven recognition output. Thus, minimization of this loss directly optimizes the overall system performance.
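The minimum-error idea can be sketched for a one-prototype-per-class, 1-D distance classifier: a sigmoid of the difference between the correct-class and best-wrong-class distances gives a smooth, differentiable stand-in for the 0/1 error, which gradient descent can then reduce. The data, constants, and update details below are illustrative, not the recognizer described in the article.

```python
import math

# Sketch: minimum-classification-error training of a 1-D,
# one-prototype-per-class distance classifier.  Data and constants
# are illustrative.

def mce_step(protos, x, label, lr=0.1, alpha=4.0):
    """One gradient step on the smooth loss sigmoid(alpha * d), where
    d = dist(correct prototype) - dist(best wrong prototype)."""
    wrong = min((c for c in protos if c != label),
                key=lambda c: (x - protos[c]) ** 2)
    d = (x - protos[label]) ** 2 - (x - protos[wrong]) ** 2
    loss = 1.0 / (1.0 + math.exp(-alpha * d))
    s = alpha * loss * (1.0 - loss)              # d(loss)/d(d)
    # Chain rule: pull the correct prototype toward x, push the wrong away.
    protos[label] += lr * s * 2.0 * (x - protos[label])
    protos[wrong] -= lr * s * 2.0 * (x - protos[wrong])
    return loss

protos = {"a": 0.4, "b": 0.6}                    # initial prototypes
data = [(0.0, "a"), (0.2, "a"), (0.8, "b"), (1.0, "b")]
for _ in range(50):                              # a few training epochs
    for x, y in data:
        mce_step(protos, x, y)
```

Because the loss saturates once the classes separate, the updates vanish rather than diverge, and nearest-prototype classification becomes correct on this toy data.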

Journal ArticleDOI
TL;DR: A new algorithm is presented that is suitable for computing the transitive closure of very large database relations, in which all the page I/O operations are minimized by removing most of the redundant operations that appear in previous algorithms.
Abstract: The integration of logic rules and relational databases has recently emerged as an important technique for developing knowledge management systems. An important class of logic rules utilized by these systems is the so-called transitive closure rules, the processing of which requires the computation of the transitive closure of database relations referenced by these rules. This article presents a new algorithm suitable for computing the transitive closure of very large database relations. This algorithm proceeds in two phases. In the first phase, a general graph is condensed into an acyclic one, and at the same time a special sparse matrix is formed from the acyclic graph. The second phase is the main one, in which all the page I/O operations are minimized by removing most of the redundant operations that appear in previous algorithms. Using simulation, this article also studies and examines the performance of this algorithm and compares it with the previous algorithms.
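The underlying closure computation can be sketched as a semi-naive iteration that joins newly derived pairs with the base relation until a fixpoint is reached; this toy in-memory version ignores the paper's condensation phase and page I/O optimizations.

```python
# Sketch: semi-naive transitive closure of a binary relation.  An
# in-memory toy; the paper's algorithm adds graph condensation and
# careful page I/O scheduling on top of this idea.

def transitive_closure(edges):
    """Return the transitive closure of a set of (a, b) pairs."""
    closure = set(edges)
    delta = set(edges)           # pairs derived in the last round
    while delta:
        # Join only the newly derived pairs with the base relation.
        new = {(a, d) for a, b in delta for c, d in edges if b == c}
        delta = new - closure
        closure |= delta
    return closure

tc = transitive_closure({(1, 2), (2, 3), (3, 4)})
```

Joining only the delta of each round, rather than the whole closure, is what removes the redundant recomputation that naive iteration performs.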

Journal ArticleDOI
TL;DR: This paper models a Hybrid Intelligent Packing System (HIPS) by integrating Artificial Neural Networks, Artificial Intelligence, and Operations Research approaches for solving the packing problem.
Abstract: A successful solution to the packing problem is a major step toward saving material, and therefore money, by avoiding scrap in the cutting process. Although the problem is of great interest, no satisfactory algorithm has been found that can be applied to all possible situations. This paper models a Hybrid Intelligent Packing System (HIPS) by integrating Artificial Neural Networks (ANNs), Artificial Intelligence (AI), and Operations Research (OR) approaches for solving the packing problem. The HIPS consists of two main modules, an intelligent generator module and a tester module. The intelligent generator module has two components: (i) a rough assignment module and (ii) a packing module. The rough assignment module utilizes the expert system and rules concerning cutting restrictions and allocation goals in order to generate many possible patterns. The packing module is an ANN that packs the generated patterns and performs post-solution adjustments. The tester module, which consists of a mathematical programming model, selects the sets of patterns that will result in a minimum amount of scrap.

Journal ArticleDOI
TL;DR: Two new techniques for learning Relational Structures (RSs) as they occur in 2D pattern and 3D object recognition, namely Evidence-Based Networks (EBS-NNets) and Rulegraphs, are presented and compared.
Abstract: We present and compare two new techniques for learning Relational Structures (RSs) as they occur in 2D pattern and 3D object recognition. These techniques, namely Evidence-Based Networks (EBS-NNets) and Rulegraphs, combine techniques from computer vision with those from machine learning and graph matching. The EBS-NNet has the ability to generalize pattern rules from training instances in terms of bounds on both unary (single part) and binary (part relation) numerical features. It also learns the compatibilities between unary and binary feature states in defining different pattern classes. Rulegraphs check this compatibility between unary and binary rules by combining evidence theory with graph theory. The two systems are tested and compared using a number of different pattern and object recognition problems.

Journal ArticleDOI
TL;DR: A new approach is presented to planning collision-free motions for general real-life six degrees of freedom (d.o.f.) manipulators, based on a simple object model previously developed; computational cost is kept to the minimum necessary by selecting the most appropriate level of representation.
Abstract: The collision-free planning of motion is a fundamental problem for artificial intelligence applications in robotics. The ability to compute a continuous safe path for a robot in a given environment will make possible the development of task-level robot planning systems, so that the implementation details and the particular robot motion sequence can be ignored by the programmer.

Journal ArticleDOI
TL;DR: An assembly illustration understanding system is presented that recognizes assembly relations among mechanical parts and their 3D shapes, and conjectures usually invisible structural details of mechanical parts, such as insertion holes.
Abstract: This paper presents an assembly illustration understanding system. The system is eventually expected to be applied to a robot which specializes in automated mechanical assembly. Assembly illustrations in an assembly manual usually have two features: 1) In addition to the figures corresponding to mechanical parts, several special line-drawings referred to as auxiliary lines in this paper are often employed for the visualization of the assembly relations among mechanical parts; 2) The assembly illustrations in an assembly manual are disposed sequentially so that the subgoal of an assembly illustration will definitely appear in its succeeding illustrations as an assemblage. Both features are important clues to the analysis and understanding of assembly illustrations. By extracting the auxiliary lines, the system recognizes assembly relations among mechanical parts, and the 3D shape of the mechanical parts as well. Moreover, based on the assembly relations, it conjectures the structural details of mechanical parts such as insertion holes which are usually invisible. After that, it characterizes the appearance of the completed assemblage described by the current illustration. The system finally verifies the result by matching with the figures in a succeeding illustration in which the completed assemblage is given as a subpart.

Journal ArticleDOI
TL;DR: The authors aim to reconcile the two advantages of the classical routing strategies mentioned above through the use of neural networks, proposing an approach in which the routing strategy guarantees the delivery of information along almost optimal paths while distributing calculation to the various switching nodes.
Abstract: Routing is a problem of considerable importance in a packet-switching network, because it allows both optimization of the transmission speeds available and minimization of the time required to deliver information. In classical centralized routing algorithms, each packet reaches its destination along the shortest path, although some network bandwidth is lost through overheads. By contrast, distributed routing algorithms usually limit the overloading of transmission links, but they cannot guarantee optimization of the paths between source and destination nodes on account of the mainly local vision they have of the problem. The aim of the authors is to reconcile the two advantages of classical routing strategies mentioned above through the use of neural networks. The approach proposed here is one in which the routing strategy guarantees the delivery of information along almost optimal paths, but distributes calculation to the various switching nodes. The article assesses the performance of this approach in terms of both routing paths and efficiency in bandwidth use, through comparison with classical approaches.