
Showing papers in "Applied Intelligence in 1992"


Journal ArticleDOI
TL;DR: This work shows how the conceptual framework behind a given system determines crucial aspects of the system's behavior, and conducts an empirical study of a success model of cooperative problem solving between people in a large hardware store.
Abstract: Cooperative problem-solving systems are computer-based systems that augment a person's ability to create, reflect, design, decide, and reason. Our work focuses on supporting cooperative problem solving in the context of high-functionality computer systems. We show how the conceptual framework behind a given system determines crucial aspects of the system's behavior. Several systems are described that attempted to address specific shortcomings of prevailing assumptions, resulting in a new conceptual framework. To further test this resulting framework, we conducted an empirical study of a success model of cooperative problem solving between people in a large hardware store. The conceptual framework is instantiated in a number of new system-building efforts, which are described and discussed.

74 citations


Journal ArticleDOI
TL;DR: A highly distributed fault-tolerant control system capable of compensating for deficiencies in system-level performance even when the cause of a fault cannot be explicitly identified.
Abstract: This paper describes a highly distributed fault-tolerant control system capable of compensating for deficiencies in system-level performance even when the cause of a fault cannot be explicitly identified. Developed for an autonomous underwater vehicle that must remain operational for several weeks without human intervention, this system must be capable of dealing with events that cannot be anticipated at design time. A unique aspect of this system is that it handles such events by attempting to “do whatever works” if it is unable to diagnose and correct specific faults. The software architecture used in this approach is applicable to a wide range of complex autonomous control applications.
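
The abstract does not give the mechanism, but the "do whatever works" strategy can be pictured as a fallback loop: when no specific fault can be diagnosed, the controller tries alternative configurations and keeps whichever restores acceptable system-level performance. A minimal Python sketch, with all names and thresholds invented for illustration:

```python
def recover(configurations, measure_performance, threshold=0.9):
    """Try alternative control configurations until one 'works'.

    configurations: callables that each reconfigure the controller.
    measure_performance: callable returning a system-level score in [0, 1].
    """
    for configure in configurations:
        configure()                    # switch to a candidate configuration
        if measure_performance() >= threshold:
            return configure           # good enough: keep whatever works
    raise RuntimeError("no configuration restored acceptable performance")
```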

63 citations


Journal ArticleDOI
TL;DR: A theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning is presented, illustrated using a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience.
Abstract: Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the set of inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference.
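
As a sketch of what controlling inference with knowledge goals might look like (this is not the authors' program), candidate inferences can be ranked by relevance to the reasoner's active goals and pursued only within a fixed resource budget; all names here are illustrative:

```python
import heapq

def controlled_inference(candidates, goals, relevance, budget=10):
    """candidates: inference rules (callables producing facts).
    goals: the reasoner's active knowledge goals.
    relevance: scores a candidate against the goals (higher = more relevant).
    """
    # Rank candidate inferences by relevance to the active knowledge goals;
    # the index i breaks ties so callables are never compared directly.
    ranked = [(-relevance(c, goals), i, c) for i, c in enumerate(candidates)]
    heapq.heapify(ranked)
    derived = []
    while ranked and budget > 0:       # stop when resources are exhausted
        _, _, candidate = heapq.heappop(ranked)
        derived.append(candidate())    # draw only the chosen inference
        budget -= 1
    return derived
```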

42 citations


Journal ArticleDOI
TL;DR: LUKE is a tool that allows a knowledge base builder to create an English language interface by associating words and phrases with knowledge base entities, drawing its power from a large set of heuristics about how words are typically used to describe the world.
Abstract: Very large knowledge bases (KB's) constitute an important step for artificial intelligence and will have significant effects on the field of natural language processing. This thesis addresses the problem of effectively acquiring two large bodies of formalized knowledge: knowledge about the world (a KB), and knowledge about words (a lexicon). The central observation is that these two bodies of knowledge are highly redundant. For example, the syntactic behavior of a noun (or a verb) is highly correlated with certain physical properties of the object (or event) to which it refers. It should be possible to take advantage of this type of redundancy in order to greatly reduce both the time and expertise required to build large KB's and lexicons. This thesis describes LUKE, a software tool that allows a knowledge base builder to create an English language interface by associating words and phrases with KB entities. LUKE assumes no linguistic expertise on the part of the user, because that expertise is built directly into the tool itself. LUKE draws its power from a large set of heuristics about how words are typically used to describe the world. These heuristics exploit the redundancy between linguistic and world knowledge. When a word or phrase is associated with some KB entity, LUKE is able to accurately guess features of the word based on features of the KB entity. LUKE can also hypothesize new words and word senses based on the existence of others. All of LUKE's hypotheses are displayed to the user for verification, using a format designed to tap the user's basic linguistic intuitions. LUKE stores its lexicon in the KB. Truth maintenance links ensure that changes in the KB are automatically propagated to the lexicon. LUKE compiles lexical entries into data structures convenient for natural language parsing and generation programs. Lexicons acquired by LUKE have been used by KBNL, a knowledge-based natural language system, for applications in information retrieval, machine translation, and KB navigation. This work identifies several dozen heuristics that encode redundancies between linguistic representations and representations of world knowledge. It also demonstrates the usefulness of these heuristics in a working lexical acquisition system.
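
To illustrate the kind of redundancy-exploiting heuristic described (the actual LUKE heuristics are not reproduced here), a guess about a word's lexical features can be read off features of the KB entity it is associated with; the feature names and categories below are invented for the example:

```python
def guess_lexical_features(kb_entity):
    """Hypothetical heuristic: lexical guesses from KB entity features."""
    guesses = {}
    if kb_entity.get("is-a") == "physical-object":
        guesses["noun-class"] = "count"      # discrete objects: count nouns
    if kb_entity.get("is-a") == "substance":
        guesses["noun-class"] = "mass"       # substances: mass nouns
    if kb_entity.get("is-a") == "event":
        guesses["part-of-speech"] = "verb"   # events are often verbalized
    return guesses                           # shown to the user to verify

# e.g. guess_lexical_features({"is-a": "substance"}) -> {"noun-class": "mass"}
```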

41 citations


Journal ArticleDOI
TL;DR: The modularity of the expectation-based 4D approach to dynamic machine vision exploiting integral spatiotemporal models of objects in the real world is demonstrated in a simulation set-up serving both ground and air vehicle applications.
Abstract: The expectation-based 4D approach to dynamic machine vision exploiting integral spatiotemporal models of objects in the real world is discussed in the application domains of unmanned ground and air vehicles. The method has demonstrated superior performance over the last half decade in autonomous road vehicle guidance with three different vans and buses, with an AGV on the factory floor, and with completely autonomous relative state estimation for a twin turboprop aircraft in the landing approach to a runway without any external support; in all application areas, only a small set of conventional microcomputers was sufficient for realizing the system. This shows the computational efficiency of the method, which combines conventional engineering-type algorithms and artificial intelligence components in a well-balanced way.
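
The expectation-based cycle at the heart of the 4D approach is, in essence, recursive state estimation: predict the object state with a temporal model, project it into expected image features, and correct with the measured discrepancy. A generic predict/update sketch in Python; the linear model and matrices are simplifying assumptions for illustration, not the authors' vehicle models:

```python
import numpy as np

def four_d_cycle(x, P, F, H, Q, R, z):
    """One expectation-based predict/measure/update step.
    x, P: state estimate and covariance; F: temporal (dynamical) model;
    H: projection of state into image measurements; Q, R: noise covariances;
    z: the actual image measurement.
    """
    x_pred = F @ x                       # predict state with the temporal model
    P_pred = F @ P @ F.T + Q
    expectation = H @ x_pred             # expected image features
    innovation = z - expectation         # discrepancy drives the correction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # gain weighting model vs. measurement
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```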

37 citations


Journal ArticleDOI
Ronald R. Yager1
TL;DR: From this general formulation of the Mamdani model and the logical model, two soft classes of rule aggregation are obtained: or-like and and-like aggregations.
Abstract: We look at the representation and aggregation of individual rules in the fuzzy logic control system. Two extreme paradigms for rule representation are introduced, the Mamdani model and the logical model. We look at the characteristics of these approaches. We then combine these two approaches to get a general model for the representation of rules. From this general formulation we obtain two soft classes of rule aggregation: or-like and and-like aggregations.
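
To make the two paradigms concrete, here is a small sketch, with invented rules and membership functions, of how the Mamdani (or-like) and logical (and-like) aggregations differ for a single output value:

```python
def mamdani_output(firings, consequents, y):
    # Or-like: each rule clips its consequent by its firing level (min),
    # and the clipped outputs are aggregated disjunctively (max).
    return max(min(f, c(y)) for f, c in zip(firings, consequents))

def logical_output(firings, consequents, y):
    # And-like: each rule contributes a material-implication value
    # max(1 - firing, consequent), aggregated conjunctively (min).
    return min(max(1 - f, c(y)) for f, c in zip(firings, consequents))

# Example: two rules with triangular consequents over y in [0, 1].
cons = [lambda y: max(0.0, 1 - abs(y - 0.2) / 0.2),
        lambda y: max(0.0, 1 - abs(y - 0.8) / 0.2)]
firing_levels = [0.7, 0.3]
print(mamdani_output(firing_levels, cons, 0.2))   # 0.7: or-like aggregation
print(logical_output(firing_levels, cons, 0.2))   # 0.7: and-like aggregation
```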

22 citations


Journal ArticleDOI
TL;DR: The significance of the Engineered Conditioning operator in multiple fault diagnosis is shown, comparing the diagnostic reliability of the genetic algorithm with the new operator against that of the genetic algorithm without it.
Abstract: Engineered Conditioning (EC) is a Genetic Algorithm operator that works together with the typical genetic algorithm operators: mate selection, crossover, and mutation, in order to improve convergence toward an optimal multiple fault diagnosis. When incorporated within a typical genetic algorithm, the resulting hybrid scheme produces improved reliability by exploiting the global nature of the genetic algorithm as well as “local” improvement capabilities of the Engineered Conditioning operator.
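
The operator itself is not reproduced in the abstract; as a hedged sketch, a local-improvement step like Engineered Conditioning can be slotted into a standard GA loop as below, with greedy bit-flip hill climbing standing in for the published operator:

```python
import random

def engineered_conditioning(individual, fitness):
    """Illustrative local improvement: keep any single-bit flip that helps."""
    best = list(individual)
    for i in range(len(best)):
        trial = best[:]
        trial[i] = 1 - trial[i]
        if fitness(trial) > fitness(best):
            best = trial
    return best

def hybrid_ga(population, fitness, generations=50):
    for _ in range(generations):
        # Standard operators: fitness-biased mate selection, crossover, mutation.
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.05:             # mutation
                j = random.randrange(len(child))
                child[j] = 1 - child[j]
            children.append(child)
        # EC-style step: locally condition the best child before the next round.
        children[0] = engineered_conditioning(children[0], fitness)
        population = children
    return max(population, key=fitness)
```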

20 citations


Journal ArticleDOI
TL;DR: A flexible control mechanism can be realized that supports interactive construction, the integration of case-based approaches, and simulation methods, and that has the following characteristics: separation of control and object knowledge, declarative representation of control knowledge, and explicit control decisions in the problem-solving process.
Abstract: In most expert systems for constructional tasks, the knowledge base consists of a set of facts or object definitions and a set of rules. These rules contain knowledge about correct or ideal solutions as well as knowledge on how to control the construction process. In this paper, we present an approach that avoids this type of rule and thus the disadvantages caused by them. We propose a static knowledge base consisting of a set of object definitions interconnected by is-a and part-of links. This conceptual hierarchy declaratively defines a taxonomy of domain objects and the aggregation of components into composite objects. Thus, the conceptual hierarchy describes the set of all admissible solutions to a constructional problem. Interdependencies between objects are represented by constraints. A solution is a syntactically complete and correct instantiation of the conceptual hierarchy. No control knowledge is included in the conceptual hierarchy. Instead, the control mechanism will use the conceptual hierarchy as a guideline. Thus it is possible to determine in which respects a current partial solution is incomplete simply by syntactic comparison with the conceptual hierarchy. The control architecture proposed here has the following characteristics: separation of control and object knowledge, declarative representation of control knowledge, and explicit control decisions in the problem-solving process. Thus, a flexible control mechanism can be realized that supports interactive construction, the integration of case-based approaches, and simulation methods. This control method is part of an expert system kernel for planning and configuration tasks in technical domains. This kernel has been developed at the University of Hamburg and is currently applied to several domains.
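
A minimal sketch of such a static knowledge base in Python, with an invented toy domain: object definitions carry is-a and part-of links, and incompleteness of a partial solution is detected by purely syntactic comparison against the hierarchy:

```python
DEFINITIONS = {
    "vehicle":   {"is-a": None,      "parts": ["chassis", "engine"]},
    "truck":     {"is-a": "vehicle", "parts": ["cargo-bed"]},
    "chassis":   {"is-a": None,      "parts": []},
    "engine":    {"is-a": None,      "parts": []},
    "cargo-bed": {"is-a": None,      "parts": []},
}

def required_parts(concept):
    """Collect part-of requirements along the is-a chain."""
    parts = []
    while concept is not None:
        parts.extend(DEFINITIONS[concept]["parts"])
        concept = DEFINITIONS[concept]["is-a"]
    return parts

def missing_parts(instance):
    """Syntactic comparison of a partial solution with the hierarchy."""
    have = set(instance.get("parts", []))
    return [p for p in required_parts(instance["concept"]) if p not in have]

# e.g. missing_parts({"concept": "truck", "parts": ["chassis"]})
# -> ["cargo-bed", "engine"]: the control mechanism now knows what to construct next.
```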

19 citations


Journal ArticleDOI
TL;DR: Important assumptions regarding the mechanism, perception, planning, and control of the Ambler and its software system are presented and evaluated in light of experimental and theoretical research of this project.
Abstract: A high degree of mobility, reliability, and efficiency are needed for autonomous exploration of extreme terrain. These requirements have guided the development of the Ambler, a six-legged robot designed for planetary exploration. To address issues of efficiency and mobility, the Ambler is configured with a stacked arrangement of orthogonal legs and exhibits a unique circulating gait, where trailing legs recover directly from rear to front. The Ambler is designed to stably traverse a 30 degree slope while crossing meter sized features. The same three principles have provided many constraints on the design of a software system that autonomously navigates the Ambler through natural terrain using 3-D perception and a combined deliberative/reactive architecture. The software system has required research advances in real-time control, perception of rugged terrain, motion planning, task-level control, and system integration. This paper presents many of the factors that influenced the design of the Ambler and its software system. In particular, important assumptions regarding the mechanism, perception, planning, and control are presented and evaluated in light of experimental and theoretical research of this project.

15 citations


Journal ArticleDOI
TL;DR: The architecture is shown to easily generate robust vehicle control schemes that gather the required thermal vent data for a variety of vents of varying positions, velocities, and extents; a test tank simulation is also described.
Abstract: This paper describes a situated reasoning architecture and a programming implementation for controlling robots in naturally changing environments. The reactive portion of the architecture produces reaction plans that exploit low-level competences as operators. The low-level competences include both obstacle avoidance heuristics and control-theoretic algorithms for generating and following a velocity/acceleration trajectory. Implemented in the GAPPS/REX situated automata programming language, robot goals can be specified logically and can be compiled into runtime virtual circuits that map sensor information into actuator commands in a fashion that allows for parallel execution. Detailed programs are described for controlling underwater vehicles being developed at the Woods Hole Oceanographic Institution, specifically the Remotely Piloted Vehicle (RPV), its successor, the Hylas, and eventually the Autonomous Benthic Explorer (ABE). Experiments with the RPV in a test tank are described in detail and will be duplicated with the Hylas. The experiments show that the robot performed both pilot-aided and autonomous exploration tasks while accommodating normal changes in the task environment. ABE programs are described to illustrate how the reaction plans can be used in tasks more complex than those in the RPV experiments. The ABE is required to gather scientific data from deep ocean phenomena (e.g., thermal vents) which occur sporadically over long periods of time. A test tank simulation is described wherein the architecture is shown to easily generate robust vehicle control schemes which gather the required thermal vent data for a variety of vents of varying positions, velocities, and extents.

11 citations


Journal ArticleDOI
TL;DR: The objectives were to develop a system which encodes the heuristics of technicians and to provide an expandable framework for automating the technical manuals and incorporating explanations, data interpretation, case histories, and model-based reasoning.
Abstract: Aircraft gas turbine engine maintenance is a complex task requiring not only specialized technical skill but effective integration of many sources of information. Traditionally, military maintenance technicians make extensive use of common sense knowledge, equipment manuals, pilot reports, instrument readings, engine settings and physical observations. Reasoning based upon patterns in sensor data, case histories and past maintenance is infrequently carried out. Difficulties in maintenance arise from the need to quickly restore the engines to an operational state, the frequent reassignment of technicians and the awkward access to, and interpretation of, data. There is a need to overcome these factors. This paper describes a knowledge-based diagnostic system for military gas turbine aero-engines. The objectives were to develop a system which encodes the heuristics of technicians and to provide an expandable framework for automating the technical manuals and incorporating explanations, data interpretation, case histories, and model-based reasoning. The eventual goal is to apply the system to the maintenance of complex mechanical equipment and have it reason, in an on-line mode, with data obtained from a data acquisition system. A description of the application area and the features of the system, in its current stage of development, are discussed. This paper will be of practical benefit to those developing knowledge-based maintenance systems for complex mechanical equipment.

Journal ArticleDOI
TL;DR: The project demonstrated capabilities of existing and new technologies, but also highlighted many serious integration issues, particularly when using prototype components, which demonstrated the utility and mutual benefits of academic-industry projects.
Abstract: This paper describes the design, development, and deployment of an unmanned autonomous aerial vehicle developed at the Georgia Institute of Technology during 1990–1991. The approach taken, the system architecture, and the embedded intelligence of the project as conceived by a team of students, faculty, and industrial affiliates are reported. The project focused on engineering a vehicle which performed an intended mission in the time, space, and weight restrictions specified as part of the AUVS 1991 Competition. This paper documents the system and its various components and also provides a discussion of integration issues. The project demonstrated capabilities of existing and new technologies, but also highlighted many serious integration issues, particularly when using prototype components. The project also demonstrated the utility and mutual benefits of academic-industry projects. All members of the team benefited by working on a real and tangible project. Industrial participants gained first-hand experience integrating their products with other components, and many saw potential for their products and services in new markets.

Journal ArticleDOI
TL;DR: This paper reports on the status of The University of Texas at Arlington student effort to design, build and fly an Autonomous Aerial Vehicle and presents a modification of a recently published procedure for recovering a desired open-loop transfer function shape within the framework of the mixed ℋ2/ℋ∞ problem.
Abstract: This paper reports on the status of The University of Texas at Arlington student effort to design, build, and fly an Autonomous Aerial Vehicle. Both the 1991 entry into the First International Aerial Robotics Competition and refinements being made for 1992 are described. Significant technical highlights include a real-time vision system for target objective tracking, a real-time ultrasonic locator system for position sensing, a novel mechanism for gradually moving from human to computer control, and a hierarchical control structure implemented on a 32-bit microcontroller. Detailed discussion of the design of multivariable automatic controls for stability augmentation is included. Position and attitude control loops are optimized according to a combined ℋ2/ℋ∞ criterion. We present a modification of a recently published procedure for recovering a desired open-loop transfer function shape within the framework of the mixed ℋ2/ℋ∞ problem. This work has led to a new result that frees a design parameter related to imposing the ℋ∞ constraint. The additional freedom can be used to improve the performance and robustness characteristics of the system.
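
For reference, a standard statement of the mixed ℋ2/ℋ∞ synthesis problem (the paper's exact formulation, including the freed design parameter, is not reproduced in the abstract) is:

```latex
\min_{K \,\text{stabilizing}} \; \| T_{z_2 w}(K) \|_2
\quad \text{subject to} \quad \| T_{z_\infty w}(K) \|_\infty < \gamma ,
```

where $T_{z_2 w}$ and $T_{z_\infty w}$ are the closed-loop transfer functions from the disturbance $w$ to the performance outputs $z_2$ and $z_\infty$, $K$ is the controller, and $\gamma$ is the prescribed disturbance-attenuation level.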

Journal ArticleDOI
TL;DR: This paper addresses the problem of integrating the human operator with autonomous robotic visual tracking and servoing modules and presents an experimental setup where all different schemes have been tested.
Abstract: This paper addresses the problem of integrating the human operator with autonomous robotic visual tracking and servoing modules. A CCD camera is mounted on the end-effector of a robot, and the task is to servo around a static or moving rigid target. In manual control mode, the human operator, with the help of a joystick and a monitor, commands robot motions in order to compensate for tracking errors. In shared control mode, the human operator and the autonomous visual tracking modules command motion along orthogonal sets of degrees of freedom. In autonomous control mode, the autonomous visual tracking modules are in full control of the servoing functions. Finally, in traded control mode, control can be transferred from the autonomous visual modules to the human operator and vice versa. This paper presents an experimental setup where all these different schemes have been tested. Experimental results for all modes of operation are presented and the related issues are discussed. In certain degrees of freedom (DOF), the autonomous modules perform better than the human operator. On the other hand, the human operator can compensate quickly for tracking failures where the autonomous modules fail; their failures are due to the difficulty of encoding an efficient contingency plan.
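
The four modes can be sketched as a per-degree-of-freedom arbitration function; the particular split of DOF between operator and tracker in shared mode is an assumption for illustration:

```python
def arbitrate(mode, joystick, tracker, human_has_control=True,
              shared_human_dofs=(0, 1, 2)):
    """joystick, tracker: 6-vectors of commanded velocities (x y z roll pitch yaw)."""
    if mode == "manual":
        return list(joystick)            # operator commands all DOF
    if mode == "autonomous":
        return list(tracker)             # visual modules command all DOF
    if mode == "shared":
        # Operator and tracker command orthogonal sets of DOF.
        return [joystick[i] if i in shared_human_dofs else tracker[i]
                for i in range(6)]
    if mode == "traded":
        # Control is transferred wholesale between operator and autonomy.
        return list(joystick) if human_has_control else list(tracker)
    raise ValueError(f"unknown mode: {mode}")
```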

Journal ArticleDOI
Li-Min Fu1
TL;DR: A novel approach to rule refinement based upon connectionism is presented, capable of performing rule deletion, rule addition, changing rule quality, and modification of rule strengths.
Abstract: A novel approach to rule refinement based upon connectionism is presented. This approach is capable of performing rule deletion, rule addition, changing rule quality, and modification of rule strengths. The fundamental algorithm is referred to as the Consistent-Shift algorithm. Its basis for identifying incorrect connections is that incorrect connections will often undergo a larger inconsistent weight shift than correct ones during training with correct samples. By properly adjusting the detection threshold, incorrect connections can be uncovered and then deleted or modified. Deletion of incorrect connections and addition of correct connections then translate into the various forms of rule refinement just mentioned. The viability of this approach is demonstrated empirically.
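
A sketch of the Consistent-Shift idea as described: connections whose updates during training on correct samples involve much back-and-forth but little net movement are flagged. The specific inconsistency measure below (total movement minus net movement) is our illustrative choice, not necessarily the paper's:

```python
import numpy as np

def inconsistent_shift(weight_deltas):
    """weight_deltas: array of shape (steps, n_connections), per-step updates."""
    total_movement = np.abs(weight_deltas).sum(axis=0)  # how much each weight moved
    net_movement = np.abs(weight_deltas.sum(axis=0))    # where it ended up
    return total_movement - net_movement                # canceled-out (inconsistent) part

def flag_incorrect_connections(weight_deltas, threshold):
    """Indices of connections whose inconsistent shift exceeds the threshold."""
    scores = inconsistent_shift(weight_deltas)
    return np.where(scores > threshold)[0]              # candidates to delete/modify
```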

Journal ArticleDOI
TL;DR: It is demonstrated that activation pattern controlled rules facilitate the integration of data-driven and command-driven programming, support preventive programming, and allow rule-based programs to be written more transparently.
Abstract: The attractions and drawbacks of data-driven programming are discussed in the context of rule-based forward chaining systems. The relationships between data-driven and command-driven programming are analyzed in the context of a course-registration example. A new form of production rule, called an activation pattern controlled rule, that generalizes classical forward chaining rules is introduced. Activation pattern controlled rules are triggered by calls of commands; that is, by the intention to perform a command, but not necessarily by the result of applying the command itself. We demonstrate that activation pattern controlled rules facilitate the integration of data-driven and command-driven programming, support preventive programming, and allow rule-based programs to be written more transparently. We also survey our experiences in implementing an inference engine for activation pattern controlled rules.
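
A minimal Python sketch of the triggering regime described, with a course-registration flavor matching the abstract's example; all names are invented. Rules fire on the call of a command (the intention), before the command itself runs:

```python
RULES = []  # (command_name, guard, action) triples

def rule(command_name, guard):
    """Register an activation-pattern-controlled rule for a command name."""
    def register(action):
        RULES.append((command_name, guard, action))
        return action
    return register

def call(command, *args):
    # Trigger rules on the activation pattern (the call itself) first...
    for name, guard, action in RULES:
        if name == command.__name__ and guard(*args):
            action(*args)            # e.g., preventive checks before the command
    return command(*args)            # ...and only then perform the command

def enroll(student, course):
    print(f"{student} enrolled in {course}")

@rule("enroll", guard=lambda s, c: c == "AI-101")
def check_prerequisites(student, course):
    print(f"checking prerequisites for {student} before enrolling")

# call(enroll, "ann", "AI-101") runs the check, then performs the enrollment.
```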

Journal ArticleDOI
TL;DR: A hybrid approach to constructing single hidden-layer feed-forward neural network classifiers in which the number of hidden units is determined by cascade-correlation and the weights are learned by back-propagation is suggested.
Abstract: For optimum statistical classification and generalization with single hidden-layer neural network models, two tasks must be performed: (a) learning the best set of weights for a network of k hidden units and (b) determining k, the best complexity fit. We contrast two approaches to the construction of neural network classifiers: (a) standard back-propagation as applied to a series of single hidden-layer feed-forward neural networks with differing numbers of hidden units and (b) a heuristic cascade-correlation approach that quickly and dynamically configures the hidden units in a network and learns the best set of weights for it. Four real-world applications are considered. On these examples, the back-propagation approach yielded somewhat better results, but with far greater computation times. The best complexity fit, k, was quite similar for both approaches. This suggests a hybrid approach to constructing single hidden-layer feed-forward neural network classifiers in which the number of hidden units is determined by cascade-correlation and the weights are learned by back-propagation.
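
A sketch of the suggested hybrid, with a simple grow-until-no-improvement loop standing in for true cascade-correlation (assumed here only to choose k); the final weights are then learned by back-propagation via scikit-learn's MLPClassifier:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def choose_k(X, y, max_k=20):
    """Constructive phase: grow the hidden layer one unit at a time."""
    best_k, best_score = 1, -1.0
    for k in range(1, max_k + 1):
        net = MLPClassifier(hidden_layer_sizes=(k,), max_iter=500, random_state=0)
        score = cross_val_score(net, X, y, cv=3).mean()
        if score <= best_score:        # stop when an extra unit no longer helps
            break
        best_k, best_score = k, score
    return best_k

def hybrid_classifier(X, y):
    k = choose_k(X, y)                 # complexity fit from the constructive phase
    final = MLPClassifier(hidden_layer_sizes=(k,), max_iter=2000, random_state=0)
    return final.fit(X, y)             # weights learned by back-propagation
```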

Journal ArticleDOI
TL;DR: During the past 10 years NASA Langley Research Center has been involved in all three areas of remote manipulation: teleoperation, telerobotics, and robotics; some of the lessons learned are discussed.
Abstract: A popular conception of the evolution of remote manipulation is a progression from teleoperation to telerobotics and then to robotics. This is logical because in going from teleoperation to robotics there would appear to be a continuous decrease in manned workload, an increase in system complexity, and an increase in the amount of “intelligence” in the automated system. During the past 10 years NASA Langley Research Center (LaRC) has been involved in all three areas. The decision on which system is most suitable for a task depends not only on the task, but on the allocation of responsibility for intelligence or high level control. The operator may be responsible for all intelligence and control functions, they may be shared between the operator and the automated system, or they may be performed by an autonomous system. This paper discusses some of the experiences in each area in applications related to possible space tasks and some of the lessons learned.

Journal ArticleDOI
TL;DR: In 1985, NASA instituted a research program in telerobotics to develop and provide the technology for applications of telerobotics to the United States space program; this paper describes the goals, organizing framework, and content of that endeavor.
Abstract: In 1985, NASA instituted a research program in telerobotics to develop and provide the technology for applications of telerobotics to the United States space program. The purpose of this paper is to describe the goals, organizing framework, and content of that endeavor. The body of the paper reviews the actual tasks that comprise the content of the program, which is now seven years old and has evolved significantly in terms of its content, goals, and approach. The lessons learned in that time comprise the organizing framework of the current program. This organizing framework is also described.

Journal ArticleDOI
Nadim Obeid1
TL;DR: A computational model is described which is geared towards providing helpful answers to modal and hypothetical questions in knowledge base systems (KBSs) and touches on formal semantic theories on modality and question answering, intensionality, partiality and belief revision.
Abstract: This paper describes a computational model which is geared towards providing helpful answers to modal and hypothetical questions in knowledge base systems (KBSs). The model is an essential component of a more complete model which aims at giving helpful answers to questions presented in a natural language. The overall work touches on formal semantic theories on modality and question answering (which have been mainly addressed by linguists and semanticists), intensionality, partiality and belief revision. In this paper, we shall mainly be concerned with the question of partiality where we present a three-valued logic, to which we shall refer as K-T, for reasoning with incomplete information and a proof method for the logic. Along the way, we shall lightly touch on other issues such as answerhood, modality, context and helpfulness.
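
The abstract names the logic K-T but does not give its truth tables; as an illustration of reasoning with a third "unknown" value, here are the strong Kleene connectives, a common basis for such three-valued logics (an assumption, not necessarily the K-T definitions):

```python
T, F, U = "t", "f", "u"   # true, false, unknown (incomplete information)

def neg(a):
    return {T: F, F: T, U: U}[a]

def conj(a, b):
    if a == F or b == F:
        return F            # one false conjunct settles the question
    if a == T and b == T:
        return T
    return U                # otherwise the answer is still unknown

def disj(a, b):
    return neg(conj(neg(a), neg(b)))   # De Morgan dual of conjunction

# e.g. conj(T, U) == U but disj(T, U) == T: partial knowledge can still
# settle some queries, which is what helpful answers under partiality require.
```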

Journal ArticleDOI
TL;DR: This paper describes a new framework for interactive graphic interface design that will enable graphic-interface-building tools which are general purpose, interactive, and application-specific.
Abstract: Diagrams communicate massive amounts of information at a glance. Complex domains can be simplified and extended with diagrammatic notations. Computational systems can certainly benefit from the use of diagrams. However, graphic interfaces are difficult and time-consuming to write. We need a way of shortening the graphic-interface building cycle so that it is relatively easy and fast to add a graphic interface to any application that may benefit from it. A general-purpose, graphic-interface-building tool kit that a designer or user, not a programmer, can use to design and attach graphic interfaces to applications can greatly speed up and lower the costs of adding graphics to systems. In this paper, I describe a new framework for interactive graphic interface design. The framework will enable graphic-interface-building tools which are general purpose, interactive, and application-specific. The framework consists of a taxonomy (ontology) of visual properties that span sub-object properties, full objects, and the relationships between objects. The taxonomy forms a skeleton on which to hang methods for manipulating these visual properties, objects, relations, and composites. The methods consist of the generation of prototypes, the recognition of properties in objects, and mouse manipulation functions for modifying properties in an object. Further characteristics of the framework are that the properties are composable, that objects can be explicitly, incrementally described through repeated composition and application of recognition methods, and that the composition of properties to form more fully described and more complex objects is recursive. This makes the framework and the objects within it quite flexible, incremental, uniform, and modular.

Journal ArticleDOI
TL;DR: A preliminary concurrent blackboard system, which carries out a “Penrose tiling” of a planar area, is discussed; this work leads to the general model, an interconnected network of such systems.
Abstract: This paper describes a proposed model for a distributed intelligence system. The system is based on a communicating network of nodes where each node is a concurrent blackboard system. A preliminary concurrent blackboard system, which carries out a “Penrose tiling” of a planar area, is discussed. This work leads to the general model: an interconnected network of such systems. The “shell” for such a system has been developed in the declarative, parallel programming language STRAND. This shell is described and analyzed in some detail, specifically as it functions at a node in the system. Two applications of the proposed approach are discussed.
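
As a toy sketch of one node in the proposed network (the original shell was written in the parallel language STRAND; this sequential Python rendering is only illustrative), a blackboard repeatedly lets any knowledge source whose precondition holds post its contribution until the board is quiescent:

```python
class BlackboardNode:
    def __init__(self, knowledge_sources):
        self.board = {}                    # shared data visible to all sources
        self.sources = knowledge_sources   # (can_act, act) pairs

    def run(self, max_cycles=100):
        for _ in range(max_cycles):
            fired = False
            for can_act, act in self.sources:
                if can_act(self.board):
                    act(self.board)        # post a partial result to the board
                    fired = True
            if not fired:
                break                      # quiescent: nothing left to do
        return self.board
```

In the full model, many such nodes would run concurrently and exchange board contents over the network.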