
Showing papers in "Knowledge Based Systems in 1999"


Journal ArticleDOI
Ian Watson
TL;DR: By describing four applications of case-based reasoning (CBR) that variously use nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology.
Abstract: This paper asks whether case-based reasoning is an artificial intelligence (AI) technology like rule-based reasoning, neural networks or genetic algorithms, or whether it is better described as a methodology for problem solving that may use any appropriate technology. By describing four applications of case-based reasoning (CBR) that variously use nearest neighbour, induction, fuzzy logic and SQL, the author shows that CBR is a methodology and not a technology. The implications of this are discussed.
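
The nearest-neighbour retrieval underlying the first of those applications can be sketched roughly as follows (a minimal illustration with invented case structures and weights, not the paper's implementation):

```python
def retrieve(case_base, query, weights):
    """Return the stored case most similar to the query.

    Similarity is a weighted sum of per-feature matches: 1.0 when the
    feature values are equal, 0.0 otherwise (categorical features only).
    """
    def similarity(case):
        total = sum(weights.values())
        score = sum(w for f, w in weights.items()
                    if case["features"].get(f) == query.get(f))
        return score / total

    return max(case_base, key=similarity)

# Hypothetical travel-domain cases, used only for illustration.
cases = [
    {"features": {"season": "summer", "region": "coast"}, "solution": "beach"},
    {"features": {"season": "winter", "region": "alps"},  "solution": "ski"},
]
best = retrieve(cases, {"season": "winter", "region": "alps"},
                weights={"season": 1.0, "region": 1.0})
```

Real CBR shells refine this with graded per-feature similarity functions rather than strict equality, but the retrieve-by-maximum-similarity skeleton is the same.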

404 citations


Journal ArticleDOI
TL;DR: This article aims at drawing attention to several factors related to rule interestingness that have been somewhat neglected in the literature, and introducing a new criterion to measure attribute surprisingness, as a factor influencing the interestingness of discovered rules.
Abstract: This paper discusses several factors influencing the evaluation of the degree of interestingness of rules discovered by a data mining algorithm. This article aims at: (1) drawing attention to several factors related to rule interestingness that have been somewhat neglected in the literature; (2) showing some ways of modifying rule interestingness measures to take these factors into account; (3) introducing a new criterion to measure attribute surprisingness, as a factor influencing the interestingness of discovered rules.
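
For context, the standard association-rule measures that interestingness criteria typically build on can be computed as below (textbook support, confidence and lift; this is background, not the paper's proposed surprisingness criterion):

```python
def rule_measures(n_ab, n_a, n_b, n_total):
    """Support, confidence and lift for a rule A -> B, given the number
    of records containing both A and B, A alone, B alone, and the total."""
    support = n_ab / n_total
    confidence = n_ab / n_a
    # Lift > 1 means A and B co-occur more often than independence predicts.
    lift = confidence / (n_b / n_total)
    return support, confidence, lift

s, c, l = rule_measures(n_ab=40, n_a=50, n_b=100, n_total=1000)
# support = 0.04, confidence = 0.8, lift = 8.0
```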

286 citations


Journal ArticleDOI
TL;DR: The current limitations of Computer Aided Design tools are discussed and the use of Knowledge Based Engineering (KBE) in the creation of a concept development tool is reported on, to organise information flow and as an architecture for the effective implementation of rapid design solutions.
Abstract: This paper discusses the current limitations of Computer Aided Design (CAD) tools and reports on the use of Knowledge Based Engineering (KBE) in the creation of a concept development tool, to organise information flow and as an architecture for the effective implementation of rapid design solutions. The KBE tool, along with supporting analytical solutions, has been applied to the Body-In-White area of automotive design. The present methods of using CAD and Finite Element Analysis (FEA) systems do not use a unified product/process model representation and lead to the creation of separate non-related data models that only capture the result of the engineering process. The KBE method unifies the engineering intent into a single model that allows existing or novel design solutions to be assessed. These design solutions can then represent themselves in the correct form to the analysis systems. Automeshing is achieved using a rule base that meshes the model with respect to the analysis solution required, materials and processes.

198 citations


Journal ArticleDOI
TL;DR: A general computational framework for solving the mapping problem in model-based systems is proposed and an implementation of the framework within the MOBI-D (Model-Based Interface Designer) interface development environment is shown.
Abstract: Model-based interface development systems have not been able to progress beyond producing narrowly focused interface designs of restricted applicability. We identify a level-of-abstraction mismatch in interface models, which we call the mapping problem, as the cause of the limitations in the usefulness of model-based systems. We propose a general computational framework for solving the mapping problem in model-based systems. We show an implementation of the framework within the MOBI-D (Model-Based Interface Designer) interface development environment. The MOBI-D approach to solving the mapping problem enables for the first time with model-based technology the design of a wide variety of types of user interfaces.

165 citations


Journal ArticleDOI
TL;DR: Let's Browse is an experiment in building an agent to assist a group of people in browsing, by suggesting new material likely to be of common interest, built as an extension to the single user Web browsing agent Letizia.
Abstract: Web browsing, like most of today's desktop applications, is usually a solitary activity. Other forms of media, such as watching television, are often done by groups of people, such as families or friends. What would it be like to do collaborative Web browsing? Could the computer provide assistance to group browsing by trying to help find mutual interests among the participants? Let's Browse is an experiment in building an agent to assist a group of people in browsing, by suggesting new material likely to be of common interest. It is built as an extension to the single user Web browsing agent Letizia. Let's Browse features automatic detection of the presence of users, automated “channel surfing” browsing, and dynamic display of the user profiles and explanation of recommendations.

116 citations


Journal ArticleDOI
TL;DR: This paper presents an external method, MVC (Missing Values Completion), to improve the performance of completion as well as declarativity and interaction with the user, and to use it for the data cleaning step of the Knowledge Discovery in Databases (KDD) process.
Abstract: Many analysis tasks have to deal with missing values and have developed specific, internal treatments to guess them. In this paper we present an external method, MVC (Missing Values Completion), to improve the performance of completion as well as declarativity and interaction with the user. These qualities make it suitable for the data cleaning step of the Knowledge Discovery in Databases (KDD) process [6]. The core of MVC is the RAR algorithm that we proposed in [15]. This algorithm extends the concept of association rules [1] to databases with multiple missing values. It allows MVC to be an efficient preprocessing method: in our experiments with the C4.5 [13] decision tree program, MVC reduced the classification error rate by up to a factor of two, in addition to a significant gain in declarativity.
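
The general idea of completing missing values from discovered rules can be sketched independently of the RAR algorithm itself (a simplified illustration; the rule format and data are invented for the example):

```python
def complete(record, rules):
    """Fill missing (None) fields of a record using completion rules.

    Each rule is (conditions, field, value): if every condition holds
    on the record, predict `value` for `field`.  Rules are assumed
    sorted by decreasing confidence, so earlier rules win.
    """
    filled = dict(record)  # do not mutate the caller's record
    for conditions, field, value in rules:
        if filled.get(field) is None and all(
                filled.get(f) == v for f, v in conditions.items()):
            filled[field] = value
    return filled

# Hypothetical rule mined from the complete part of the data.
rules = [({"species": "sparrow"}, "can_fly", True)]
row = {"species": "sparrow", "can_fly": None}
completed = complete(row, rules)
```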

69 citations


Journal ArticleDOI
TL;DR: It is argued that in some domains this problem analysis process can be significant and proposed an iterative methodology for addressing it and described the application of case-based reasoning to the problem of aircraft conflict resolution in a system called ISAC.
Abstract: Case-Based Reasoning (CBR) has emerged from research in cognitive psychology as a model of human memory and remembering. It has been embraced by researchers of AI applications as a methodology that avoids some of the knowledge acquisition and reasoning problems that occur with other methods for developing knowledge-based systems. In this paper we propose that, in developing knowledge based systems, knowledge engineering addresses two tasks. There is a problem analysis task that produces the problem representation and there is the task of developing the inference mechanism. CBR has an impact on the second of these tasks but helps less with the first. We argue that in some domains this problem analysis process can be significant and propose an iterative methodology for addressing it. To evaluate this, we describe the application of case-based reasoning to the problem of aircraft conflict resolution in a system called ISAC. We describe the application of this iterative methodology and assess the knowledge engineering impact of CBR.

60 citations


Journal ArticleDOI
TL;DR: This article shows how existing knowledge base verification techniques can be applied to verify the commitment of a knowledge-based system to a given ontology, by incorporating translation into the verification procedure.
Abstract: An ontology defines the terminology of a domain of knowledge: the concepts that constitute the domain, and the relationships between those concepts. In order for two or more knowledge-based systems to interoperate—for example, by exchanging knowledge, or collaborating as agents in a co-operative problem-solving process—they must commit to the definitions in a common ontology. Verifying such commitment is therefore a prerequisite for reliable knowledge-based system interoperability. This article shows how existing knowledge base verification techniques can be applied to verify the commitment of a knowledge-based system to a given ontology. The method takes account of the fact that an ontology will typically be expressed in a different knowledge representation language from that of the knowledge base, by incorporating translation into the verification procedure. While the representation languages used are specific to a particular project, their features are general and the method has broad applicability.

53 citations


Journal ArticleDOI
TL;DR: The high-level architecture of the CIM is described, with emphasis on its pilot-perceptible behaviors: Crew Intent Estimation, Page Selection, Symbol Selection/Declutter, Intelligent Window Location, Automated Pan and Zoom, and Task Allocation.
Abstract: The US Army's Rotorcraft Pilot's Associate (RPA) program is developing an advanced, intelligent "associate" system for flight demonstration in a future attack/scout helicopter. A significant RPA component is the intelligent user interface known as the Cockpit Information Manager (CIM). This paper describes the high-level architecture of the CIM, with emphasis on its pilot-perceptible behaviors: Crew Intent Estimation, Page Selection, Symbol Selection/Declutter, Intelligent Window Location, Automated Pan and Zoom, and Task Allocation. We then present the subjective results of recent full mission simulation studies using the CIM to illustrate pilots' attitudes toward these behaviors and their perceived effectiveness.

49 citations


Journal ArticleDOI
TL;DR: ESSE is presented, a prototype expert system for software evaluation that embodies various aspects of the Multiple-Criteria Decision Aid (MCDA) methodology and its main features are the flexibility in problem modeling and the built-in knowledge about software problem solving and software attribute assessment.
Abstract: Solving software evaluation problems is a particularly difficult software engineering process and many different—often contradictory—criteria must be considered in order to reach a decision. This paper presents ESSE, a prototype expert system for software evaluation that embodies various aspects of the Multiple-Criteria Decision Aid (MCDA) methodology. Its main features are the flexibility in problem modeling and the built-in knowledge about software problem solving and software attribute assessment. Evaluation problems are modeled around top-level software attributes, such as quality and cost. Expert assistants guide the evaluator in feeding values to the decision model. ESSE covers all important dimensions of software evaluation through the integration of different technologies.

49 citations


Journal ArticleDOI
TL;DR: PCONFIG is a modern Web-based constraint system managing the complex configuration requirements of one product range from a specific computer manufacturer, covering multiple CPU types, Operating Systems, Option cards and all the diverse multi-way relationships found in the assembly of complex computers.
Abstract: PCONFIG is a modern Web-based constraint system managing the complex configuration requirements of one product range from a specific computer manufacturer. The range spans multiple CPU types, Operating Systems, Option cards and all the diverse multi-way relationships found in the assembly of complex computers. A principled approach to configuration was adopted from the outset with special attention given to ongoing product enhancement. Computer Parts and Engineering Constraints are separated in this unique configuration engine, allowing each to be updated independently, typically by different people. The complex modelling essential in capturing multi-way relationships is dealt with by coding configuration information using a tool boasting a patented pattern-matching algorithm. This highly versatile rules-based, object-orientated development tool encouraged the simplification of a potentially difficult and complex problem into a relatively straightforward and extensible system. This online configuration system goes live on the Digital customer Web pages early next year. Future enhancements to PCONFIG include a parts editor, a constraints editor and ordering methods allowing users to place orders not only by part numbers, but also by system functions and system benefits.
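
The separation of parts data from engineering constraints can be illustrated with a minimal validity checker (a toy sketch with invented part names; PCONFIG's pattern-matching engine is not described in enough detail here to reproduce):

```python
def valid_configuration(parts, constraints):
    """A configuration (a set of part names) is valid when every
    constraint predicate accepts it.  Parts and constraints live in
    separate structures, so either can be updated independently."""
    return all(constraint(parts) for constraint in constraints)

# Hypothetical engineering constraints, kept apart from the parts data.
constraints = [
    lambda p: any(x.startswith("os:") for x in p),          # an OS is required
    lambda p: sum(x.startswith("cpu:") for x in p) == 1,    # exactly one CPU
    lambda p: "card:gfx-pro" not in p or "os:unix" in p,    # card needs this OS
]
ok = valid_configuration({"cpu:alpha", "os:unix", "card:gfx-pro"}, constraints)
bad = valid_configuration({"cpu:alpha", "card:gfx-pro"}, constraints)
```

A production configurator would propagate constraints to suggest repairs rather than merely accept or reject, but the parts/constraints split shown here is the point the abstract emphasises.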

Journal ArticleDOI
TL;DR: A pruning-based method for mapping decision trees to neural networks is described, which can compress the network by removing unimportant and redundant units and connections.
Abstract: There exist several methods for transforming decision trees to neural networks. These methods typically construct the networks by directly mapping decision nodes or rules to the neural units. As a result, the networks constructed are often larger than necessary. This article describes a pruning-based method for mapping decision trees to neural networks, which can compress the network by removing unimportant and redundant units and connections. In addition, equivalent decision trees extracted from the pruned networks are simpler than those induced by well-known algorithms such as ID3 and C4.5.
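
The direct node-to-unit mapping that this article improves on can be sketched as follows (each internal axis-parallel split becomes one threshold unit; the tree encoding is an assumption of this illustration):

```python
def tree_to_units(tree):
    """Map each internal node of a binary decision tree to one
    threshold unit, represented as (feature index, threshold).

    Nodes are dicts: {"feature": i, "threshold": t, "left": ..., "right": ...}
    for splits, or {"label": ...} for leaves.  A pruning-based method
    would then remove units whose contribution is redundant.
    """
    units = []

    def walk(node):
        if "label" in node:          # leaves contribute no hidden units
            return
        units.append((node["feature"], node["threshold"]))
        walk(node["left"])
        walk(node["right"])

    walk(tree)
    return units

tree = {"feature": 0, "threshold": 2.5,
        "left": {"label": "A"},
        "right": {"feature": 1, "threshold": 1.0,
                  "left": {"label": "B"}, "right": {"label": "C"}}}
```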

Journal ArticleDOI
Valerie Barr
TL;DR: In this article, the authors introduce an approach to testing of rule-based systems, which uses coverage measures to guide and evaluate the testing process, and also introduce a complexity metric for rule-bases.
Abstract: Often a rule-based system is tested by checking its performance on a number of test cases with known solutions, modifying the system until it gives the correct results for all or a sufficiently high proportion of the test cases. This method cannot guarantee that the rule-base has been adequately or completely covered during the testing process. We introduce an approach to testing rule-based systems that uses coverage measures to guide and evaluate the testing process. In addition, the coverage measures can be used to assist rule-base pruning and identification of class dependencies, and serve as the foundation for a set of test data selection heuristics. We also introduce a complexity metric for rule-bases.
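
The coverage idea can be illustrated by instrumenting a toy forward-chaining interpreter to record which rules fire over a test suite (a sketch of the general notion, not the specific measures the article proposes):

```python
def run_with_coverage(rules, initial_facts):
    """Forward-chain to a fixed point, recording which rules fire.

    Each rule is (name, premises, conclusion), where premises is a set
    of facts.  Returns the derived facts and the names of fired rules.
    """
    fired = set()
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired.add(name)
                changed = True
    return facts, fired

rules = [("r1", {"a"}, "b"), ("r2", {"b"}, "c"), ("r3", {"x"}, "y")]
facts, fired = run_with_coverage(rules, {"a"})
coverage = len(fired) / len(rules)   # r3 never fires, so coverage is 2/3
```

A test suite that leaves rules unfired (here, r3) is exactly the situation the abstract warns a pass/fail regime cannot detect.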

Journal ArticleDOI
David McSherry
TL;DR: An algorithm for decision-tree induction is presented in which attribute selection is based on the evidence-gathering strategies used by doctors in sequential diagnosis, and an implementation of the algorithm in an environment providing integrated support for incremental learning, problem solving and explanation is presented.
Abstract: An algorithm for decision-tree induction is presented in which attribute selection is based on the evidence-gathering strategies used by doctors in sequential diagnosis. Since the attribute selected by the algorithm at a given node is often the best attribute according to Quinlan's information gain criterion, the decision tree it induces is often identical to the ID3 tree when the number of attributes is small. In problem-solving applications of the induced decision tree, an advantage of the approach is that the relevance of a selected attribute or test can be explained in strategic terms. An implementation of the algorithm in an environment providing integrated support for incremental learning, problem solving and explanation is presented.
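
For comparison, Quinlan's information gain criterion mentioned above can be computed as follows (the standard ID3 formula, with invented toy diagnostic data; McSherry's evidence-gathering selection is not reproduced here):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def information_gain(rows, attr, target):
    """Expected reduction in target entropy from splitting rows on attr."""
    labels = [r[target] for r in rows]
    gain = entropy(labels)
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

rows = [{"fever": "yes", "flu": "yes"}, {"fever": "yes", "flu": "yes"},
        {"fever": "no", "flu": "no"}, {"fever": "no", "flu": "no"}]
```

Here splitting on "fever" yields perfectly pure subsets, so the gain equals the full 1 bit of initial uncertainty.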

Journal ArticleDOI
TL;DR: A hybrid-reasoning algorithm is proposed which employs a number of statistical models, derived from analysis of the entire dataset, as an alternative reasoning method; results show that these models enable the experimental system to propose better solutions than those based only on the closest matched cases.
Abstract: Case-based reasoning (CBR) has been widely used in many real-world applications. In general, CBR systems propose their answers based on solutions attached to the most similar cases retrieved from their case bases. However, in our vehicle insurance domain, where the dataset contains a large amount of inconsistencies, proposing solutions based only on the most similar cases results in unacceptable answers. In this article, we propose a hybrid-reasoning algorithm which employs a number of statistical models derived from analysis of the entire dataset as an alternative reasoning method. Results of our experiments have shown that the use of these models enables our experimental system to propose better solutions than those proposed based only on the closest matched cases.

Journal ArticleDOI
TL;DR: This paper focuses on the different types of events supported by the active database CLOSE; a set of primitive and composite events is presented together with examples of their application.
Abstract: Traditional database systems are designed to manage large volumes of data, but rule support is generally not a strong feature. On the other hand, expert systems have the deductive capability to manage rule processing. A coupling between these two systems to support advanced database applications results in what is termed an active database. An approach to implementing active database systems is to represent knowledge as Event–Condition–Action (ECA) rules. ECA rules determine when and how to react to different kinds of events. This paper focuses on the different types of events supported by the active database CLOSE. A set of primitive and composite events is presented together with examples of their application. Event detection in this system is also discussed.
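
The ECA pattern itself can be sketched as a small dispatcher (illustrative only, unrelated to CLOSE's event detection; the insert-rejection rule is invented):

```python
class ECAEngine:
    """Minimal Event-Condition-Action dispatcher: when an event is
    raised, every rule registered for that event type runs its action
    if its condition holds on the event payload."""

    def __init__(self):
        self.rules = []   # list of (event_type, condition, action)

    def register(self, event_type, condition, action):
        self.rules.append((event_type, condition, action))

    def raise_event(self, event_type, payload):
        for etype, condition, action in self.rules:
            if etype == event_type and condition(payload):
                action(payload)

log = []
engine = ECAEngine()
# Hypothetical integrity rule: reject inserted rows with negative quantity.
engine.register("insert",
                condition=lambda row: row["qty"] < 0,
                action=lambda row: log.append(("reject", row["id"])))
engine.raise_event("insert", {"id": 1, "qty": -5})
engine.raise_event("insert", {"id": 2, "qty": 3})
```

Composite events of the kind the paper catalogues (sequences, disjunctions, intervals) would be built by having detectors consume these primitive events and raise new ones.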

Journal ArticleDOI
TL;DR: The implementation of a case-based reasoning (CBR) system to support heating, ventilation and air conditioning systems sales staff operating in remote locations, using the new standard of XML as a communications protocol between client and server side Java applets.
Abstract: This paper describes the implementation of a case-based reasoning (CBR) system to support heating, ventilation and air conditioning (HVAC) systems sales staff operating in remote locations. The system operates on the World Wide Web and uses the new standard of XML as a communications protocol between client and server side Java applets. The paper also describes the motivation for the system, its implementation, trial and roll-out, detailing the benefits it has provided to the company.

Journal ArticleDOI
TL;DR: An approximate declarative semantics for rule-based knowledge bases is developed and a formal definition and analysis of knowledge base inconsistency, redundancy, circularity and incompleteness is provided in terms of theories in the first order predicate logic.
Abstract: Despite the fact that there has been a surge of publications in verification and validation of knowledge-based systems and expert systems in the past decade, there are still gaps in the study of verification and validation (V&V) of expert systems, not the least of which is the lack of appropriate semantics for expert system programming languages. Without a semantics, it is hard to formally define and analyze knowledge base anomalies such as inconsistency and redundancy, and it is hard to assess the effectiveness of V&V tools, methods and techniques that have been developed or proposed. In this paper, we develop an approximate declarative semantics for rule-based knowledge bases and provide a formal definition and analysis of knowledge base inconsistency, redundancy, circularity and incompleteness in terms of theories in the first order predicate logic. In the paper, we offer classifications of commonly found cases of inconsistency, redundancy, circularity and incompleteness. Finally, general guidelines on how to remedy knowledge base anomalies are given.
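
Two of the anomaly classes discussed, redundancy and inconsistency, can be illustrated for a propositional rule base (a toy detector; the paper's formal analysis is carried out in first order predicate logic):

```python
def find_anomalies(rules):
    """Flag two simple anomaly classes in a propositional rule base:
    redundancy (two identical rules) and inconsistency (same premises,
    contradictory conclusions).  Rules are (premises, conclusion) with
    negation written as ('not', p)."""
    anomalies = []
    for i, (prem_i, concl_i) in enumerate(rules):
        for prem_j, concl_j in rules[i + 1:]:
            if prem_i == prem_j:
                if concl_i == concl_j:
                    anomalies.append(("redundant", concl_i))
                elif concl_j == ("not", concl_i) or concl_i == ("not", concl_j):
                    anomalies.append(("inconsistent", concl_i))
    return anomalies

rules = [({"a"}, "b"),            # a -> b
         ({"a"}, ("not", "b")),   # a -> not b : inconsistent with the above
         ({"c"}, "d"),
         ({"c"}, "d")]            # duplicate : redundant
```

Circularity and incompleteness, the other anomaly classes in the paper, need reachability analysis over the rule graph rather than this pairwise check.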

Journal ArticleDOI
TL;DR: An overview of a research program that emphasizes empirically based understanding of situationally determined limitations in users' "resources" and the use of an explicit model of the relevant causal relationships forms the basis of the Ready prototype.
Abstract: Situationally determined limitations in users' "resources" (e.g. time and working memory) constitute an increasingly important challenge to adaptive interfaces—one which does not yield easily to straightforward solutions. This article gives an overview of a research program that emphasizes empirically based understanding of this problem and the use of an explicit model of the relevant causal relationships. After introducing the challenge and comparing several possible approaches to it, we summarize related work on adaptive systems and the empirical research that forms the basis of the Ready prototype. The structure and workings of this prototype are discussed and illustrated with examples. We conclude by summarizing how the results obtained so far can form the basis for practically applied systems.

Journal ArticleDOI
TL;DR: In this work an integrated and distributed ES is developed which supervises the control system of the whole treatment plant and has the capability to learn from the correct or wrong solutions given to previous cases.
Abstract: The activated sludge process is a commonly used method for treating wastewater. Due to the biological nature of the process it is characterized by poorly understood basic biological behavior mechanisms, a lack of reliable on-line instrumentation, and by control goals that are not always clearly stated. It is generally recognized that an Expert System (ES) can cope with many of the common problems related to the operation and control of the activated sludge process. In this work an integrated and distributed ES is developed which supervises the control system of the whole treatment plant. The system has the capability to learn from the correct or wrong solutions given to previous cases. The structure of the suggested ES is analyzed and the supervision of the local controllers is described. In this way, the main problems of conventional control strategies and individual knowledge-based systems are overcome.

Journal ArticleDOI
TL;DR: A knowledge perspective is developed which does justice to the impact of KBS on both articulated and tacit knowledge at the strategic, tactical and operational level.
Abstract: The fact that knowledge-based systems (KBS) may have considerable impact when introduced into an organisation is beyond dispute. The assessments of this impact in the literature, however, are not satisfactory. They overlook the main discriminating characteristic of KBS, i.e. the fact that KBS claim to store and handle knowledge. The article explores ways for bringing ‘knowledge’ into discussions of the impact of KBS. A knowledge perspective is developed which does justice to the impact of KBS on both articulated and tacit knowledge at the strategic, tactical and operational level. Possible applications of this perspective are explored with illustrations from an empirical investigation of KBS in 17 organisations.

Journal ArticleDOI
TL;DR: A number of techniques for pruning a classifier ensemble which is overfit on its training set are examined and it is found that a real valued GA is at least as good as the best heuristic search algorithm for choosing an ensemble weighting.
Abstract: Ensemble classifiers and algorithms for learning ensembles have recently received a great deal of attention in the machine learning literature (R.E. Schapire, Machine Learning 5(2) (1990) 197–227; N. Cesa-Bianchi, Y. Freund, D. Haussler, D.P. Helbold, R.E. Schapire, M.K. Warmuth, Proceedings of the 25th Annual ACM Symposium on the Theory of Computing, 1993, pp. 382–391; L. Breiman, Bias, Technical Report 460, Statistics Department, University of California, Berkeley, CA, 1996; J.R. Quinlan, Proceedings of the 14th International Conference on Machine Learning, Italy, 1997; Y. Freund, R.E. Schapire, Proceedings of the 13th International Conference on Machine Learning ICML96, Bari, Italy, 1996, pp. 148–157; A.J.C. Sharkey, N.E. Sharkey, Combining diverse neural nets, The Knowledge Engineering Review 12 (3) (1997) 231–247). In particular, boosting has received a great deal of attention as a mechanism by which an ensemble of classifiers that has a better generalisation characteristic than any single classifier derived using a particular technique can be discovered. In this article, we examine and compare a number of techniques for pruning a classifier ensemble which is overfit on its training set and find that a real valued GA is at least as good as the best heuristic search algorithm for choosing an ensemble weighting.
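
A real-valued GA for choosing an ensemble weighting can be sketched as follows (a toy mutation-only variant with arbitrary hyperparameters and invented data, not the authors' algorithm):

```python
import random

def ensemble_accuracy(weights, predictions, labels):
    """Accuracy of a weighted vote over binary classifiers.
    predictions[i][j] is classifier i's vote (0 or 1) on example j."""
    correct = 0
    for j, label in enumerate(labels):
        score = sum(w * (1 if p[j] == 1 else -1)
                    for w, p in zip(weights, predictions))
        correct += (1 if score > 0 else 0) == label
    return correct / len(labels)

def evolve_weights(predictions, labels, pop=20, gens=30, seed=0):
    """Real-valued GA: keep the fitter half of a population of weight
    vectors each generation, refill by Gaussian mutation of the elite."""
    rng = random.Random(seed)
    n = len(predictions)
    population = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(
            key=lambda w: -ensemble_accuracy(w, predictions, labels))
        elite = population[:pop // 2]
        population = elite + [
            [max(0.0, w + rng.gauss(0, 0.1)) for w in parent]
            for parent in elite]
    return population[0]

# Toy validation set: classifier 0 is always right, classifier 1 always wrong;
# the GA should learn to weight classifier 0 above classifier 1.
preds = [[1, 0, 1, 0], [0, 1, 0, 1]]
labels = [1, 0, 1, 0]
best_weights = evolve_weights(preds, labels)
```

Dropping crossover keeps the sketch short; the fitness function is where overfit ensemble members get down-weighted, which is the pruning effect the article studies.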

Journal ArticleDOI
TL;DR: The Protocol Assistant can be used either as a “wizard” which guides users through the decision-making process, or as a “hypertext manual” which leads them to the information relevant to the decision they are making.
Abstract: The Protocol Assistant is a knowledge-based system, developed by the Department of Artificial Intelligence and AIAI at the University of Edinburgh, which advises on the treatment of parotid tumours. It has been developed to support both adherence to a clinical protocol based on the latest evidence and the use of clinical judgment where the evidence is weak or inconsistent. It was developed using a knowledge modelling technique named PROforma, which is specifically designed for representing best practice guidelines; the PROforma models were used as the basis for a user interface, which was implemented in HTML. A set of rules was developed in JESS (the Java Expert System Shell) capable of “running” the protocol; a simple method of reasoning with certainties, based on the “goodness” of each relevant item of published evidence, was used to recommend which path to follow at choice points. However, the user is also supplied with access to the abstracts of all relevant published articles, using the hypertext facilities of HTML. The Protocol Assistant can thus be used either as a “wizard” which guides users through the decision-making process, or as a “hypertext manual” which leads them to the information relevant to the decision they are making. This dual-role capability is crucial for the acceptance of KBS in the real world.

Journal ArticleDOI
TL;DR: RDR and its approach to V&V are described, concentrating particularly on recent extensions which use Rough Set Theory for verification and Formal Concept Analysis for validation.
Abstract: Ripple-Down Rules (RDR) are an alternative to mainstream approaches to building knowledge-based systems (KBS). RDR use simple but reliable techniques for Knowledge Acquisition (KA) and representation which have been shown to support the on-line development, maintenance and validation of KBS. Key features of RDR that affect the approach to Verification and Validation (V&V) are the incremental nature of KA and maintenance, the use of cases for KA and validation, the use of an exception structure for knowledge representation, and the development of KBS by experts. This article describes RDR and its approach to V&V, concentrating particularly on recent extensions which use Rough Set Theory for verification and Formal Concept Analysis for validation.

Journal ArticleDOI
TL;DR: This article describes the Coverage tool—an extension of Cover designed to perform anomaly detection on multiple-agent systems, verifying that the whole group of agents forms a complete and coherent team.
Abstract: The increasing development of distributed knowledge-based systems based upon the multiple-agent paradigm demands techniques for the verification of these systems. As a minimum requirement, it is necessary to verify that the individual agents are capable of fulfilling their advertised capabilities, and that the whole group of agents forms a complete and coherent team. Anomaly detection, as performed by the Cover tool, has proven to be a useful method for verifying logical properties of stand-alone knowledge-based systems. This article describes the Coverage tool—an extension of Cover designed to perform anomaly detection on multiple-agent systems. Coverage checks a multiple-agent system at several levels to verify that the system forms a coherent and complete team. The article includes a running example of a multiple-agent system verified using Coverage.

Journal ArticleDOI
TL;DR: The proposed enhanced nearest neighbour learning algorithm allows one to define similarity on a wide spectrum of attribute types and automatically assigns to each attribute a weight of its importance with respect to the target attribute.
Abstract: Nearest neighbour algorithms classify a previously unseen input case by finding similar cases to make predictions about the unknown features of the input case. The usefulness of the nearest neighbour algorithms has been demonstrated in many real-world domains. Unfortunately, most of the similarity measures discussed in the current nearest neighbour learning literature handle only limited data types, thus limiting their applicability to relational database applications. In this paper, we propose an enhanced nearest neighbour learning algorithm that is applicable to relational databases. The proposed method allows one to define similarity on a wide spectrum of attribute types. It automatically assigns to each attribute a weight of its importance with respect to the target attribute. The method has been implemented as a computer program and its effectiveness has been tested on four publicly available machine learning databases. Its performance is compared to another well-known machine learning method, C4.5. Our experimentation with the system demonstrates that the classification accuracy of the proposed system was superior to that of C4.5 in most cases.

Journal ArticleDOI
TL;DR: Results from trials indicate that, although not entirely sufficient, process-related knowledge, in the form of argumentation, is useful to software designers who have reason to re-use that knowledge.
Abstract: This paper presents a discussion of the facilitation of expertise sharing among software design stakeholders. LOUIS, a prototype research tool, helps to capture much of the contextual information and knowledge in early design meetings that are very often lost soon after the meetings are over. Using LOUIS as a basis, we consider knowledge and skill transfer effects in software design tasks. Results from trials indicate that, although not entirely sufficient, process-related knowledge, in the form of argumentation, is useful to software designers who have reason to re-use that knowledge. A list of recommendations on how those knowledge transfer effects can be further facilitated is provided.

Journal ArticleDOI
TL;DR: A system is described which uses rhetorical reasoning rules, such as fairness, reciprocity and deterrence, to simulate the text of a debate modelled using modern argumentation theory.
Abstract: The article introduces argumentation theory, gives examples of computational models of argumentation and of their applications, and considers the significance and difficulties of the field. It then describes a system which uses rhetorical reasoning rules, such as fairness, reciprocity and deterrence, to simulate the text of a debate. The text was modelled using modern argumentation theory, and this model was used to build the system. The article discusses the system with regard to several aspects: its ability to deal with meaningful contradiction, such as claim X supporting claim Y while claim Y attacks claim X; recursive arguments; inconsistency maintenance; expressiveness; encapsulation; the use of definitions as the basis for rules; and making generalisations using taxonomies. The article concludes with a discussion of domain dependence, rule plausibility, and some comparisons with formal logic.

Journal ArticleDOI
TL;DR: In this article, two related reviews are presented: the first examines some aspects of idea processors and compares them with existing work from the AI research community, and the second is concerned with using retrospective analysis for technical invention (particularly the invention of artifacts).
Abstract: In this article idea processors are studied as creativity supporting systems. Two related reviews are presented. The first review examines some aspects of idea processors and compares them with existing work from the AI research community. The second review is concerned with using retrospective analysis for technical invention (particularly the invention of artifacts). Retrospective analysis not only opens new opportunities for knowledge-based approaches in idea generation, but also offers new mechanisms to deal with some problems faced by idea processors using object-oriented features. To illustrate our new approach to idea generation, we also provide a brief sketch of our project, which uses a Prolog program to implement some key ideas presented in this article.

Journal ArticleDOI
TL;DR: CONSTRAINTCAM is a real-time camera visualization interface for dynamic 3D worlds that allows users to indicate which object(s) to view, how each should be viewed, what cinematic style and pace to employ, and how to respond when a single satisfactory view is not possible.
Abstract: In next-generation virtual 3D simulation, training, and entertainment environments, intelligent visualization interfaces must respond to user-specified viewing requests so users can follow salient points of the action and monitor the relative locations of objects. Users should be able to indicate which object(s) to view, how each should be viewed, what cinematic style and pace to employ, and how to respond when a single satisfactory view is not possible. When constraints fail, weak constraints can be relaxed or multi-shot solutions can be displayed in sequence or as composite shots with simultaneous viewports. To address these issues, we have developed CONSTRAINTCAM, a real-time camera visualization interface for dynamic 3D worlds.