
Showing papers in "Knowledge Based Systems in 1992"


Journal ArticleDOI
TL;DR: The paper considers what it means to capture design knowledge by embodying it in procedures that are expressible in a computer program, distinguishing several possible purposes for such an exercise and describes design phenomena that a computational strategy of this kind would have to reproduce.
Abstract: The paper considers what it means to capture design knowledge by embodying it in procedures that are expressible in a computer program, distinguishing several possible purposes for such an exercise. Following the lead of David Marr's computational approach to vision, emphasis is placed on 'phenomenological equivalence' — that is, first defining the functions of designing, and then specifying how people design. The paper goes on to describe design phenomena that a computational strategy of this kind would have to reproduce. All of them are integral to a view of designing as reflective conversation with the materials of a design situation, and depend on the idea of distinctive design worlds constructed by the designer. These phenomena include: the designer's seeing-moving-seeing, the construction of figures from marks on a page, the appreciation of design qualities, the evolution of design intentions in the course of the design process, the recognition of unintended consequences of move experiments, the storage and deployment of prototypes, which must be placed in transaction with the design situation, and communication across divergent design worlds. Considered as performance criteria for a phenomenologically equivalent computational designer, these phenomena are formidable and threatening. Considered as performance criteria for the construction of a computer-based design assistant, however, they may be highly evocative.

668 citations


Journal ArticleDOI
TL;DR: The paper focuses on the integration of specification, construction, and a catalogue of prestored design objects in those environments to illustrate how such integrated design environments empower human designers.
Abstract: Designers deal with ill defined and wicked problems that are characterized by fluctuating and conflicting requirements. Traditional design methodologies that are based on the separation between problem setting (analysis) and problem solving (synthesis) are inadequate for the solution of these problems. The supporting of design with computers requires a cooperative problem-solving approach that empowers designers with integrated, domain-oriented, knowledge-based design environments. The paper describes the motivation for the latter approach, and introduces an architecture for such design environments. It focuses on the integration of specification, construction, and a catalogue of prestored design objects in those environments to illustrate how such integrated design environments empower human designers. The Catalog Explorer system component, which is described in detail, assists designers in the location of examples in the catalogue that are relevant to the task at hand, as partially articulated by the current specification and construction. Users are thereby relieved of the tasks of forming queries or navigating in information spaces. The last part of the paper discusses the relationship of the work with the conceptual framework developed by Donald Schon.

106 citations


Journal ArticleDOI
TL;DR: The role of search processes and knowledge used in the activity of designing has been receiving increasing attention in the areas of artificial intelligence, the study of design, and cognitive psychology.
Abstract: The role of search processes and the knowledge used in the activity of designing has been receiving increasing attention in the areas of artificial intelligence, the study of design, and cognitive psychology. More recently, with the development of expert systems, considerable attention has been focused on the representation and retrieval of expert knowledge. What has not been addressed, however, is whether or not an expert represents, accesses and utilizes all knowledge equivalently. The design-fixation effect, where pictures of design instances presented as part of a design problem result in the reproduction, by student and practising designers, of aspects of the presented instance, indicates that certain types of knowledge are more privileged in the design process than others. The results of a preliminary experiment, which examined the role of pictures of different types of instances, verbal descriptions of those instances, and variations in the familiarity of instances in the fixation effect, are reported. Pictorial information was shown to have no effect if the instance was unfamiliar, and equally familiar pictures were found to produce both design fixation and increased variety in design. Verbal descriptions of two of the pictured designs also produced effects, although these were much reduced in comparison with those of the pictorial material that produced fixation effects. While preliminary, these results indicate a particularly important direction for research, the results of which could be important for artificial intelligence, the study of design, and the psychology of design processes.

105 citations


Journal ArticleDOI
TL;DR: The paper presents the type of analysis that cognitive psychology makes of the mental activities involved in design at three levels: the way in which designers organize their activity, the main strategies that they adopt, and the problem-solving processes that they use.
Abstract: Through examples of data from three empirical design studies, the paper presents the type of analysis that cognitive psychology makes of the mental activities involved in design. These activities are analysed at three levels: the way in which designers organize their activity, the main strategies that they adopt, and the problem-solving processes that they use. Different types of design task are presented, i.e. functional-specification, software and composite-structure design tasks. The relevance of the results to artificial intelligence is discussed from an assistance viewpoint.

54 citations


Journal ArticleDOI
TL;DR: A novel method, the 'multi-level hierarchical retrieval method', that exploits redundancy to improve both space and execution time efficiency is described.
Abstract: As large databases of conceptual graphs are developed for complex domains, efficient retrieval techniques must be developed to manage the complexity of graph-matching while maintaining reasonable space requirements. This paper describes a novel method, the 'multi-level hierarchical retrieval method', that exploits redundancy to improve both space and execution time efficiency. The method involves search in multiple partially ordered (by 'more-general-than') hierarchies, such that search in a simpler hierarchy reduces the search time in the hierarchy of next complexity. The specific hierarchies used are: the traditional partial order over conceptual graphs; a partial order over node descriptors; a partial order over 'descriptor units'; and, finally, the traditional type hierarchy, which is the simplest partial order.
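The core idea — screening candidates through a cheap hierarchy before attempting expensive graph matching — can be sketched as follows. This is an illustrative reduction, not the paper's algorithm: the `TYPE_PARENTS` table, the type names, and the `candidate_filter` helper are all invented for the example.

```python
# A cheap level-1 filter over the type hierarchy: a stored graph survives
# only if every type required by the query subsumes one of its types.
# Full (expensive) graph matching would run only on the survivors.

TYPE_PARENTS = {
    "Cat": "Animal", "Dog": "Animal", "Mouse": "Animal",
    "Animal": "Entity",
    "Ball": "Entity", "Car": "Entity", "Road": "Entity",
    "Entity": None,
}

def subsumes(general, specific):
    """True if `general` equals `specific` or is an ancestor of it."""
    while specific is not None:
        if specific == general:
            return True
        specific = TYPE_PARENTS.get(specific)
    return False

def candidate_filter(query_types, graph_types):
    """Every query type must subsume at least one type in the stored graph."""
    return all(any(subsumes(q, g) for g in graph_types) for q in query_types)

# Toy database: each stored conceptual graph summarized by its concept types.
db = {
    "g1": {"Cat", "Mouse"},
    "g2": {"Dog", "Ball"},
    "g3": {"Car", "Road"},
}
hits = [g for g, types in db.items() if candidate_filter({"Animal"}, types)]
```

Here a query about any `Animal` prunes `g3` without any graph matching at all, which is the kind of saving the layered hierarchies aim at.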

49 citations


Journal ArticleDOI
TL;DR: The authors believe that word meanings and utterance meanings are isomorphic, in the sense that words, sentences and texts are simply different units for conveying a message, and the core meanings of words and texts can be expressed by the same formalism: conceptual graphs.
Abstract: A good lexical component is a vital part of any natural-language system. The paper discusses an implemented lexical component that is part of a larger system currently being developed for information retrieval. Two special features of the system are its unique knowledge-representation formalism on the various levels, conceptual graphs, and a unique lexicon for the parser and the generator. Lexical choices depend on various knowledge sources (pragmatic, conceptual, linguistic etc.). The conceptual component, i.e. the words' underlying meanings or definitions, is discussed in the paper. The authors believe that word meanings and utterance meanings are isomorphic, in the sense that (a) words, sentences and texts are simply different units for conveying a message (words being shorthand labels for larger conceptual chunks), and (b) the core meanings of words and texts (sentences) can be expressed by the same formalism: conceptual graphs. This view allows the process of lexical choice to be modelled by matching definition graphs (word definitions) on an utterance graph (conceptual input). Further, it provides a natural basis for paraphrases and for explanations concerning the conceptual differences between a set of words.
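The matching of definition graphs onto an utterance graph can be illustrated with a deliberately simplified sketch in which graphs are flattened to sets of relation triples. The lexicon entries and triples below are invented examples, not the paper's formalism, which uses full conceptual graphs rather than triple sets.

```python
# Lexical choice as subgraph matching: a word is a candidate realization of
# the utterance if its definition graph is contained in the utterance graph.

def matches(definition, utterance):
    # With graphs flattened to triple sets, projection reduces to subset test.
    return definition <= utterance

utterance = {
    ("MOVE", "agent", "PERSON"),
    ("MOVE", "manner", "FAST"),
    ("MOVE", "medium", "GROUND"),
}

lexicon = {
    "run":  {("MOVE", "agent", "PERSON"), ("MOVE", "manner", "FAST")},
    "swim": {("MOVE", "agent", "PERSON"), ("MOVE", "medium", "WATER")},
}

choices = [word for word, defn in lexicon.items() if matches(defn, utterance)]
```

The same containment test also hints at the paraphrase and explanation uses mentioned above: comparing two definition graphs exposes exactly the triples on which the words differ.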

40 citations


Journal ArticleDOI
Bernard Moulin1
TL;DR: A framework for representing discourse temporal structures on the basis of the conceptual graph approach is proposed and it is shown how the discourse conceptual structure can be enriched so that the various linguistic properties of discourses can be integrated.
Abstract: ‘Time’ has been studied by researchers in several disciplines, such as philosophy, logic, linguistics, artificial intelligence and physics. The temporal models that have been proposed usually present significant differences which reflect the different research purposes of their authors. The study of temporal knowledge in discourse can benefit from this variety of approaches. The paper proposes a framework for representing discourse temporal structures on the basis of the conceptual graph approach. Three modelling levels are distinguished that help in the classification of the origin of temporal information that is found in a discourse: the world, conceptual and linguistic levels. A model is proposed for the specification of world-level representations, i.e. temporal situations, time intervals and temporal relationships. Conceptual-level notions that are used to model time-coordinate systems in discourses, i.e. temporal perspectives and localizations, are introduced. These different knowledge structures allow the representation of the discourse conceptual structure. It is shown how the discourse conceptual structure can be enriched so that the various linguistic properties of discourses (anaphoras, sentence markers) can be integrated.

25 citations


Journal ArticleDOI
TL;DR: The authors presents conceptual graphs as a synthesis of the logicist and AI representations designed to support the requirements of both the model-theoretic tradition and the more computational AI tradition, and evaluates their adequacy for various kinds of semantic information that must be stored in the lexicon.
Abstract: The lexical entry for a word must contain all the information needed to construct a semantic representation for sentences that contain the word. Because of that requirement, the formats for lexical representations must be as detailed as the semantic forms. Simple representations, such as features and frames, are adequate for resolving many syntactic ambiguities. However, since those notations cannot represent all of logic, they are incapable of supporting all the functions needed for semantics. Richer semantic-based approaches have been developed in both the model-theoretic tradition and the more computational AI tradition. Although superficially in conflict, these two traditions have a great deal in common at a deeper level. Both of them have developed semantic structures that are capable of representing a wide range of linguistic phenomena. The paper compares these approaches, and evaluates their adequacy for various kinds of semantic information that must be stored in the lexicon. It presents conceptual graphs as a synthesis of the logicist and AI representations designed to support the requirements of both.

23 citations



Journal ArticleDOI
TL;DR: The nature of design and how the coordination of diverse resources is required to enable the many different factors that affect a product throughout its life to be considered are considered.
Abstract: The paper considers the nature of design and how the coordination of diverse resources is required to enable the many different factors that affect a product throughout its life to be considered. A discussion of related work illustrates some of the methods for organizing knowledge and controlling design activity. A software architecture that is based on a blackboard model with knowledge sources related in a hierarchy is presented as a means of supporting design coordination. Its use is demonstrated with an example application for the electromagnetic design of a turbine generator.
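A blackboard model of the kind described — knowledge sources posting partial results to a shared structure until the design converges — can be sketched minimally as below. The knowledge-source names, the toy electromagnetic quantities and the opportunistic control loop are all invented stand-ins, not the paper's architecture.

```python
# Minimal blackboard loop: each knowledge source fires when its required
# entries exist on the blackboard and it has not yet produced its output.

class KnowledgeSource:
    def __init__(self, name, needs, produces, fn):
        self.name, self.needs, self.produces, self.fn = name, needs, produces, fn

    def ready(self, bb):
        return all(k in bb for k in self.needs) and self.produces not in bb

def run(blackboard, sources):
    progress = True
    while progress:                       # fire until no source can act
        progress = False
        for ks in sources:
            if ks.ready(blackboard):
                blackboard[ks.produces] = ks.fn(blackboard)
                progress = True
    return blackboard

# Two toy sources: a flux estimate from current, then an EMF from flux.
sources = [
    KnowledgeSource("flux", ["current"], "flux", lambda bb: bb["current"] * 0.2),
    KnowledgeSource("emf", ["flux", "speed"], "emf", lambda bb: bb["flux"] * bb["speed"]),
]
result = run({"current": 100.0, "speed": 3.0}, sources)
```

The `emf` source cannot fire until `flux` has posted, so the ordering of work emerges from data dependencies rather than a fixed schedule, which is the coordination property the abstract emphasizes.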

11 citations


Journal ArticleDOI
TL;DR: The current working ICADS prototype is a distributive system that can run with a variety of hardware processors in a Unix environment and includes an expert design advisor that has six knowledge-based systems working as domain experts, a blackboard-coordination expert, two knowledge bases, and several sources of reference data.
Abstract: An Intelligent Computer Aided Design System ( ICADS ) is under long-term development. Central to this work is the philosophy that the system should be a valuable assistant to the designer throughout the entire design activity. The current working ICADS prototype is a distributive system that can run with a variety of hardware processors in a Unix environment. It includes an expert design advisor that has six knowledge-based systems working as domain experts, a blackboard-coordination expert, two knowledge bases, and several sources of reference data. The expert design advisor in the prototype interprets a drawing as it is being made by a designer working in a CAD environment. It reacts in real time to monitor the evolving floorplan from the viewpoints of experts in the domains of access, climate, cost, lighting, sound and structure. The system also provides for additional interaction with the designer outside the CAD system to help in the realization of a resultant design that is free from conflicts in the six areas listed above.

Journal ArticleDOI
TL;DR: The AGENTS system combines constructs from both object-oriented and deductive systems with Prolog-like syntax and semantics, and its logical simplicity supports fast prototyping.
Abstract: Engineering domains such as manufacturing design are complicated, while expert systems are dedicated to the solution of problems within narrow domains. The range of problem solving can be extended by cooperating knowledge-based systems, each of which may be confined to a narrow and specialized domain. A tandem architecture is proposed where a knowledge source is composed of two parts: agent and system. A system is developed in a private proof language mainly for computation. On the other hand, an agent is a representative of a system developed in a common argument language mainly for cooperation. This meets requirements of modularity and commonality. The paper presents a tool called AGENTS, which is an object-oriented Prolog system for this purpose. The AGENTS system combines constructs from both object-oriented and deductive systems with Prolog-like syntax and semantics. Its logical simplicity makes it well suited to fast prototyping.

Journal ArticleDOI
TL;DR: The paper revisits the issue of the 'neat' versus scruffy philosophies applied to the building of AI systems, and characterizes the nature of recent relevant work, with particular emphasis on the Cyc project.
Abstract: Two different roads to the modelling of intelligence and the building of intelligent systems have been proposed. These are the 'neat' (logical) versus scruffy (ad hoc) philosophies applied to the building of AI systems. The paper revisits this issue, and characterizes the nature of recent relevant work, with particular emphasis on the Cyc project. A constructionist perspective that is akin to Piagetian work and Sowa's crystallizing of theory is espoused. This perspective notes that scruffy, bottom-up methods provide a developmental basis for more formal theories which, in turn, provide a further bootstrapping of subsequent scruffy development of important formal elements, such as conceptual catalogues. Particular emphasis is placed on conceptual analysis results that are available from the Cyc project, which is attempting to achieve robust intelligence using vast pools of handcrafted knowledge. This effort, like a good conceptual catalogue, is an attempt to provide an empirical basis for knowledge acquisition via automated understanding of documentation and machine learning.

Journal ArticleDOI
TL;DR: The Castlemaine project has adopted the philosophy of design support, and it evaluates the model of design exploration within the domain of pharmaceutical small-molecule design through the specification of a knowledge-based design-support system.
Abstract: A notion of design has been developed that is fundamentally different from others in the field. Creative design across a number of domains has been focused on, and a model of design as an exploratory activity rather than as a form of search has been developed. The exploration of a design problem's characteristics is an activity that creates and bounds the space within which possible design solutions can be located. The seeing of design as an exploration and mapping of parameter space highlights the inherent complexity of the creative design process, and it has implications for the specification of knowledge-based design systems. The resulting design philosophy places the human designer at the heart of the exploration process with the computer system, using integrated AI techniques, acting to support him/her throughout the design process. The Castlemaine project has adopted the philosophy of design support, and it evaluates the model of design exploration within the domain of pharmaceutical small-molecule design through the specification of a knowledge-based design-support system. The paper describes the background to the Castlemaine project, the research programme, and the status of the project in 1991.

Journal ArticleDOI
TL;DR: The analysis highlights the importance of human and organizational perspectives in the understanding of why expert systems succeed, or indeed fail, and points to the need for greater attention to be paid to the process issues of innovation that help companies to define and manage appropriate development pathways in their organization.
Abstract: The development of applications with proven benefits, the availability of better development methodologies and tools, realism about what expert systems are capable of, and the integration of expert systems into mainstream information technology are all factors that, in the 1990s, will promote increased exploitation of this technology in the manufacturing sector. Despite these developments, however, there still remains some uncertainty as to precisely how expert systems are being used in manufacturing other than in the ‘show-case’ and large-scale demonstration systems, and in the more general attempts to disseminate selected companies' experiences in developing applications in an attempt to define ‘best practice’. Further, little attention has been paid to the managerial context and human and organizational processes involved in expert-systems innovation. The paper is one of two that review the management of expert-systems technology in manufacturing. It begins by evaluating current experiences with expert systems through a study of 145 manufacturing companies in the UK, from which an agenda of business, technical and managerial issues is raised. The analysis highlights the importance of human and organizational perspectives in the understanding of why expert systems succeed, or indeed fail, and it points to the need for greater attention to be paid to the process issues of innovation that help companies to define and manage appropriate development pathways in their organization — appropriate in the sense that they are meaningful within a specific organizational context, and adaptable given a company's specific needs and constraints. The second paper extends these arguments to a consideration of the wider processes of technology (and knowledge) transfer, and it develops a management framework to facilitate successful innovation in expert systems, and, more importantly, to enable desirable organizational change to take place.

Journal ArticleDOI
TL;DR: The paper presents a discussion of the application of connectionism to design, and some important features of connectionist systems are presented, and their use as the basis of design tools is explored.
Abstract: The paper presents a discussion of the application of connectionism to design. The understanding provided by connectionism as to how designers actually design (cognitive modelling) is considered to be relatively minor. A challenge is therefore presented to the mechanistic metaphors on which cognitive modelling is based. In keeping with post-rationalistic views of cognition, design is well described in terms of noncomputable metaphors such as 'play' and 'dialogue'. This view relegates connectionism (and other artificial intelligence paradigms) to the realm of techniques for the development of tools, and it curbs their use as explanatory devices. In this light, some important features of connectionist systems are presented, and their use as the basis of design tools is explored.

Journal ArticleDOI
TL;DR: The paper identifies the limitations of ordinary Petri nets for modeling DISs and proposes extensions, which include colored tokens, inhibition arcs, non-primitive places and transitions, multiple copies of tokens and cumulative places, called a distributed problem-solving Petri net.
Abstract: Petri nets have the basic concepts necessary to model distributed systems with asynchronous processes. Petri nets are not directly applicable to certain kinds of systems like distributed intelligent systems (DISs). These are complex systems where multiple intelligent agents cooperate through communication to achieve the solution to a problem. The paper identifies the limitations of ordinary Petri nets for modeling DISs and proposes extensions. The extended Petri net incorporates colored tokens, inhibition arcs, non-primitive places and transitions, multiple copies of tokens and cumulative places. It is called a distributed problem-solving Petri net. The definitions and analysis techniques are given and illustrated by means of an example.
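One of the listed extensions, coloured tokens, can be illustrated with a deliberately tiny net in which a communication transition moves a token between two agents. This is an illustrative reduction under invented names (`enabled`, `fire`, the place and colour labels), not the paper's full distributed problem-solving Petri net formalism.

```python
# Places hold multisets of coloured tokens (Counter); a transition is a pair
# (inputs, outputs) mapping (place, colour) to a required/produced count.

from collections import Counter

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking[place][colour] >= n
               for (place, colour), n in inputs.items())

def fire(marking, transition):
    inputs, outputs = transition
    for (place, colour), n in inputs.items():
        marking[place][colour] -= n
    for (place, colour), n in outputs.items():
        marking[place][colour] += n
    return marking

# Agent A posts a 'request' token; the communication transition consumes it
# and produces a 'reply'-coloured token at agent B's input place.
marking = {"A_out": Counter({"request": 1}), "B_in": Counter()}
t_comm = ({("A_out", "request"): 1}, {("B_in", "reply"): 1})

if enabled(marking, t_comm):
    marking = fire(marking, t_comm)
```

Colouring the tokens lets one net carry different message kinds through the same places, which is what makes the extension useful for modelling cooperating agents.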

Journal ArticleDOI
TL;DR: The paper discusses one step toward the integration of Chomsky's government-binding theory of syntax with Sowa's conceptual-graph theory of knowledge representation, which provides a parsing technology that surpasses that of phrase-structure grammars.
Abstract: While much research in natural-language processing (NLP) has been devoted to microlevel analyses of constrained text, many applications, such as machine translation, message understanding and information retrieval, call for capabilities in the understanding of unconstrained text. The paper discusses one step toward this type of NLP system: the integration of Chomsky's government-binding (GB) theory of syntax with Sowa's conceptual-graph (CG) theory of knowledge representation. GB theory provides a parsing technology that surpasses that of phrase-structure grammars, and the CG theory offers a formalism that is suitable for handling natural-language semantics and pragmatics. Their marriage is most natural and synergistic. Not only can their respective strengths be enjoyed, but also most intermediate steps required to build CGs from parse trees can be eliminated, because of the fact that, when it is done independently, a great deal of common knowledge is required both for generating parse trees with a GB-based parser and for translating a parse tree into a CG representation.

Journal ArticleDOI
TL;DR: In this article, the authors used genetic algorithms to self-learn diagnostic rules for a pilot-scale mixing process and a continuous stirred-tank reactor system, where the training data is divided into various groups corresponding to various faults and the normal operating condition.
Abstract: The self learning of diagnostic rules can ease knowledge-acquisition effort, and it is more desirable in cases where experience about certain faults is not available. Applications of genetic algorithms to the self learning of diagnostic rules for a pilot-scale mixing process and a continuous stirred-tank reactor system are described in the paper. In this method, a set of training data, which could be obtained from simulations and/or from the recorded data of the previous operations of the real process, is required. The training data is divided into various groups corresponding to various faults and the normal operating condition. Corresponding to each fault, there is a group of initial rules which are coded into binary strings. These rules are evaluated by a genetic algorithm which contains the three basic operators, reproduction, crossover and mutation, and an added operator which preserves the best rule ever discovered. Through this biological-type evaluation, new fitted rules are discovered. The results demonstrate that diagnostic rules fitted with a given set of training data can be efficiently discovered through genetic learning, and, hence, that genetic algorithms provide a means for the automatic creation of rules from a set of training data. It is also demonstrated that bad training data and the inappropriate formulation of rules could degrade the performance of the learning system.
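The genetic-learning loop described — reproduction, crossover, mutation, plus an operator preserving the best rule ever discovered — can be sketched as below. The binary encoding, the fitness function and the target rule are invented stand-ins for the paper's diagnostic rules and training data.

```python
# GA sketch with elitism: evolve binary-string rules toward the one that
# best fits the training data (here collapsed into a fixed TARGET string).

import random

random.seed(0)                               # reproducible illustration
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]            # stand-in for the best-fitting rule

def fitness(rule):
    # number of positions agreeing with the training data's ideal rule
    return sum(r == t for r, t in zip(rule, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(rule, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in rule]

def evolve(pop, generations=40):
    best = max(pop, key=fitness)
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: len(pop) // 2]                 # reproduction/selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in pop]
        best = max([best] + children, key=fitness)        # preserve best ever found
        pop = children[:-1] + [best]
    return best

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
best = evolve(pop)
```

The elitist step is the "added operator" the abstract mentions: without it, crossover and mutation can destroy a good rule after it has been discovered.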

Journal ArticleDOI
TL;DR: An experiment is described to measure the effectiveness of three different approaches to expert system development, using different types of tool: use of a commercial expert system shell, use of a general-purpose high-level programming language, or use of a custom-built shell.
Abstract: Although a wide range of different kinds of tool is now available for building expert systems, there is little published guidance on how to select appropriate tools for an application. An experiment is described to measure the effectiveness of three different approaches to expert system development, using different types of tool: use of a commercial expert system shell, use of a general-purpose high-level programming language, or use of a custom-built shell. By recording the development effort in each case, and applying metrics to these data, the advantages and disadvantages of each approach are identified.

Journal ArticleDOI
G. H. Abdou1
TL;DR: An integrated approach to process-plan generation with an expert system that takes full advantage of the programming language(s) embedded in existing CAD software to develop a CAM database system that is building-oriented rather than drawing-oriented, and from which drawings are produced as reports.
Abstract: An integrated approach is presented to process-plan generation with an expert system. First, the research takes full advantage of the programming language(s) embedded in existing CAD software to develop a CAM database system that is building-oriented rather than drawing-oriented, and from which drawings are produced as reports. Then, a knowledge-base system is designed to automate the development of the process plan. The application of the expert system to a mechanical part is illustrated.

Journal ArticleDOI
TL;DR: A type of 1st-order fuzzy logic that incorporates a complete set of quantifiers, qualifiers and modifiers is introduced that consists of an alphabet, a syntax and a set of semantics for the language.
Abstract: Traditional logic and logic programming languages cannot handle uncertainty. Fuzzy logic can, but nobody has yet devised a readily computable form. One possible way to achieve this is to define a propositional fuzzy logic, extend this to a 1st-order form, convert it to Horn-clause form, and, finally, to devise a theorem prover to manipulate the Horn clauses. The authors of the paper have already achieved the first step. The paper formally develops the second step, namely a type of 1st-order fuzzy logic that incorporates a complete set of quantifiers, qualifiers and modifiers. The fuzzy entities that represent the language are described, and a 1st-order theory is introduced that consists of an alphabet, a syntax and a set of semantics for the language.
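The propositional layer that such a logic builds on is easy to make concrete. The sketch below uses the common min/max/complement semantics plus two standard fuzzy modifiers; it is an illustration of the general approach, not the specific quantifiers, qualifiers and modifiers of the paper's system.

```python
# Propositional fuzzy connectives over truth values in [0, 1].

def f_and(a, b): return min(a, b)      # conjunction
def f_or(a, b):  return max(a, b)      # disjunction
def f_not(a):    return 1.0 - a        # complement

# Two textbook modifiers: concentration and dilation.
def very(a):     return a ** 2
def somewhat(a): return a ** 0.5

# 'X is very tall and not fast', with graded memberships:
tall, fast = 0.8, 0.6
truth = f_and(very(tall), f_not(fast))
```

Unlike two-valued logic, every compound here is readily computable as ordinary arithmetic on membership degrees, which is the property the abstract's stepwise programme is trying to carry up to the 1st-order, Horn-clause level.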

Journal ArticleDOI
TL;DR: How the mechanisms of metaphor can be used to develop a more flexible means of knowledge representation for artificial intelligence is explored.
Abstract: Metaphor is not just a surface phenomenon, a particular form of speech, but an indication of some underlying cognitive mechanisms. Metaphor provides a way of violating the usual semantic constraints and categories, and joining together ideas that were previously seen as dissimilar, and, in so doing, it extends one's conceptual framework. The paper explores how the mechanisms of metaphor can be used to develop a more flexible means of knowledge representation for artificial intelligence. The theory of metaphor used in the paper is based on Max Black's interaction theory, and has been developed by Way. The knowledge-representation scheme is that of John Sowa's conceptual graphs. The techniques for representing knowledge in machines have provided a new language and a new set of conceptual tools which can make sense of Black's often vague and intuitive theory.

Journal ArticleDOI
TL;DR: The design and implementation of flexible machining cells calls for a systematic modelling methodology that is capable of modelling alternative facilities over multiple levels of detail and the presentation of the manufacturing knowledge embedded within the three levels of modelling is placed.
Abstract: The design and implementation of flexible machining cells calls for a systematic modelling methodology. An AI-based modelling system is briefly described that is capable of modelling alternative facilities over multiple levels of detail. Emphasis is then placed on the presentation of the manufacturing knowledge embedded within the three levels of modelling. The operational assumptions, the state transformation and the decision making at each level are closely described, and typical behavioural and decision rules are discussed.

Journal ArticleDOI
L.-M. Fu1
TL;DR: A novel approach that applies the technique of back propagation to the recognition of semantically incorrect rules is presented and the viability of this technique has been demonstrated in a practical domain.
Abstract: A novel approach that applies the technique of back propagation to the recognition of semantically incorrect rules is presented. When the rule strengths of most rules are semantically correct, semantically incorrect rules can be recognized if their strengths are weakened or change signs after training with correct samples. In each training cycle, the discrepancies in the belief values of goal hypotheses are propagated backwards, and the strengths of rules responsible for such discrepancies are modified appropriately. A function called consistent-shift is defined for measuring the shift of a rule strength in the direction that is consistent with the strength assigned before training, and this is a critical component of this technique. A formal analysis of this approach is provided. The viability of this technique has been demonstrated in a practical domain.
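The gist of the method — train rule strengths against correct samples and flag rules whose strength weakens or changes sign — can be sketched with a linear stand-in model. The belief function, learning rule, threshold and `consistent_shift` definition below are invented simplifications, not the paper's exact formulation.

```python
# Goal belief as a strength-weighted sum of evidence; training nudges each
# strength to reduce the discrepancy between computed and correct belief.

def train(strengths, samples, lr=0.1, epochs=200):
    for _ in range(epochs):
        for evidence, target in samples:
            belief = sum(w * e for w, e in zip(strengths, evidence))
            err = target - belief                       # belief discrepancy
            strengths = [w + lr * err * e for w, e in zip(strengths, evidence)]
    return strengths

def consistent_shift(before, after):
    # positive when the strength moved in (or kept) its pre-training direction
    return (after - before) * (1 if before >= 0 else -1)

before = [0.9, 0.8]            # suppose rule 2's assigned strength is wrong
samples = [([1.0, 0.0], 0.9),  # (evidence vector, correct goal belief)
           ([0.0, 1.0], -0.5),
           ([1.0, 1.0], 0.4)]
after = train(before, samples)

# Flag rules that weakened sharply or changed sign after training.
suspect = [i for i, (b, a) in enumerate(zip(before, after))
           if consistent_shift(b, a) < -0.3 or b * a < 0]
```

Training leaves the correctly assigned strength essentially unchanged while driving the wrong one negative, so only the second rule is flagged — the qualitative behaviour the recognition criterion relies on.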


Journal ArticleDOI
TL;DR: A new conceptual basis for technology transfer is defined which stresses a ‘needs-driven’ process of change and the importance of context as well as content in expert systems transfer and implementation is highlighted.
Abstract: Current approaches to expert systems technology transfer have tended to focus upon the marketing and servicing of technology capabilities and potential whilst remaining uncertain about the process factors which determine how this technology may be applied and adopted effectively. Furthermore, much of current expert systems research work and literature addresses these issues from the viewpoint of the supplier or donor whilst overlooking the importance of human and organisational perspectives which shed light on the means of delivery and take-up within the recipient organisation. The paper, the second of two that look at expert systems innovation in manufacturing, argues for greater consideration of the characteristics, processes and mechanisms of technology transfer. It defines a new conceptual basis for technology transfer which stresses a ‘needs-driven’ process of change; this highlights the importance of context as well as content in expert systems transfer and implementation. From this, a management framework is outlined and is used to rationalise the transfer problems and needs described in the first paper following a survey of 145 manufacturing users. It is also shown how this framework may be used to understand more about the multi-level and multi-dimensional needs and effects of technology induced change and therefore how it may be used to help senior management strategically plan and co-ordinate expert systems programmes in their organisations.


Journal ArticleDOI
TL;DR: The paper addresses some of the consequences of the application of the TRH to the representation and dynamics of natural and role types, attributes and characteristics, and discusses additional, metaknowledge structures which support the temporal consistency of a concept with the rest of the knowledge base.
Abstract: It has been noted on a number of occasions that the knowledge-representation structures and processing functions provided by Sowa's theory of conceptual graphs is complementary with the semantic system developed by Ray Jackendoff. Jackendoff's system, incorporating Gruber's Thematic Relations Hypothesis (TRH), develops a principled 'conceptual' semantics and its integration with syntax. Also, as a byproduct, the TRH provides a motivation and foundations for a deep-knowledge-based knowledge-acquisition methodology. The paper addresses some of the consequences of the application of the TRH to the representation and dynamics of natural and role types, attributes and characteristics. It also deals with the representation of the existence of individuals and type labels in the knowledge base, and discusses additional, metaknowledge structures which support the temporal consistency of a concept with the rest of the knowledge base.

Journal ArticleDOI
TL;DR: A set-theoretic equation solver based on PRESS is described, and linked to an assumption-based truth-maintained blackboard system, which contributes to the equation solving by allowing the definition of simple problem-solving axioms which can combine to produce complex behaviour.
Abstract: A set-theoretic equation solver based on PRESS is described, and linked to an assumption-based truth-maintained blackboard system. The methods used in PRESS are extended to solve set-theoretic equations in one unknown where the unknown occurs only once. Problems arising from increasingly underspecified sets being delivered from the system are eliminated by defining subsumption and using most general unification to combine partial solutions. The ATMS contributes to the equation solving by allowing the definition of simple problem-solving axioms which can combine to produce complex behaviour.
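For a single-occurrence unknown, the flavour of PRESS-style isolation carries over to sets directly. The sketch below solves X ∪ A = B by isolating X; it is an illustrative special case (the paper's system works symbolically, with subsumption, most general unification and an ATMS, none of which appear here).

```python
# Isolation for the set equation X ∪ A = B, where X occurs once:
# a solution exists iff A ⊆ B, and then X may be any set with B\A ⊆ X ⊆ B.

def solve_union(A, B):
    if not A <= B:
        return None                  # A contributes elements outside B: no X works
    lower, upper = B - A, B          # tightest and loosest admissible X
    return lower, upper

lo, hi = solve_union({1, 2}, {1, 2, 3})
```

The interval answer `[B\A, B]` rather than a single set is exactly the kind of underspecified solution the abstract says must then be tightened by subsumption and by unifying partial solutions.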