
Showing papers on "Knowledge representation and reasoning" published in 1993


Book
01 Jan 1993
TL;DR: Case-based reasoning is one of the fastest growing areas in the field of knowledge-based systems, and this book, authored by a leader in the field, is the first comprehensive text on the subject.
Abstract: Case-based reasoning is one of the fastest growing areas in the field of knowledge-based systems and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems are systems that store information about situations in their memory. As new problems arise, similar situations are searched out to help solve these problems. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions when inferences can't be made. This book presents the state of the art in case-based reasoning. The author synthesizes and analyzes a broad range of approaches, with special emphasis on applying case-based reasoning to complex real-world problem-solving tasks such as medical diagnosis, design, conflict resolution, and planning. The author's approach combines cognitive science and engineering, and is based on analysis of both expert and common-sense tasks. Guidelines for building case-based expert systems are provided, such as how to represent knowledge in cases, how to index cases for accessibility, how to implement retrieval processes for efficiency, and how to adapt old solutions to fit new situations. This book is an excellent text for courses and tutorials on case-based reasoning. It is also a useful resource for computer professionals and cognitive scientists interested in learning more about this fast-growing field.

4,672 citations
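
The retrieve, compare, and adapt cycle described in the abstract can be sketched in a few lines. The case format, the overlap-based similarity measure, and the medical example below are illustrative assumptions, not the book's actual representations.

```python
# Minimal case-based reasoning sketch: cases are feature dicts with a
# stored solution; retrieval finds the closest case by feature overlap.
# Case format and similarity measure are illustrative assumptions.

def similarity(problem, case):
    """Count matching feature values between a new problem and a stored case."""
    return sum(1 for k, v in problem.items() if case["features"].get(k) == v)

def retrieve(problem, case_base):
    """Return the stored case most similar to the new problem."""
    return max(case_base, key=lambda c: similarity(problem, c))

def solve(problem, case_base):
    """Reuse the closest case's solution, noting which features differ
    (the differences are what the adaptation step would have to address)."""
    best = retrieve(problem, case_base)
    differences = {k: v for k, v in problem.items()
                   if best["features"].get(k) != v}
    return {"solution": best["solution"], "adapt_for": differences}

case_base = [
    {"features": {"symptom": "fever", "onset": "sudden"}, "solution": "test A"},
    {"features": {"symptom": "rash", "onset": "gradual"}, "solution": "test B"},
]
result = solve({"symptom": "fever", "onset": "sudden", "age": "child"}, case_base)
print(result["solution"])   # closest case suggests "test A"
print(result["adapt_for"])  # {'age': 'child'} flags what to adapt
```

A real system would also index cases for efficient retrieval and ask questions when no inference can be made, as the abstract notes.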


Journal ArticleDOI
TL;DR: It is argued that keeping in mind all five of these roles that a representation plays provides a usefully broad perspective that sheds light on some longstanding disputes and can invigorate both research and practice in the field.
Abstract: Although knowledge representation is one of the central and, in some ways, most familiar concepts in AI, the most fundamental question about it -- What is it? -- has rarely been answered directly. Numerous papers have lobbied for one or another variety of representation, other papers have argued for various properties a representation should have, and still others have focused on properties that are important to the notion of representation in general. In this article, we go back to basics to address the question directly. We believe that the answer can best be understood in terms of five important and distinctly different roles that a representation plays, each of which places different and, at times, conflicting demands on the properties a representation should have. We argue that keeping in mind all five of these roles provides a usefully broad perspective that sheds light on some longstanding disputes and can invigorate both research and practice in the field.

1,199 citations


Journal ArticleDOI
TL;DR: A computational model is described that takes a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox and shows how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables and perform a class of inferences in a few hundred milliseconds.
Abstract: Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency – as though these inferences were a reflexive response of their cognitive apparatus. Furthermore, these inferences are drawn with reference to a large body of background knowledge. This remarkable human ability seems paradoxical given the complexity of reasoning reported by researchers in artificial intelligence. It also poses a challenge for cognitive science and computational neuroscience: How can a system of simple and slow neuronlike elements represent a large body of systemic knowledge and perform a range of inferences with such speed? We describe a computational model that takes a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox. We show how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables and perform a class of inferences in a few hundred milliseconds. Efficient reasoning requires the rapid representation and propagation of dynamic bindings. Our model (which we refer to as SHRUTI) achieves this by representing (1) dynamic bindings as the synchronous firing of appropriate nodes, (2) rules as interconnection patterns that direct the propagation of rhythmic activity, and (3) long-term facts as temporal pattern-matching subnetworks. The model is consistent with recent neurophysiological evidence that synchronous activity occurs in the brain and may play a representational role in neural information processing. The model also makes specific psychologically significant predictions about the nature of reflexive reasoning. It identifies constraints on the form of rules that may participate in such reasoning and relates the capacity of the working memory underlying reflexive reasoning to biological parameters such as the lowest frequency at which nodes can sustain synchronous oscillations and the coarseness of synchronization.

652 citations
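
The key mechanism in the abstract, dynamic binding via synchronous firing, can be illustrated with a toy phase model: each active entity fires in its own phase, a role is bound to whichever entity shares its phase, and a rule propagates phases from antecedent roles to consequent roles. The dictionary encoding and the give/own rule are assumptions for illustration; the actual SHRUTI model is a connectionist network.

```python
# Toy illustration of binding by temporal synchrony: entities fire in
# distinct phases, and a role is bound to the entity sharing its phase.

def assert_fact(bindings):
    """Assign each bound entity a distinct firing phase.
    bindings: dict role -> entity, e.g. {'giver': 'John', ...}."""
    phase_of_entity = {}
    role_phase = {}
    for role, entity in bindings.items():
        phase_of_entity.setdefault(entity, len(phase_of_entity))
        role_phase[role] = phase_of_entity[entity]
    return role_phase, phase_of_entity

# Rule: give(giver, recipient, object) -> own(owner, owned), expressed
# as an interconnection pattern from antecedent roles to consequent roles.
RULE = {"owner": "recipient", "owned": "object"}

def propagate(role_phase):
    """Consequent roles inherit the phase of the antecedent role they are
    wired to, so bindings follow the propagating rhythmic activity."""
    return {conseq: role_phase[antec] for conseq, antec in RULE.items()}

role_phase, phases = assert_fact(
    {"giver": "John", "recipient": "Mary", "object": "book"})
own = propagate(role_phase)
entity_in = {p: e for e, p in phases.items()}
# 'owner' now fires in Mary's phase: the inference own(Mary, book)
print(entity_in[own["owner"]], entity_in[own["owned"]])  # Mary book
```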


Journal ArticleDOI
TL;DR: This paper describes how both the domain and the information sources are modeled, shows how a query at the domain level is mapped into a set of queries to individual information sources, and presents algorithms for automatically improving the efficiency of queries using knowledge about both the domain and the information sources.
Abstract: With the current explosion of data, retrieving and integrating information from various sources is a critical problem. Work in multidatabase systems has begun to address this problem, but it has primarily focused on methods for communicating between databases and requires significant effort for each new database added to the system. This paper describes a more general approach that exploits a semantic model of a problem domain to integrate the information from various information sources. The information sources handled include both databases and knowledge bases, and other information sources (e.g. programs) could potentially be incorporated into the system. This paper describes how both the domain and the information sources are modeled, shows how a query at the domain level is mapped into a set of queries to individual information sources, and presents algorithms for automatically improving the efficiency of queries using knowledge about both the domain and the information sources. This work is implemented in a system called SIMS and has been tested in a transportation planning domain using nine Oracle databases and a Loom knowledge base.

506 citations
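
The core mapping step the abstract describes, turning one domain-level query into per-source sub-queries, can be sketched as a partition over an attribute mapping table. The table entries, source names, and query format below are invented for illustration, not SIMS's actual model.

```python
# Sketch of mapping a domain-level query onto individual information
# sources. The mapping table and names are illustrative assumptions.

# Domain-level attributes and which source/field actually stores them.
MAPPING = {
    "ship.name":     ("oracle_ports_db", "VESSEL.VNAME"),
    "ship.capacity": ("oracle_ports_db", "VESSEL.CAP"),
    "port.depth":    ("loom_geo_kb",     "harbor-depth"),
}

def plan_queries(domain_attrs):
    """Partition one domain query into one sub-query per source."""
    per_source = {}
    for attr in domain_attrs:
        source, field = MAPPING[attr]
        per_source.setdefault(source, []).append(field)
    return per_source

plan = plan_queries(["ship.name", "ship.capacity", "port.depth"])
print(plan)
# {'oracle_ports_db': ['VESSEL.VNAME', 'VESSEL.CAP'],
#  'loom_geo_kb': ['harbor-depth']}
```

The paper's query-optimization algorithms would then reorder and prune these sub-queries; that step is omitted here.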


Journal ArticleDOI
TL;DR: It is argued that the problem of plan recognition, inferring an agent's plan from observations, is largely a problem of inference under conditions of uncertainty, and an approach to the plan recognition problem based on Bayesian probability theory is presented.

483 citations


Journal ArticleDOI
TL;DR: This paper analyzes the correctness of the subsumption algorithm used in CLASSIC, a description logic-based knowledge representation system that is being used in practical applications, and provides a variant semantics for descriptions with respect to which the current implementation is complete, and which can be independently motivated.
Abstract: This paper analyzes the correctness of the subsumption algorithm used in CLASSIC, a description logic-based knowledge representation system that is being used in practical applications. In order to deal efficiently with individuals in CLASSIC descriptions, the developers have had to use an algorithm that is incomplete with respect to the standard, model-theoretic semantics for description logics. We provide a variant semantics for descriptions with respect to which the current implementation is complete, and which can be independently motivated. The soundness and completeness of the polynomial-time subsumption algorithm are established using description graphs, which are an abstracted version of the implementation structures used in CLASSIC, and are of independent interest.

261 citations
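
To give a feel for what a subsumption check does, here is a much-simplified structural version over descriptions built from atomic concepts and at-least number restrictions. This is an illustrative assumption, not CLASSIC's algorithm, which operates on description graphs and handles individuals.

```python
# Simplified structural subsumption: a description is a set of atomic
# concepts plus at-least restrictions on roles. D subsumes C iff every
# constraint in D is implied by a constraint in C.

def subsumes(d, c):
    """Does description d subsume description c (is every c a d)?"""
    atoms_ok = d["atoms"] <= c["atoms"]  # all of d's atoms appear in c
    # every at-least bound in d must be met or exceeded in c
    at_least_ok = all(c["at_least"].get(role, 0) >= n
                      for role, n in d["at_least"].items())
    return atoms_ok and at_least_ok

parent  = {"atoms": {"Person"},         "at_least": {"child": 1}}
father3 = {"atoms": {"Person", "Male"}, "at_least": {"child": 3}}

print(subsumes(parent, father3))  # True: someone with 3 children is a parent
print(subsumes(father3, parent))  # False
```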


Book
01 Jul 1993
TL;DR: This book discusses knowledge acquisition, legal issues in knowledge-based systems, and the software lifecycle in knowledge-based systems.
Abstract: 1. Introduction to Knowledge-Based Systems. 2. Structure. 3. Logic and Automated Reasoning. 4. Forward Reasoning Rule-Based Systems. 5. Backward-Reasoning Systems. 6. Associative Networks, Frames, and Objects. 7. Blackboard Architectures. 8. Uncertainty Management. 9. Advanced Reasoning Techniques. 10. The Software Lifecycle in Knowledge-Based Systems. 11. Feasibility Analysis. 12. Requirements Specification and Design. 13. Knowledge Acquisition and System Implementation. 14. Practical Considerations in Knowledge Acquisition. 15. Alternative Knowledge Acquisition Means. 16. Verification and Validation. 17. Legal Issues in Knowledge-Based Systems. Appendix A: The CLIPS System. Appendix B: The Personal Consultant Shell System.

257 citations


Patent
29 Jan 1993
TL;DR: In this article, the authors define a database engine constituting a method for modeling knowledge as a network of concepts and a plurality of relationships between the concepts comprising the network. Each concept is represented as a record in the database, identified by a unique record reference number.
Abstract: A system for knowledge representation in a computer, together with the ability to recognize, store and use patterns in the knowledge representation, together with the ability for Natural Language Interaction with the knowledge representation system, together with systems to automatically transform information in the knowledge representation into a multitude of documents or other human interpretable displays in a plurality of different formats or views. User interaction with the knowledge representation through the view documents is achievable through a multitude of various possible formats. The Knowledge Representation system defines a novel database engine constituting a method for modeling knowledge as a network of concepts and a plurality of relationships between the concepts comprising the network. Each concept is represented as a record in the database which is identified by a unique record reference number. The unique record reference numbers are stored within the records comprising the database to record the plurality of relationships between concepts.

254 citations
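
The storage scheme the patent describes, concepts as records with unique reference numbers and relationships stored as reference numbers inside the records, can be sketched directly. The record layout and method names below are illustrative assumptions.

```python
# Sketch of a concept-network database engine: each concept is a record
# keyed by a unique reference number, and relationships are stored as
# (relation, reference-number) pairs inside the records.

class ConceptNetwork:
    def __init__(self):
        self.records = {}   # reference number -> record
        self.next_ref = 1

    def add_concept(self, name):
        """Create a record for a concept and return its reference number."""
        ref = self.next_ref
        self.next_ref += 1
        self.records[ref] = {"name": name, "relations": []}
        return ref

    def relate(self, src_ref, relation, dst_ref):
        """Record a relationship by storing the target's reference number."""
        self.records[src_ref]["relations"].append((relation, dst_ref))

    def related(self, src_ref, relation):
        """Follow stored reference numbers to the named target concepts."""
        return [self.records[dst]["name"]
                for rel, dst in self.records[src_ref]["relations"]
                if rel == relation]

net = ConceptNetwork()
dog = net.add_concept("dog")
animal = net.add_concept("animal")
net.relate(dog, "is-a", animal)
print(net.related(dog, "is-a"))  # ['animal']
```

The natural-language interaction and document-view generation claimed in the patent would be layers on top of this engine.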


Journal Article
TL;DR: Medical librarians are involved heavily in the direction of the UMLS project, in the development of the Knowledge Sources, and in their experimental application, increasing the likelihood that the UMLS project will achieve its goal of improving access to machine-readable biomedical information.
Abstract: Conceptual connections between users and information sources depend on an accurate representation of the content of available information sources, an accurate representation of specific user information needs, and the ability to match the two. Establishing such connections is a principal function of medical librarians. The goal of the National Library of Medicine's Unified Medical Language System (UMLS) project is to facilitate the development of conceptual connections between users and relevant machine-readable information. The UMLS model involves a combination of three centrally developed Knowledge Sources (a Metathesaurus, a Semantic Network, and an Information Sources Map) and a variety of smart interface programs that make use of these Knowledge Sources to help users in different environments find machine-readable information relevant to their particular practice or research problems. The third experimental edition of the UMLS Knowledge Sources was issued in the fall of 1992. Current priorities for the UMLS project include developing applications that make use of the Knowledge Sources and using feedback from these applications to guide ongoing enhancement and expansion of the Knowledge Sources. Medical librarians are involved heavily in the direction of the UMLS project, in the development of the Knowledge Sources, and in their experimental application. The involvement of librarians in reviewing, testing, and providing feedback on UMLS products will increase the likelihood that the UMLS project will achieve its goal of improving access to machine-readable biomedical information.

223 citations


Journal ArticleDOI
TL;DR: The decidability of a number of desirable TKRS-deduction services is proved through a sound, complete and terminating calculus for reasoning in ALCNR-knowledge bases, along with the result that inclusion statements in ALCNR can be simulated by terminological cycles, if descriptive semantics is adopted.
Abstract: Terminological knowledge representation systems (TKRSs) are tools for designing and using knowledge bases that make use of terminological languages (or concept languages). We analyze from a theoretical point of view a TKRS whose capabilities go beyond the ones of presently available TKRSs. The new features studied, often required in practical applications, can be summarized in three main points. First, we consider a highly expressive terminological language, called ALCNR, including general complements of concepts, number restrictions and role conjunction. Second, we allow the expression of inclusion statements between general concepts, with terminological cycles as a particular case. Third, we prove the decidability of a number of desirable TKRS-deduction services (like satisfiability, subsumption and instance checking) through a sound, complete and terminating calculus for reasoning in ALCNR-knowledge bases. Our calculus extends the general technique of constraint systems. As a byproduct of the proof, we also obtain the result that inclusion statements in ALCNR can be simulated by terminological cycles, if descriptive semantics is adopted.

216 citations


Journal ArticleDOI
TL;DR: Relationships between the conceptual and computational platforms of fuzzy sets and neurocomputation are discussed; the proposed architecture of logic processors implements the paradigm of distributed processing with the aid of logic-driven neurons.

Journal ArticleDOI
TL;DR: SALT uses its knowledge of the intended problem-solving strategy in identifying relevant domain knowledge, in detecting weaknesses in the knowledge base in order to guide its interrogation of the domain expert, in generating an expert system that can perform the task and explain its line of reasoning, and in analyzing test case coverage.

Book
01 Apr 1993
TL;DR: Based on the author's course at Stanford University, Essentials of Artificial Intelligence is an integrated, cohesive introduction to the field that combines clear presentations with humor and AI anecdotes.
Abstract: Since its publication, Essentials of Artificial Intelligence has been adopted at numerous universities and colleges offering introductory AI courses at the graduate and undergraduate levels. Based on the author's course at Stanford University, the book is an integrated, cohesive introduction to the field. The author has a fresh, entertaining writing style that combines clear presentations with humor and AI anecdotes. At the same time, as an active AI researcher, he presents the material authoritatively and with insight that reflects a contemporary, firsthand understanding of the field. Pedagogically designed, this book offers a range of exercises and examples. Table of Contents 1 Introduction: What is AI? 2 Overview 3 Blind Search 4 Heuristic Search 5 Adversary Search 6 Introduction to Knowledge Representation 7 Predicate Logic 8 First-Order Logic 9 Putting Logic to Work: Control of Reasoning 10 Assumption-Based Truth Maintenance 11 Nonmonotonic Reasoning 12 Probability 13 Putting Knowledge to Work: Frames and Semantic Nets 14 Planning 15 Learning 16 Vision 17 Natural Language 18 Expert Systems 19 Concluding Remarks


Journal ArticleDOI
01 Nov 1993
TL;DR: A novel approach is proposed for defining, representing, and using functional and teleological knowledge, which play a fundamental role from both the representation and reasoning perspectives in modeling physical systems.
Abstract: The basic concepts of the multimodeling approach to the representation of physical systems are presented. Emphasis is placed on the exploitation of many, diverse models of a system for the execution of complex problem solving tasks, such as interpretation, diagnosis, design, simulation, etc. The considered models are based on different ontologies, representational assumptions, epistemological types, and aggregation levels. After a brief survey of the techniques adopted for representing structural and behavioral knowledge, attention is focused on function and teleology. A novel approach is proposed for defining, representing, and using these two types of knowledge, which play a fundamental role both from the representation and reasoning perspectives. The fundamental claim is that while teleological knowledge concerns the specific purposes for which the system has been designed, functional knowledge serves to bridge the gap between such abstract purposes and the actual structure and behavior of the system, through the concepts of phenomena, processes, and functional roles. A clear definition is provided of all the various epistemological and ontological links existing between the different models.

Journal ArticleDOI
TL;DR: An explanation of programming skill is suggested that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise.
Abstract: Much of the literature concerned with understanding the nature of programming skill has focused explicitly upon the declarative aspects of programmers' knowledge. This literature has sought to describe the nature of stereotypical programming knowledge structures and their organization. However, one major limitation of many of these knowledge-based theories is that they often fail to consider the way in which knowledge is used or applied. Another strand of literature is less well represented. This literature deals with the strategic elements of programming skill and is directed towards an analysis of the strategies commonly employed by programmers in the generation and the comprehension of programs. In this paper an attempt is made to unify various analyses of programming strategy. This paper presents a review of the literature in this area, highlighting common themes and concerns, and proposes a model of strategy development which attempts to encompass the central findings of previous research in this area. It is suggested that many studies of programming strategy are descriptive and fail to explain why strategies take the form they do or to explain the typical strategy shifts which are observed during the transitions between different levels of skill. This paper suggests that what is needed is an explanation of programming skill that integrates ideas about knowledge representation with a strategic model, enabling one to make predictions about how changes in knowledge representation might give rise to particular strategies and to the strategy changes associated with developing expertise. This paper concludes by making a number of brief suggestions about the possible nature of this model and its implications for theories of programming expertise.

Journal ArticleDOI
TL;DR: The formalism of Bayesian networks provides a very elegant solution, in a probabilistic framework, to the problem of integrating top-down and bottom-up visual processes, as well as serving as a knowledge base, to create a composite organization hypothesis.
Abstract: The formalism of Bayesian networks provides a very elegant solution, in a probabilistic framework, to the problem of integrating top-down and bottom-up visual processes, as well as serving as a knowledge base. The formalism is modified to handle spatial data, and thus the application of Bayesian networks is extended to visual processing. The modified form is called the perceptual inference network (PIN). The theoretical background of a PIN is presented, and its viability is demonstrated in the context of perceptual organization. Perceptual organization imparts robustness, efficiency, and a qualitative and holistic nature to vision. Thus far, the approaches to the problem of perceptual organization have been purely bottom-up, without much top-down knowledge-base influence, and are therefore entirely dependent on the inputs, which are obviously imperfect. The knowledge base, besides coping with such input imperfection, also makes it possible to integrate multiple organizations and form a composite organization hypothesis. The PIN imparts an active inferential and integrating nature to perceptual organization in an elegant probabilistic framework.

Proceedings ArticleDOI
TL;DR: The NATURE project develops a theory of knowledge representation that embraces subject, usage and development worlds surrounding the system, including expressive freedoms, and a process engineering theory that promotes context and decision-based control of the development process.
Abstract: NATURE is a collaborative basic research project on theories underlying requirements engineering funded by the ESPRIT III program of the European Communities. Its goals are to develop a theory of knowledge representation that embraces subject, usage and development worlds surrounding the system, including expressive freedoms; a theory of domain engineering that facilitates the identification, acquisition and formalization of domain knowledge as well as similarity-based matching and classifying of software engineering knowledge; and a process engineering theory that promotes context and decision-based control of the development process. These theories are integrated and evaluated in a prototype environment constructed around an extended version of the conceptual modeling language Telos.

Book ChapterDOI
01 Nov 1993
TL;DR: A generic reasoning method that utilises a presumably extensive and dense model of general domain knowledge as explanatory support for case-based problem solving and learning is described.
Abstract: Problem solving in weak theory domains should compensate for the lack of strong theories by combining the various other knowledge types involved. Such methods should be able to effectively combine general domain knowledge with specific case knowledge. A method is described that utilises a presumably extensive and dense model of general domain knowledge as explanatory support for case-based problem solving and learning. A generic reasoning method — captured in what is called the Activate-explain-focus cycle — is able to utilise a rich knowledge model in producing context-dependent explanations. A specialisation of this method for each of the main subprocesses of case-based reasoning is presented, and illustrated with examples.

Journal ArticleDOI
TL;DR: The PROSE architecture is general and is not tied to any specific telecommunications product, as such, it is being reused to develop configurators for several different products.
Abstract: PROSE is a knowledge-based configurator platform for telecommunications products. Its outstanding feature is a product knowledge base written in C-classIC, a frame-based knowledge representation system in the KL-ONE family of languages. It is one of the first successful products using a KL-ONE style language. Unlike previous configurator applications, the PROSE knowledge base is in a purely declarative form that provides developers with the ability to add knowledge quickly and consistently. The PROSE architecture is general and is not tied to any specific telecommunications product. As such, it is being reused to develop configurators for several different products. Finally, PROSE not only generates configurations from just a few high-level parameters, but it can also verify configurations produced manually by customers, engineers, or salespeople. The same product knowledge, encoded in C-classIC, supports both the generation and the verification of product configurations.

Journal ArticleDOI
01 May 1993
TL;DR: In this article, the authors derive reasonable constraints that enable a natural partition of a domain and its representation by separate Bayesian subnets, such that evidential reasoning takes place at only one of them at a time; and marginal probabilities obtained are identical to those that would be obtained from the homogeneous network.
Abstract: Bayesian networks provide a natural, concise knowledge representation method for building knowledge-based systems under uncertainty. We consider domains representable by general but sparse networks and characterized by incremental evidence where the probabilistic knowledge can be captured once and used for multiple cases. Current Bayesian net representations do not consider structure in the domain and lump all variables into a homogeneous network. In practice, one often directs attention to only part of the network within a period of time; i.e., there is "localization" of queries and evidence. In such a case, propagating evidence through a homogeneous network is inefficient since the entire network has to be updated each time. This paper derives reasonable constraints, which can often be easily satisfied, that enable a natural (localization-preserving) partition of a domain and its representation by separate Bayesian subnets. The subnets are transformed into a set of permanent junction trees such that evidential reasoning takes place at only one of them at a time, and the marginal probabilities obtained are identical to those that would be obtained from the homogeneous network. We show how to swap in a new junction tree, and absorb previously acquired evidence. Although the overall system can be large, computational requirements are governed by the size of one junction tree.
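
The paper's central claim, that localized reasoning in a partitioned network yields the same marginals as inference over the whole network, can be demonstrated on a toy chain A -> B -> C. Evidence on A is absorbed in the subnet {A, B}, only the updated belief on the interface variable B is passed on, and querying C in the subnet {B, C} matches full-network inference. All probability values and the chain structure are invented for illustration; real junction-tree machinery is far more general.

```python
# Toy demonstration: localized evidence absorption on a 3-node chain
# gives the same posterior as brute-force inference over the full joint.

P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}
P_C_given_B = {True: {True: 0.6, False: 0.4},
               False: {True: 0.1, False: 0.9}}

def full_posterior_C(evidence_a):
    """P(C=true | A=evidence_a) by brute-force joint enumeration."""
    num = den = 0.0
    for a in (True, False):
        for b in (True, False):
            for c in (True, False):
                p = P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]
                if a == evidence_a:
                    den += p
                    if c:
                        num += p
    return num / den

def localized_posterior_C(evidence_a):
    """Absorb evidence in subnet {A,B}; pass only P(B|a) to subnet {B,C}."""
    belief_B = {b: P_B_given_A[evidence_a][b] for b in (True, False)}
    return sum(belief_B[b] * P_C_given_B[b][True] for b in (True, False))

print(full_posterior_C(True), localized_posterior_C(True))  # identical values
```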

Book
27 Sep 1993
TL;DR: This book covers a knowledge acquisition framework and knowledge representation environment, model-driven rule discovery, knowledge revision, concept formation, and practical experiences.
Abstract: The Knowledge Acquisition Framework. The Knowledge Representation Environment. The Inference Im-2. The Sort Taxonomy. The Predicate Structure. Model-Driven Rule Discovery. Knowledge Revision. Concept Formation. Practical Experiences. Bibliography. Author Index. Name Index. Subject Index.

Book
01 Oct 1993
TL;DR: This book presents the components of neural networks and a comparison with expert systems, neural network architectures, and hybrid methods, systems, and tools for expert systems, including Level 5 Object, a hybrid tool.
Abstract: Why are expert systems and neural networks needed? The theoretical foundation of expert systems: knowledge representation based on logic. Inference and knowledge processing. Practical aspects in applying expert systems: deductive reasoning tools and Level 5. Inductive reasoning with 1st-Class. System development and knowledge acquisition. Object-oriented representation and hybrid methods: object-oriented representation and design. Hybrid methods, systems, and tools for expert systems: Level 5 Object, a hybrid tool. Advanced topics in expert systems: uncertainty in expert systems; software evaluation in expert systems. Neural networks: components of neural networks and a comparison with expert systems. Neural network architectures.

Proceedings Article
28 Aug 1993
TL;DR: This paper considers the problem of endowing hybrid KL-ONE-style logics with capabilities for default inheritance reasoning, a kind of default reasoning that is specifically oriented to reasoning on taxonomies.
Abstract: Hybrid KL-ONE-style logics are knowledge representation formalisms of considerable applicative interest, as they are specifically oriented to the vast class of application domains that are describable by means of taxonomic organizations of complex objects. In this paper we consider the problem of endowing such logics with capabilities for default inheritance reasoning, a kind of default reasoning that is specifically oriented to reasoning on taxonomies. The formalism that results from our work has a reasonable and simple behaviour when dealing with the interplay of defeasible and strict inheritance of properties of complex objects.

Proceedings ArticleDOI
21 May 1993
TL;DR: The authors generalize and formalize the definition of a ViewPoint to facilitate its manipulation for composite system development. The communication model presented straddles both the method construction stage, during which inter-ViewPoint relationships are expressed, and the method application stage, during which these relationships are enacted.
Abstract: The authors generalize and formalize the definition of a ViewPoint to facilitate its manipulation for composite system development. A ViewPoint is defined to be a loosely-coupled, locally managed object encapsulating representation knowledge, development process knowledge and partial specification knowledge about a system and its domain. In attempting to integrate multiple requirements specification ViewPoints, overlaps must be identified and expressed, complementary participants made to interact and cooperate, and contradictions resolved. The notion of inter-ViewPoint communication is addressed as a vehicle for ViewPoint integration. The communication model presented straddles both the method construction stage during which inter-ViewPoint relationships are expressed, and the method application stage during which these relationships are enacted.

Journal ArticleDOI
TL;DR: The role of knowledge engineering is not merely “capturing knowledge” in a program delivered by technicians to users; rather, it seeks to develop tools that help people in a community in their everyday practice of creating new understandings and capabilities, new forms of knowledge.
Abstract: Knowledge acquisition is a process of developing qualitative models of systems in the world—physical, social, technological—often for the first time, not extracting facts and rules that are already written down and filed away in an expert's mind. Models of reasoning describe how people behave—how they interactively gather evidence by looking and asking questions, represent a situation by saying and writing things, and plan to act in some environment. But such models are inherently brittle mechanisms: Human reinterpretation of rules and procedures is metaphorical, based on pre-linguistic perceptual categorization and non-deliberated sensory-motor coordination. This view of people relative to computer models yields an alternative view of what tools can be and the tool design process. Knowledge engineers are called to participate with social scientists and workers in the co-design of the workplace and tools for enhancing worker creativity and response to unanticipated situations. The emphasis is on augmenting human capabilities as they interact with each other to construct new conceptualizations—facilitating conversations—not just automating routine behavior. Software development in the context of use maintains connection to non-technical, social factors such as ownership of ideas and authority to participate. The role of knowledge engineering is not merely "capturing knowledge" in a program delivered by technicians to users. Rather, we seek to develop tools that help people in a community, in their everyday practice of creating new understandings and capabilities, new forms of knowledge.

Book ChapterDOI
26 Oct 1993
TL;DR: The consequences of introducing a constructor for building a concept from a set of enumerated individuals in the concept description language are investigated and some complexity results are provided on it.
Abstract: One of the main characteristics of knowledge representation systems based on the description of concepts is the clear distinction between terminological and assertional knowledge. Although this characteristic leads to several computational and representational advantages, it usually limits the expressive power of the system. For this reason, some attempts have been made to allow a limited form of amalgamation between the two components and a more complex interaction between them. In particular, one of these attempts is based on letting individuals be referenced in concept expressions. This is generally achieved by admitting a constructor for building a concept from a set of enumerated individuals. In this paper we investigate the consequences of introducing this type of constructor into the concept description language and provide some complexity results for it.
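The constructor the abstract describes is commonly called "one-of": a concept whose extension is exactly a set of enumerated individuals. A minimal extensional semantics can be sketched by treating concepts as functions from an interpretation (domain plus atomic-concept extensions) to sets of individuals; the encoding below is an illustrative toy, not the paper's formal language.

```python
# Toy extensional semantics for a tiny concept language with "one-of".
# Concepts are functions (domain, ext) -> set of domain individuals.
def one_of(*individuals):
    """The one-of constructor: the concept denoting exactly the
    enumerated individuals (intersected with the domain)."""
    return lambda domain, ext: set(individuals) & domain

def atomic(name):
    """An atomic concept, interpreted via the extension table."""
    return lambda domain, ext: ext.get(name, set()) & domain

def conj(c1, c2):
    """Concept conjunction: intersection of the two extensions."""
    return lambda domain, ext: c1(domain, ext) & c2(domain, ext)
```

This makes the amalgamation visible: `conj(atomic("Workday"), one_of("mon", "sun"))` mixes a terminological concept with named individuals inside one concept expression, which is precisely the interaction whose complexity the paper studies.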

Journal Article
TL;DR: The focus of the effort is the development of SPECIALIST, an experimental natural language processing system for the biomedical domain that includes a broad coverage parser supported by a large lexicon, modules that provide access to the extensive Unified Medical Language System Knowledge Sources, and a retrieval module that permits experiments in information retrieval.
Abstract: This paper describes efforts to provide access to the free text in biomedical databases. The focus of the effort is the development of SPECIALIST, an experimental natural language processing system for the biomedical domain. The system includes a broad coverage parser supported by a large lexicon, modules that provide access to the extensive Unified Medical Language System (UMLS) Knowledge Sources, and a retrieval module that permits experiments in information retrieval. The UMLS Metathesaurus and Semantic Network provide a rich source of biomedical concepts and their interrelationships. Investigations have been conducted to determine the type of information required to effect a map between the language of queries and the language of relevant documents. Mappings are never straightforward and often involve multiple inferences.
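One small piece of the query-to-document mapping the abstract mentions is normalizing a free-text term and looking it up against a table of concept synonyms. The miniature below is a stand-in for that step only; the synonym table is invented, and the real UMLS Metathesaurus lookup is far richer (lexical variants, semantic types, multiple inference steps).

```python
# Hypothetical miniature of one mapping step: normalize a free-text
# term and resolve it to a preferred concept name.  The SYNONYMS table
# is invented for illustration and is not UMLS data.
SYNONYMS = {
    "heart attack": "Myocardial Infarction",
    "myocardial infarction": "Myocardial Infarction",
    "high blood pressure": "Hypertension",
}

def map_term(term: str):
    """Lowercase and collapse whitespace, then look the term up.
    Returns the preferred concept name, or None when no mapping exists."""
    key = " ".join(term.lower().split())
    return SYNONYMS.get(key)
```

Even this toy shows why, as the abstract notes, mappings are never straightforward: a query term and a document term only meet through normalization and a shared concept name.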

Journal ArticleDOI
Ronald R. Yager1
01 Jul 1993
TL;DR: A new structure for the representation of rules in fuzzy systems is introduced that is called the hierarchical prioritized structure (HPS), which in addition to providing a useful structure for representing knowledge allows for a natural framework for learning rules.
Abstract: The fuzzy logic controller is examined and the basic assumptions inherent in the Mamdani model are described. Distinctions are made between rule firing based upon possibility and certainty qualification. Rules are viewed as a partitioning of the input space. The author discusses the use of certainty qualification in determining the firing level of a rule. Different representations of the rule consequent are discussed. A new structure for the representation of rules in fuzzy systems is introduced, called the hierarchical prioritized structure (HPS). In addition to providing a useful structure for representing knowledge, the HPS allows for a natural framework for learning rules.
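The flavor of a prioritized rule hierarchy can be conveyed with a rough sketch: rules at a higher-priority level suppress lower levels to the degree that they fire. The triangular membership function, the suppression formula, and the weighted-average defuzzification below are illustrative choices, not Yager's actual HPS definitions.

```python
# Hedged sketch in the spirit of a hierarchical prioritized structure:
# each level is a list of (antecedent_membership, crisp_consequent)
# rules, highest priority first.  The aggregation scheme is an
# assumption made for illustration, not the paper's formulation.
def tri(a, b, c):
    """Triangular membership function peaking at b on [a, c]."""
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 1.0 if x == b else 0.0
    return mu

def hps_output(x, levels):
    """Fire each level in priority order; lower levels contribute only
    to the extent higher levels did not fire.  Returns a crisp output
    via a weighted average (0.0 if nothing fires)."""
    num = den = 0.0
    allowance = 1.0                      # how much say lower levels keep
    for rules in levels:
        fired = [(mu(x), y) for mu, y in rules]
        strength = max((f for f, _ in fired), default=0.0)
        for f, y in fired:
            w = allowance * f
            num += w * y
            den += w
        allowance *= (1.0 - strength)    # suppress lower priorities
    return num / den if den else 0.0
```

With a high-priority rule covering part of the input space and a low-priority default rule covering all of it, the default only takes effect where the high-priority rule fails to fire, which is the behaviour the hierarchical structure is meant to capture.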

Proceedings ArticleDOI
01 May 1993
TL;DR: A tool is built that serves as a living design memory for a large software development organization that delivers knowledge to developers effectively and is embedded in organizational practice to ensure that the knowledge it contains evolves as necessary.
Abstract: We identify an important type of software design knowledge that we call community specific folklore and show problems with current approaches to managing it. We built a tool that serves as a living design memory for a large software development organization. The tool delivers knowledge to developers effectively and is embedded in organizational practice to ensure that the knowledge it contains evolves as necessary. This work illustrates important lessons in building knowledge management systems, integrating novel technology into organizational practice, and managing research-development partnerships.