
Showing papers on "Knowledge representation and reasoning" published in 1991


Book
31 Oct 1991
TL;DR: A monograph on rough set theory, covering theoretical foundations (knowledge, rough sets, knowledge reduction, dependencies, knowledge representation, decision tables, and decision logic) and applications (decision making, data analysis, dissimilarity analysis, switching circuits, and machine learning).
Abstract: I. Theoretical Foundations. 1. Knowledge (Introduction; Knowledge and Classification; Knowledge Base; Equivalence, Generalization and Specialization of Knowledge). 2. Imprecise Categories, Approximations and Rough Sets (Introduction; Rough Sets; Approximations of Set; Properties of Approximations; Approximations and Membership Relation; Numerical Characterization of Imprecision; Topological Characterization of Imprecision; Approximation of Classifications; Rough Equality of Sets; Rough Inclusion of Sets). 3. Reduction of Knowledge (Introduction; Reduct and Core of Knowledge; Relative Reduct and Relative Core of Knowledge; Reduction of Categories; Relative Reduct and Core of Categories). 4. Dependencies in Knowledge Base (Introduction; Dependency of Knowledge; Partial Dependency of Knowledge). 5. Knowledge Representation (Introduction; Examples; Formal Definition; Significance of Attributes; Discernibility Matrix). 6. Decision Tables (Introduction; Formal Definition and Some Properties; Simplification of Decision Tables). 7. Reasoning about Knowledge (Introduction; Language of Decision Logic; Semantics of Decision Logic Language; Deduction in Decision Logic; Normal Forms; Decision Rules and Decision Algorithms; Truth and Indiscernibility; Dependency of Attributes; Reduction of Consistent Algorithms; Reduction of Inconsistent Algorithms; Reduction of Decision Rules; Minimization of Decision Algorithms). II. Applications. 8. Decision Making (Introduction; Optician's Decisions Table; Simplification of Decision Table; Decision Algorithm; The Case of Incomplete Information). 9. Data Analysis (Introduction; Decision Table as Protocol of Observations; Derivation of Control Algorithms from Observation; Another Approach; The Case of Inconsistent Data). 10. Dissimilarity Analysis (Introduction; The Middle East Situation; Beauty Contest; Pattern Recognition; Buying a Car). 11. Switching Circuits (Introduction; Minimization of Partially Defined Switching Functions; Multiple-Output Switching Functions). 12. Machine Learning (Introduction; Learning From Examples; The Case of an Imperfect Teacher; Inductive Learning). Each chapter closes with a Summary, Exercises, and References.
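As an illustration of the core definitions in Part I, here is a minimal Python sketch of the lower and upper approximations of a set; the attribute table and the target set are hypothetical examples, not data from the book.

```python
def partition(table, attrs):
    """Equivalence classes of the indiscernibility relation IND(B):
    objects are grouped by their values on the attributes in B."""
    classes = {}
    for obj, values in table.items():
        key = tuple(values[a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def lower_approx(classes, target):
    """Union of equivalence classes fully contained in the target set."""
    result = set()
    for c in classes:
        if c <= target:
            result |= c
    return result

def upper_approx(classes, target):
    """Union of equivalence classes that intersect the target set."""
    result = set()
    for c in classes:
        if c & target:
            result |= c
    return result

# Hypothetical decision table: object -> attribute values.
table = {
    "x1": {"color": "red",  "size": "big"},
    "x2": {"color": "red",  "size": "big"},
    "x3": {"color": "blue", "size": "big"},
    "x4": {"color": "blue", "size": "small"},
}
X = {"x1", "x3"}   # a concept not definable by these attributes alone
classes = partition(table, ["color", "size"])
low = lower_approx(classes, X)   # {'x3'}
up = upper_approx(classes, X)    # {'x1', 'x2', 'x3'}
print(low, up, len(low) / len(up))
```

A set whose two approximations differ is rough; the ratio of their sizes (here 1/3) is the book's numerical measure of imprecision.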

7,826 citations


Journal ArticleDOI
TL;DR: It is shown that coherence and subsumption of such descriptions are PSPACE-complete problems that can be decided in linear space.

1,105 citations


Journal ArticleDOI
01 Jun 1991
TL;DR: Intelligence is defined as that which produces successful behavior and is assumed to result from natural selection; a hierarchical system architecture is proposed that integrates knowledge from research in both natural and artificial systems.
Abstract: Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: (1) control bandwidth decreases about an order of magnitude at each higher level, (2) perceptual resolution of spatial and temporal patterns contracts about an order of magnitude at each higher level, (3) goals expand in scope and planning horizons expand in space and time about an order of magnitude at each higher level, and (4) models of the world and memories of events expand their range in space and time by about an order of magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition, planning, and execution), world modeling, sensory processing, and value judgment. Sensory feedback control loops are closed at every level.
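The scaling laws in the abstract are easy to make concrete; the sketch below tabulates them for a made-up six-level hierarchy (the level names and the base bandwidth and horizon are assumptions for illustration, not values from the paper).

```python
# Tabulate the order-of-magnitude scaling claimed for the hierarchy.
# Level names, base bandwidth, and base horizon are hypothetical.
levels = ["servo", "primitive", "elemental move", "task", "group", "facility"]
base_bandwidth_hz = 1000.0   # control bandwidth at the lowest level
base_horizon_s = 0.05        # planning horizon at the lowest level

for i, name in enumerate(levels):
    bandwidth_hz = base_bandwidth_hz / 10 ** i   # ~10x less per level up
    horizon_s = base_horizon_s * 10 ** i         # ~10x more per level up
    print(f"{name:15s} bandwidth ~{bandwidth_hz:10.3f} Hz   "
          f"horizon ~{horizon_s:10.1f} s")
```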

644 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a neural network to model the behavior of concrete in the state of plane stress under monotonic biaxial and cyclic loading.
Abstract: To date, material modeling has involved the development of mathematical models of material behavior derived from human observation of, and reasoning with, experimental data. An alternative, discussed in this paper, is to use a computation and knowledge representation paradigm, called neural networks, developed by researchers in connectionism (a subfield of artificial intelligence) to model material behavior. The main benefits in using a neural‐network approach are that all behavior can be represented within a unified environment of a neural network and that the network is built directly from experimental data using the self‐organizing capabilities of the neural network, i.e., the network is presented with the experimental data and “learns” the relationships between stresses and strains. Such a modeling strategy has important implications for modeling the behavior of modern, complex materials, such as composites. In this paper, the behaviors of concrete in the state of plane stress under monotonic biaxial ...
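A minimal sketch of the modeling strategy, assuming a single strain input and stress output: a small feedforward network is fit to synthetic stress-strain pairs. The data and architecture here are illustrative stand-ins, not the paper's biaxial concrete experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
strain = np.linspace(0, 0.003, 50).reshape(-1, 1)
stress = 30e3 * strain * (1 - strain / 0.006)   # toy concrete-like curve

# Scale inputs and outputs to [0, 1] for stable training.
x = strain / strain.max()
y = stress / stress.max()

W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)   # one hidden layer of 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output: predicted stress
    err = pred - y
    # Backpropagation of the squared error.
    dW2 = h.T @ err / len(x); db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print(float(np.mean(err ** 2)))       # small MSE: the net "learned" the curve
```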

535 citations


Book ChapterDOI
01 Jan 1991
TL;DR: CLASSIC, as described in this paper, is a recently developed knowledge representation system that concentrates on the definition of structured concepts, their organization into taxonomies, the creation and manipulation of individual instances of such concepts, and the key inferences of subsumption and classification.
Abstract: CLASSIC is a recently developed knowledge representation system that follows the paradigm originally set out in the KL-ONE system: It concentrates on the definition of structured concepts, their organization into taxonomies, the creation and manipulation of individual instances of such concepts, and the key inferences of subsumption and classification. Rather than simply presenting a description of CLASSIC, we complement a brief system overview with a discussion of how to live within the confines of a limited object-oriented deductive system. By analyzing the representational strengths and weaknesses of CLASSIC, we consider the circumstances under which it is most appropriate to use (or not use) it. We elaborate a knowledge engineering methodology for building KL-ONE-style knowledge bases, with emphasis on the modeling choices that arise in the process of describing a domain. We also address some of the key difficult issues encountered by new users, including primitive vs. defined concepts, and differences between roles and concepts, as well as representational “tricks-of-the-trade,” which we believe to be generally useful. Much of the discussion should be relevant to many of the current systems based on KL-ONE.

493 citations


Journal ArticleDOI
TL;DR: This paper has built a system called LaSSIE, which uses knowledge representation and reasoning technology to directly address each of these three issues of invisibility and thereby help with the invisibility problem.
Abstract: The authors discuss the important problem of invisibility that is inherent in the task of developing large software systems. It is pointed out that there are no direct solutions to this problem; however, there are several categories of systems (relational code analyzers, reuse librarians, and project management databases) that can be seen as addressing aspects of the invisibility problem. It is argued that these systems do not adequately deal with certain important aspects of the problem of invisibility: semantic proliferation, multiple views, and the need for intelligent indexing. A system called LaSSIE, which uses knowledge representation and reasoning technology to address each of these three issues directly and thereby help with the invisibility problem, has been built. The authors conclude with an evaluation of the system and a discussion of open problems and ongoing work.

378 citations


Proceedings Article
01 Jan 1991
TL;DR: In this article, a complexity analysis of concept satisfiability and subsumption for a wide class of concept languages is presented, together with algorithms for these inferences that comply with the worst-case complexity of the reasoning task they perform.
Abstract: A basic feature of Terminological Knowledge Representation Systems is to represent knowledge by means of taxonomies, here called terminologies, and to provide a specialized reasoning engine to do inferences on these structures. The taxonomy is built through a representation language called a concept language (or description logic), which is given a well-defined set-theoretic semantics. The efficiency of reasoning has often been advocated as a primary motivation for the use of such systems. The main contributions of the paper are: (1) a complexity analysis of concept satisfiability and subsumption for a wide class of concept languages; (2) algorithms for these inferences that comply with the worst-case complexity of the reasoning task they perform.
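The two inferences analyzed here are interreducible in languages with full negation: C is subsumed by D exactly when the concept (C and not D) is unsatisfiable. The sketch below checks this on a purely propositional fragment by brute force; a real concept language also needs tableau rules for roles, and the concept names are hypothetical.

```python
from itertools import product

def satisfiable(formula, atoms):
    """formula: a function from a truth assignment (dict) to bool."""
    return any(formula(dict(zip(atoms, vals)))
               for vals in product([False, True], repeat=len(atoms)))

atoms = ["Parent", "Male"]
C = lambda v: v["Parent"] and v["Male"]   # Father
D = lambda v: v["Parent"]                 # Parent

# Subsumption reduces to unsatisfiability of the difference concept.
subsumed = not satisfiable(lambda v: C(v) and not D(v), atoms)
print(subsumed)   # True: every Father is a Parent
```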

366 citations


Journal ArticleDOI
TL;DR: The history of case-based reasoning is reviewed, including research conducted at the Yale AI Project and elsewhere; computer implementations of case-based reasoning address many of the technological shortcomings of standard rule-based expert systems.
Abstract: Expertise comprises experience. In solving a new problem, we rely on past episodes. We need to remember what plans succeed and what plans fail. We need to know how to modify an old plan to fit a new situation. Case-based reasoning is a general paradigm for reasoning from experience. It assumes a memory model for representing, indexing, and organizing past cases and a process model for retrieving and modifying old cases and assimilating new ones. Case-based reasoning provides a scientific cognitive model. The research issues for case-based reasoning include the representation of episodic knowledge, memory organization, indexing, case modification, and learning. In addition, computer implementations of case-based reasoning address many of the technological shortcomings of standard rule-based expert systems. These engineering concerns include knowledge acquisition and robustness. In this article, I review the history of case-based reasoning, including research conducted at the Yale AI Project and elsewhere.
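A minimal sketch of the retrieve-adapt-assimilate cycle the article describes; the case base, similarity measure, and adaptation rule are all hypothetical stand-ins.

```python
# Each case pairs an indexed problem description with a stored plan.
case_base = [
    {"features": {"cuisine": 1, "guests": 4, "budget": 2}, "plan": "pasta dinner"},
    {"features": {"cuisine": 2, "guests": 8, "budget": 3}, "plan": "buffet"},
    {"features": {"cuisine": 1, "guests": 2, "budget": 1}, "plan": "omelette"},
]

def similarity(a, b):
    """Toy inverse-distance similarity over shared numeric features."""
    return -sum((a[k] - b[k]) ** 2 for k in a)

def retrieve(problem):
    """Recall the most similar past episode."""
    return max(case_base, key=lambda c: similarity(problem, c["features"]))

def adapt(case, problem):
    """Toy modification step: fit the old plan to the new situation."""
    return f"{case['plan']} adapted for {problem['guests']} guests"

new_problem = {"cuisine": 1, "guests": 6, "budget": 2}
best = retrieve(new_problem)
solution = adapt(best, new_problem)
print(solution)

# Assimilate the newly solved problem as a case (learning from episodes).
case_base.append({"features": new_problem, "plan": solution})
```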

335 citations


Book ChapterDOI
01 Jan 1991
TL;DR: Some cognitive aspects of perception and knowledge representation are explored and it is suggested that ‘spatial inference engines’ provide the basis for rather general cognitive capabilities inside and outside the spatial domain.
Abstract: Physical space has unique properties which form the basis of fundamental capabilities of cognitive systems. This paper explores some cognitive aspects of perception and knowledge representation and explains why spatial knowledge is of particular interest for cognitive science. It is suggested that ‘spatial inference engines’ provide the basis for rather general cognitive capabilities inside and outside the spatial domain. The role of abstraction in spatial reasoning and the advantages of qualitative spatial knowledge over quantitative knowledge are discussed. The usefulness of spatial representations with a low degree of abstraction is shown. An example from vision (the aquarium domain) is used to illustrate in which ways knowledge about space may be uncertain or incomplete. Parallels are drawn between the spatial and the temporal domains. A concrete approach for the representation of qualitative spatial knowledge on the basis of ‘conceptual neighborhood’ is suggested and some potential application areas are mentioned.
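A tiny sketch of the 'conceptual neighborhood' idea for the simplest qualitative domain, point relations on a line: two relations are neighbors if a small continuous change can transform one directly into the other. The encoding is an assumption for illustration.

```python
# '<' can become '=' under continuous motion, and '=' can become '<' or
# '>', but '<' cannot jump to '>' without passing through '='.
neighbors = {"<": {"="}, "=": {"<", ">"}, ">": {"="}}

def plausible_next(relation):
    """Relations reachable by a small continuous change (incl. staying)."""
    return {relation} | neighbors[relation]

print(plausible_next("<"))   # {'<', '='}
```

This is why coarse qualitative knowledge degrades gracefully: uncertainty spreads only to neighboring relations, never to arbitrary ones.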

303 citations


Book
01 Jan 1991
TL;DR: A rule-based treatment of map generalization, organized around rule base organization, data modelling, rule formulation, and computational and representational issues.
Abstract: Part 1, Rule base organization: design considerations for a rule based system; conceptual frameworks for geographical knowledge; knowledge engineering for generalization. Part 2, Data modelling issues: suitable representation schema for geographic information; knowledge classification and organization; object modelling and phenomenon-based generalization. Part 3, Formulation of rules: constraints on rule formation; rule selection for small scale map generalizations; a rule for describing feature geometry; amplified intelligence and rule based systems. Part 4, Computational and representational issues: role of interpolation in feature displacement; parallel software and computation; integration and evaluation of map generalization.

289 citations


Journal ArticleDOI
TL;DR: An automated tool called the Requirements Apprentice (RA) which assists a human analyst in the creation and modification of software requirements is presented, which develops a coherent internal representation of a requirement from an initial set of disorganized imprecise statements.
Abstract: An automated tool called the Requirements Apprentice (RA), which assists a human analyst in the creation and modification of software requirements, is presented. Unlike most other requirements analysis tools, which start from a formal description language, the focus of the RA is on the transition between informal and formal specifications. The RA supports the earliest phases of creating a requirement, in which ambiguity, contradiction, and incompleteness are inevitable. From an artificial intelligence perspective, the central problem the RA faces is one of knowledge acquisition. The RA develops a coherent internal representation of a requirement from an initial set of disorganized, imprecise statements. To do so, the RA relies on a variety of techniques, including dependency-directed reasoning, hybrid knowledge representations, and the reuse of common forms (cliches). An annotated transcript showing an interaction with a working version of the RA is given.


Journal ArticleDOI
TL;DR: This book explores logical formalisms for representing and reasoning with probabilistic information that will be of particular value to researchers in nonmonotonic reasoning, applications of probabilities, and knowledge representation.
Abstract: Probabilistic information has many uses in an intelligent system. This book explores logical formalisms for representing and reasoning with probabilistic information that will be of particular value to researchers in nonmonotonic reasoning, applications of probabilities, and knowledge representation. It demonstrates that probabilities are not limited to particular applications, like expert systems; they have an important role to play in the formal design and specification of intelligent systems in general. Fahiem Bacchus focuses on two distinct notions of probabilities: one propositional, involving degrees of belief, the other proportional, involving statistics. He constructs distinct logics with different semantics for each type of probability that are a significant advance in the formal tools available for representing and reasoning with probabilities. These logics can represent an extensive variety of qualitative assertions, eliminating requirements for exact point-valued probabilities, and they can represent first-order logical information. The logics also have proof theories which give a formal specification for a class of reasoning that subsumes and integrates most of the probabilistic reasoning schemes so far developed in AI. Using the new logical tools to connect statistical with propositional probability, Bacchus also proposes a system of direct inference in which degrees of belief can be inferred from statistical knowledge and demonstrates how this mechanism can be applied to yield a powerful and intuitively satisfying system of defeasible or default reasoning. Contents: Introduction. Propositional Probabilities. Statistical Probabilities. Combining Statistical and Propositional Probabilities. Default Inferences from Statistical Knowledge.
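A minimal sketch of the direct-inference step described above, using the classic reference-class heuristic: adopt the statistics of the most specific reference class you know the individual belongs to as your degree of belief. The classes and proportions are invented examples.

```python
# Statistical knowledge: proportion of fliers within each reference class.
stats = {
    ("bird",): 0.90,                 # among birds in general
    ("bird", "penguin"): 0.01,       # among penguin birds
}

def degree_of_belief(known_classes):
    """Pick the most specific reference class with known statistics."""
    candidates = [c for c in stats if set(c) <= set(known_classes)]
    best = max(candidates, key=len)
    return stats[best]

print(degree_of_belief(["bird"]))             # 0.9
print(degree_of_belief(["bird", "penguin"]))  # 0.01 -- defeasible: more
                                              # specific statistics win
```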

Patent
Richard D. Skeirik1
25 Jul 1991
TL;DR: In this paper, a neural network/expert system process control system and method combines the decision-making capabilities of expert systems with the predictive capabilities of neural networks for improved process control.
Abstract: A neural network/expert system process control system and method combines the decision-making capabilities of expert systems with the predictive capabilities of neural networks for improved process control. Neural networks provide predictions of measurements which are difficult to make, or supervisory or regulatory control changes which are difficult to implement using classical control techniques. Expert systems make decisions automatically based on knowledge which is well-known and can be expressed in rules or other knowledge representation forms. Sensor and laboratory data is effectively used. In one approach, the output data from the neural network can be used by the controller in controlling the process, and the expert system can make a decision using sensor or lab data to control the controller(s). In another approach, the output data of the neural network can be used by the expert system in making its decision, and control of the process carried out using lab or sensor data. In another approach, the output data can be used both to control the process and to make decisions.
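A minimal sketch of one of the approaches claimed, assuming a stub network and a rule layer: the network predicts a hard-to-measure quantity, and the expert-system rules decide the control move from that prediction. All names and thresholds are hypothetical.

```python
def nn_predict_quality(sensor_values):
    """Stand-in for a trained network predicting a lab measurement."""
    return 0.4 * sensor_values["temp"] + 0.6 * sensor_values["flow"]

def expert_decide(predicted_quality, setpoint=50.0):
    """Rule-style decision using the network's output."""
    if predicted_quality < setpoint - 5:
        return "increase feed rate"
    if predicted_quality > setpoint + 5:
        return "decrease feed rate"
    return "hold"

sensors = {"temp": 60.0, "flow": 40.0}
print(expert_decide(nn_predict_quality(sensors)))  # 'hold' (prediction 48.0)
```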


Journal ArticleDOI
TL;DR: A symbol level account of some of the representation and reasoning structures within the LOOM knowledge representation system, whose classifier is unique in that it constructs a separate taxonomy for each of seven kinds of non-composite descriptions and uses a marker passing algorithm to replace the quadratic time subsumption test found in most classifiers with a linear time test.
Abstract: This paper presents a symbol level account of some of the representation and reasoning structures within the LOOM knowledge representation system. Reasoning in LOOM centers around a classifier whose primary function is to construct a taxonomy of all descriptions that have been entered into the system. The LOOM classifier is unique in that it constructs a separate taxonomy for each of seven kinds of non-composite descriptions, and uses a marker passing algorithm to replace the quadratic time subsumption test found in most classifiers with a linear time test. We briefly illustrate how the selection of data structures within LOOM impacts the completeness of the classification algorithm, and we describe the LOOM option that allows concepts to be reasoned with in either a forward-chaining or a backward-chaining mode.
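This is not LOOM's code, but a toy sketch of why marker passing can make subsumption cheap: if every concept is normalized to the set of primitive markers it carries, a subsumption test reduces to a set inclusion check over propagated markers.

```python
# Hypothetical normalized concepts: each maps to its primitive markers.
primitives = {
    "person": {"person"},
    "parent": {"person", "parent"},
    "father": {"person", "parent", "male"},
}

def subsumes(general, specific):
    """general subsumes specific iff every marker of general is carried
    by specific (inclusion over propagated primitive markers)."""
    return primitives[general] <= primitives[specific]

print(subsumes("parent", "father"))  # True
print(subsumes("father", "parent"))  # False
```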

Journal ArticleDOI
TL;DR: It is argued that logical soundness, completeness, and worst-case complexity are inadequate measures for evaluating the utility of representation services, and that this evaluation should employ the broader notions of utility and rationality found in decision theory.

Book ChapterDOI
01 Jan 1991
TL;DR: This chapter attempts to characterize the technology that has evolved within the KL-ONE family of knowledge representation systems, which are logic based, support a specialized term-forming language, and implement a specialized reasoner called a term classifier.
Abstract: This chapter attempts to characterize the technology that has evolved within the KL-ONE family of knowledge representation systems. Key features of these systems are that they are logic based, they support a specialized term-forming language, and they implement a specialized reasoner called a term classifier. We begin by introducing some of the concepts commonly used in the KL-ONE literature, and we provide a brief sketch of the history of this family of systems. Next, we summarize the current state of this technology, and identify some of its major contributions. We close with a look at some issues that are still regarded as controversial within the research community.

Book
01 Mar 1991
TL;DR: Covers representation and models, symbolic reasoning, uncertainty and belief revision, and human-machine interaction.
Abstract: Representation and models: knowledge representation, general aspects, logic and objects, situational versus analytical knowledge. Symbolic reasoning: search, production systems, problem solving. Uncertainty and belief revision: representation of uncertainty, belief revision. Human-machine interaction: sharing intelligence, user interfaces, advanced interaction media, knowledge acquisition.

Journal ArticleDOI
TL;DR: The explainable expert systems framework (EES), in which the focus is on capturing those design aspects that are important for producing good explanations, including justifications of the system's actions, explications of general problem-solving strategies, and descriptions of the systems' terminology, is discussed.
Abstract: The explainable expert systems framework (EES), in which the focus is on capturing those design aspects that are important for producing good explanations, including justifications of the system's actions, explications of general problem-solving strategies, and descriptions of the system's terminology, is discussed. EES was developed as part of the Strategic Computing Initiative of the US Dept. of Defense's Defense Advanced Research Projects Agency (DARPA). Both the general principles from which the system was derived and how the system was derived from those principles can be represented in EES. The Program Enhancement Advisor (PEA), the main prototype on which the explanation work has been developed and tested, is presented. PEA is an advice system that helps users improve their Common Lisp programs by recommending transformations that enhance the user's code. How EES produces better explanations is shown.

Book
John Hughes1
01 Jun 1991
TL;DR: Covers semantic data modelling, principles of object-oriented systems, object-oriented data modelling, classes and inheritance, persistence, concurrency, implementation issues, and object-oriented knowledge representation.
Abstract: Semantic data modelling; principles of object-oriented systems; object-oriented data modelling; classes and inheritance (advanced features); persistence; concurrency; implementation issues; object-oriented knowledge representation.

Journal ArticleDOI
TL;DR: AI and decision theory both emerged from research on systematic methods for problem solving and decision making that first blossomed in the 1940s and share a common progenitor, John von Neumann.
Abstract: Decision analysis and expert systems are technologies intended to support human reasoning and decision making by formalizing expert knowledge so that it is amenable to mechanized reasoning methods. Despite some common goals, these two paradigms have evolved divergently, with fundamental differences in principle and practice. Recent recognition of the deficiencies of traditional AI techniques for treating uncertainty, coupled with the development of belief nets and influence diagrams, is stimulating renewed enthusiasm among AI researchers in probabilistic reasoning and decision analysis. We present the key ideas of decision analysis and review recent research and applications that aim toward a marriage of these two paradigms. This work combines decision-analytic methods for structuring and encoding uncertain knowledge and preferences with computational techniques from AI for knowledge representation, inference, and explanation. We end by outlining remaining research issues to fully develop the potential of this enterprise.
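At the core of the decision-analytic side is the maximum expected utility principle; here is a minimal sketch with invented probabilities and utilities.

```python
p_disease = 0.1   # hypothetical probability of the uncertain state

utilities = {     # u(action, state), made-up values
    ("treat",    "disease"): 80, ("treat",    "healthy"): 90,
    ("no_treat", "disease"): 0,  ("no_treat", "healthy"): 100,
}

def expected_utility(action):
    """Average the utility of an action over the uncertain states."""
    return (p_disease * utilities[(action, "disease")]
            + (1 - p_disease) * utilities[(action, "healthy")])

best = max(["treat", "no_treat"], key=expected_utility)
print(best, expected_utility(best))   # no_treat 90.0 (treat scores 89.0)
```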


Proceedings ArticleDOI
22 Apr 1991
TL;DR: This paper presents a subsumption algorithm for this language, which is sound and complete, and discusses why the subsumption problem in this language is rather hard from a computational point of view, which leads to an idea of how to recognize concepts which cause tractable problems.
Abstract: We investigate the subsumption problem in logic-based knowledge representation languages of the KL-ONE family. The language presented in this paper provides the constructs for conjunction, disjunction, and negation of concepts, as well as qualifying number restrictions. The latter generalize the well-known role quantifications (such as value restrictions) and ordinary number restrictions, which are present in almost all KL-ONE based systems. Until now, little attempt has been made to integrate qualifying number restrictions into concept languages. It turns out that all known subsumption algorithms which try to handle these constructs are incomplete, and thus detect only a few of the subsumption relations between concepts. We present a subsumption algorithm for our language which is sound and complete. Subsequently we discuss why the subsumption problem in this language is rather hard from a computational point of view. This leads to an idea of how to recognize concepts which cause tractable problems.
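What a qualifying number restriction means is easy to state model-theoretically: an individual satisfies (>= n R.C) if it has at least n R-fillers that belong to C. Below is a toy checker over an explicit interpretation; the domain, role, and concept extensions are made-up examples.

```python
# A small explicit interpretation.
has_child = {("mary", "tom"), ("mary", "ann"), ("mary", "bob")}
female = {"ann", "mary"}

def satisfies_at_least(individual, n, role, concept):
    """Does the individual satisfy (>= n role.concept)?"""
    fillers = {y for (x, y) in role if x == individual and y in concept}
    return len(fillers) >= n

# mary has three children but only one female child.
print(satisfies_at_least("mary", 2, has_child, female))  # False
print(satisfies_at_least("mary", 1, has_child, female))  # True
```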

Book ChapterDOI
01 Jan 1991
TL;DR: None of the different styles of semantics seems to be completely satisfying for all purposes in terminological knowledge representation formalisms.
Abstract: Terminological knowledge representation formalisms are intended to capture the analytic relationships between terms of a vocabulary intended to describe a domain. A term whose definition refers, either directly or indirectly, to the term itself presents a problem for most terminological representation systems because it is not obvious whether such a term is meaningful, nor how it could be handled by a knowledge representation system in a satisfying manner. After some examples of intuitively sound terminological cycles are given, different formal semantics are investigated and evaluated with respect to the examples. As it turns out, none of the different styles of semantics seems to be completely satisfying for all purposes. Finally, consequences in terms of computational complexity and decidability are discussed.

Journal ArticleDOI
TL;DR: Progress toward a device representation that organizes knowledge based on functionality is described, and the functional representation described provides a package that shows the relationship among structure, function, and behavior.
Abstract: Progress toward a device representation that organizes knowledge based on functionality is described. Device representation involves theories about languages for representing structure, commitments for representing behavior, and kinds of causation needed to represent behaviors. The current focus is on the first two issues. The functional representation described provides a package that shows the relationship among structure, function, and behavior. Knowledge of this relationship provides basic, task-independent, intrinsic capabilities: simulation, i.e., given changes in a device's structure, what can be determined about changes in functionality; identification of structural cause, i.e., given changes in function (malfunction or reduced effects), what changes in structure could account for them; and identification of functional components, i.e., given a specific component, what functional purpose it provides. The structure of this functional representation, organized around functional packages, provides the means by which these capabilities can be accomplished.

Proceedings Article
Bart Selman1, Henry Kautz1
14 Jul 1991
TL;DR: This work introduces a knowledge compilation method that allows the user to enter statements in a general, unrestricted representation language, which the system compiles into a restricted language that allows for efficient inference.
Abstract: We present a new approach to developing fast and efficient knowledge representation systems. Previous approaches to the problem of tractable inference have used restricted languages or incomplete inference mechanisms -- problems include lack of expressive power, lack of inferential power, and/or lack of a formal characterization of what can and cannot be inferred. To overcome these disadvantages, we introduce a knowledge compilation method. We allow the user to enter statements in a general, unrestricted representation language, which the system compiles into a restricted language that allows for efficient inference. Since an exact translation into a tractable form is often impossible, the system searches for the best approximation of the original information. We will describe how the approximation can be used to speed up inference without giving up correctness or completeness. We illustrate our method by studying the approximation of logical theories by Horn theories. Following the formal definition of Horn approximation, we present "anytime" algorithms for generating such approximations. We subsequently discuss extensions to other useful classes of approximations.
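The payoff of the Horn form is that entailment of an atom can be decided by forward chaining to a fixpoint rather than by general (co-NP-hard) propositional reasoning. A minimal sketch with a hypothetical Horn theory standing in for an approximation:

```python
# Horn clauses as (body, head); a fact has an empty body.
horn = [
    (set(), "bird"),
    ({"bird"}, "feathered"),
    ({"bird", "feathered"}, "flies"),
]

def entails(clauses, query):
    """Forward chaining to a fixpoint; with proper indexing this is the
    classic linear-time Horn entailment procedure."""
    known = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return query in known

print(entails(horn, "flies"))   # True
print(entails(horn, "swims"))   # False: not derivable from this theory
```

Queries answered against the Horn bounds are then guaranteed correct with respect to the original theory, which is how the approximation speeds up inference without sacrificing soundness or completeness.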

Proceedings Article
24 Aug 1991
TL;DR: A new version of the Lin/Shoham logic, similar in spirit to the Levesque/Reiter theory of epistemic queries, is described; it can give meaning to epistemic queries in the context of nonmonotonic databases, including logic programs with negation as failure.
Abstract: The approach to database query evaluation developed by Levesque and Reiter treats databases as first-order theories, and queries as formulas of the language which includes, in addition to the language of the database, an epistemic modal operator. In this epistemic query language, one can express questions not only about the external world described by the database, but also about the database itself: about what the database knows. On the other hand, epistemic formulas are used in knowledge representation for the purpose of expressing defaults. Autoepistemic logic is the best known epistemic nonmonotonic formalism; the logic of grounded knowledge, proposed recently by Lin and Shoham, is another such system. This paper brings these two directions of research together. We describe a new version of the Lin/Shoham logic, similar in spirit to the Levesque/Reiter theory of epistemic queries. Using this formalism, we can give meaning to epistemic queries in the context of nonmonotonic databases, including logic programs with negation as failure.
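A toy sketch of the distinction the epistemic operator captures: K(p) asks whether the database can derive p, not whether p holds in the world. The facts are hypothetical, and derivability is reduced here to simple membership.

```python
facts = {"enrolled(ann)"}

def K(atom):
    """Epistemic operator over an atomic database: derivability."""
    return atom in facts

print(K("enrolled(ann)"))   # True: the database knows ann is enrolled
print(K("enrolled(bob)"))   # False: not known -- yet bob may still be
                            # enrolled in the external world
```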

Book
31 Mar 1991
TL;DR: The Dynamic Type Hierarchy (DTH) theory of metaphor, as developed in this book, offers a computational approach to metaphor grounded in knowledge representation with semantic type hierarchies and conceptual graphs.
Abstract: 1. The Literal and the Metaphoric. 2. Views of Metaphor. 3. Knowledge Representation. 4. Representation Schemes and Conceptual Graphs. 5. The Dynamic Type Hierarchy Theory of Metaphor. 6. Computational Approaches to Metaphor. 7. The Nature and Structure of Semantic Hierarchies. 8. Language Games, Open Texture and Family Resemblances. 9. Programming the Dynamic Type Hierarchy. Author Index.

Proceedings Article
John Yen1
24 Aug 1991
TL;DR: The generalized knowledge representation framework not only alleviates the difficulty of conventional AI knowledge representation schemes in handling imprecise and vague information, but also extends the application of fuzzy logic to complex intelligent systems that need to perform high-level analyses using conceptual abstractions.
Abstract: During the past decade, knowledge representation research in AI has generated a class of languages called term subsumption languages (TSL), a knowledge representation formalism with a well-defined, logic-based semantics. Due to its formal semantics, a term subsumption system can automatically infer the subsumption relationships between concepts defined in the system. However, these systems are very limited in handling vague concepts in the knowledge base. In contrast, fuzzy logic directly deals with the notion of vagueness and imprecision using fuzzy predicates, fuzzy quantifiers, linguistic variables, and other constructs. Hence, fuzzy logic offers an appealing foundation for generalizing the semantics of term subsumption languages. Based on a test score semantics in fuzzy logic, this paper first generalizes the semantics of term subsumption languages. Then, we discuss the impacts of such a generalization on the reasoning capabilities of term subsumption systems. The generalized knowledge representation framework not only alleviates the difficulty of conventional AI knowledge representation schemes in handling imprecise and vague information, but also extends the application of fuzzy logic to complex intelligent systems that need to perform high-level analyses using conceptual abstractions.
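A minimal sketch of the generalization's flavor: concept membership becomes a degree in [0, 1], conjunction becomes min, so concept tests yield scores rather than booleans. The fuzzy predicates and their ramps are invented for illustration.

```python
def tall(height_cm):
    """Fuzzy predicate with a linear ramp from 160 cm to 190 cm."""
    return min(1.0, max(0.0, (height_cm - 160) / 30))

def heavy(weight_kg):
    """Fuzzy predicate with a linear ramp from 60 kg to 100 kg."""
    return min(1.0, max(0.0, (weight_kg - 60) / 40))

def big_person(h, w):
    """Fuzzy conjunction (min): degree of membership in Tall AND Heavy."""
    return min(tall(h), heavy(w))

print(big_person(185, 90))   # 0.75: fairly strong member
print(big_person(165, 65))   # 0.125: marginal member
```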