Journal ArticleDOI

Commonsense Knowledge Representation Based on Fuzzy Logic

01 Oct 1983-IEEE Computer (IEEE)-Vol. 16, Iss: 10, pp 61-65
TL;DR: The approach to the representation of commonsense knowledge described in this article is based on the idea that propositions characterizing commonsense knowledge are, for the most part, dispositions -- that is, propositions with implied fuzzy quantifiers.
Abstract: The approach to the representation of commonsense knowledge described in this article is based on the idea that propositions characterizing commonsense knowledge are, for the most part, dispositions -- that is, propositions with implied fuzzy quantifiers. To deal with dispositions systematically, the author uses fuzzy logic -- the logic underlying approximate, or fuzzy, reasoning.
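As an illustration of the idea (a minimal sketch, not taken from the paper; the quantifier thresholds and membership degrees below are assumptions), a disposition such as "birds can fly" can be read as "most birds can fly", with "most" represented as a fuzzy subset of the unit interval of proportions and the proportion obtained from the sigma-count cardinality:

```python
def most(p):
    # Membership of proportion p in the fuzzy quantifier "most".
    # Piecewise-linear: 0 up to 0.5, rising to 1 at 0.8 (thresholds are assumptions).
    if p <= 0.5:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.5) / 0.3

def disposition_truth(memberships):
    # Truth of "most X's are A's", given the membership degrees of the X's in A.
    sigma_count = sum(memberships)           # sigma-count cardinality of the fuzzy set A
    proportion = sigma_count / len(memberships)
    return most(proportion)

# "Birds can fly" read as "most birds can fly"; a penguin gets degree 0.0.
can_fly = [1.0, 1.0, 0.9, 0.8, 0.0]
print(round(disposition_truth(can_fly), 2))  # -> 0.8
```

The point of the sketch is that the disposition is neither simply true nor false: its truth is the degree to which the actual proportion of flying birds fits the fuzzy quantifier "most".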
Citations
Book
08 Sep 2000
TL;DR: This book presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects, and provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.
Abstract: The increasing volume of data in modern business and science calls for more complex and sophisticated tools. Although advances in data mining technology have made extensive data collection much easier, the field is still evolving and there is a constant need for new techniques and tools that can help us transform this data into useful information and knowledge. Since the previous edition's publication, great advances have been made in the field of data mining. Not only does the third edition of Data Mining: Concepts and Techniques continue the tradition of equipping you with an understanding and application of the theory and practice of discovering patterns hidden in large data sets, it also focuses on new, important topics in the field: data warehouses and data cube technology, mining stream data, mining social networks, and mining spatial, multimedia and other complex data. Each chapter is a stand-alone guide to a critical topic, presenting proven algorithms and sound implementations ready to be used directly or with strategic modification against live data. This is the resource you need if you want to apply today's most powerful data mining techniques to meet real business challenges. * Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects. * Addresses advanced topics such as mining object-relational databases, spatial databases, multimedia databases, time-series databases, text databases, the World Wide Web, and applications in several fields. * Provides a comprehensive, practical look at the concepts and techniques you need to get the most out of real business data.

23,600 citations

01 Jan 2006
TL;DR: There have been many data mining books published in recent years, including Predictive Data Mining by Weiss and Indurkhya [WI98], Data Mining Solutions: Methods and Tools for Solving Real-World Problems by Westphal and Blaxton [WB98], and Mastering Data Mining: The Art and Science of Customer Relationship Management by Berry and Linoff [BL99].
Abstract: The book Knowledge Discovery in Databases, edited by Piatetsky-Shapiro and Frawley [PSF91], is an early collection of research papers on knowledge discovery from data. The book Advances in Knowledge Discovery and Data Mining, edited by Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy [FPSSe96], is a collection of later research results on knowledge discovery and data mining. There have been many data mining books published in recent years, including Predictive Data Mining by Weiss and Indurkhya [WI98], Data Mining Solutions: Methods and Tools for Solving Real-World Problems by Westphal and Blaxton [WB98], Mastering Data Mining: The Art and Science of Customer Relationship Management by Berry and Linoff [BL99], Building Data Mining Applications for CRM by Berson, Smith, and Thearling [BST99], Data Mining: Practical Machine Learning Tools and Techniques by Witten and Frank [WF05], Principles of Data Mining (Adaptive Computation and Machine Learning) by Hand, Mannila, and Smyth [HMS01], The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman [HTF01], Data Mining: Introductory and Advanced Topics by Dunham, and Data Mining: Multimedia, Soft Computing, and Bioinformatics by Mitra and Acharya [MA03]. There are also books containing collections of papers on particular aspects of knowledge discovery, such as Machine Learning and Data Mining: Methods and Applications edited by Michalski, Bratko, and Kubat [MBK98], and Relational Data Mining edited by Dzeroski and Lavrac [De01], as well as many tutorial notes on data mining in major database, data mining and machine learning conferences.

2,591 citations

Book
01 Jan 2004
TL;DR: This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way, and offers the first true synthesis of the field in over a decade.
Abstract: Knowledge representation is at the very core of a radical idea for understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top down, putting the focus on what an agent needs to know in order to behave intelligently, how this knowledge can be represented symbolically, and how automated reasoning procedures can make this knowledge available as needed. This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way. Each of the various styles of representation is presented in a simple and intuitive form, and the basics of reasoning with that representation are explained in detail. This approach gives readers a solid foundation for understanding the more advanced work found in the research literature. The presentation is clear enough to be accessible to a broad audience, including researchers and practitioners in database management, information retrieval, and object-oriented systems as well as artificial intelligence. This book provides the foundation in knowledge representation and reasoning that every AI practitioner needs. * Authors are well-recognized experts in the field who have applied the techniques to real-world problems * Presents the core ideas of KR&R in a simple, straightforward approach, independent of the quirks of research systems * Offers the first true synthesis of the field in over a decade Table of Contents 1 Introduction * 2 The Language of First-Order Logic * 3 Expressing Knowledge * 4 Resolution * 5 Horn Logic * 6 Procedural Control of Reasoning * 7 Rules in Production Systems * 8 Object-Oriented Representation * 9 Structured Descriptions * 10 Inheritance * 11 Numerical Uncertainty * 12 Defaults * 13 Abductive Reasoning * 14 Actions * 15 Planning * 16 A Knowledge Representation Tradeoff * Bibliography * Index

938 citations

Journal ArticleDOI
TL;DR: This paper surveys methods for representing and reasoning with imperfect information, offering a classification of the different types of imperfection and a discussion of their sources.
Abstract: This paper surveys methods for representing and reasoning with imperfect information. It opens with an attempt to classify the different types of imperfection that may pervade data, and a discussion of the sources of such imperfections. The classification is then used as a framework for considering work that explicitly concerns the representation of imperfect information, and related work on how imperfect information may be used as a basis for reasoning. The work that is surveyed is drawn from both the field of databases and the field of artificial intelligence. Both of these areas have long been concerned with the problems caused by imperfect information, and this paper stresses the relationships between the approaches developed in each.

293 citations

Journal ArticleDOI
TL;DR: This paper is a tentative survey of quantitative approaches to the modeling of uncertainty and imprecision, covering recent theoretical proposals as well as more empirical techniques such as those developed in expert systems like MYCIN or PROSPECTOR; the management of uncertainty and imprecision in reasoning patterns is a key issue in artificial intelligence.
Abstract: The intended purpose of this paper is twofold: proposing a common basis for the modeling of uncertainty and imprecision, and discussing various kinds of approximate and plausible reasoning schemes in this framework. Together with probability, different kinds of uncertainty measures (credibility and plausibility functions in the sense of Shafer, possibility measures in the sense of Zadeh and the dual measures of necessity, Sugeno's gλ-fuzzy measures) are introduced in a unified way. The modeling of imprecision in terms of possibility distributions is then presented, and related questions such as the measure of the uncertainty of fuzzy events, the probability and possibility qualification of statements, the concept of a degree of truth, and the truth qualification of propositions, are discussed at length. Deductive inference from premises weighted by different kinds of measures of uncertainty, or by truth-values in the framework of various multivalued logics, is fully investigated. Then, deductive inferences from imprecise or fuzzy premises are dealt with; patterns of reasoning where both uncertainty and imprecision are present are also addressed. The last section is devoted to the combination of uncertain or imprecise pieces of information given by different sources. On the whole, this paper is a tentative survey of quantitative approaches to the modeling of uncertainty and imprecision, covering recent theoretical proposals as well as more empirical techniques such as those developed in expert systems like MYCIN or PROSPECTOR; the management of uncertainty and imprecision in reasoning patterns is a key issue in artificial intelligence.

243 citations
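Two of the measures surveyed above can be illustrated concretely (a sketch under assumed data; the distribution and event below are made up): over a finite universe, the possibility of an event A given a possibility distribution π is Pos(A) = max over x of min(π(x), μ_A(x)), and its dual necessity is Nec(A) = 1 − Pos(complement of A):

```python
def possibility(pi, mu_a):
    # Pos(A) = max over x of min(pi(x), mu_A(x)) on a finite universe.
    return max(min(p, a) for p, a in zip(pi, mu_a))

def necessity(pi, mu_a):
    # Nec(A) = 1 - Pos(complement of A): the dual measure of necessity.
    return 1.0 - possibility(pi, [1.0 - a for a in mu_a])

pi = [0.2, 1.0, 0.7, 0.1]    # possibility distribution over four outcomes
mu_a = [0.0, 1.0, 1.0, 0.0]  # (crisp) event: outcomes 2 and 3
print(possibility(pi, mu_a))  # -> 1.0
print(necessity(pi, mu_a))    # -> 0.8
```

Note the characteristic gap between the two values: the event is fully possible, yet only necessary to degree 0.8, because some possibility mass remains on outcomes outside the event.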

References
Book
01 Jan 2011
TL;DR: This book effectively constitutes a detailed annotated bibliography in quasitextbook style of the some thousand contributions deemed by Messrs. Dubois and Prade to belong to the area of fuzzy set theory and its applications or interactions in a wide spectrum of scientific disciplines.
Abstract: (1982). Fuzzy Sets and Systems — Theory and Applications. Journal of the Operational Research Society: Vol. 33, No. 2, pp. 198-198.

5,861 citations

Journal ArticleDOI
TL;DR: The computational approach to fuzzy quantifiers which is described in this paper may be viewed as a derivative of fuzzy logic and test-score semantics.
Abstract: The generic term fuzzy quantifier is employed in this paper to denote the collection of quantifiers in natural languages whose representative elements are: several, most, much, not many, very many, not very many, few, quite a few, large number, small number, close to five, approximately ten, frequently, etc. In our approach, such quantifiers are treated as fuzzy numbers which may be manipulated through the use of fuzzy arithmetic and, more generally, fuzzy logic. A concept which plays an essential role in the treatment of fuzzy quantifiers is that of the cardinality of a fuzzy set. Through the use of this concept, the meaning of a proposition containing one or more fuzzy quantifiers may be represented as a system of elastic constraints whose domain is a collection of fuzzy relations in a relational database. This representation, then, provides a basis for inference from premises which contain fuzzy quantifiers. For example, from the propositions “Most U's are A's” and “Most A's are B's,” it follows that “most² U's are B's,” where most² is the fuzzy product of the fuzzy proportion most with itself. The computational approach to fuzzy quantifiers which is described in this paper may be viewed as a derivative of fuzzy logic and test-score semantics. In this semantics, the meaning of a semantic entity is represented as a procedure which tests, scores and aggregates the elastic constraints which are induced by the entity in question.

1,736 citations

Journal ArticleDOI
01 Aug 1996-Synthese
TL;DR: The term fuzzy logic is used in this paper to describe an imprecise logical system, FL, in which the truth-values are fuzzy subsets of the unit interval with linguistic labels such as true, false, not true, very true, quite true, not very true and not very false, etc.
Abstract: The term fuzzy logic is used in this paper to describe an imprecise logical system, FL, in which the truth-values are fuzzy subsets of the unit interval with linguistic labels such as true, false, not true, very true, quite true, not very true and not very false, etc. The truth-value set, ℐ, of FL is assumed to be generated by a context-free grammar, with a semantic rule providing a means of computing the meaning of each linguistic truth-value in ℐ as a fuzzy subset of [0, 1]. Since ℐ is not closed under the operations of negation, conjunction, disjunction and implication, the result of an operation on truth-values in ℐ requires, in general, a linguistic approximation by a truth-value in ℐ. As a consequence, the truth tables and the rules of inference in fuzzy logic are (i) inexact and (ii) dependent on the meaning associated with the primary truth-value true as well as the modifiers very, quite, more or less, etc. Approximate reasoning is viewed as a process of approximate solution of a system of relational assignment equations. This process is formulated as a compositional rule of inference which subsumes modus ponens as a special case. A characteristic feature of approximate reasoning is the fuzziness and nonuniqueness of consequents of fuzzy premisses. Simple examples of approximate reasoning are: (a) Most men are vain; Socrates is a man; therefore, it is very likely that Socrates is vain. (b) x is small; x and y are approximately equal; therefore y is more or less small, where italicized words are labels of fuzzy sets.

1,273 citations
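The compositional rule of inference mentioned above can be sketched on a finite universe (an illustrative sketch; the membership values for "small" and "approximately equal" are assumptions): the conclusion is obtained by max-min composition of a fuzzy set with a fuzzy relation, which reduces to ordinary modus ponens when both are crisp:

```python
def compositional_rule(mu_a, relation):
    # b(y) = max over x of min(a(x), R(x, y)): max-min composition.
    n_y = len(relation[0])
    return [max(min(a, relation[x][y]) for x, a in enumerate(mu_a))
            for y in range(n_y)]

# Premises: "x is small" and "x and y are approximately equal", on universe {1, 2, 3}.
small = [1.0, 0.6, 0.2]
approx_equal = [
    [1.0, 0.5, 0.0],
    [0.5, 1.0, 0.5],
    [0.0, 0.5, 1.0],
]
# Conclusion: "y is more or less small" -- a stretched version of "small".
print(compositional_rule(small, approx_equal))  # -> [1.0, 0.6, 0.5]
```

This mirrors example (b) in the abstract: the consequent is fuzzier than the antecedent, which is exactly the "fuzziness of consequents of fuzzy premisses" the paper describes.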

Journal ArticleDOI
01 Apr 1969-Synthese
TL;DR: This paper examines problems of vagueness, ambiguity, and ambivalence by suggesting a method for constructing and studying models of the way we use words, called a 'logic of inexact concepts'.
Abstract: The 'hard' sciences, such as physics and chemistry, construct exact mathematical models of empirical phenomena, and then use these models to make predictions. Certain aspects of reality always escape such models, and we look hopefully to future refinements. But sometimes there is an elusive fuzziness, a readjustment to context, or an effect of observer upon observed. These phenomena are particularly indigenous to natural language, and are common in the 'soft' sciences, such as biology and psychology. This paper examines problems of fuzziness, i.e., vagueness, ambiguity, and ambivalence. Although the theory is called a 'logic of inexact concepts', we do not endorse belief in some Platonic ideal 'concepts' or logic embodying their essence. Rather we suggest a method for constructing and studying models of the way we use words; and we use the word 'concept' metaphorically in discussing meaning. 'Exact concepts' are the sort envisaged in pure mathematics, while 'inexact concepts' are rampant in everyday life. This distinction is complicated by the fact that whenever a human being interacts with mathematics, it becomes part of his ordinary experience, and therefore subject to inexactness. Ordinary logic is much used in mathematics, but applications to everyday life have been criticized because our normal language habits seem so different. Various modifications of orthodox logic have been suggested as remedies, particularly omission of the Law of the Excluded Middle. Ordinary logic represents exact concepts syntactically: that is, a concept is given a name (such as 'man') which becomes an object for manipulation in a formal language. Aristotelian logic and the predicate calculus are both such systems, in which rules are given to distinguish valid from …

859 citations