
Showing papers on "Commonsense reasoning published in 2009"


Journal ArticleDOI
TL;DR: A method that uses singular value decomposition to aid in the integration of systems or representations, and can be harnessed to find and exploit correlations between different resources, enabling commonsense reasoning over a broader domain.
Abstract: Understanding the world we live in requires access to a large amount of background knowledge: the commonsense knowledge that most people have and most computer systems don't. Many of the limitations of artificial intelligence today relate to the problem of acquiring and understanding common sense. The Open Mind Common Sense project began to collect common sense from volunteers on the Internet starting in 2000. The collected information is converted to a semantic network called ConceptNet. Reducing the dimensionality of ConceptNet's graph structure gives a matrix representation called AnalogySpace, which reveals large-scale patterns in the data, smoothes over noise, and predicts new knowledge. Extending this work, we have created a method that uses singular value decomposition to aid in the integration of systems or representations. This technique, called blending, can be harnessed to find and exploit correlations between different resources, enabling commonsense reasoning over a broader domain.
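To make the dimensionality-reduction idea behind AnalogySpace concrete, the sketch below applies a truncated SVD to a toy concept-by-feature matrix and treats high reconstructed scores as candidate new knowledge; the matrix, the rank, and the threshold are illustrative assumptions, not ConceptNet data or the authors' code, and blending would correspond to stacking matrices from several resources over shared concepts before the decomposition.

    import numpy as np

    # Toy concept-by-feature matrix in the spirit of AnalogySpace:
    # rows are concepts, columns are (relation/concept) features,
    # entries mark asserted knowledge (1) or its absence (0).
    concepts = ["cat", "dog", "car"]
    features = ["IsA/animal", "HasA/tail", "CapableOf/drive"]
    A = np.array([
        [1, 1, 0],   # cat
        [1, 0, 0],   # dog (tail assertion missing)
        [0, 0, 1],   # car
    ], dtype=float)

    # Truncated SVD: keep the k largest singular values to smooth noise
    # and expose large-scale structure.
    k = 2
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Reconstructed scores can suggest new knowledge: a high score for a
    # cell that was 0 in A is a candidate inference.
    for i, c in enumerate(concepts):
        for j, f in enumerate(features):
            if A[i, j] == 0 and A_k[i, j] > 0.3:   # illustrative threshold
                print(f"candidate assertion: {c} --{f} (score {A_k[i, j]:.2f})")

Run as written, this prints a single candidate ("dog --HasA/tail"), illustrating how the low-rank reconstruction generalizes from similar concepts.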

96 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This chapter will give an overview of the PSL Ontology, including its formal characterization as a set of theories in first-order logic and the range of concepts that are axiomatized in these theories.
Abstract: Representing activities and the constraints on their occurrences is an integral aspect of commonsense reasoning, particularly in manufacturing, enterprise modelling, and autonomous agents or robots. In addition to the traditional concerns of knowledge representation and reasoning, the need to integrate software applications in these areas has become increasingly important. However, interoperability is hindered because the applications use different terminology and representations of the domain. These problems arise most acutely for systems that must manage the heterogeneity inherent in various domains and integrate models of different domains into coherent frameworks. For example, such integration occurs in business process reengineering, where enterprise models integrate processes, organizations, goals and customers. Even when applications use the same terminology, they often associate different semantics with the terms. This clash over the meaning of the terms prevents the seamless exchange of information among the applications; in practice it forces the writing of translators between every pair of applications that must cooperate. What is needed is some way of explicitly specifying the terminology of the applications in an unambiguous fashion. The Process Specification Language (PSL) ([10], [7]) has been designed to facilitate correct and complete exchange of process information among manufacturing systems. Included in these applications are scheduling, process modeling, process planning, production planning, simulation, project management, workflow, and business process reengineering. This chapter will give an overview of the PSL Ontology, including its formal characterization as a set of theories in first-order logic and the range of concepts that are axiomatized in these theories.
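To give a flavour of the kind of first-order axiom such a process ontology contains, the constraint below says that every activity occurrence is the occurrence of some activity; it uses the PSL-Core vocabulary (activity, activity_occurrence, occurrence_of) but is an illustrative paraphrase, not a quotation from the standard.

    \forall o\,\big(\mathit{activity\_occurrence}(o) \rightarrow \exists a\,(\mathit{activity}(a) \land \mathit{occurrence\_of}(o, a))\big)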

44 citations


Journal ArticleDOI
TL;DR: It is found that analogy with examples can be used to learn how to solve AP Physics style problems; the process, called analogical model formulation, is implemented in the Companion cognitive architecture.

36 citations


Book
31 Oct 2009
TL;DR: This book presents an effective classification of logical rules used in the modeling of commonsense reasoning and transforms traditional considerations of data and knowledge communications.
Abstract: The reduction of machine learning algorithms to commonsense reasoning processes is now possible due to the reformulation of machine learning problems as searching the best approximation of a given classification on a given set of examples. Machine Learning Methods for Commonsense Reasoning Processes: Interactive Models provides a unique view on classification as a key to human commonsense reasoning and transforms traditional considerations of data and knowledge communications. Containing leading research evolved from international investigations, this book presents an effective classification of logical rules used in the modeling of commonsense reasoning.

29 citations


Proceedings ArticleDOI
14 Jun 2009
TL;DR: This paper considers the needs of commonsense causality and suggests Fuzzy Cognitive Maps as an alternative to directed acyclic graphs (DAGs), the most widespread causal representation.
Abstract: Causal reasoning occupies a central position in human reasoning. In order to algorithmically consider causal relations, the relations must be placed into a representation that supports manipulation. The most widespread causal representation is directed acyclic graphs (DAGs). However, DAGs are severely limited in what portion of the common sense world they can represent. This paper considers the needs of commonsense causality and suggests Fuzzy Cognitive Maps as an alternative to DAGs.
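A minimal sketch of how a Fuzzy Cognitive Map can be iterated, as opposed to a DAG: nodes carry graded activations, edges carry signed causal weights, and feedback cycles are allowed. The concepts, weights, and logistic squashing function below are illustrative assumptions, not taken from the paper.

    import math

    # Fuzzy Cognitive Map: concepts with activations in [0, 1] and signed
    # causal edge weights in [-1, 1]; cycles are permitted, unlike in a DAG.
    concepts = ["stress", "illness", "exercise"]
    weights = {                          # (cause, effect): weight
        ("stress", "illness"): 0.7,
        ("exercise", "stress"): -0.6,
        ("illness", "exercise"): -0.4,   # feedback loop, impossible in a DAG
    }

    def squash(x):
        """Logistic squashing keeps activations in (0, 1)."""
        return 1.0 / (1.0 + math.exp(-x))

    def step(state):
        new_state = {}
        for c in concepts:
            incoming = sum(w * state[cause]
                           for (cause, effect), w in weights.items() if effect == c)
            new_state[c] = squash(state[c] + incoming)
        return new_state

    # Iterate from an initial scenario until activations settle.
    state = {"stress": 0.9, "illness": 0.1, "exercise": 0.2}
    for _ in range(20):
        state = step(state)
    print({c: round(v, 2) for c, v in state.items()})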

26 citations


Journal ArticleDOI
Erik T. Mueller1
TL;DR: The automation of commonsense reasoning, long a goal of the field of artificial intelligence and an area of active research in the last decade, is attaining a level of maturity.
Abstract: Commonsense reasoning is the human ability to make inferences about properties and events in the everyday world. The automation of commonsense reasoning, long a goal of the field of artificial intelligence [3] and an area of active research in the last decade [8], is attaining a level of maturity. Automating commonsense reasoning allows us to build applications that are more user-friendly and more understanding of the world [2]. Several major computational approaches to commonsense reasoning have been explored. Analogical processing implements the notion that people reason about novel situations by analogy to familiar ones. Probability theory allows us to reason given uncertain knowledge of the state of the world and how the world works. Qualitative reasoning focuses on reasoning about physical systems. Methods based on natural language make use of large textual corpora of commonsense knowledge. Society of mind approaches stress the use of multiple interacting methods and representations. One approach that has achieved a high degree of success because of its steadfast focus on hard benchmark problems of commonsense reasoning is logic. One logic-based formalism that stands out as both comprehensive and easy to use is the event calculus [4, 9].
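The event calculus mentioned above can be approximated in a few lines of code: effect axioms state which fluents an event initiates or terminates, and a fluent holds at a time if it was initiated earlier and not terminated since (the commonsense law of inertia). The toy domain and the replay-based check below are a sketch of the discrete event calculus idea only, not the formalism of [4, 9].

    # Toy discrete event calculus: Initiates/Terminates effect axioms plus
    # a HoldsAt query answered by replaying the narrative of events.
    initiates = {"wake_up": {"awake"}, "turn_on_light": {"light_on"}}
    terminates = {"fall_asleep": {"awake"}, "turn_off_light": {"light_on"}}

    narrative = [(1, "wake_up"), (3, "turn_on_light"), (8, "turn_off_light")]

    def holds_at(fluent, t, initially=frozenset()):
        """A fluent holds at t if it held initially or was initiated before t,
        and was not terminated in between (law of inertia)."""
        holds = fluent in initially
        for time, event in sorted(narrative):
            if time >= t:
                break
            if fluent in initiates.get(event, set()):
                holds = True
            if fluent in terminates.get(event, set()):
                holds = False
        return holds

    print(holds_at("light_on", 5))   # True: initiated at 3, not yet terminated
    print(holds_at("light_on", 9))   # False: terminated at 8
    print(holds_at("awake", 2))      # True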

23 citations


Book ChapterDOI
05 Jun 2009
TL;DR: This paper presents a novel approach to model argument accrual in the context of P-DeLP in a constructive way and takes into account possibilistic uncertainty when accruing arguments.
Abstract: Argumentation frameworks have proven to be a successful approach to formalizing commonsense reasoning. Recently, some argumentation frameworks have been extended to deal with possibilistic uncertainty, notably Possibilistic Defeasible Logic Programming (P-DeLP). At the same time, modelling argument accrual has gained attention from the argumentation community. Even though some preliminary formalizations have been advanced, they do not take into account possibilistic uncertainty when accruing arguments. In this paper we present a novel approach to model argument accrual in the context of P-DeLP in a constructive way.

19 citations


Proceedings ArticleDOI
21 May 2009
TL;DR: The paper introduces basic definitions of Fuzzy Default Logic (FDL), presents the inference scheme together with the conclusion assessment procedure and problems of hypotheses stability, and compares FDL to answer set programming and other disjunctive-logic-based approaches.
Abstract: The work concerns a new formalism for modeling common-sense reasoning. It combines “classical” Reiter's default logic and Brewka's cumulative default logic with Zadeh's generalized theory of uncertainty and granular reasoning. Various aspects of uncertainty and ignorance are discussed with respect to intelligent humans' behavior patterns. The paper introduces basic definitions of Fuzzy Default Logic (FDL) and presents the inference scheme together with the conclusion assessment procedure and problems of hypotheses stability. A PROLOG implementation of the entire inference engine is given. The problems of hypotheses generation and revision of beliefs are discussed. Finally, FDL is compared to answer set programming and other disjunctive-logic-based approaches.
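The paper's Fuzzy Default Logic is not reproduced here, but the general flavour of combining Reiter-style defaults with graded certainty can be sketched as follows; the rule structure, the min-combination of degrees, and the example facts are illustrative assumptions only, not the paper's definitions.

    # A default rule "prerequisite : justification / conclusion" annotated with
    # a certainty degree. A conclusion is adopted only if the justification is
    # not known to be false; its degree is the minimum of the prerequisite's
    # degree and the rule's certainty (a common, but not the only, fuzzy choice).
    facts = {"bird(tweety)": 0.9}          # graded beliefs
    known_false = set()                    # e.g. {"flies(tweety)"}

    rules = [
        # (prerequisite, justification, conclusion, rule certainty)
        ("bird(tweety)", "flies(tweety)", "flies(tweety)", 0.8),
    ]

    def apply_defaults(facts, known_false, rules):
        derived = dict(facts)
        for prereq, justif, concl, cert in rules:
            if prereq in derived and justif not in known_false:
                degree = min(derived[prereq], cert)
                derived[concl] = max(derived.get(concl, 0.0), degree)
        return derived

    print(apply_defaults(facts, known_false, rules))
    # {'bird(tweety)': 0.9, 'flies(tweety)': 0.8}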

11 citations


01 Jun 2009
TL;DR: 9th International Symposium on Logical Formalization of Commonsense Reasoning: Commonsense 2009, Toronto, Canada, 1-3 June 2009
Abstract: 9th International Symposium on Logical Formalization of Commonsense Reasoning: Commonsense 2009, Toronto, Canada, 1-3 June 2009

10 citations


Proceedings ArticleDOI
28 Dec 2009
TL;DR: The rapidly increasing number of video collections, available on the web or via broadcasting, has motivated research towards building intelligent tools for searching, rating, indexing and retrieval purposes, and has emphasized the need for a unified generic commonsense knowledge base for visual applications.

Abstract: The rapidly increasing number of video collections, available on the web or via broadcasting, has motivated research towards building intelligent tools for searching, rating, indexing and retrieval purposes. Establishing a semantic representation of visual data, mainly in textual form, is one of the important tasks. The time needed for building and maintaining ontologies and knowledge, especially for wide domains, and the effort required to integrate several approaches emphasize the need for a unified generic commonsense knowledge base for visual applications.

7 citations


Book ChapterDOI
01 Jan 2009
TL;DR: A unified model of commonsense reasoning is proposed, and it is demonstrated that a large class of inductive machine learning algorithms can be transformed into commonsense reasoning processes based on well-known deductive and inductive logical rules.
Abstract: One of the most important tasks in database technology is to combine the following activities: data mining, or inferring knowledge from data, and query processing, or reasoning on acquired knowledge. The solution of this task requires a logical language with unified syntax and semantics for integrating deductive (using knowledge) and inductive (acquiring knowledge) reasoning. In this paper, we propose a unified model of commonsense reasoning. We also demonstrate that a large class of inductive machine learning (ML) algorithms can be transformed into commonsense reasoning processes based on well-known deductive and inductive logical rules. The concept of a good classification (diagnostic) test (Naidenova & Polegaeva, 1986) is the basis of our approach to combining deductive and inductive reasoning. The unique role of the concept of a good test is explained by the equivalence of the following relationships (Cosmadakis et al., 1986):

Proceedings ArticleDOI
01 Sep 2009
TL;DR: The proposed algorithm searches automatically in Extended WordNet for all concepts that have a given property and generates axioms linking those concepts with the seed commonsense rule; using 27 commonsense rules, the algorithm generated 2596 axioms, of which 98% were validated by humans.
Abstract: This paper presents a semiautomatic method for generating commonsense axioms. The method relies on three metarules that process a few commonsense rules referring to some concept properties. The proposed algorithm searches automatically in Extended WordNet for all concepts that have a given property and generates axioms linking those concepts with the seed commonsense rule. The results show that using 27 commonsense rules, the algorithm generated 2596 axioms, of which 98% were validated by humans. The generation of commonsense axioms is useful to many natural language applications that require reasoning.
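Extended WordNet is not distributed with common NLP toolkits, but the search-and-instantiate step can be approximated with the standard WordNet interface in NLTK; the seed rule, the property encoded as a hypernym, and the axiom template below are illustrative assumptions, not the paper's metarules.

    from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

    # Seed commonsense rule: "if X is a beverage, then X can be drunk".
    # Approximate "all concepts that have the property of being a beverage"
    # by the transitive hyponyms of the 'beverage' synset.
    seed_property = wn.synset("beverage.n.01")

    def hyponym_closure(synset):
        """Return the transitive closure of hyponyms of a synset."""
        result = set()
        frontier = [synset]
        while frontier:
            s = frontier.pop()
            for h in s.hyponyms():
                if h not in result:
                    result.add(h)
                    frontier.append(h)
        return result

    axioms = [f"beverage({s.lemma_names()[0]}) -> can_be_drunk({s.lemma_names()[0]})"
              for s in hyponym_closure(seed_property)]

    for axiom in axioms[:5]:
        print(axiom)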

Proceedings ArticleDOI
19 Oct 2009
TL;DR: Using CPRs as the underlying knowledge structure for rule mining provides an excellent mechanism for exception handling and approximate reasoning, and discovering exceptions through CPRs enhances the predictive accuracy of the classifier.
Abstract: It is interesting to discover exceptions, as they dispute the existing knowledge and have elements of unexpectedness and surprise. As exceptions focus on a very small portion of the data, discovering them remains a great challenge. A Censored Production Rule (CPR) is a special kind of knowledge structure that augments exceptions to their corresponding commonsense rules of high generality and support. This paper proposes the discovery of decision rules in the form of Censored Production Rules by employing a genetic algorithm approach. Results confirm that the decision rules discovered in the form of CPRs are comprehensible and interesting. Using CPRs as the underlying knowledge structure for rule mining provides an excellent mechanism for exception handling and approximate reasoning. Moreover, discovering exceptions through CPRs enhances the predictive accuracy of the classifier.
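A Censored Production Rule has the form "If premise Then decision Unless censor"; the tiny rule and records below are illustrative assumptions only, and the genetic-algorithm search the paper uses to discover such rules is omitted.

    # Censored Production Rule: "If premise Then decision Unless censor".
    # The censor captures the rare exception; skipping it gives a fast,
    # approximate answer, checking it gives the exact one.
    cpr = {
        "premise": lambda r: r["class"] == "bird",
        "decision": "can_fly",
        "censor":  lambda r: r.get("species") in {"penguin", "ostrich"},
    }

    def classify(record, rule, check_exceptions=True):
        if not rule["premise"](record):
            return None
        if check_exceptions and rule["censor"](record):
            return "not_" + rule["decision"]      # exception fires
        return rule["decision"]                   # commonsense default

    print(classify({"class": "bird", "species": "sparrow"}, cpr))   # can_fly
    print(classify({"class": "bird", "species": "penguin"}, cpr))   # not_can_fly
    print(classify({"class": "bird", "species": "penguin"}, cpr,
                   check_exceptions=False))                         # can_fly (approximate)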


Journal ArticleDOI
TL;DR: The principles of the heuristic problem-solving approach are explained, and their application to building knowledge-based systems for animal science problem solving is demonstrated.
Abstract: Biological systems are surprisingly flexible in processing information in the real world. Some biological organisms have a central processing unit called the brain. The human brain, consisting of roughly 10^11 neurons, realizes intelligent information processing based on exact and commonsense reasoning. Artificial intelligence (AI) has been trying to implement biological intelligence in computers in various ways, but is still far from the real thing. Therefore, approaches such as symbolic AI, artificial neural networks and fuzzy systems have been partially successful in implementing heuristics from biological intelligence. Many recent applications of these approaches show an increased interest in animal science research. The main goal of this article is to explain the principles of the heuristic problem-solving approach and to demonstrate how they can be applied to building knowledge-based systems for animal science problem solving.

Dissertation
01 Jan 2009
TL;DR: Comirit, a framework for constructing commonsense reasoning systems within constrained time and resources from present-day technology, integrates simulation, logical deduction and machine learning techniques into a coherent whole and is open-ended, allowing the integration of any number of additional mechanisms.
Abstract: While robots and software agents have been applied with spectacular success to challenging problems in our world, these same successes often translate into spectacular failures when these systems encounter situations that their engineers never conceived. These failures stand in stark contrast to the average person who, while lacking the speed and accuracy of such machines, can draw on commonsense intuitions to effortlessly improvise novel solutions to unexpected problems. The objective of artificial commonsense is to bring some measure of this powerful mental agility and understanding to robots and software systems. In this dissertation, I offer a practical perspective on the problem of constructing systems with commonsense. Starting with philosophical underpinnings and working through formal models, object oriented design and implementation, I revisit prevailing assumptions with a pragmatic focus on the goals of constructing effective, efficient, affordable and real commonsense reasoning systems. I begin with a formal analysis—the first formal analysis—of the Symbol Grounding Problem, in which I develop an ontology of representational classes. This analysis serves as motivation for the development of a hybrid reasoning system that combines iconic and symbolic representations. I then proceed to the primary contribution of this dissertation: the development of a framework for constructing commonsense reasoning systems within constrained time and resources, from present day technology. This hybrid reasoning framework, named Comirit, integrates simulation, logical deduction and machine learning techniques into a coherent whole. It is, furthermore, an open-ended framework that allows the integration of any number of additional mechanisms. An evaluation of Comirit demonstrates the value of the framework and highlights the advantages of having developed with a practical perspective. Not only is Comirit an efficient and affordable working system (rather than pure theory) but also it is found to be more complete, elaboration tolerant and capable of autonomous independent learning when applied to standard benchmark problems of commonsense reasoning.

Proceedings ArticleDOI
14 Sep 2009
TL;DR: This paper will highlight the need to couple statistical approaches with deep linguistic processing and will focus on “implicit” or lexically unexpressed linguistic elements that are nonetheless necessary for a complete semantic interpretation of a text.
Abstract: Semantic processing represents the new challenge for all applications that require text understanding, such as Q/A. In this paper we will highlight the need to couple statistical approaches with deep linguistic processing and will focus on “implicit” or lexically unexpressed linguistic elements that are nonetheless necessary for a complete semantic interpretation of a text. We will address the following types of “implicit” entities and events: grammatical ones, as suggested by linguistic theories like LFG or similar generative theories; semantic ones, as suggested in the FrameNet project, i.e. CNI, DNI, INI; and pragmatic ones, for which we present a theory and an implementation for the recovery of implicit entities and events of (non-)standard implicatures. In particular we will show how the use of commonsense knowledge may fruitfully contribute to finding relevant implied meanings. We will also briefly explore the Subject of Point of View, which is computed by the Semantic Informational Structure and contributes the intended entity from whose point of view a given subjective statement is expressed. We also present an evaluation based on section 24 of the Penn Treebank, as encoded by the LFG community in the PARC-700 treebank, in which lexically unexpressed elements are adequately classified and diversified.

Book ChapterDOI
04 Feb 2009
TL;DR: A formal extension of atl is provided, called Coalitional atl (coalATL for short), in which the actual computation of the coalition is modelled in terms of argumentation semantics, and it is shown that coalATL's proof theory can be understood as a natural extension of the model checking procedure used in atl.
Abstract: During the last decade argumentation has evolved as a successful approach to formalize commonsense reasoning and decision making in multiagent systems. In particular, recent research has shown that argumentation can be used to provide a framework for reasoning about coalition formation, formalizing the adoption of coalitions by the agents in association with different argumentation semantics. At the same time Alternating-time Temporal Logic (atl for short) has been successfully used to reason about the behavior and abilities of coalitions of agents. However, an important limitation of atl operators is that they account only for the existence of successful strategies of coalitions, not considering whether coalitions can actually be formed. This paper is an attempt to combine both frameworks in order to develop a logical system through which we can reason at the same time (1) about abilities of coalitions of agents and (2) about the formation of coalitions. In order to achieve this, we provide a formal extension of atl, called Coalitional atl (coalATL for short), in which the actual computation of the coalition is modelled in terms of argumentation semantics. Moreover, we integrate goals as agents' incentive to join coalitions. We show that coalATL's proof theory can be understood as a natural extension of the model checking procedure used in atl.

Book ChapterDOI
16 Dec 2009
TL;DR: It is shown how difficult, in terms of semantic (computational) complexity and data complexity (i.e., computational complexity with respect to the number of instances declared in a knowledge base), such reasoning problems are.
Abstract: Pratt and Third's syllogistic fragments of English can be used to capture, in addition to syllogistic reasoning, many other kinds of common sense reasoning, and, in particular, (i) knowledge base consistency and (ii) knowledge base query answering, modulo their FO semantic representations. We show how difficult, in terms of semantic (computational) complexity and data complexity (i.e., computational complexity with respect to the number of instances declared in a knowledge base), such reasoning problems are. In doing so, we also pinpoint those fragments for which the reasoning problems are tractable (in PTime) or intractable (NP-hard or coNP-hard).

Proceedings Article
18 Mar 2009
TL;DR: This paper presents a significantly novel methodology for QSTR application validation, inspired by research in software engineering, and presents two critical boundary concepts, a methodology for isolating the units under testing from other parts of the model, and methods to assist the designer in integrating the critical boundary unit testing approach with a broader validation plan.
Abstract: Commonsense reasoning, in particular qualitative spatial and temporal reasoning (QSTR), provides flexible and intuitive methods for reasoning about vague and uncertain information including spatial orientation, topology and proximity. Despite a number of theoretical advances in QSTR, there are relatively few applications that employ these methods. The central problem is a significant lack of application level standards and validation methods for supporting developers in adapting and integrating QSTR with their domain specific qualitative spatial and temporal models. To address this we present a significantly novel methodology for QSTR application validation, inspired by research in software engineering. In this paper we focus on unit testing, and adapt the software engineering strategy of defining boundary cases. We present two critical boundary concepts, a methodology for isolating the units under testing from other parts of the model, and methods to assist the designer in integrating our critical boundary unit testing approach with a broader validation plan.
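The boundary-case strategy described above can be made concrete with an ordinary unit test: isolate one qualitative relation and probe the critical boundary between its values. The "near/far" relation, its threshold, and the test values below are hypothetical stand-ins for a designer's own qualitative spatial model, not the paper's examples.

    import unittest

    NEAR_THRESHOLD = 10.0   # metres; hypothetical calibration of the model

    def proximity(distance):
        """Map a quantitative distance onto a qualitative proximity relation."""
        return "near" if distance <= NEAR_THRESHOLD else "far"

    class ProximityBoundaryTests(unittest.TestCase):
        """Unit tests that isolate one qualitative relation and probe the
        critical boundary between its qualitative values."""

        def test_clearly_near(self):
            self.assertEqual(proximity(1.0), "near")

        def test_clearly_far(self):
            self.assertEqual(proximity(100.0), "far")

        def test_critical_boundary(self):
            # Boundary cases: exactly at the threshold and just beyond it.
            self.assertEqual(proximity(10.0), "near")
            self.assertEqual(proximity(10.000001), "far")

    if __name__ == "__main__":
        unittest.main()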

Book ChapterDOI
01 Apr 2009
TL;DR: In this article, the authors consider a number of real-world applications drawn from the domains of diagnosis, reliability, genetics, channel coding, and commonsense reasoning, and discuss the process of constructing the required network and then identify the specific queries that need to be applied.
Abstract: We address in this chapter a number of problems that arise in real-world applications, showing how each can be solved by modeling and reasoning with Bayesian networks. We consider a number of real-world applications in this chapter drawn from the domains of diagnosis, reliability, genetics, channel coding, and commonsense reasoning. For each one of these applications, we state a specific reasoning problem that can be addressed by posing a formal query with respect to a corresponding Bayesian network. We discuss the process of constructing the required network and then identify the specific queries that need to be applied. There are at least four general types of queries that can be posed with respect to a Bayesian network. Which type of query to use in a specific situation is not always trivial and some of the queries are guaranteed to be equivalent under certain conditions. We define these query types formally in Section 5.2 and then discuss them and their relationships in more detail when we go over the various applications in Section 5.3. The construction of a Bayesian network involves three major steps. First, we must decide on the set of relevant variables and their possible values. Next, we must build the network structure by connecting the variables into a DAG. Finally, we must define the CPT for each network variable. The last step is the quantitative part of this construction process and can be the most involved in certain situations.
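The three construction steps (variables, DAG structure, CPTs) and a posterior query can be illustrated without any library; the two-node network and its probabilities below are invented for illustration and are not drawn from the chapter.

    # Step 1: variables and their values.   Step 2: structure Rain -> WetGrass.
    values = {"Rain": [True, False], "WetGrass": [True, False]}

    # Step 3: CPTs, one per variable given its parents in the DAG.
    p_rain = {True: 0.2, False: 0.8}
    p_wet_given_rain = {(True, True): 0.9, (False, True): 0.1,
                        (True, False): 0.2, (False, False): 0.8}   # (wet, rain): prob

    def joint(rain, wet):
        return p_rain[rain] * p_wet_given_rain[(wet, rain)]

    # Query: P(Rain = true | WetGrass = true), by enumeration over the joint.
    numerator = joint(True, True)
    evidence = sum(joint(r, True) for r in values["Rain"])
    print(round(numerator / evidence, 3))   # 0.18 / (0.18 + 0.16) ≈ 0.529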

Proceedings ArticleDOI
30 Oct 2009
TL;DR: This paper considers the needs of commonsense causality and suggests Fuzzy Cognitive Maps as an alternative to DAGs, which are severely limited in what portion of the common sense world they can represent.
Abstract: The target of many studies in the health sciences is the discovery of cause-effect relationships among observed variables of interest, for example: treatments, exposures, preconditions, and outcomes. Causal modeling and causal discovery are central to medical science. In order to algorithmically consider causal relations, the relations must be placed into a representation that supports manipulation. Knowledge of at least some causal effects is imprecise. The most widespread causal representation is directed acyclic graphs (DAGs). However, DAGs are severely limited in what portion of the common sense world they can represent. This paper considers the needs of commonsense causality and suggests Fuzzy Cognitive Maps as an alternative to DAGs.


Book ChapterDOI
15 Jun 2009
TL;DR: This chapter gives examples of objective real-world fuzziness and provides an explanation of this fuzziness – in terms of cognizability of the world.
Abstract: Most traditional examples of fuzziness come from the analysis of commonsense reasoning. When we reason, we use words from natural language like “young” and “well”. In many practical situations, these words do not have a precise true-or-false meaning; they are fuzzy. One may therefore be left with the impression that fuzziness is a subjective characteristic, caused by the specific way our brains work. However, the fact that we are the result of billions of years of successful adjusting-to-the-environment evolution makes us conclude that everything about us humans is not accidental. In particular, the way we reason is not accidental; this way must reflect some real-life phenomena, otherwise this feature of our reasoning would have been useless and would have been abandoned long ago. In other words, the fuzziness in our reasoning must have an objective explanation, in the fuzziness of the real world. In this chapter, we first give examples of objective real-world fuzziness. After these examples, we provide an explanation of this fuzziness, in terms of cognizability of the world.


01 Jan 2009
TL;DR: Yamauchi et al. as discussed by the authors showed that the strength of inductive arguments comes from the similarity between concepts in a premise (American) and conclusion (football), and the degree of coverage of a premise concept (American) over a conclusion concept (football).
Abstract: Categorical Knowledge and Commonsense Reasoning. Takashi Yamauchi (tya@psyc.tamu.edu), Department of Psychology, Mail Stop 4235, Texas A&M University. Concepts are represented as feature vectors, e.g. American = [a_1, a_2, ..., a_n]^T and Football = [b_1, b_2, ..., b_n]^T, where the individual elements of the dimensions correspond to values associated with each feature. If the two vectors are sufficiently similar, then the argument described above is perceived as strong. If the premise concept, American, is more inclusive than the conclusion concept, Football, the argument strength is captured by Sloman's feature-based model: a_x(C | P_1) = F(P_1) · F(C) / |F(C)|^2 = (|F(P_1)| / |F(C)|) · sim(P_1, C), where sim(P_1, C) = F(P_1) · F(C) / (|F(P_1)| |F(C)|). As this equation shows, the Sloman model delineates that the strength of inductive arguments comes from (a) the similarity between concepts in a premise (American) and conclusion (football), and (b) the degree of coverage of a premise concept (American) over a conclusion concept (football). Variants of similarity-based reasoning algorithms have been shown to account for a wide range of human reasoning, including legal judgment (Rissland, 2006), categorization (Love et al., 2005), and inference (Yamauchi & Markman, 1998, 2000). Is this similarity-based account sufficient to explain commonsense reasoning? The similarity-based approach assumes that concepts, such as American or football, consist of a set of features, and reasoning operates over concepts that exist prior to the operation. Categorical knowledge specifies relationships among instances and properties, but it may also help create new properties. For example, a categorical statement such as “Jane is a feminist” not only activates our general pre-existing knowledge about “feminist,” but it also leads us to seek some properties to ...
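With the formula above, the feature-based argument-strength computation can be sketched numerically; the toy feature vectors are invented, and the functions follow the Sloman-style model as summarized in the text rather than any code from the paper.

    import math

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return math.sqrt(dot(u, u))

    def similarity(premise, conclusion):
        return dot(premise, conclusion) / (norm(premise) * norm(conclusion))

    def argument_strength(premise, conclusion):
        """Sloman-style strength: F(P)·F(C) / |F(C)|^2, i.e. similarity
        weighted by how much of the conclusion the premise covers."""
        return dot(premise, conclusion) / (norm(conclusion) ** 2)

    # Hypothetical feature vectors for the concepts in the example argument.
    american = [1.0, 0.8, 0.3, 0.0]
    football = [0.9, 0.7, 0.2, 0.1]

    print(round(similarity(american, football), 3))
    print(round(argument_strength(american, football), 3))
    # strength == similarity * |F(P)| / |F(C)|, as in the decomposition above.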

Book ChapterDOI
23 Apr 2009
TL;DR: An Event Calculus-based knowledge framework is extended with a method for sensing world features of different types in a manner that is uniform and transparent to the agent, which results in the modeling of agents that remember and forget, a cognitive skill particularly suitable for the implementation of real-world applications.
Abstract: Knowledge and causality play an essential role in the attempt to achieve commonsense reasoning in cognitive robotics. As agents usually operate in dynamic and uncertain environments, they need to acquire information through sensing inertial aspects, such as the state of a door, and continuously changing aspects, such as the location of a moving object. In this paper, we extend an Event Calculus-based knowledge framework with a method for sensing world features of different types in a manner that is uniform and transparent to the agent. The approach results in the modeling of agents that remember and forget, a cognitive skill particularly suitable for the implementation of real-world applications.

01 Jan 2009
TL;DR: An analysis of inference structure shows that inductive and deductive rules communicate in reasoning, and an automated model for detecting the types of woodland from incomplete descriptions of some evidence is given.
Abstract: Some examples of natural human common sense reasoning, both in scientific pattern recognition problems and in solving logical games, are given. An analysis of inference structure shows that inductive and deductive rules communicate in reasoning. An automated model for detecting the types of woodland from incomplete descriptions of some evidence is also given in this paper. An important part of this model is a small knowledge base of experts' knowledge about natural woodlands as a biological formation.

Proceedings ArticleDOI
25 Aug 2009
TL;DR: A qualitative cluster reasoning framework for reasoning about clusters is suggested that combines two post-clustering activities: cluster polygonization and qualitative spatial reasoning.
Abstract: Clustering is a core technique in many disciplines that assigns objects into similar groups. It provides answers for where, when and what objects are aggregated. As clustering becomes more important and popular, post-clustering activities that attempt to answer why they (what objects) are there (spatial) and/or then (temporal) need greater attention. This paper suggests a qualitative cluster reasoning framework for reasoning about clusters. Our proposed framework combines two post-clustering activities: cluster polygonization and qualitative spatial reasoning. Experimental results demonstrate that the proposed qualitative cluster reasoning reveals interesting cluster structures and rich cluster relations.
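A minimal sketch of the two post-clustering steps named above; the rectangular "polygonization" (bounding boxes standing in for true polygons) and the three qualitative relations are simplifying assumptions for illustration, not the paper's framework.

    # Two clusters of 2-D points (assumed already produced by a clustering step).
    clusters = {
        "A": [(1, 1), (2, 2), (2, 1)],
        "B": [(5, 5), (6, 7), (7, 6)],
    }

    def bounding_box(points):
        """Crude 'polygonization': replace a cluster by its bounding box."""
        xs, ys = zip(*points)
        return (min(xs), min(ys), max(xs), max(ys))

    def qualitative_relation(box1, box2):
        """Classify the spatial relation between two boxes (RCC-like, simplified)."""
        x1, y1, X1, Y1 = box1
        x2, y2, X2, Y2 = box2
        if X1 < x2 or X2 < x1 or Y1 < y2 or Y2 < y1:
            return "disjoint"
        if x1 <= x2 and y1 <= y2 and X1 >= X2 and Y1 >= Y2:
            return "contains"
        return "overlaps"

    box_a = bounding_box(clusters["A"])
    box_b = bounding_box(clusters["B"])
    print(qualitative_relation(box_a, box_b))   # disjoint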

Journal ArticleDOI
TL;DR: The research community has long recognized the study of non-monotonic reasoning (NMR) as a promising approach to modeling features of commonsense reasoning, and this paper indicates that the p-stable semantics for strong kernel programs is the same as the stable semantics.