
Showing papers on "Commonsense reasoning published in 2008"


Book
17 Apr 2008
TL;DR: This book provides the first comprehensive, systematic and uniform account of the state-of-the-art of second-order quantifier elimination in classical and non-classical logics.
Abstract: In recent years there has been an increasing use of logical methods and significant new developments have been spawned in several areas of computer science, ranging from artificial intelligence and software engineering to agent-based systems and the semantic web. In the investigation and application of logical methods there is a tension between:

* the need for a representational language strong enough to express domain knowledge of a particular application, and the need for a logical formalism general enough to unify several reasoning facilities relevant to the application, on the one hand, and
* the need to enable computationally feasible reasoning facilities, on the other hand.

Second-order logics are very expressive and allow us to represent domain knowledge with ease, but there is a high price to pay for the expressiveness. Most second-order logics are incomplete and highly undecidable. It is the quantifiers which bind relation symbols that make second-order logics computationally unfriendly. It is therefore desirable to eliminate these second-order quantifiers, when this is mathematically possible; and often it is. If second-order quantifiers are eliminable we want to know under which conditions, we want to understand the principles and we want to develop methods for second-order quantifier elimination. This book provides the first comprehensive, systematic and uniform account of the state of the art of second-order quantifier elimination in classical and non-classical logics. It covers the foundations, it discusses in detail existing second-order quantifier elimination methods, and it presents numerous examples of applications and non-standard uses in different areas. These include:

* classical and non-classical logics,
* correspondence and duality theory,
* knowledge representation and description logics,
* commonsense reasoning and approximate reasoning,
* relational and deductive databases, and
* complexity theory.

The book is intended for anyone interested in the theory and application of logics in computer science and artificial intelligence.
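As a trivial illustration of what second-order quantifier elimination means (an illustration added here, not an example taken from the book): when the quantified predicate has an explicit definition, it can be eliminated by substitution,

    \exists P \, [\, \forall x \, (P(x) \leftrightarrow A(x)) \wedge F(P) \,] \;\equiv\; F(A), \qquad \text{provided } P \text{ does not occur in } A,

where F(A) is F with every occurrence of P replaced by A. The interesting, and much harder, cases treated by elimination methods are those where no such explicit definition is available.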

80 citations


Journal ArticleDOI
Ernest Davis
TL;DR: A theory is presented that supports commonsense, qualitative reasoning about the flow of liquid around slowly moving solid objects, inferring that liquid can be poured from one container to another, given only qualitative information about the shapes and motions of the containers.

43 citations


Book ChapterDOI
01 Jan 2008
TL;DR: Question answering takes on a whole new dimension: the system has a huge body of documents, is asked a query in natural language, and is expected to answer the question not only using the documents but also using appropriate commonsense knowledge.
Abstract: Publisher Summary A search engine or a typical information retrieval (IR) system, such as Google, does not go far enough as it takes keywords and only gives a ranked list of documents that may contain those keywords. Often this list is very long, and an analyst might have to read the documents in the list. Other reasons behind the unsuitability of an IR system (for an analyst) are that the nuances of a question in a natural language cannot be adequately expressed through keywords, most IR systems ignore synonyms, and most IR systems cannot reason. There should be a system that can take the documents and the analyst's question as input, access the data in fact books, and do commonsense reasoning based on them to provide answers to questions. Such a system is referred to as a “question answering system” or a “QA system.” A precursor to question answering is database querying where one queries a database using a database query language. Question answering takes this to a whole new dimension where the system has a huge body of documents (in natural languages, possibly including multimedia objects, situated in the Web and described in a Web language), and it is asked a query in a natural language. It is expected to give an answer to the question not only using the documents but also using appropriate commonsense knowledge. The system needs to be able to accommodate new additions to the body of documents.

30 citations


Proceedings Article
16 Sep 2008
TL;DR: This paper reports on experiments on a corpus of a half million sentences of natural language text that test whether commonsense knowledge can be usefully acquired through Knowledge Infusion, and shows that chaining learned commonsense rules together leads to measurable improvements in prediction performance on the authors' task as compared with the baseline.
Abstract: A central goal of Artificial Intelligence is to create systems that embody commonsense knowledge in a reliable enough form that it can be used for reasoning in novel situations. Knowledge Infusion is an approach to this problem in which the commonsense knowledge is acquired by learning. In this paper we report on experiments on a corpus of a half million sentences of natural language text that test whether commonsense knowledge can be usefully acquired through this approach. We examine the task of predicting a deleted word from the remainder of a sentence for some 268 target words. As baseline we consider how well this task can be performed using learned rules based on the words within a fixed distance of the target word and their parts of speech. This captures an approach that has been previously demonstrated to be highly successful for a variety of natural language tasks. We then go on to learn from the corpus rules that embody commonsense knowledge, additional to the knowledge used in the baseline case. We show that chaining learned commonsense rules together leads to measurable improvements in prediction performance on our task as compared with the baseline. This is apparently the first experimental demonstration that commonsense knowledge can be learned from natural inputs on a massive scale reliably enough that chaining the learned rules is efficacious for reasoning.
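A rough sketch of the kind of baseline features the abstract describes, the words within a fixed distance of the (deleted) target word together with their parts of speech, is given below; the window size, the placeholder tagger, and the feature encoding are illustrative assumptions, not details taken from the paper.

    # Sketch of baseline context features for predicting a deleted target word:
    # neighbouring words within a fixed window plus their (placeholder) POS tags.
    def pos_tag(word):
        return "NN"  # stand-in; a real system would use a trained POS tagger

    def context_features(tokens, target_index, window=3):
        feats = []
        for offset in range(-window, window + 1):
            if offset == 0:
                continue  # the target word itself is hidden from the learner
            j = target_index + offset
            if 0 <= j < len(tokens):
                feats.append((offset, tokens[j], pos_tag(tokens[j])))
        return feats

    sentence = "the cat sat on the mat".split()
    print(context_features(sentence, sentence.index("sat")))

A rule learner would then be trained, per target word, on many such feature vectors; the paper's contribution is to chain additional learned commonsense rules on top of this baseline.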

29 citations


Proceedings Article
20 Jun 2008
TL;DR: A hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation based on the concept of 'molecular' models is proposed, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.
Abstract: Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behavior with precision and performance that is difficult to emulate in logical formalisms. Unfortunately, such simulations lack the deductive flexibility of techniques such as formal logics and so do not find natural application in the deductive machinery of commonsense or general purpose reasoning systems. This dilemma can, however, be resolved via a hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation based on the concept of 'molecular' models. This combination exploits the complementary strengths of logic and simulation, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.

27 citations


Book ChapterDOI
01 Jan 2008
TL;DR: This chapter discusses commonsense reasoning, which has been a focus of extensive study by the knowledge representation community since the early 1980s, an interest fueled by several fundamental challenges facing knowledge representation, such as modeling and reasoning about rules with exceptions or defaults and solving the frame problem.
Abstract: Publisher Summary Classical logic is monotonic in the following sense: whenever a sentence A is a logical consequence of a set of sentences T, A is also a consequence of an arbitrary superset of T. In other words, adding information never invalidates any conclusions. Commonsense reasoning is different. Plausible conclusions are often drawn based on the assumption that the world in which one functions and about which one reasons is normal and as expected. This is far from irrational. On the contrary, it is the best one can do in situations in which one has only incomplete information. However, as unexpected as it may be, it can happen that the normality assumptions turn out to be wrong. New information can show that the situation actually is abnormal in some respect. In this case, conclusions may have to be revised. Such reasoning, where additional information may invalidate conclusions, is called "nonmonotonic." It has been a focus of extensive study by the knowledge representation community since the early 1980s. This interest was fueled by several fundamental challenges facing knowledge representation, such as modeling and reasoning about rules with exceptions or defaults and solving the frame problem.
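The monotonic/nonmonotonic contrast can be made concrete with a small default-reasoning sketch (an illustration added here, not the chapter's own formalism): a conclusion drawn under a normality assumption is withdrawn once information about an abnormality arrives.

    # Default rule "birds normally fly": the conclusion holds only while nothing
    # known marks the individual as abnormal, so adding a fact can retract it.
    def flies(facts, x):
        return ("bird", x) in facts and ("abnormal", x) not in facts

    facts = {("bird", "tweety")}
    print(flies(facts, "tweety"))      # True: the normality assumption applies

    facts.add(("abnormal", "tweety"))  # new information: Tweety turns out to be a penguin
    print(flies(facts, "tweety"))      # False: the earlier conclusion is invalidated

In classical (monotonic) logic, by contrast, the first conclusion could never be lost by adding premises.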

25 citations


Journal ArticleDOI
Joan Bliss
TL;DR: In this paper, the authors report on a programme of research carried out over a period of twenty years by Joan Bliss, Jon Ogborn and others, and aimed at understanding how pupils and students reason about the everyday physical world.
Abstract: This paper reports on a programme of research carried out over a period of twenty years by Joan Bliss, Jon Ogborn and others, and aimed at understanding how pupils and students reason about the everyday physical world. Our starting point was the science alternative conceptions literature, particularly force and motion, where the results puzzled and motivated us to make sense of the diversity of the findings. In a second step a Commonsense Theory of Motion was developed, which served as a framework for four other studies in the area of force and motion. These revealed that children were indeed using their everyday knowledge to explain these physical notions. The next step was to ask whether children also reasoned in a commonsense way about the physical world more generally, which led to the development of both a new theoretical framework and a further study to investigate these ideas. The findings showed that the physical reasoning schemes shown to exist were: movement, action/force/effort, container, carr...

18 citations


Book ChapterDOI
01 Jan 2008
TL;DR: This chapter discusses the work of KR researchers who try to represent commonsense knowledge and carry out commonsense reasoning over some basic physical domains.
Abstract: Publisher Summary An intelligent creature or automaton that is set in a complex uncontrolled world will be able to act more effectively and flexibly if it understands the physical laws governing its surroundings and their relation to its own actions and the actions of other agents. This chapter discusses the work of KR researchers that tries to represent commonsense knowledge and carry out commonsense reasoning over some basic physical domains. There is a vast body of computer science and scientific computing that deals in one way or another with physical phenomena; almost all of this lies outside the scope of KR research and, hence, of this chapter. Even within artificial intelligence (AI), there are many types of physical reasoning that are excluded. For instance, the automated visual recognition of a scene is, in a sense, a type of physical reasoning. Image formation is a physical process; the problem in vision is to infer plausible characteristics of a scene given an image of it. This is not considered a problem for KR physical reasoning because the physics involved is too specialized. A single, quite complex, physical process and a single type of inference about the process are at issue; the computational techniques to be applied are highly tuned to that process and inference, and they hardly generalize to any other kind of problem.

17 citations


Book ChapterDOI
07 Sep 2008
TL;DR: This paper focuses on OMCS-Br, a collaborative project that uses web technologies to collect common sense knowledge from the general public and apply it in computer applications, in the hope that software with greater usability can be developed.
Abstract: Good sense can be defined as the quality that allows someone to make sensible decisions about what to do in specific situations. It can also be defined as good judgment. However, in order to have good sense, people have to use common sense knowledge. Computers are no different: nowadays they are still not able to make sensible decisions, and one of the reasons is that they lack common sense. This paper focuses on OMCS-Br, a collaborative project that makes use of web technologies in order to collect common sense knowledge from the general public and use it in computer applications. It presents how people can contribute to give computers the knowledge they need to perform common sense reasoning and, therefore, to make good sense decisions. In this manner, it is hoped that software with more usability can be developed.

12 citations


Journal ArticleDOI
TL;DR: This paper presents a theory and an implementation for the recovery of implicit entities and events of (non-)standard implicatures, and shows how the use of commonsense knowledge may fruitfully contribute to finding relevant implied meanings.
Abstract: In this paper we focus on the notion of "implicit" or lexically unexpressed linguistic elements that are nonetheless necessary for a complete semantic interpretation of a text. We refer to "entities" and "events" because the recovery of the implicit material may affect all the modules of a system for semantic processing, from the grammatically guided components to the inferential and reasoning ones. Reference to the system GETARUNS offers one possible implementation of the algorithms and procedures needed to cope with the problem and enables us to deal with the full spectrum of phenomena. The paper first addresses the following three types of "implicit" entities and events: the grammatical ones, as suggested by linguistic theories like LFG or similar generative theories; the semantic ones suggested in the FrameNet project, i.e. CNI, DNI, INI; and the pragmatic ones, for which we present a theory and an implementation for the recovery of implicit entities and events of (non-)standard implicatures. In particular, we show how the use of commonsense knowledge may fruitfully contribute to finding relevant implied meanings. The last implicit entity, only touched on for lack of space, is the Subject of Point of View, which is computed by Semantic Informational Structure and contributes the intended entity from whose point of view a given subjective statement is expressed.

9 citations


Proceedings Article
13 Jul 2008
TL;DR: This system translates basic English descriptions of a wide range of objects in a simplistic zoo environment into plausible, three-dimensional, interactive visualizations of their positions, orientations, and dimensions and serves as an extensible test-and-evaluation framework for a multitude of linguistic and artificial-intelligence investigations.
Abstract: This system translates basic English descriptions of a wide range of objects in a simplistic zoo environment into plausible, three-dimensional, interactive visualizations of their positions, orientations, and dimensions. It combines a semantic network and a contextually sensitive knowledge base as representations for explicit and implicit spatial knowledge, respectively. Its linguistic aspects address underspecification, vagueness, uncertainty, and context with respect to intrinsic, extrinsic, and deictic frames of spatial reference. The underlying commonsense reasoning formalism is probability-based geometric fields that are solved through constraint satisfaction. The architecture serves as an extensible test-and-evaluation framework for a multitude of linguistic and artificial-intelligence investigations.

Proceedings ArticleDOI
04 Mar 2008
TL;DR: It is shown that the qualitative approach is more effective than the quantitative approach and therefore it is more suitable to computing anticipatory systems with high reliability and high security requirements.
Abstract: A computing anticipatory system has to reason about actions as fast as possible in order to decide its next actions anticipatorily. We have implemented a general-purpose action reasoning engine for various computing anticipatory systems and performed some experiments with the engine and empirical knowledge represented quantitatively. Our experiments showed that the quantitative approach is very time-consuming and therefore not suitable for computing anticipatory systems with high reliability and high security requirements. This paper presents a qualitative approach for reasoning about actions quickly. We show that the qualitative approach is more effective than the quantitative approach and therefore more suitable for computing anticipatory systems with high reliability and high security requirements.

Proceedings ArticleDOI
18 Oct 2008
TL;DR: A method for acquiring commonsense knowledge about properties of concepts by analyzing how adjectives are used with nouns in everyday language: a large-scale corpus is mined for candidate concept-property pairs, and erroneously acquired pairs are filtered out based on heuristic rules and statistical approaches.
Abstract: Commonsense knowledge plays an important role in various areas such as natural language understanding, information retrieval, etc. This paper presents a method for acquiring commonsense knowledge about properties of concepts by analyzing how adjectives are used with nouns in everyday language. We first mine a large-scale corpus for potential concept-property pairs using lexico-syntactic patterns and then filter out erroneously acquired ones based on heuristic rules and statistical approaches. For each concept, we automatically select the commonsensical properties and evaluate their applicability. Finally, we generate commonsense knowledge represented with explicit fuzzy quantifiers. Experimental results demonstrate the effectiveness of our approach.
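A minimal sketch of the kind of lexico-syntactic mining step the abstract describes is shown below, assuming a toy "adjective immediately before a noun" pattern over a POS-tagged corpus; the pattern, the tag names, and the frequency threshold are illustrative assumptions rather than the authors' actual pipeline.

    # Toy extraction of (concept, property) candidates from a POS-tagged corpus:
    # an adjective directly preceding a noun yields a (noun, adjective) pair,
    # and low-frequency pairs are filtered out as likely noise.
    from collections import Counter

    tagged_corpus = [
        [("fresh", "ADJ"), ("apples", "NOUN"), ("are", "VERB"), ("red", "ADJ")],
        [("red", "ADJ"), ("apples", "NOUN"), ("taste", "VERB"), ("sweet", "ADJ")],
    ]

    pairs = Counter()
    for sentence in tagged_corpus:
        for (w1, t1), (w2, t2) in zip(sentence, sentence[1:]):
            if t1 == "ADJ" and t2 == "NOUN":
                pairs[(w2, w1)] += 1  # (concept, property)

    min_count = 1  # illustrative frequency threshold for the statistical filter
    print([p for p, c in pairs.items() if c >= min_count])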

Book ChapterDOI
01 Jan 2008
TL;DR: This chapter discusses a full online computer model for integrating deductive and inductive reasoning; the main tendency is to combine already well-known models of learning (inductive reasoning) and deductive reasoning.
Abstract: The development of a full online computer model for integrating deductive and inductive reasoning is of great interest in machine learning. The main tendency of integration is to combine, into a whole system, some already well-known models of learning (inductive reasoning) and deductive reasoning. For instance, the idea of combining inductive learning from examples with prior knowledge and default reasoning has been advanced in Giraud-Carrier and Martinez (1994).

Book ChapterDOI
02 Oct 2008
TL;DR: This paper develops a formal account of action, knowledge and time within the context of the Event Calculus and proposes a unified theory that is applicable to diverse scenarios of commonsense reasoning.
Abstract: To regulate and coordinate the behavior of agents within real-world environments, the participating entities must be able to reason not only about the state of the environment itself, but also about their own knowledge concerning that state, based on information acquired through sensing actions. Considering the dynamic nature of most realistic domains, the study of knowledge evolution over time is a critical aspect. This paper develops a formal account of action, knowledge and time within the context of the Event Calculus and proposes a unified theory that is applicable to diverse scenarios of commonsense reasoning.
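For readers unfamiliar with the base formalism, a common formulation of the core "simple" Event Calculus axioms that such theories build on (standard background, not the paper's knowledge extension) is, in LaTeX notation:

    HoldsAt(f, t_2) \leftarrow Happens(a, t_1) \land Initiates(a, f, t_1) \land t_1 < t_2 \land \lnot Clipped(t_1, f, t_2)

    Clipped(t_1, f, t_2) \equiv \exists a, t \, [\, Happens(a, t) \land t_1 \le t < t_2 \land Terminates(a, f, t) \,]

That is, a fluent f holds at time t_2 if some action initiated it earlier and no action terminating it has happened in between.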

Proceedings ArticleDOI
18 Nov 2008
TL;DR: In this article, the authors propose to use Herrera-Martinez' 2-tuple linguistic representation model for reasoning with uncertain and qualitative information in Dezert-Smarandache theory (DSmT) framework to preserve the precision and the efficiency of the fusion of linguistic information expressing the expert's qualitative beliefs.
Abstract: Most modern systems for information retrieval, fusion and management have to deal more and more with information expressed qualitatively (by linguistic labels), since human reports are better and more easily expressed in natural language than with numbers. In this paper, we propose to use Herrera-Martinez' 2-tuple linguistic representation model (i.e. equidistant linguistic labels with a numeric value assessment) for reasoning with uncertain and qualitative information in the Dezert-Smarandache theory (DSmT) framework, in order to preserve the precision and the efficiency of the fusion of linguistic information expressing the experts' qualitative beliefs. We present operators to deal with the 2-tuples and show on a simple example how qualitative DSmT-based fusion rules can be used for qualitative reasoning and fusion under uncertainty.
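The 2-tuple model referred to can be sketched briefly (a minimal illustration with a made-up five-label set, not the paper's fusion operators): a numeric assessment over the label indices is stored as the closest label plus a symbolic translation in [-0.5, 0.5), so no precision is lost.

    # Sketch of the Herrera-Martinez 2-tuple linguistic representation.
    labels = ["very_low", "low", "medium", "high", "very_high"]  # illustrative label set

    def to_two_tuple(beta):
        i = int(round(beta))
        return labels[i], beta - i          # (closest label, symbolic translation alpha)

    def from_two_tuple(label, alpha):
        return labels.index(label) + alpha  # back to a numeric assessment

    beta = (1 + 2.8) / 2                    # aggregate two assessments symbolically: 1.9
    print(to_two_tuple(beta))               # ('medium', -0.1) approximately

Fusion rules can then aggregate on the numeric side and convert back to a 2-tuple at the end, which is what preserves precision compared with rounding to a plain label at every step.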

Book ChapterDOI
01 Jan 2008
TL;DR: Causal complexes are groupings of finer-grained causal relations into a larger-grained causal object, which is necessarily more imprecise than some of its components.
Abstract: Causality can be imprecise as well as granular. Complete knowledge of all possible causal factors could lead to crisp causal understanding. However, knowledge of at least some causal effects is inherently inexact and imprecise. It is unlikely that complete knowledge of all possible factors can be known for many subjects. It may not be known what events are in the complex; or, what constraints and laws the complex is subject to. Consequently, causal knowledge is inherently incomplete and inexact. Whether or not all of the elements are precisely known, people recognize that a complex of elements usually causes an effect. Causal complexes are groupings of finer-grained causal relations into a larger-grained causal object. Commonsense world understanding deals with imprecision, uncertainty and imperfect knowledge. Usually, commonsense reasoning is more successful in reasoning about a fewer large-grained events than many fine-grained events. However, the larger-grained causal objects are necessarily more imprecise than some of their components. A satisficing solution might be to develop large-grained solutions and then only go to the finer-grain when the impreciseness of the large-grain is unsatisfactory.

Proceedings ArticleDOI
06 Oct 2008
TL;DR: One of the main results of this paper indicates that the p-stable semantics for semi-negative normal programs with constraints agrees with the Comp semantics, which means all the applications based on theComp semantics of semi- negative programs can also be based on p- stable semantics of this type of programs.
Abstract: Currently, non-monotonic reasoning (NMR) is a promising approach to model features of common sense reasoning. In order to formalize NMR, the research community has applied monotonic logics. The present paper furthers the study of one of the semantics useful in this formalization, called p-stable. We introduce three different formats for normal programs with constraints: negative normal programs, restricted negative normal programs and semi-negative normal programs. These forms help to simplify the search for p-stable models of the original program. One of the main results of this paper indicates that the p-stable semantics for semi-negative normal programs with constraints agrees with the Comp semantics. In this way, all the applications based on the Comp semantics of semi-negative programs can also be based on the p-stable semantics of this type of programs. It is worth mentioning that this class of programs can express interesting problems such as the 3-coloring problem.

01 Jan 2008
TL;DR: A vision of how the stories that people tell in Internet weblogs can be used directly for automated commonsense reasoning, specifically to support the core envisionment functions of event prediction, explanation, and imagination is presented.
Abstract: In this position paper we present a vision of how the stories that people tell in Internet weblogs can be used directly for automated commonsense reasoning, specifically to support the core envisionment functions of event prediction, explanation, and imagination.

Book ChapterDOI
01 Jan 2008
TL;DR: The companion article to this article in this volume, “Logic Programming Languages for Expert Systems,” discusses logic programming and negation as failure.
Abstract: Knowledge representation is a field of artificial intelligence that has been actively pursued since the 1940s. The issues at stake are: given a specific domain, how do we represent knowledge in that domain, and how do we reason about it? This issue of knowledge representation is of paramount importance, since the knowledge representation scheme may foster or hinder reasoning. The representation scheme can enable reasoning to take place, or it may make the desired reasoning impossible. To some extent, the knowledge representation depends upon the underlying technology. For instance, in order to perform default reasoning with exceptions, one needs weak negation (aka negation as failure). In fact, most complex forms of reasoning will require weak negation. This is a facility that is an integral part of logic programs but is lacking from expert system shells. Many Prolog implementations provide negation as failure; however, they do not implement the proper semantics. The companion article in this volume, "Logic Programming Languages for Expert Systems," discusses logic programming and negation as failure.

Proceedings ArticleDOI
18 Oct 2008
TL;DR: The static reasoning process of the mechanism structure inside a truck is shown, which indicates that qualitative reasoning can serve as an auxiliary method when numeric information is incomplete.
Abstract: A qualitative reasoning approach is used for the spatial configuration of mechanisms in the conceptual design phase. The appropriate types of mechanisms are first chosen for a specific reasoning task. The qualitative vectors of the basic components are given to construct the complicated mechanism. Based on the qualitative sign algebra rules, a reasoning algorithm is presented to reason about the spatial configuration of the selected mechanism. Finally, the static reasoning process of the mechanism structure inside a truck is shown, which indicates that qualitative reasoning can serve as an auxiliary method when numeric information is incomplete.
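The "qualitative sign algebra rules" mentioned in the abstract can be illustrated with a minimal sketch; the rules below are the standard sign calculus used in qualitative reasoning, not the authors' specific qualitative-vector formulation.

    # Minimal qualitative sign algebra: values are '+', '-', '0', or '?' (ambiguous).
    # Adding quantities of opposite sign gives an ambiguous result; multiplication
    # follows the ordinary sign rules.
    def q_add(a, b):
        if a == "0":
            return b
        if b == "0":
            return a
        if a == b:
            return a
        return "?"  # e.g. '+' plus '-' could come out either way

    def q_mul(a, b):
        if "?" in (a, b):
            return "?"
        if "0" in (a, b):
            return "0"
        return "+" if a == b else "-"

    print(q_add("+", "+"), q_add("+", "-"), q_mul("-", "-"))  # prints: + ? +

Composing such signs along the components of a mechanism is what allows the spatial configuration to be reasoned about even when numeric values are missing.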

Proceedings ArticleDOI
21 Jul 2008
TL;DR: A system that augments user interaction to elicit intelligence of a living room adorned with aware artefacts by improvising other spatially correlated artefacts with the help of commonsense knowledge is presented.
Abstract: In this paper, we present a system that augments user interaction to elicit intelligence in a living room adorned with aware artefacts. These artefacts are computationally augmented everyday objects, like a phone, a light, a TV, etc., which are enabled to communicate their operational states among themselves. The operational state of an artefact is usually changed to another state as a result of an interaction by the user. Our system augments such interactions by improvising other spatially correlated artefacts with the help of commonsense knowledge. For example, if a user picks up a ringing phone while the TV is on, the system either mutes or reduces the TV volume. In this case the phone's state is changed to "in use" due to a deliberate interaction of the user, and simultaneously the state of the TV may be changed to "mute" or "low volume" by the system. This idea of augmenting user interaction is elaborated in this paper. We present the computational model and the implementation details of our approach and also demonstrate its feasibility through an informal user study.
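The phone-and-TV example can be pictured as a simple rule over artefact states; the sketch below is a hypothetical reconstruction of that behaviour for illustration, not the authors' computational model.

    # Hypothetical sketch: a deliberate user interaction changes one artefact's state,
    # and a commonsense rule then adjusts a spatially correlated artefact.
    states = {"phone": "ringing", "tv": "on", "light": "on"}

    def commonsense_rules(states):
        if states.get("phone") == "in use" and states.get("tv") == "on":
            states["tv"] = "mute"  # or "low volume"

    def user_interaction(artefact, new_state):
        states[artefact] = new_state   # change caused by the user
        commonsense_rules(states)      # system-side augmentation

    user_interaction("phone", "in use")  # user picks up the ringing phone
    print(states)                        # {'phone': 'in use', 'tv': 'mute', 'light': 'on'}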

01 Jan 2008
TL;DR: The tutor combines agent technology, medical protocols, planning, real-time simulation, and student and domain modeling to yield a tutor capable of monitoring a student's progress without the need for a human teacher.
Abstract: This paper examines the problem of achieving interaction among multiple agents within an intelligent tutor. The tutor combines agent technology, medical protocols, planning, real-time simulation, and student and domain modeling to yield a tutor capable of monitoring a student's progress without the need for a human teacher. This tutor provides one of the first complete integrations of real-time simulation and knowledge-based reasoning based upon sophisticated principles of communication and coordination. We report on novel techniques for commonsense reasoning about plans, multi-agent reasoning and recovery from unexpected situations.

Proceedings ArticleDOI
12 Jul 2008
TL;DR: Reasoning about cardinal direction relations, based on the representation of one-dimensional topological relations, is presented within qualitative spatial reasoning, an area that has received a lot of attention.
Abstract: Qualitative spatial reasoning has received a lot of attention in the areas of Geographic Information Systems, Artificial Intelligence, Databases and Multimedia. Direction relation reasoning is an important branch in the field of spatial reasoning. Applying the theory of interval algebra and rectangle algebra to the study of points, a new representation method based on projection intervals is presented. Because coordinate projections have certain topological features, we transform the nine cardinal direction relations into the superimposition of one-dimensional topological relations on the x axis and y axis, for a further study of the integrated reasoning between topological relations and direction relations. Finally, this paper presents the reasoning of cardinal direction relations based on the representation of one-dimensional topological relations.
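The projection idea, reducing a cardinal direction between two objects to a pair of one-dimensional relations on the x and y axes, can be sketched as follows; the bounding-box representation and the coarse relation names are illustrative assumptions, not the paper's exact nine-relation model.

    # Sketch: derive a coarse cardinal direction between two axis-aligned rectangles
    # from their interval projections on the x and y axes.
    def interval_relation(a_min, a_max, b_min, b_max):
        # Coarse 1-D relation of interval A to interval B.
        if a_max < b_min:
            return "before"
        if a_min > b_max:
            return "after"
        return "overlap"

    def cardinal_direction(a, b):
        # a, b are bounding boxes (xmin, ymin, xmax, ymax); direction of a relative to b.
        x = interval_relation(a[0], a[2], b[0], b[2])
        y = interval_relation(a[1], a[3], b[1], b[3])
        ew = {"before": "W", "after": "E", "overlap": ""}[x]
        ns = {"before": "S", "after": "N", "overlap": ""}[y]
        return (ns + ew) or "SAME"

    print(cardinal_direction((5, 5, 6, 6), (0, 0, 1, 1)))  # NE

Refining "overlap" into the finer relations of interval algebra is what yields the integrated topological/directional reasoning the paper aims at.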

Proceedings ArticleDOI
12 Dec 2008
TL;DR: A geometrical projection-based system is applied to divide a 3D space into sub-divisions and a generalized equation is derived as the result from the integrated reasoning.
Abstract: Qualitative representation and reasoning about combined direction and distance relationships are recognized as one of the promising theoretical themes in 3D GIS, which is related to urban information management, planning and services, mining engineering, and so on. The focus of the study is on methodologies for describing spatial objects and determining their geo-referenced relationships. In this paper, a geometrical projection-based system is applied to divide a 3D space into sub-divisions. Analytical geometry is used as the mathematical tool to represent both distance and direction relationships, which are then integrated to develop a conceptual framework to depict positional relationships. A generalized equation is derived as the result of the integrated reasoning. Empirical examples are provided to show the qualitative reasoning process and results of positional relationships for some classical cases.

01 Jan 2008
TL;DR: The development of methods and tools for modeling human reasoning by structural analogy in intelligent decision support systems is considered, together with the possibility of estimating the obtained analogies while taking the context into account.
Abstract: Development of methods and tools for modeling human reasoning (common sense reasoning) by analogy in intelligent decision support systems is considered. Special attention is drawn to modeling reasoning by structural analogy taking the context into account. The possibility of estimating the obtained analogies taking into account the context is studied. This work was supported by RFBR.

Book ChapterDOI
01 Jan 2008
TL;DR: The greatest impact on the decision value quality of association rules may come from treating association rules as causal statements without understanding whether there is, in fact, underlying causality.
Abstract: Naive association rules may result if the underlying causality of the rules is not considered. The greatest impact on the decision value quality of association rules may come from treating association rules as causal statements without understanding whether there is, in fact, underlying causality. A complete knowledge of all possible factors (i.e., states, events, constraints) might lead to a crisp description of whether an effect will occur. However, it is unlikely that all possible factors can be known. Commonsense understanding and reasoning accepts imprecision, uncertainty and imperfect knowledge. The events in an event/effect complex may be incompletely known, as may the constraints and laws the complex is subject to. Usually, commonsense reasoning is more successful in reasoning about a few large-grained events than about many fine-grained events. A satisficing solution would be to develop large-grained solutions and only use the finer grain when the impreciseness of the large grain is unsatisfactory.

Proceedings ArticleDOI
16 Mar 2008
TL;DR: This work takes a highly reactive calculus (membrane computing) and combines it with high-level modeling concepts, using quotient-based reasoning to introduce situational knowledge into knowledge-based human-machine interaction.
Abstract: In our approach we show how to integrate traditional BDI logics and reactive commonsense reasoning. Our goal is to support efficient runtime inferences in the course of distributed decision making. To this end, we take a highly reactive calculus (membrane computing) and combine it with high-level modeling concepts. In order to obtain efficient inference mechanisms, we use quotient-based reasoning. With this framework we are able to introduce situational knowledge into knowledge-based human-machine interaction. We think that this kind of situational reasoning is highly important for human-machine interaction in the age of disappearing computing.

01 Jan 2008
TL;DR: This work presents a specialization of the DeLP system, called Database Defeasible Logic Programming (DB DeLP), which could be easily integrated with a relational database component to achieve a system capable of massive data processing and intelligent behavior.
Abstract: Information systems play an ever-increasing key role in our society. In particular, data intensive applications are in constant demand, and there is a need for computing environments with much more intelligent capabilities than those present in today's Database Management Systems (DBMS). Nowadays intelligent applications require better reasoning capabilities than those present in older data processing systems [7]. Argumentation frameworks appear to be an excellent starting point for building such systems. Research in argumentation has provided important results while striving to obtain tools for common sense reasoning. As a result, argumentation systems have substantially evolved in the past few years, and this has resulted in a new set of argument-based applications in diverse areas where knowledge representation issues play a major role. Clustering algorithms [6], intelligent web search [3], recommender systems [4, 3], and natural language assessment [2] are the outcome of this evolution. We claim that massive data processing systems can be combined with argumentation to obtain systems that administer and reason with large databases. This would result in a system that can extract and process information from massive databases and exhibit intelligent behavior and common sense reasoning as a by-product of the argumentation system used for the reasoning process. In this work, we present a specialization of the DeLP [5] system, called Database Defeasible Logic Programming (DB DeLP). This framework could be easily integrated with a relational database component to achieve a system capable of massive data processing and intelligent behavior. Next, we formally define this system.

Proceedings ArticleDOI
26 Sep 2008
TL;DR: A framework architecture is presented that uses the QPT ontology as the knowledge representation scheme to model the behaviors of a number of organic reactions and the design of two main components for a tool abbreviated as QRIOM for predicting and explaining organic reactions is placed.
Abstract: The work discusses the application of an artificial intelligence technique called qualitative reasoning (QR) and a process-based ontology in constructing qualitative models for organic reaction simulation. We present a framework architecture that uses the QPT ontology as the knowledge representation scheme to model the behaviors of a number of organic reactions. The main focus of this paper placed on the design of two main components (model constructor and reasoning engine) for a tool abbreviated as QRIOM for predicting and explaining organic reactions. The discussion starts by presenting the workflow of the reasoning process and the automated model construction logic. We then move on to demonstrate how the constructed models can be used to reproduce the behavior of organic reactions. Finally, behavioral explanation manifestation is discussed. The simulator is implemented in bi lingual; Prolog is at the backend supplying data and chemical theories while Java handles all front-end GUI and molecular pattern updating.