
Showing papers on "Knowledge representation and reasoning" published in 2012


Journal ArticleDOI
TL;DR: The Knowledge-Learning-Instruction framework is described, which promotes the emergence of instructional principles of high potential for generality, while explicitly identifying constraints of, and opportunities for, detailed analysis of the knowledge students may acquire in courses.

511 citations


Book
19 Dec 2012
TL;DR: This book presents a practical introduction to ASP, aiming at using ASP languages and systems for solving application problems, and introduces ASP's solving technology, modeling language and methodology.
Abstract: Answer Set Programming (ASP) is a declarative problem solving approach, initially tailored to modeling problems in the area of Knowledge Representation and Reasoning (KRR). More recently, its attractive combination of a rich yet simple modeling language with high-performance solving capacities has sparked interest in many other areas even beyond KRR. This book presents a practical introduction to ASP, aiming at using ASP languages and systems for solving application problems. Starting from the essential formal foundations, it introduces ASP's solving technology, modeling language and methodology, while illustrating the overall solving process by practical examples. Table of Contents: List of Figures / List of Tables / Motivation / Introduction / Basic modeling / Grounding / Characterizations / Solving / Systems / Advanced modeling / Conclusions
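To give a feel for the modeling style the book teaches, here is a minimal sketch assuming the clingo Python bindings (pip install clingo); the graph 3-coloring encoding is a standard ASP idiom, not an excerpt from the book:

```python
# Minimal ASP sketch, assuming the clingo Python bindings are installed;
# the encoding is a standard graph 3-coloring idiom.
from clingo import Control

program = """
node(1..3).  edge(1,2).  edge(2,3).
col(red;green;blue).
% choose exactly one color per node
1 { color(N,C) : col(C) } 1 :- node(N).
% adjacent nodes must not share a color
:- edge(X,Y), color(X,C), color(Y,C).
#show color/2.
"""

ctl = Control(["0"])          # "0" = enumerate all answer sets
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))
```

Each answer set printed is one admissible coloring; the declarative split between the generate part (the choice rule) and the test part (the constraint) is the modeling methodology the book develops in depth.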

503 citations


Proceedings Article
01 May 2012
TL;DR: The latest iteration of ConceptNet 5 is presented, including its fundamental design decisions, ways to use it, and evaluations of its coverage and accuracy.
Abstract: ConceptNet is a knowledge representation project, providing a large semantic graph that describes general human knowledge and how it is expressed in natural language. This paper presents the latest iteration, ConceptNet 5, including its fundamental design decisions, ways to use it, and evaluations of its coverage and accuracy.
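ConceptNet's semantic graph can be queried over the Web. A minimal sketch, assuming the public API at api.conceptnet.io, which serves the current release and so postdates the version described in this 2012 paper:

```python
# Hedged sketch of querying ConceptNet's public Web API; the endpoint
# serves the current release, not necessarily ConceptNet 5 as of 2012.
import requests

obj = requests.get("http://api.conceptnet.io/c/en/knowledge").json()
for edge in obj["edges"][:5]:
    # each edge links two concepts via a relation such as /r/IsA
    print(edge["rel"]["label"], "|", edge.get("surfaceText"))
```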

475 citations


Journal ArticleDOI
TL;DR: This paper surveys and classifies most of the ontology-based approaches developed, in order to evaluate their advantages and limitations and compare their expected performance from both theoretical and practical points of view, and presents a new ontology-based measure relying on the exploitation of taxonomical features.
Abstract: Estimation of the semantic likeness between words is of great importance in many applications dealing with textual data, such as natural language processing, knowledge acquisition and information retrieval. Semantic similarity measures exploit knowledge sources as the basis for their estimations. In recent years, ontologies have attracted growing interest thanks to global initiatives such as the Semantic Web, as they offer a structured knowledge representation. Thanks to the possibilities that ontologies open up for the semantic interpretation of terms, many ontology-based similarity measures have been developed. Several families of measures can be identified according to the principle on which they base the similarity assessment and the way in which ontologies are exploited or complemented with other sources. In this paper, we survey and classify most of the ontology-based approaches developed in order to evaluate their advantages and limitations and compare their expected performance from both theoretical and practical points of view. We also present a new ontology-based measure relying on the exploitation of taxonomical features. The evaluation and comparison of our approach's results against those reported by related works under a common framework suggest that our measure provides high accuracy without some of the limitations observed in other works.
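As a concrete instance of the taxonomical family of measures surveyed here, the following sketch implements the classic Wu-Palmer measure over a toy is-a hierarchy; it illustrates the family, not the paper's new measure:

```python
# Sketch of a classic ontology-based similarity measure (Wu & Palmer),
# not the paper's new one: sim = 2*depth(lcs) / (depth(a) + depth(b)).
toy_taxonomy = {  # child -> parent in a toy is-a hierarchy
    "dog": "mammal", "cat": "mammal", "mammal": "animal",
    "trout": "fish", "fish": "animal", "animal": None,
}

def path_to_root(c):
    path = []
    while c is not None:
        path.append(c)
        c = toy_taxonomy[c]
    return path  # concept, parent, ..., root

def wu_palmer(a, b):
    pa, pb = path_to_root(a), path_to_root(b)
    lcs = next(c for c in pa if c in pb)    # lowest common subsumer
    def depth(c):                           # root has depth 1
        return len(path_to_root(c))
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("dog", "cat"))    # ~0.67 (share 'mammal')
print(wu_palmer("dog", "trout"))  # ~0.33 (share only 'animal')
```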

361 citations


Journal ArticleDOI
TL;DR: The REG problem is introduced and early work in this area is described, discussing what basic assumptions lie behind it, and showing how its remit has widened in recent years.
Abstract: This article offers a survey of computational research on referring expression generation (REG). It introduces the REG problem and describes early work in this area, discussing what basic assumptions lie behind it, and showing how its remit has widened in recent years. We discuss computational frameworks underlying REG, and demonstrate a recent trend that seeks to link REG algorithms with well-established Knowledge Representation techniques. Considerable attention is given to recent efforts at evaluating REG algorithms and the lessons that they allow us to learn. The article concludes with a discussion of the way forward in REG, focusing on references in larger and more realistic settings.

352 citations


Proceedings ArticleDOI
14 May 2012
TL;DR: A probabilistic framework for semantic mapping that combines heterogeneous, uncertain information such as object observations, the shape, size and appearance of rooms, and human input; it relies on the concept of spatial properties, which make the semantic map more descriptive and the system more scalable and better adapted for human interaction.
Abstract: This paper presents a probabilistic framework for semantic mapping that combines heterogeneous, uncertain information such as object observations, the shape, size and appearance of rooms, and human input. It abstracts multi-modal sensory information and integrates it with conceptual common-sense knowledge in a fully probabilistic fashion. It relies on the concept of spatial properties, which make the semantic map more descriptive and the system more scalable and better adapted for human interaction. A probabilistic graphical model, a chain graph, is used to represent the conceptual information and perform spatial reasoning. Experimental results from online system tests in a large unstructured office environment highlight the system's ability to infer semantic room categories, predict the existence of objects and the values of other spatial properties, and reason about unexplored space.
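The chain-graph model itself is not reproduced here, but a naive-Bayes-style toy sketch conveys the flavor of inferring a room category from uncertain object observations; all probabilities below are invented:

```python
# Toy sketch of semantic-map style inference (not the paper's chain graph):
# P(room | objects) is proportional to P(room) * product of P(object | room).
priors = {"kitchen": 0.3, "office": 0.7}
likelihood = {  # P(object observed | room), invented numbers
    "kitchen": {"mug": 0.8, "monitor": 0.1},
    "office":  {"mug": 0.4, "monitor": 0.9},
}

def posterior(objects):
    scores = {}
    for room, prior in priors.items():
        p = prior
        for obj in objects:
            p *= likelihood[room][obj]
        scores[room] = p
    z = sum(scores.values())                  # normalize
    return {room: p / z for room, p in scores.items()}

print(posterior(["mug", "monitor"]))  # office wins despite the mug
```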

277 citations


Book
06 Dec 2012
TL;DR: This book presents neural-symbolic learning systems: methods for inserting symbolic background knowledge into neural networks, refining it by inductive learning, extracting rules from trained networks, and revising inconsistent knowledge, with applications in logic programming and nonmonotonic reasoning.
Abstract: 1. Introduction and Overview.- 1.1 Why Integrate Neurons and Symbols?.- 1.2 Strategies of Neural-Symbolic Integration.- 1.3 Neural-Symbolic Learning Systems.- 1.4 A Simple Example.- 1.5 How to Read this Book.- 1.6 Summary.- 2. Background.- 2.1 General Preliminaries.- 2.2 Inductive Learning.- 2.3 Neural Networks.- 2.3.1 Architectures.- 2.3.2 Learning Strategy.- 2.3.3 Recurrent Networks.- 2.4 Logic Programming.- 2.4.1 What is Logic Programming?.- 2.4.2 Fixpoints and Definite Programs.- 2.5 Nonmonotonic Reasoning.- 2.5.1 Stable Models and Acceptable Programs.- 2.6 Belief Revision.- 2.6.1 Truth Maintenance Systems.- 2.6.2 Compromise Revision.- I. Knowledge Refinement in Neural Networks.- 3. Theory Refinement in Neural Networks.- 3.1 Inserting Background Knowledge.- 3.2 Massively Parallel Deduction.- 3.3 Performing Inductive Learning.- 3.4 Adding Classical Negation.- 3.5 Adding Metalevel Priorities.- 3.6 Summary and Further Reading.- 4. Experiments on Theory Refinement.- 4.1 DNA Sequence Analysis.- 4.2 Power Systems Fault Diagnosis.- 4.3 Discussion.- 4.4 Appendix.- II. Knowledge Extraction from Neural Networks.- 5. Knowledge Extraction from Trained Networks.- 5.1 The Extraction Problem.- 5.2 The Case of Regular Networks.- 5.2.1 Positive Networks.- 5.2.2 Regular Networks.- 5.3 The General Case Extraction.- 5.3.1 Regular Subnetworks.- 5.3.2 Knowledge Extraction from Subnetworks.- 5.3.3 Assembling the Final Rule Set.- 5.4 Knowledge Representation Issues.- 5.5 Summary and Further Reading.- 6. Experiments on Knowledge Extraction.- 6.1 Implementation.- 6.2 The Monk's Problems.- 6.3 DNA Sequence Analysis.- 6.4 Power Systems Fault Diagnosis.- 6.5 Discussion.- III. Knowledge Revision in Neural Networks.- 7. Handling Inconsistencies in Neural Networks.- 7.1 Theory Revision in Neural Networks.- 7.1.1 The Equivalence with Truth Maintenance Systems.- 7.1.2 Minimal Learning.- 7.2 Solving Inconsistencies in Neural Networks.- 7.2.1 Compromise Revision.- 7.2.2 Foundational Revision.- 7.2.3 Nonmonotonic Theory Revision.- 7.3 Summary of the Chapter.- 8. Experiments on Handling Inconsistencies.- 8.1 Requirements Specifications Evolution as Theory Refinement.- 8.1.1 Analysing Specifications.- 8.1.2 Revising Specifications.- 8.2 The Automobile Cruise Control System.- 8.2.1 Knowledge Insertion.- 8.2.2 Knowledge Revision: Handling Inconsistencies.- 8.2.3 Knowledge Extraction.- 8.3 Discussion.- 8.4 Appendix.- 9. Neural-Symbolic Integration: The Road Ahead.- 9.1 Knowledge Extraction.- 9.2 Adding Disjunctive Information.- 9.3 Extension to the First-Order Case.- 9.4 Adding Modalities.- 9.5 New Preference Relations.- 9.6 A Proof Theoretical Approach.- 9.7 The "Forbidden Zone" [Amax, Amin].- 9.8 Acceptable Programs and Neural Networks.- 9.9 Epilogue.

245 citations


Proceedings Article
12 Jul 2012
TL;DR: This work presents a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences, and demonstrates recovery of rich semantic structure by extracting logical forms from natural language queries against Freebase.
Abstract: We present a method for training a semantic parser using only a knowledge base and an unlabeled text corpus, without any individually annotated sentences. Our key observation is that multiple forms of weak supervision can be combined to train an accurate semantic parser: semantic supervision from a knowledge base, and syntactic supervision from dependency-parsed sentences. We apply our approach to train a semantic parser that uses 77 relations from Freebase in its knowledge representation. This semantic parser extracts instances of binary relations with state-of-the-art accuracy, while simultaneously recovering much richer semantic structures, such as conjunctions of multiple relations with partially shared arguments. We demonstrate recovery of this richer structure by extracting logical forms from natural language queries against Freebase. On this task, the trained semantic parser achieves 80% precision and 56% recall, despite never having seen an annotated logical form.

172 citations


Proceedings ArticleDOI
25 Jun 2012
TL;DR: The design of a comprehensive description ontology for knowledge representation in the domain of Internet of Things is presented and how it can be used to support tasks such as service discovery, testing and dynamic composition is discussed.
Abstract: Semantic modeling for the Internet of Things has become fundamental to resolving the problem of interoperability, given the distributed and heterogeneous nature of the "Things". Most current research has focused primarily on modeling devices and resources, while paying less attention to the access and utilisation of the information generated by the things. The idea that things are able to expose standard service interfaces coincides with service-oriented computing and, more importantly, represents a scalable means for business services and applications that need context awareness and intelligence to access and consume physical-world information. We present the design of a comprehensive description ontology for knowledge representation in the domain of the Internet of Things and discuss how it can be used to support tasks such as service discovery, testing and dynamic composition.

168 citations


Journal ArticleDOI
TL;DR: A mature architecture for typical-case reasoning tasks is provided in RacerPro, a description logic reasoner that goes well beyond standard inference services provided by other OWL reasoners.
Abstract: RacerPro is a software system for building applications based on ontologies. The backbone of RacerPro is a description logic reasoner. It provides inference services for terminological knowledge as well as for representations of knowledge about individuals. Based on new optimization techniques and techniques that have been developed in the research field of description logics throughout the years, a mature architecture for typical-case reasoning tasks is provided. The system has been used in hundreds of research projects and industrial contexts throughout the last twelve years. W3C standards as well as detailed feedback reports from numerous users have influenced the design of the system architecture in general, and have also shaped the RacerPro knowledge representation and interface languages. With its query and rule languages, RacerPro goes well beyond standard inference services provided by other OWL reasoners.

154 citations


Journal ArticleDOI
17 Feb 2012
TL;DR: In this article, the authors present a formal definition and a process theory of complex problem solving (CPS) applicable to the interdisciplinary field, portraying CPS as knowledge acquisition and knowledge application in the goal-oriented control of systems that contain many highly interrelated elements.
Abstract: This article is about Complex Problem Solving (CPS), its history in a variety of research domains (e.g., human problem solving, expertise, decision making, and intelligence), a formal definition and a process theory of CPS applicable to the interdisciplinary field. CPS is portrayed as (a) knowledge acquisition and (b) knowledge application concerning the goal-oriented control of systems that contain many highly interrelated elements (i.e., complex systems). The impact of implicit and explicit knowledge as well as systematic strategy selection on the solution process are discussed, emphasizing the importance of (1) information generation (due to the initial intransparency of the situation), (2) information reduction (due to the overcharging complexity of the problem's structure), (3) model building (due to the interconnectedness of the variables), (4) dynamic decision making (due to the eigendynamics of the system), and (5) evaluation (due to many, interfering and/or ill-defined goals).

Patent
10 Sep 2012
TL;DR: In this paper, a method of constructing an elemental data structure may include analyzing first information to identify a first elemental component associated with a data consumer, and adding the first component to a customized module corresponding to the data consumer.
Abstract: Techniques for analyzing and synthesizing complex knowledge representations (KRs) may utilize an atomic knowledge representation model including an elemental data structure and knowledge processing rules that are machine-readable. The elemental data structure may include a universal kernel and customized modules, which may represent knowledge that is generally applicable to a population and knowledge that is specifically applicable to individual data consumers, respectively. A method of constructing an elemental data structure may include analyzing first information to identify a first elemental component associated with a data consumer, and adding the first elemental component to a customized module corresponding to the data consumer. The method may also include analyzing second information to identify a second elemental component associated with a population, and adding the second elemental component to the universal kernel.

Journal ArticleDOI
TL;DR: A new approach to effective personalization based on Semantic Web technologies, implemented in the new version of the system, Protus 2.0, which comprises the use of an ontology and adaptation rules for knowledge representation and inference engines for reasoning.
Abstract: With the development of the Semantic Web, the use of ontologies as a formalism to describe knowledge and information in a way that can be shared on the web is becoming common. The explicit conceptualization of system components in the form of an ontology facilitates knowledge sharing, knowledge reuse, communication and collaboration, and the construction of knowledge-rich and knowledge-intensive systems. The Semantic Web provides huge potential and opportunities for developing the next generation of e-learning systems. In previous work, we presented a tutoring system named Protus (PRogramming TUtoring System) that is used for learning the essence of the Java programming language. It uses principles of learning-style identification and content recommendation for course personalization. This paper presents a new approach to effective personalization based on Semantic Web technologies, implemented in a new version of the system, named Protus 2.0. This comprises the use of an ontology and adaptation rules for knowledge representation and inference engines for reasoning. The functionality, structure and implementation of the Protus 2.0 ontology, as well as the syntax of the SWRL rules implemented for on-the-fly personalization, are presented in this paper.

Journal ArticleDOI
31 Jan 2012-PLOS ONE
TL;DR: This paper introduces the olog, or ontology log, a category-theoretic model for knowledge representation (KR) grounded in formal mathematics, which can be rigorously formulated and cross-compared in ways that other KR models cannot.
Abstract: In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research.
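A sketch in the spirit of the paper's examples (the labels are invented; amsmath assumed): an olog is a category whose boxed objects are types and whose arrows are functional "aspects", each readable as an English sentence:

```latex
% A two-arrow olog fragment: boxes are types, arrows are functional
% aspects; read "a book has as first author a person; a person has
% as mother a woman."
\[
  \boxed{\text{a book}}
    \xrightarrow{\ \text{has as first author}\ }
  \boxed{\text{a person}}
    \xrightarrow{\ \text{has as mother}\ }
  \boxed{\text{a woman}}
\]
```

Functionality is the key constraint: every instance of the source type must have exactly one instance of the target type along each arrow, which is what lets ologs double as database schemas.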

Proceedings ArticleDOI
24 Dec 2012
TL;DR: The goal of this working group is to develop a standard ontology and associated methodology for knowledge representation and reasoning in robotics and automation, together with the representation of concepts in an initial set of application domains.
Abstract: This article discusses a newly formed IEEE-RAS working group entitled Ontologies for Robotics and Automation (ORA). The goal of this working group is to develop a standard ontology and associated methodology for knowledge representation and reasoning in robotics and automation, together with the representation of concepts in an initial set of application domains. The standard provides a unified way of representing knowledge and provides a common set of terms and definitions, allowing for unambiguous knowledge transfer among any group of humans, robots, and other artificial systems. In addition to describing the goal and structure of the group, this article gives some examples of how the ontology, once developed, can be used by applications such as industrial kitting.

Journal ArticleDOI
TL;DR: The objective of this paper is to describe the development of a screening expert system that helps detect CAD at an early stage; the proposed methodology is designed to assist medical practitioners in predicting a patient's risk status for CAD from rules provided by medical experts.
Abstract: Coronary artery disease (CAD) affects millions of people all over the world, including a major portion in India, every year. Although much progress has been made in medical science, the early detection of this disease remains a challenge for prevention. The objective of this paper is to describe the development of a screening expert system that helps detect CAD at an early stage. Rules were formulated from doctors, and a fuzzy expert system approach was taken to cope with the uncertainty present in the medical domain. This work describes the risk factors responsible for CAD, the knowledge acquisition and knowledge representation techniques, the method of rule organisation, the fuzzification of clinical parameters, and the defuzzification of fuzzy output to crisp values. The system is implemented using object-oriented analysis and design. The proposed methodology is developed to assist medical practitioners in predicting a patient's risk status for CAD from rules provided by medical experts. The present paper focuses on rule organisation using the concept of modules, a meta-rule base, rule address storage in a tree representation, and rule consistency checking for efficient search over the large number of rules in the rule base. The developed system achieves 95.85% sensitivity and 83.33% specificity in CAD risk computation.
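A generic Mamdani-style sketch of the fuzzification, inference and defuzzification cycle described above; the variable names and thresholds are invented and this is not the paper's CAD rule base:

```python
# Generic Mamdani-style fuzzy sketch (not the paper's rule base):
# triangular memberships, min for rule firing, centroid defuzzification.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk(cholesterol, systolic_bp):
    # invented rule: IF cholesterol high AND bp high THEN risk high
    high_chol = tri(cholesterol, 200, 260, 320)
    high_bp   = tri(systolic_bp, 130, 160, 190)
    fire = min(high_chol, high_bp)            # rule firing strength
    # centroid of the clipped 'high risk' output set over [0, 100]
    xs = range(101)
    mu = [min(fire, tri(x, 50, 100, 150)) for x in xs]
    return sum(x * m for x, m in zip(xs, mu)) / (sum(mu) or 1)

print(round(risk(250, 165), 1))  # crisp risk score in [0, 100]
```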

Book
12 Jan 2012
TL;DR: The work presented in this thesis should be of interest to researchers in the area of knowledge representation and reasoning, and developers of reasoners and ontology editors, who wish to incorporate explanation generation techniques into their systems.
Abstract: The Web Ontology Language, OWL, is the latest standard in logic based ontology languages. It is built upon the foundations of highly expressive Description Logics, which are fragments of First Order Logic. These logical foundations mean that it is possible to compute what is entailed by an OWL ontology. The reasons for entailments can range from fairly simple localised reasons through to highly non-obvious reasons. In both cases, without tool support that provides explanations for entailments, it can be very difficult or impossible to understand why an entailment holds. In the OWL world, justifications, which are minimal entailing subsets of ontologies, have emerged as the dominant form of explanation. This thesis investigates justification based explanation techniques. The core of the thesis is devoted to defining and analysing Laconic and Precise Justifications. These are fine-grained justifications whose axioms do not contain any superfluous parts. Optimised algorithms for computing these justifications are presented, and an extensive empirical investigation shows that these algorithms perform well on state of the art, large and expressive bio-medical ontologies. The investigation also highlights the prevalence of superfluity in real ontologies, along with the related phenomenon of justification masking. The practicality of computing Laconic Justifications coupled with the prevalence of non-laconic justifications in the wild indicates that Laconic and Precise justifications are likely to be useful in practice. The work presented in this thesis should be of interest to researchers in the area of knowledge representation and reasoning, and developers of reasoners and ontology editors, who wish to incorporate explanation generation techniques into their systems.
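For orientation, here is a sketch of the classic black-box way to compute a (plain) justification by deletion, assuming an `entails` callback to a reasoner; the thesis's Laconic and Precise variants go further, trimming superfluous parts inside individual axioms:

```python
# Sketch of classic black-box justification extraction by deletion --
# `entails` stands in for a call to a reasoner on an axiom subset.
def justification(axioms, entails):
    """Shrink `axioms` to one minimal subset still entailing the target."""
    just = list(axioms)
    for ax in list(just):
        candidate = [a for a in just if a is not ax]
        if entails(candidate):     # still entails without `ax`? drop it
            just = candidate
    return just

# toy propositional demo: which axioms are needed to entail 'q'?
rules = [("p",), ("p", "q"), ("r", "s")]   # facts and implications
def entails_q(axs):
    facts = {a[0] for a in axs if len(a) == 1}
    changed = True
    while changed:
        changed = False
        for a in axs:
            if len(a) == 2 and a[0] in facts and a[1] not in facts:
                facts.add(a[1]); changed = True
    return "q" in facts

print(justification(rules, entails_q))  # [('p',), ('p', 'q')]
```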

Proceedings ArticleDOI
13 Jun 2012
TL;DR: The ontology is instantiated and put to use in the Smart Building setting of the International Hellenic University, enabling knowledge representation in machine-interpretable form, and is hence expected to enhance service-based intelligent applications.
Abstract: This work introduces an ontology for incorporating Ambient Intelligence in Smart Buildings. The ontology extends and benefits from existing ontologies in the field, but also adds the classes needed to sufficiently model every aspect of a service-oriented smart building system. Namely, it includes concepts modeling all functionality (i.e. services, operations, inputs, outputs, logic, parameters and environmental conditions), QoS (resources, QoS parameters), hardware (smart devices, sensors and actuators, appliances, servers), users and context (user profiles, moods, location, rooms, etc.). The ontology is instantiated and put to use in the Smart Building setting of the International Hellenic University, enabling knowledge representation in machine-interpretable form, and is hence expected to enhance service-based intelligent applications.

Journal ArticleDOI
TL;DR: It is argued that machine learning research has to offer a wide variety of methods applicable to different expressivity levels of Semantic Web knowledge bases: ranging from weakly expressive but widely available knowledge bases in RDF to highly expressive first-order knowledge bases, this paper surveys statistical approaches to mining the Semantic Web.
Abstract: In the Semantic Web vision of the World Wide Web, content will not only be accessible to humans but will also be available in machine interpretable form as ontological knowledge bases. Ontological knowledge bases enable formal querying and reasoning and, consequently, a main research focus has been the investigation of how deductive reasoning can be utilized in ontological representations to enable more advanced applications. However, purely logic methods have not yet proven to be very effective for several reasons: First, there still is the unsolved problem of scalability of reasoning to Web scale. Second, logical reasoning has problems with uncertain information, which is abundant in Semantic Web data due to its distributed and heterogeneous nature. Third, the construction of ontological knowledge bases suitable for advanced reasoning techniques is complex, which ultimately results in a lack of such expressive real-world data sets with large amounts of instance data. From another perspective, the more expressive structured representations open up new opportunities for data mining, knowledge extraction and machine learning techniques. Moving towards the idea that part of the knowledge already lies in the data, inductive methods appear promising, in particular since inductive methods can inherently handle noisy, inconsistent, uncertain and missing data. While there has been broad coverage of inducing concept structures from less structured sources (text, Web pages), like in ontology learning, given the problems mentioned above, we focus on new methods for dealing with Semantic Web knowledge bases, relying on statistical inference on their standard representations. We argue that machine learning research has to offer a wide variety of methods applicable to different expressivity levels of Semantic Web knowledge bases: ranging from weakly expressive but widely available knowledge bases in RDF to highly expressive first-order knowledge bases, this paper surveys statistical approaches to mining the Semantic Web. We specifically cover similarity and distance-based methods, kernel machines, multivariate prediction models, relational graphical models and first-order probabilistic learning approaches and discuss their applicability to Semantic Web representations. Finally we present selected experiments which were conducted on Semantic Web mining tasks for some of the algorithms presented before. This is intended to show the breadth and general potential of this exciting new research and application area for data mining.
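A minimal sketch of the first step such inductive methods need, namely propositionalizing RDF into per-entity feature vectors; it assumes the rdflib library, and the Turtle file name and its contents are placeholders:

```python
# Minimal sketch of propositionalizing RDF for statistical learning:
# one row per subject, one column per predicate (assumes rdflib; the
# file "people.ttl" is a hypothetical data set).
from collections import Counter, defaultdict
from rdflib import Graph

g = Graph()
g.parse("people.ttl", format="turtle")

features = defaultdict(Counter)               # subject -> predicate counts
for s, p, o in g:
    features[s][p] += 1

predicates = sorted({p for _, p, _ in g})     # fixed column order
matrix = [[features[s][p] for p in predicates] for s in features]
# `matrix` can now feed any standard learner (kernels, trees, etc.)
```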

Journal ArticleDOI
TL;DR: Question Answering (QA) systems give the ability to answer questions posed in natural language by extracting, from a repository of documents, fragments of documents that contain material relevant to the answer.
Abstract: Question Answering (QA) is a specific type of information retrieval. Given a set of documents, a Question Answering system attempts to find the correct answer to a question posed in natural language. Question answering is multidisciplinary: it involves information technology, artificial intelligence, natural language processing, knowledge and database management, and cognitive science. From the technological perspective, question answering uses natural or statistical language processing, information retrieval, and knowledge representation and reasoning as potential building blocks, and it involves text classification, information extraction and summarization technologies. In general, a question answering system (QAS) has three components, question classification, information retrieval, and answer extraction, each of which plays an essential role. Question classification categorizes the question according to the type of entity it asks about. The information retrieval component identifies relevant documents or passages from which candidate answers can be extracted. Finally, answer extraction is an emerging topic in QAS, as these systems are often required to rank and validate candidate answers. Most Question Answering systems consist of three main modules: question processing, document processing and answer processing. The question processing module plays an important part in QA systems; if it does not work correctly, it will cause problems for the other components. Techniques aimed at discovering short, precise answers are often based on semantic classification. QA systems give the ability to answer questions posed in natural language by extracting, from a repository of documents, fragments of documents that contain material relevant to the answer.
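A toy sketch of the three-component layout described above; every heuristic here is a deliberately simple placeholder, not a real QA technique:

```python
# Toy sketch of the question classification -> retrieval -> answer
# extraction pipeline; all heuristics are placeholders.
DOCS = ["Alan Turing was born in 1912.", "Paris is the capital of France."]

def classify(question):
    """Map the question word to a coarse expected-answer type."""
    q = question.lower()
    return {"when": "DATE", "where": "PLACE", "who": "PERSON"}.get(
        q.split()[0], "OTHER")

def retrieve(question):
    """Rank documents by naive keyword overlap with the question."""
    qwords = set(question.lower().split())
    return max(DOCS, key=lambda d: len(qwords & set(d.lower().split())))

def extract(doc, answer_type):
    """Pull the first token matching the expected type (toy rule)."""
    if answer_type == "DATE":
        return next((t.strip(".") for t in doc.split()
                     if t.strip(".").isdigit()), None)
    return doc  # fall back to the supporting passage

q = "When was Alan Turing born?"
print(classify(q), "->", extract(retrieve(q), classify(q)))  # DATE -> 1912
```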

Journal ArticleDOI
TL;DR: This paper presents how extraction, representation and use of symbolic knowledge from real-world perception and human-robot verbal and non-verbal interaction can actually enable a grounded and shared model of the world that is suitable for later high-level tasks such as dialogue understanding.
Abstract: This paper presents how extraction, representation and use of symbolic knowledge from real-world perception and human-robot verbal and non-verbal interaction can actually enable a grounded and shared model of the world that is suitable for later high-level tasks such as dialogue understanding. We show how the anchoring process itself relies on the situated nature of human-robot interactions. We present an integrated approach, including a specialized symbolic knowledge representation system based on Description Logics, and case studies on several robotic platforms that demonstrate these cognitive capabilities.

Dissertation
01 Jan 2012
TL;DR: This dissertation investigates the linguistic and technological challenges involved in creating a cross-linguistic data set to undertake phonological typology, and addresses the question of whether more sophisticated, knowledge-based approaches to data modeling can extend previous typological observations and provide new ways of querying segment inventories.
Abstract: In this dissertation, I investigate the linguistic and technological challenges involved in creating a cross-linguistic data set to undertake phonological typology. I then address the question of whether more sophisticated, knowledge-based approaches to data modeling, coupled with a broad cross-linguistic data set, can extend previous typological observations and provide new ways of querying segment inventories. The model that I implement facilitates testing typological observations by aligning data models to questions that typologists wish to ask. The technological infrastructure that I create is conducive to data sharing, extensibility and reproducibility of results. I use the data set and data models in this work to validate and extend previous typological observations. In doing so, I revisit the typological facts proposed in the linguistics literature about the size, shape and composition of segment inventories in the world's languages and find that they remain similar even with a much larger sample of languages. I also show that as the number of segment inventories increases, the number of distinct segments also continues to increase. And when vowel systems grow beyond the basic cardinal vowels, they do so first by length and nasalization, and then diphthongization. Moving beyond segments, I show that distinctive feature sets in general lack the typological representation needed to straightforwardly map sets of features to the segment types found in a broad set of language descriptions. Therefore, I extend a distinctive feature set, devise a method to computationally encode features by combining feature vectors and assigning them to segment types, and create a system in which users can query by feature, by sets of features that define natural classes, or by omitting features in queries to utilize the underspecification of segments. I use this system and reinvestigate proposed descriptive universals about phonological systems and find that some, but not all universals hold up to the more rigorous testing made possible with this larger data set and a graph data model. Lastly, I reevaluate one of the many purported correlations between a non-linguistic factor and language: the claim that there exists a relationship between population size and phoneme inventory size. I show that this finding is actually an artifact of a small data set, which constrains the use of more nuanced statistical approaches that can control for the genealogical relatedness of languages. Thus, in this work I illustrate how researchers can leverage the data set and data models that I have implemented to investigate different aspects of languages' phonological systems, including the possible impact of non-linguistic factors on phonology.

DOI
17 Mar 2012
TL;DR: A question generation approach suitable for tutorial dialogues based on previous psychological theories that hypothesize questions are generated from a knowledge representation modeled as a concept map is presented.
Abstract: In this paper we present a question generation approach suitable for tutorial dialogues. The approach is based on previous psychological theories that hypothesize questions are generated from a knowledge representation modeled as a concept map. Our model automatically extracts concept maps from a textbook and uses them to generate questions. The purpose of the study is to generate and evaluate pedagogically-appropriate questions at varying levels of specificity across one or more sentences. The evaluation metrics include scales from the Question Generation Shared Task and Evaluation Challenge and a new scale specific to the pedagogical nature of questions in tutoring.
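A sketch of the template-based flavor of generating questions from concept-map triples; the map and templates below are invented, not those extracted by the paper's system:

```python
# Sketch of question generation from concept-map triples; the map and
# templates are invented placeholders.
concept_map = [  # (concept, relation, concept) edges
    ("mitochondria", "produce", "ATP"),
    ("ATP", "is-a", "energy carrier"),
]
templates = {
    "produce": "What do {0} produce?",       # shallow, single-edge question
    "is-a":    "What kind of thing is {0}?",
}

for head, rel, tail in concept_map:
    print(templates[rel].format(head), "->", tail)
# deeper questions could chain edges across sentences, e.g. asking how
# mitochondria supply the cell with an energy carrier (spans both triples)
```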

Journal ArticleDOI
TL;DR: This special issue collects together new approaches toward understanding and fostering appropriate transfer in learners, emphasizing the importance of the perspective/stance of the learner for achieving robust transfer, the neglected role of motivation in determining transfer, and the existence of specific, v...
Abstract: Understanding how to get learners to transfer their knowledge to new situations is a topic of both theoretical and practical importance. Theoretically, it touches on core issues in knowledge representation, analogical reasoning, generalization, embodied cognition, and concept formation. Practically, learning without transfer of what has been learned is almost always unproductive and inefficient. Although schools often measure the efficiency of learning in terms of speed and retention of knowledge, a relatively neglected and subtler component of efficiency is the generality and applicability of the acquired knowledge. This special issue of Educational Psychologist collects together new approaches toward understanding and fostering appropriate transfer in learners. Three themes that emerge from the collected articles are (a) the importance of the perspective/stance of the learner for achieving robust transfer, (b) the neglected role of motivation in determining transfer, and (c) the existence of specific, v...

Journal ArticleDOI
TL;DR: The paper defines the syntax and the semantics of CKR and shows that concept satisfiability and subsumption are decidable with a complexity upper bound of 2NExpTime; it also provides a sound and complete natural deduction calculus that serves to characterize the propagation of knowledge between contexts.

Journal ArticleDOI
TL;DR: A planning and monitoring algorithm for safe execution of plans, so that robots can recover from plan failures due to collision with movable objects whose presence and location are not known in advance or due to heavy objects that cannot be lifted alone.
Abstract: Answer set programming (ASP) is a knowledge representation and reasoning paradigm with high-level expressive logic-based formalism, and efficient solvers; it is applied to solve hard problems in various domains, such as systems biology, wire routing, and space shuttle control. In this paper, we present an application of ASP to housekeeping robotics. We show how the following problems are addressed using computational methods/tools of ASP: (1) embedding commonsense knowledge automatically extracted from the commonsense knowledge base ConceptNet, into high-level representation, and (2) embedding (continuous) geometric reasoning and temporal reasoning about durations of actions, into (discrete) high-level reasoning. We introduce a planning and monitoring algorithm for safe execution of plans, so that robots can recover from plan failures due to collision with movable objects whose presence and location are not known in advance or due to heavy objects that cannot be lifted alone. Some of the recoveries require collaboration of robots. We illustrate the applicability of ASP on several housekeeping robotics problems, and report on the computational efficiency in terms of CPU time and memory.

Journal ArticleDOI
TL;DR: The paper provides an operational semantics for electronic institutions, specifying the essential data structures, the state representation and the key operations necessary to implement them, together with particular instantiations of knowledge representation languages that support the institutional model.

Proceedings Article
22 Jul 2012
TL;DR: This work proposes a two-level affective reasoning framework that concurrently employs multi-dimensionality reduction and graph mining techniques to mimic the integration of conscious and unconscious reasoning, and exploit it for sentiment analysis.
Abstract: An important difference between traditional AI systems and human intelligence is our ability to harness common sense knowledge gleaned from a lifetime of learning and experiences to inform our decision making and behavior. This allows humans to adapt easily to novel situations where AI fails catastrophically for lack of situation-specific rules and generalization capabilities. Common sense knowledge also provides the background knowledge for humans to successfully operate in social situations where such knowledge is typically assumed. In order for machines to exploit common sense knowledge in reasoning as humans do, moreover, we need to endow them with human-like reasoning strategies. In this work, we propose a two-level affective reasoning framework that concurrently employs multi-dimensionality reduction and graph mining techniques to mimic the integration of conscious and unconscious reasoning, and exploit it for sentiment analysis.

Proceedings Article
01 Dec 2012
TL;DR: The reasons for a Brazilian Portuguese Wordnet are discussed, along with the process used to obtain a preliminary version of such a resource and possible steps for improving it.
Abstract: Brazilian Portuguese needs a Wordnet that is open access, downloadable and changeable, so that it can be improved by the community interested in using it for knowledge representation and automated deduction. This kind of resource is also very valuable to linguists and computer scientists interested in extracting and representing knowledge obtained from texts. We discuss briefly the reasons for a Brazilian Portuguese Wordnet and the process we used to get a preliminary version of such a resource. Then we discuss possible steps to improving our preliminary version.
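A hedged pointer for readers who want to try such a resource: openWordnet-PT is distributed in current NLTK releases via the Open Multilingual Wordnet, so, assuming the 'wordnet' and 'omw-1.4' data packages are available, it can be queried as:

```python
# Hedged sketch: Portuguese wordnet data via NLTK's Open Multilingual
# Wordnet (current releases; 'por' is the ISO 639-3 code for Portuguese).
import nltk
nltk.download("wordnet"); nltk.download("omw-1.4")
from nltk.corpus import wordnet as wn

for syn in wn.synsets("cachorro", lang="por"):   # "dog" in Portuguese
    print(syn.name(), "->", syn.lemma_names("por"))
```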

Book ChapterDOI
11 Nov 2012
TL;DR: A systematic study to tackle hardness of reasoning about individual ontologies using machine learning techniques, covering over 350 real-world ontologies and four state-of-the-art, widely-used OWL 2 reasoners and identifying a number of metrics that can be used to effectively predict reasoning performance.
Abstract: A key issue in semantic reasoning is the computational complexity of inference tasks on expressive ontology languages such as OWL DL and OWL 2 DL. Theoretical works have established worst-case complexity results for reasoning tasks for these languages. However, hardness of reasoning about individual ontologies has not been adequately characterised. In this paper, we conduct a systematic study to tackle this problem using machine learning techniques, covering over 350 real-world ontologies and four state-of-the-art, widely-used OWL 2 reasoners. Our main contributions are two-fold. Firstly, we learn various classifiers that accurately predict classification time for an ontology based on its metric values. Secondly, we identify a number of metrics that can be used to effectively predict reasoning performance. Our prediction models have been shown to be highly effective, achieving an accuracy of over 80%.
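A sketch of the experimental setup, assuming scikit-learn: train a classifier on per-ontology metric values to predict a reasoning-time bucket; the metric names and data rows below are invented:

```python
# Sketch of metric-based reasoning-performance prediction (assumes
# scikit-learn; metric names and values are invented toy data).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# rows: [num_axioms, num_classes, max_class_depth, expressivity_rank]
X = [[1200, 300, 7, 2], [90000, 12000, 14, 4], [400, 80, 3, 1],
     [250000, 30000, 18, 5], [5000, 900, 9, 3], [70, 20, 2, 1]]
y = ["fast", "slow", "fast", "slow", "fast", "fast"]  # time buckets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out bucket accuracy
```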