
Showing papers on "Knowledge representation and reasoning" published in 2015


Journal Article
TL;DR: Two mechanisms that can significantly accelerate Student's learning using privileged information are described: correction of Student's concepts of similarity between examples, and direct Teacher-Student knowledge transfer.
Abstract: This paper describes a new paradigm of machine learning in which an Intelligent Teacher is involved. During the training stage, the Intelligent Teacher provides the Student with information that contains, along with the classification of each example, additional privileged information (for example, an explanation) of this example. The paper describes two mechanisms that can be used for significantly accelerating the Student's learning using privileged information: (1) correction of the Student's concepts of similarity between examples, and (2) direct Teacher-Student knowledge transfer.
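For a concrete feel of the privileged-information setting, here is a minimal Python sketch. It is a distillation-style stand-in, not the paper's actual mechanisms (similarity correction and knowledge transfer are more involved); the synthetic data, features, and use of scikit-learn's LogisticRegression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: x is available at train and test time; x_star (privileged
# information) exists only during training. Everything here is made up.
rng = np.random.default_rng(0)
x_star = rng.normal(size=(300, 2))                 # privileged features
x = x_star + rng.normal(scale=1.5, size=(300, 2))  # noisy view the Student sees
y = (x_star[:, 0] + x_star[:, 1] > 0).astype(int)

# Teacher learns in the privileged space ...
teacher = LogisticRegression().fit(x_star, y)
# ... and transfers its judgments to the Student, who only ever sees x.
student = LogisticRegression().fit(x, teacher.predict(x_star))
print("student train accuracy:", student.score(x, y))
```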

360 citations


Journal ArticleDOI
TL;DR: An explicit analysis of the existing methods of semantic mapping is sought, and several algorithms are categorized according to their primary characteristics, namely scalability, inference model, temporal coherence and topological map usage.

348 citations


Book
17 Dec 2015
TL;DR: This volume presents a knowledge-based approach to concept-level sentiment analysis at the crossroads between affective computing, information extraction, and common-sense computing, which exploits both computer and social sciences to better interpret and process information on the Web.
Abstract: This volume presents a knowledge-based approach to concept-level sentiment analysis at the crossroads between affective computing, information extraction, and common-sense computing, which exploits both computer and social sciences to better interpret and process information on the Web. Concept-level sentiment analysis goes beyond a mere word-level analysis of text in order to enable a more efficient passage from (unstructured) textual information to (structured) machine-processable data, in potentially any domain. Readers will discover the following key novelties that make this approach unique and avant-garde, each reviewed and discussed: Sentic Computing's multi-disciplinary approach to sentiment analysis, evidenced by the concomitant use of AI, linguistics and psychology for knowledge representation and inference; Sentic Computing's shift from syntax to semantics, enabled by the adoption of the bag-of-concepts model instead of simply counting word co-occurrence frequencies in text; and Sentic Computing's shift from statistics to linguistics, implemented by allowing sentiments to flow from concept to concept based on the dependency relation between clauses. This volume is the first in the series Socio-Affective Computing, edited by Dr Amir Hussain and Dr Erik Cambria, and will be of interest to researchers in the fields of socially intelligent, affective and multimodal human-machine interaction and systems.

181 citations


Proceedings Article
11 Mar 2015
TL;DR: This paper recalls the main contributions and discusses key challenges for neural-symbolic integration which have been identified at a recent Dagstuhl seminar.
Abstract: The goal of neural-symbolic computation is to integrate robust connectionist learning and sound symbolic reasoning. With the recent advances in connectionist learning, in particular deep neural networks, forms of representation learning have emerged. However, such representations have not yet become useful for reasoning. Results from neural-symbolic computation have shown that it offers powerful alternatives for knowledge representation, learning and reasoning in neural computation. This paper recalls the main contributions and discusses key challenges for neural-symbolic integration which were identified at a recent Dagstuhl seminar.

138 citations


Posted Content
TL;DR: This work proposes TransA, an adaptive metric approach for embedding that utilizes metric learning ideas to provide a more flexible embedding method.
Abstract: Knowledge representation is a major topic in AI, and many studies attempt to represent entities and relations of a knowledge base in a continuous vector space. Among these attempts, translation-based methods build entity and relation vectors by minimizing the translation loss from a head entity to a tail one. In spite of their success, translation-based methods suffer from an oversimplified loss metric, and are not competitive enough to model the various and complex entities/relations in knowledge bases. To address this issue, we propose TransA, an adaptive metric approach for embedding, utilizing metric learning ideas to provide a more flexible embedding method. Experiments are conducted on the benchmark datasets and our proposed method makes significant and consistent improvements over the state-of-the-art baselines.
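A minimal numpy sketch of the scoring idea described above: classic translation-based methods score a triple by the distance ||h + r - t||, while TransA replaces this with an adaptive, relation-specific metric. The dimensions, random vectors, and diagonal form of W_r are toy assumptions.

```python
import numpy as np

# Toy embeddings (illustrative only).
d = 4
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, d))          # head, relation, tail vectors
W_r = np.diag(rng.uniform(0, 1, size=d))   # relation-specific metric matrix (PSD)

def transe_score(h, r, t):
    # The "oversimplified" translation loss: ||h + r - t||.
    return np.linalg.norm(h + r - t)

def transa_score(h, r, t, W_r):
    # Adaptive Mahalanobis-style metric: |h + r - t|^T W_r |h + r - t|.
    e = np.abs(h + r - t)
    return e @ W_r @ e

print(transe_score(h, r, t), transa_score(h, r, t, W_r))
```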

104 citations


Book ChapterDOI
01 Jan 2015
TL;DR: This chapter gives an overview of the topics related to fuzzy system interpretability, facing the ambitious goal of proposing some answers to a number of open challenging questions.
Abstract: Fuzzy systems are universally acknowledged as valuable tools to model complex phenomena while preserving a readable form of knowledge representation. The resort to natural language for expressing the terms involved in fuzzy rules, in fact, is a key factor in combining mathematical formalism and logical inference with human-centered interpretability. That makes fuzzy systems specifically suitable in every real-world context where people are in charge of crucial decisions, because the self-explanatory nature of fuzzy rules profitably supports expert assessments. Additionally, as far as interpretability is investigated, it appears that (a) the simple adoption of fuzzy sets in modeling is not enough to ensure interpretability; and (b) fuzzy knowledge representation must confront the problem of preserving the overall system accuracy, thus yielding a trade-off which is frequently debated. Such issues have attracted growing interest in the research community and have come to assume a central role in the current literature of computational intelligence. This chapter gives an overview of the topics related to fuzzy system interpretability, facing the ambitious goal of proposing some answers to a number of open challenging questions: What is interpretability? Why is interpretability worth considering? How can interpretability be ensured, and how can it be assessed (quantified)? Finally, how can interpretable fuzzy models be designed?
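To make the "readable form of knowledge representation" concrete, here is a toy Python sketch of a linguistic fuzzy rule with triangular membership functions; the terms, ranges, and rule are invented for illustration.

```python
def tri(x, a, b, c):
    # Triangular membership function rising on [a, b] and falling on [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms for "temperature" (toy ranges, in Celsius).
def cold(x): return tri(x, -10, 0, 15)
def warm(x): return tri(x, 10, 20, 30)

# Readable rule: IF temperature IS warm THEN heating IS low.
temperature = 18.0
firing = warm(temperature)  # degree to which the rule fires (0.8 here)
print(f"'IF temperature IS warm THEN heating IS low' fires at {firing:.2f}")
```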

95 citations


Book
01 Jan 2015
TL;DR: Cognitive Computing is a comprehensive guide to the subject, providing both the theoretical and practical guidance technologists need to build a new class of systems that learn from experience and derive insights to unlock the value of big data.
Abstract: A comprehensive guide to learning technologies that unlock the value in big data. Cognitive Computing provides detailed guidance toward building a new class of systems that learn from experience and derive insights to unlock the value of big data. This book helps technologists understand cognitive computing's underlying technologies, from knowledge representation techniques and natural language processing algorithms to dynamic learning approaches based on accumulated evidence rather than reprogramming. Detailed case examples from the financial, healthcare, and manufacturing industries walk readers step-by-step through the design and testing of cognitive systems, with expert perspectives from organizations such as Cleveland Clinic and Memorial Sloan-Kettering, as well as commercial vendors that are creating solutions. These organizations provide insight into the real-world implementation of cognitive computing systems. The IBM Watson cognitive computing platform is described in a detailed chapter because of its significance in helping to define this emerging market. In addition, the book includes implementations of emerging projects from Qualcomm, Hitachi, Google and Amazon. Today's cognitive computing solutions build on established concepts from artificial intelligence, natural language processing, and ontologies, and leverage advances in big data management and analytics. They foreshadow an intelligent infrastructure that enables a new generation of customer- and context-aware smart applications in all industries. Cognitive Computing is a comprehensive guide to the subject, providing both the theoretical and practical guidance technologists need.
* Discover how cognitive computing evolved from promise to reality
* Learn the elements that make up a cognitive computing system
* Understand the groundbreaking hardware and software technologies behind cognitive computing
* Learn to evaluate your own application portfolio to find the best candidates for pilot projects
* Leverage cognitive computing capabilities to transform the organization
Cognitive systems are rightly being hailed as the new era of computing. Learn how these technologies enable emerging firms to compete with entrenched giants, and forward-thinking established firms to disrupt their industries. Professionals who currently work with big data and analytics will see how cognitive computing builds on their foundation and creates new opportunities. Cognitive Computing provides complete guidance to this new level of human-machine interaction.

94 citations


Journal ArticleDOI
TL;DR: This paper contributes to the practical support of nonmonotonic inferences in description logics by introducing a new semantics expressly designed to address knowledge engineering needs; the semantics has appealing expressiveness, enjoys nice computational properties, and constitutes an interesting solution to a broad class of application needs.

92 citations


Book ChapterDOI
25 Jan 2015
TL;DR: The capability of LARS to serve as the desired formal foundation for expressing and analyzing different semantic approaches to stream processing/reasoning and engines is demonstrated.
Abstract: The recent rise of smart applications has drawn interest to logical reasoning over data streams. Different query languages and stream processing/reasoning engines were proposed. However, due to a lack of theoretical foundations, the expressivity and semantics of these diverse approaches were only informally discussed. Towards clear specifications and means for analytic study, a formal framework is needed to characterize their semantics in precise terms. We present LARS, a Logic-based framework for Analyzing Reasoning over Streams, i.e., a rule-based formalism with a novel window operator providing a flexible mechanism to represent views on streaming data. We establish complexity results for central reasoning tasks and show how the prominent Continuous Query Language (CQL) can be captured. Moreover, the relation between LARS and ETALIS, a system for complex event processing, is discussed. We thus demonstrate the capability of LARS to serve as the desired formal foundation for expressing and analyzing different semantic approaches to stream processing/reasoning and engines.
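A rough Python sketch of the window idea (plain Python, not LARS syntax): a time-based window restricts a stream of timestamped atoms to a recent view, over which conditions such as "alpha held at some point in the window" can be evaluated. The stream contents and the 5-unit width are assumptions.

```python
# Illustrative time-based window over a stream of timestamped atoms.
stream = [(1, "alpha"), (3, "beta"), (4, "alpha"), (9, "gamma")]

def time_window(stream, now, width):
    # View of the stream restricted to timestamps in (now - width, now].
    return [(t, a) for (t, a) in stream if now - width < t <= now]

# Diamond-style query: "alpha held at some point in the last 5 time units".
window = time_window(stream, now=8, width=5)
print(any(atom == "alpha" for _, atom in window))  # True: alpha at t=4
```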

84 citations


Journal ArticleDOI
TL;DR: This paper focuses on the knowledge put into ontologies created for robotic devices and manufacturing tasks, and presents examples of AI-related services that use the semantic descriptions of skills to help users instruct the robot adequately.
Abstract: When robots are working in dynamic environments, close to humans lacking extensive knowledge of robotics, there is a strong need to simplify the user interaction and make the system execute as autonomously as possible, as long as it is feasible. For industrial robots working side-by-side with humans in the manufacturing industry, AI systems are necessary to lower the demand on programming time and system integration expertise. Only by building a system with appropriate knowledge and reasoning services can one simplify the robot programming sufficiently to meet those demands while still getting robust and efficient task execution. In this paper, we present a system we have realized that aims at fulfilling the above demands. The paper focuses on the knowledge put into ontologies created for robotic devices and manufacturing tasks, and presents examples of AI-related services that use the semantic descriptions of skills to help users instruct the robot adequately.
Highlights: We present a system for knowledge-based task specification in assembly. Robotic skills are described in ontologies and used as building blocks for task specification and synthesis. Robotic skills are declarative, compositional, and reusable. An architecture to maintain and use industrial robotics knowledge is provided.
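As an informal illustration of declarative, compositional skill descriptions, here is a toy Python sketch; the schema, field names, and predicates are invented, not the paper's ontology.

```python
# Hypothetical declarative skill description (illustrative only).
pick_skill = {
    "name": "Pick",
    "device": "robot_arm_1",
    "preconditions": ["gripper_empty", "object_localized"],
    "effects": ["holding(object)"],
    "parameters": {"grasp_force_N": 20},
}

def applicable(skill, world_state):
    # A skill is applicable when all its preconditions hold in the world state.
    return all(p in world_state for p in skill["preconditions"])

print(applicable(pick_skill, {"gripper_empty", "object_localized"}))  # True
```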

84 citations


Book ChapterDOI
01 Jan 2015
TL;DR: This chapter offers a first exploration of the general potential of Artificial Intelligence Techniques in Human Resource Management; a brief foundation elaborates on the central functionalities of Artificial Intelligence Techniques and the central requirements of Human Resource Management based on the task-technology fit approach.
Abstract: Artificial Intelligence Techniques and their subset, Computational Intelligence Techniques, are not new to Human Resource Management, and since their introduction, a heterogeneous set of suggestions on how to use Artificial Intelligence and Computational Intelligence in Human Resource Management has accumulated. While such contributions offer detailed insights into specific application possibilities, an overview of the general potential is missing. Therefore, this chapter offers a first exploration of the general potential of Artificial Intelligence Techniques in Human Resource Management. To this end, a brief foundation elaborates on the central functionalities of Artificial Intelligence Techniques and the central requirements of Human Resource Management based on the task-technology fit approach. Based on this, the potential of Artificial Intelligence in Human Resource Management is explored in six selected scenarios (turnover prediction with artificial neural networks, candidate search with knowledge-based search engines, staff rostering with genetic algorithms, HR sentiment analysis with text mining, resume data acquisition with information extraction and employee self-service with interactive voice response). The insights gained based on the foundation and exploration are discussed and summarized.
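As a flavor of the first scenario (turnover prediction with artificial neural networks), here is a hedged sketch using scikit-learn on synthetic data; the features, labels, and network size are all made up for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic "employee" data: e.g. tenure, salary ratio, absences (invented).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# A small neural network predicting turnover (1 = leaves, 0 = stays).
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```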

Posted Content
TL;DR: In this paper, a manifold-based embedding principle is proposed, which can be treated as a well-posed algebraic system that expands the position of golden triples from one point in current models to a manifold in ours.
Abstract: Knowledge graph embedding aims at offering a numerical knowledge representation paradigm by transforming the entities and relations into a continuous vector space. However, existing methods could not characterize the knowledge graph in a fine degree to make precise predictions. There are two reasons: being an ill-posed algebraic system and applying an overstrict geometric form. As precise prediction is critical, we propose a manifold-based embedding principle (ManifoldE) which can be treated as a well-posed algebraic system that expands the position of golden triples from one point in current models to a manifold in ours. Extensive experiments show that the proposed models achieve substantial improvements against the state-of-the-art baselines, especially for the precise prediction task, and yet maintain high efficiency.
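A minimal numpy sketch of the manifold principle for a sphere-shaped case: instead of requiring t = h + r exactly (one point), a golden triple is scored by its distance from the manifold ||h + r - t||^2 = D_r^2. The vectors and the radius D_r are toy assumptions.

```python
import numpy as np

# Toy embeddings and a relation-specific manifold radius (illustrative only).
d = 4
rng = np.random.default_rng(2)
h, r, t = rng.normal(size=(3, d))
D_r = 1.0

def manifold_score(h, r, t, D_r):
    m = np.sum((h + r - t) ** 2)   # manifold function M(h, r, t)
    return (m - D_r ** 2) ** 2     # zero exactly on the manifold

print(manifold_score(h, r, t, D_r))
```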

Journal ArticleDOI
TL;DR: This research aims to depict the methodological steps and tools for the combined operation of case-based reasoning (CBR) and multi-agent systems (MAS), exposing the ontological application in the field of clinical decision support.

Journal ArticleDOI
TL;DR: This research contributes to the body of knowledge by providing an extensible framework built upon defeasible reasoning, and implemented with argumentation theory (AT), in which MWL can be better defined, measured, analysed, explained and applied in different human–computer interactive contexts.
Abstract: Human mental workload (MWL) has gained importance in the last few decades as an important design concept. It is a multifaceted complex construct mainly applied in cognitive sciences and has been defined in many different ways. Although measuring MWL has potential advantages in interaction and interface design, its formalisation as an operational and computational construct has not sufficiently been addressed. This research contributes to the body of knowledge by providing an extensible framework built upon defeasible reasoning, and implemented with argumentation theory (AT), in which MWL can be better defined, measured, analysed, explained and applied in different human–computer interactive contexts. User studies have demonstrated how a particular instance of this framework outperformed state-of-the-art subjective MWL assessment techniques in terms of sensitivity, diagnosticity and validity. This in turn encourages further application of defeasible AT for enhancing the representation of MWL and improving the quality of its assessment.
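To illustrate the argumentation-theory flavor in miniature (not the paper's actual framework), here is a tiny Dung-style sketch in Python where unattacked arguments are accepted and then defend the arguments they reinstate; all arguments and attacks are invented.

```python
# Toy arguments about workload, with an attack relation between them.
arguments = {"A1", "A2", "A3"}           # e.g. A1: "MWL is high", A2: "task was short", A3: "user was fatigued"
attacks = {("A2", "A1"), ("A3", "A2")}   # A3 defeats A2, which attacked A1

def attackers(a):
    return {x for (x, y) in attacks if y == a}

def acceptable(a, defenders):
    # a is acceptable w.r.t. a set S if S attacks every attacker of a.
    return all(attackers(b) & defenders for b in attackers(a))

# One grounded-style step: unattacked arguments are in; they defend others.
in_set = {a for a in arguments if not attackers(a)}        # {A3}
in_set |= {a for a in arguments if acceptable(a, in_set)}  # A1 is reinstated
print(sorted(in_set))  # ['A1', 'A3']
```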

Journal ArticleDOI
TL;DR: Many ontologies developed in biology increasingly contain large volumes of formalized knowledge, commonly expressed in the Web Ontology Language (OWL), and computational access to that knowledge relies on automated reasoning.
Abstract: Background: Many ontologies have been developed in biology and these ontologies increasingly contain large volumes of formalized knowledge commonly expressed in the Web Ontology Language (OWL). Computational access to the knowledge contained within these ontologies relies on the use of automated reasoning.

Journal ArticleDOI
TL;DR: KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts rather than features of source-specific data schemas or file formats.
Abstract: The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration in order to establish shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
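A toy Python sketch of the declaratively represented forward-chaining idea: a rule fires over RDF-style triples and derives new triples that separate database records from the biomedical concepts they denote. The vocabulary and rule are invented, not KaBOB's actual rules.

```python
# Minimal forward chaining over RDF-style triples (toy vocabulary).
triples = {
    ("db:rec42", "denotes", "bio:GeneX"),
    ("db:rec42", "hasName", "GeneX"),
}

def rule_propagate_names(triples):
    # IF a record denotes a concept AND the record has a name,
    # THEN the concept carries that name too.
    derived = set()
    for (rec, p, concept) in triples:
        if p == "denotes":
            for (rec2, p2, name) in triples:
                if rec2 == rec and p2 == "hasName":
                    derived.add((concept, "hasName", name))
    return derived

triples |= rule_propagate_names(triples)
print(("bio:GeneX", "hasName", "GeneX") in triples)  # True
```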

Journal ArticleDOI
TL;DR: A type-2 fuzzy ontology that provides accurate information about collision risk and the marine environment during real-time marine operations, together with a simulator for marine users that reduces experimental time and the cost of marine robots and supports intelligent evaluation of algorithms.

Journal ArticleDOI
TL;DR: The designs of audit methodologies, including elements of knowledge elicitation (KE), knowledge representation (KR), and role of researcher (RR) for SBP and UBP, are proposed in this paper.
Abstract: Purpose – This paper aims to study the knowledge audit methodologies needed in structured business processes (SBP) and unstructured business processes (UBP), respectively. The knowledge audit methodology used for SBP aims to identify and capture procedural knowledge, while the one for UBP aims to facilitate the sharing of experiential knowledge. The designs of the audit methodologies, including elements of knowledge elicitation (KE), knowledge representation (KR), and role of researcher (RR) for SBP and UBP, are proposed in this paper. Design/methodology/approach – Two knowledge audit case studies were conducted. The first case was conducted in an SBP, and the second one in a UBP. The first case provides a view of a typical knowledge audit in SBP, and its limitations are identified. The second case pinpoints the development of a new knowledge audit methodology applicable for UBP. Findings – A significant differentiation between knowledge audits in SBP and UBP is that the knowledge to be captured in the for...

Proceedings Article
25 Jan 2015
TL;DR: BC+ is a new action language, defined as a high-level notation for propositional formulas under the stable model semantics, that closes the gap between action languages and the modern ASP language.
Abstract: Action languages are formal models of parts of natural language that are designed to describe effects of actions. Many of these languages can be viewed as high level notations of answer set programs structured to represent transition systems. However, the form of answer set programs considered in the earlier work is quite limited in comparison with the modern Answer Set Programming (ASP) language, which allows several useful constructs for knowledge representation, such as choice rules, aggregates, and abstract constraint atoms. We propose a new action language called BC+, which closes the gap between action languages and the modern ASP language. Language BC+ is defined as a high level notation of propositional formulas under the stable model semantics. Due to the generality of the underlying language, BC+ is expressive enough to encompass many modern ASP language constructs and the best features of several other action languages, such as B, C, C+ and BC. Computational methods available in ASP solvers are readily applicable to compute BC+, which led us to implement the language by extending system CPLUS2ASP.
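BC+ itself is a logic-based notation computed by ASP solvers, so Python cannot show the language; the sketch below only illustrates, on a toy fluent and action, the transition-system semantics that such action descriptions represent.

```python
from itertools import product

# Toy fluent/action pair (invented for illustration).
fluents = ["door_open"]
actions = ["toggle", "wait"]

def result(state, action):
    # Direct effect of 'toggle'; 'wait' changes nothing (inertia).
    if action == "toggle":
        return {**state, "door_open": not state["door_open"]}
    return dict(state)

# Enumerate the transition system: states and labeled transitions.
states = [dict(zip(fluents, vals)) for vals in product([False, True], repeat=len(fluents))]
for s in states:
    for a in actions:
        print(s, f"--{a}->", result(s, a))
```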

Journal ArticleDOI
TL;DR: This paper provides a gentle introduction to problem-solving with the IDP3 system, whose core is a finite model generator supporting first-order logic enriched with types, inductive definitions, aggregates and partial functions; it offers its users a modeling language that allows them to solve a wide range of search problems.
Abstract: This paper provides a gentle introduction to problem-solving with the IDP3 system. The core of IDP3 is a finite model generator that supports first-order logic enriched with types, inductive definitions, aggregates and partial functions. It offers its users a modeling language that is a slight extension of predicate logic and allows them to solve a wide range of search problems. Apart from a small introductory example, applications are selected from problems that arose within machine learning and data mining research. These research areas have recently shown a strong interest in declarative modeling and constraint-solving as opposed to algorithmic approaches. The paper illustrates that the IDP3 system can be a valuable tool for researchers with such an interest. The first problem is in the domain of stemmatology, a domain of philology concerned with the relationship between surviving variant versions of a text. The second is a somewhat related problem within biology, where phylogenetic trees are used to represent the evolution of species. The third and final problem concerns the classical problem of learning a minimal automaton consistent with a given set of strings. For this last problem, we show that the performance of our solution comes very close to that of the state-of-the-art solution. For each of these applications, we analyze the problem, illustrate the development of a logic-based model and explore how alternatives can affect the performance.
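To give a feel for declarative model generation as opposed to an algorithmic approach, here is a brute-force miniature in Python: a "theory" (proper coloring of a triangle) and a search for finite structures satisfying it. IDP3 does this with a real modeling language and solver; the problem here is a toy assumption.

```python
from itertools import product

# Vocabulary: 3 nodes, 3 colors; theory: adjacent nodes differ in color.
edges = [(0, 1), (1, 2), (0, 2)]   # a triangle
colors = range(3)

models = [
    assignment
    for assignment in product(colors, repeat=3)               # candidate structures
    if all(assignment[u] != assignment[v] for u, v in edges)  # the theory holds
]
print(len(models), "models found; e.g.", models[0])  # 6 models; e.g. (0, 1, 2)
```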

Proceedings Article
01 Jan 2015
TL;DR: Using OPEN-EASE, users can retrieve the memorized experiences of manipulation episodes and ask queries regarding what the robot saw, reasoned, and did, as well as how the robot did it, why, and what effects it caused.
Abstract: Making future autonomous robots capable of accomplishing human-scale manipulation tasks requires us to equip them with knowledge and reasoning mechanisms. We propose OPEN-EASE, a remote knowledge representation and processing service that aims at facilitating these capabilities. OPEN-EASE gives its users unprecedented access to the knowledge of leading-edge autonomous robotic agents. It also provides the representational infrastructure to make inhomogeneous experience data from robots and human manipulation episodes semantically accessible, and is complemented by a suite of software tools that enable researchers and robots to interpret, analyze, visualize, and learn from the experience data. Using OPEN-EASE, users can retrieve the memorized experiences of manipulation episodes and ask queries regarding what the robot saw, reasoned, and did, as well as how the robot did it, why, and what effects it caused.

Journal ArticleDOI
TL;DR: This paper presents an architecture that exploits the complementary strengths of declarative programming and probabilistic graphical models as a step toward addressing the challenges of deployment of robots in practical domains.
Abstract: Deployment of robots in practical domains poses key knowledge representation and reasoning challenges. Robots need to represent and reason with incomplete domain knowledge, acquiring and using sensor inputs based on need and availability. This paper presents an architecture that exploits the complementary strengths of declarative programming and probabilistic graphical models as a step toward addressing these challenges. Answer Set Prolog (ASP), a declarative language, is used to represent, and perform inference with, incomplete domain knowledge, including default information that holds in all but a few exceptional situations. A hierarchy of partially observable Markov decision processes (POMDPs) probabilistically models the uncertainty in sensor input processing and navigation. Nonmonotonic logical inference in ASP is used to generate a multinomial prior for probabilistic state estimation with the hierarchy of POMDPs. It is also used with historical data to construct a beta (meta) density model of priors for metareasoning and early termination of trials when appropriate. Robots equipped with this architecture automatically tailor sensor input processing and navigation to tasks at hand, revising existing knowledge using information extracted from sensor inputs. The architecture is empirically evaluated in simulation and on a mobile robot visually localizing objects in indoor domains.
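A toy numpy sketch of the key coupling described above: logical (default) inference yields a biased multinomial prior, which is then updated with a sensor likelihood for probabilistic state estimation. The locations and all numbers are invented.

```python
import numpy as np

# Candidate object locations (toy domain).
locations = ["kitchen", "office", "lab"]

# Suppose nonmonotonic inference concludes "books are in the office by
# default, unless stated otherwise", yielding a biased multinomial prior:
prior = np.array([0.1, 0.8, 0.1])

# Sensor input processing gives an observation likelihood per location.
likelihood = np.array([0.3, 0.2, 0.9])

# Bayesian update: posterior proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()
print(dict(zip(locations, posterior.round(3))))  # office still most likely
```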

Proceedings ArticleDOI
17 Oct 2015
TL;DR: This work proposes a new framework for generating context-aware text representations without diving into the sense space, instead modeling the concept space shared among senses, resulting in a framework that is efficient in both computation and storage.
Abstract: Representing discrete words in a continuous vector space turns out to be useful for natural language applications related to text understanding. Meanwhile, it poses extensive challenges, one of which is due to the polysemous nature of human language. A common solution (aka word sense induction) is to separate each word into multiple senses and create a representation for each sense respectively. However, this approach is usually computationally expensive and prone to data sparsity, since each sense needs to be managed discriminatively. In this work, we propose a new framework for generating context-aware text representations without diving into the sense space. We model the concept space shared among senses, resulting in a framework that is efficient in both computation and storage. Specifically, the framework we propose is one that: i) projects both words and concepts into the same vector space; ii) obtains unambiguous word representations that not only preserve the uniqueness among words, but also reflect their context-appropriate meanings. We demonstrate the effectiveness of the framework in a number of tasks on text understanding, including word/phrase similarity measurements, paraphrase identification and question-answer relatedness classification.
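A hedged numpy sketch of the shared word/concept space idea: a word's context-appropriate representation re-weights its related concepts by their similarity to the context. The vectors, the two-concept inventory for "bank", and the mixing rule are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy shared space: both words and concepts live in the same 8-d space.
rng = np.random.default_rng(3)
concepts = {"finance": rng.normal(size=8), "river": rng.normal(size=8)}
word_vec = {"bank": rng.normal(size=8)}

def contextual_rep(word, context_vec, related):
    # Softmax-weight the word's related concepts by similarity to the context,
    # then mix them with the word vector (an assumed, simplistic rule).
    sims = np.array([context_vec @ concepts[c] for c in related])
    w = np.exp(sims - sims.max())
    w /= w.sum()
    mix = sum(wi * concepts[c] for wi, c in zip(w, related))
    return (word_vec[word] + mix) / 2

# A money-related context pulls "bank" toward its finance sense.
context = concepts["finance"] + rng.normal(scale=0.1, size=8)
print(contextual_rep("bank", context, ["finance", "river"]).shape)  # (8,)
```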

Journal ArticleDOI
TL;DR: Advanced data analysis, neural networks and knowledge representation technologies are brought together in an intelligent information system for tourist destination marketing; the intelligent system was able to assist users who are not experts in analysis in solving typical destination marketing problems.

Journal ArticleDOI
TL;DR: Although the main goal is to efficiently answer queries over OWL 2 ontologies and data, the technical results are very general and the approach is applicable to first-order knowledge representation languages that can be captured by rules allowing for existential quantification and disjunction in the head.
Abstract: Answering conjunctive queries over ontology-enriched datasets is a core reasoning task for many applications. Query answering is, however, computationally very expensive, which has led to the development of query answering procedures that sacrifice either expressive power of the ontology language, or the completeness of query answers in order to improve scalability. In this paper, we describe a hybrid approach to query answering over OWL 2 ontologies that combines a datalog reasoner with a fully-fledged OWL 2 reasoner in order to provide scalable 'pay-as-you-go' performance. The key feature of our approach is that it delegates the bulk of the computation to the datalog reasoner and resorts to expensive OWL 2 reasoning only as necessary to fully answer the query. Furthermore, although our main goal is to efficiently answer queries over OWL 2 ontologies and data, our technical results are very general and our approach is applicable to first-order knowledge representation languages that can be captured by rules allowing for existential quantification and disjunction in the head; our only assumption is the availability of a datalog reasoner and a fully-fledged reasoner for the language of interest, both of which are used as 'black boxes'. We have implemented our techniques in the PAGOdA system, which combines the datalog reasoner RDFox and the OWL 2 reasoner HermiT. Our extensive evaluation shows that PAGOdA succeeds in providing scalable pay-as-you-go query answering for a wide range of OWL 2 ontologies, datasets and queries.
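A schematic Python sketch of the pay-as-you-go strategy: a cheap, sound-but-incomplete reasoner fixes a lower bound on the answers, an over-approximation fixes an upper bound, and only candidates in the gap are checked by the fully-fledged reasoner. The stub functions and their answer sets are invented stand-ins for RDFox-style datalog and HermiT-style OWL 2 reasoning.

```python
# Pay-as-you-go query answering in miniature (all answers are toy data).
def cheap_certain_answers(query):          # e.g. datalog materialization
    return {"alice"}

def cheap_possible_answers(query):         # e.g. datalog over a relaxed ontology
    return {"alice", "bob", "carol"}

def expensive_entails(query, candidate):   # e.g. full OWL 2 reasoning
    return candidate == "bob"

def answer(query):
    lower = cheap_certain_answers(query)   # definitely answers
    gap = cheap_possible_answers(query) - lower
    # Only the gap is delegated to the expensive reasoner.
    return lower | {c for c in gap if expensive_entails(query, c)}

print(answer("Person(x)"))  # {'alice', 'bob'}
```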

Journal ArticleDOI
TL;DR: The purpose of the framework is to allow the automated information exchange between different medicine specialists from different areas of expertise; the key factor of the exchange is sharing concepts between the areas of expertise.
Highlights: A collaborative software agents framework is presented to simplify the information exchange within the medical diagnosis process. The human body systems (e.g. respiratory, cardiovascular) are embedded into distinct software agents. Information is exchanged automatically between medicine specialists from different areas of expertise. The framework has three key components: knowledge management, uncertainty reasoning and software agents.
Abstract: In order to simplify the information exchange within the medical diagnosis process, a collaborative software agents framework is presented. The human body systems (e.g. respiratory, cardiovascular) are embedded into distinct software agents. The holistic perspective is given by all the connected agents exchanging information. The purpose of the framework is to allow the automated information exchange between different medicine specialists. The key factor of the exchange is sharing concepts between the areas of expertise. Each human body system expert will act on its own concepts (evidences, causes, effects), but information from other systems will be assimilated as well. The framework has three key components: knowledge management, uncertainty reasoning and software agents. An ontology is chosen to address the management of human body systems knowledge. The Bayesian network is the graphical model for probabilistic knowledge representation and reasoning about partial beliefs under uncertainty. The software agents, as the collaboration framework, are in charge of belief propagation between system instances.
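A minimal worked example of the Bayesian-network component: two "body system" agents share a cause-effect fragment, and observing the effect propagates belief back to the cause via Bayes' rule. All probabilities are invented for illustration.

```python
# Toy fragment: cause "heart_failure" (cardiovascular agent) and
# effect "dyspnea" (respiratory agent).
p_hf = 0.05                             # P(heart_failure)
p_dysp_given = {True: 0.7, False: 0.1}  # P(dyspnea | heart_failure)

# The respiratory agent observes dyspnea; belief propagates back:
# P(hf | dyspnea) = P(hf) * P(dyspnea | hf) / P(dyspnea).
p_dysp = p_hf * p_dysp_given[True] + (1 - p_hf) * p_dysp_given[False]
p_hf_given_dysp = p_hf * p_dysp_given[True] / p_dysp
print(round(p_hf_given_dysp, 3))  # ~0.269, up from the 0.05 prior
```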

Journal ArticleDOI
TL;DR: This work proposes a general method of knowledge reduction that reduces attributes and objects in formal fuzzy contexts based on variable threshold concept lattices, removing attributes and objects which are non-essential to the structure of a variable threshold concept lattice.
Abstract: Knowledge reduction is a basic issue in knowledge representation and data mining. Although various methods have been developed to reduce the size of classical formal contexts, the reduction of formal fuzzy contexts based on fuzzy lattices remains a difficult problem owing to its complicated derivation operators. To address this problem, we propose a general method of knowledge reduction by reducing attributes and objects in formal fuzzy contexts based on the variable threshold concept lattices. Employing the proposed approaches, we remove attributes and objects which are non-essential to the structure of a variable threshold concept lattice, i.e., with a given threshold level, the concept lattice constructed from a reduced formal context is made identical to that constructed from the original formal context. Discernibility matrices and Boolean functions are, respectively, employed to compute the attribute reducts and object reducts of the formal fuzzy contexts, by which all the attribute reducts and object reducts of the formal fuzzy contexts are determined without changing the structure of the lattice.
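A crisp (non-fuzzy) miniature of the reduction idea in Python: an attribute is non-essential when removing it leaves the concept lattice structurally unchanged. The context below is a toy; the paper handles fuzzy contexts with variable thresholds and computes reducts via discernibility matrices.

```python
from itertools import combinations

# Toy formal context: objects and their attributes.
objects = ["o1", "o2", "o3"]
incidence = {"o1": {"a", "b"}, "o2": {"a", "b"}, "o3": {"b", "c"}}

def concept_lattice(attrs):
    # All formal concepts (extent, intent) over the given attribute set.
    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            common = set(attrs) if not objs else set(attrs).intersection(
                *(incidence[o] for o in objs))
            extent = frozenset(o for o in objects if common <= incidence[o])
            found.add((extent, frozenset(common)))
    return found

# "b" holds for every object, so dropping it preserves the lattice structure:
print(len(concept_lattice({"a", "b", "c"})), len(concept_lattice({"a", "c"})))  # 4 4
```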

Book ChapterDOI
06 Jul 2015
TL;DR: A new query language, guarded queries (GQ), is introduced, generalizing most known languages with decidable query containment; a comprehensive analysis of the computational properties and expressiveness of (linear/nested) GQs is given, yielding insights on many previous languages.
Abstract: Expressive query languages are gaining relevance in knowledge representation (KR), and new reasoning problems come to the fore. Query containment is especially interesting in this context. The problem is known to be decidable for many expressive query languages, but exact complexities are often missing. We introduce a new query language, guarded queries (GQ), which generalizes most known languages where query containment is decidable. GQs can be nested (more expressive), or restricted to linear recursion (less expressive). Our comprehensive analysis of the computational properties and expressiveness of (linear/nested) GQs also yields insights on many previous languages.

Journal ArticleDOI
TL;DR: Affective applications require a common way to represent emotions so that they can be more easily integrated, shared and reused by applications, improving user experience; this proposal is to use rich semantic models based on ontologies.