
Showing papers on "Knowledge representation and reasoning" published in 2019


Journal ArticleDOI
TL;DR: RuleMatrix, a matrix-based visualization of rules, is designed to help users navigate and verify the rules and the black-box model; it is evaluated via two use cases and a usability study.
Abstract: With the growing adoption of machine learning techniques, there is a surge of research interest towards making machine learning systems more transparent and interpretable. Various visualizations have been developed to help model developers understand, diagnose, and refine machine learning models. However, a large number of potential but neglected users are the domain experts with little knowledge of machine learning who are nevertheless expected to work with machine learning systems. In this paper, we present an interactive visualization technique to help users with little expertise in machine learning to understand, explore and validate predictive models. By viewing the model as a black box, we extract a standardized rule-based knowledge representation from its input-output behavior. Then, we design RuleMatrix, a matrix-based visualization of rules to help users navigate and verify the rules and the black-box model. We evaluate the effectiveness of RuleMatrix via two use cases and a usability study.
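
The paper's first step, extracting a rule-based representation from a black box's input-output behavior, can be sketched with a surrogate model. This is an illustrative sketch only, not the authors' RuleMatrix implementation: a shallow decision tree is trained on the black box's predictions, and its branches read off as rules.

```python
# Sketch (not the authors' code): approximate a black-box model by querying
# its input-output behavior and fitting a shallow, rule-like surrogate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# The "black box": any trained classifier whose internals we ignore.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Query the black box and train an interpretable surrogate on its answers.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate's rules agree with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
rules = export_text(surrogate, feature_names=list(iris.feature_names))
print(f"fidelity = {fidelity:.2f}")
print(rules)
```

The fidelity score quantifies how faithfully the extracted rules mimic the black box; a visualization such as RuleMatrix would then present such rules for navigation and verification.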

146 citations


Posted Content
TL;DR: Recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning are surveyed and the insights provided shed new light on the increasingly prominent need for interpretable and accountable AI systems.
Abstract: Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular, have reached unprecedented impact not only across research communities, but also over popular media channels. However, concerns about the interpretability and accountability of AI have been raised by influential thinkers. In spite of the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms integrated with deep learning-based systems to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two of the most fundamental cognitive abilities: the ability to learn from the environment, and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks with the reasoning and interpretability of symbolic representation. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: principled integration of neural learning with symbolic knowledge representation and reasoning, allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.

127 citations


DOI
25 Mar 2019
TL;DR: This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018.
Abstract: The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped, concept of "Knowledge Graphs" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises – while often inspired by it – limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 "Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: What are knowledge graphs? Which applications do we see emerging? Which open research questions still need to be addressed, and which technology gaps still need to be closed?

104 citations


Journal ArticleDOI
TL;DR: This paper focuses on knowledge representations, notably how knowledge has typically been gathered, represented, and reproduced to solve problems by researchers over the past decades, and on the key distinction between such representations and the learning models that have been extensively introduced and studied in recent years.

86 citations


Journal ArticleDOI
TL;DR: This work will systematically search for projects that fulfill a set of inclusion criteria and compare them with each other with respect to the scope of their ontology, the types of cognitive capabilities supported by the use of ontologies, and their application domain.
Abstract: Within the next decades, robots will need to be able to execute a large variety of tasks autonomously in a large variety of environments. To relax the resulting programming effort, a knowledge-enabled approach to robot programming can be adopted to organize information in re-usable knowledge pieces. However, for ease of reuse, there needs to be an agreement on the meaning of terms. A common approach is to represent these terms using ontology languages that conceptualize the respective domain. In this work, we will review projects that use ontologies to support robot autonomy. We will systematically search for projects that fulfill a set of inclusion criteria and compare them with each other with respect to the scope of their ontology, the types of cognitive capabilities supported by the use of ontologies, and their application domain.

73 citations


Journal ArticleDOI
TL;DR: A comprehensive summary of the state of the art of ontology-based systems engineering is presented, illuminating a roadmap for future directions and assessing the influence of ontologies in systems engineering knowledge areas.

69 citations


Proceedings ArticleDOI
Krisztian Balog1, Tom Kenter1
23 Sep 2019
TL;DR: The concept of personal knowledge graphs is presented: resources of structured information about entities personally related to their user, including ones that might not be globally important.
Abstract: Knowledge graphs, organizing structured information about entities, and their attributes and relationships, are ubiquitous today. Entities, in this context, are usually taken to be anyone or anything considered to be globally important. This, however, rules out many entities people interact with on a daily basis. In this position paper, we present the concept of personal knowledge graphs: resources of structured information about entities personally related to their user, including the ones that might not be globally important. We discuss key aspects that separate them from general knowledge graphs, identify the main challenges involved in constructing and using them, and define a research agenda.
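
The core idea can be illustrated with a minimal triple store holding personally relevant entities. This is a hypothetical sketch, not an API from the paper; entity and predicate names are made up for illustration.

```python
# Minimal sketch of a personal knowledge graph (PKG) as
# subject-predicate-object triples. All names below are hypothetical;
# the paper defines the concept, not a concrete implementation.
from collections import defaultdict

class PersonalKG:
    def __init__(self):
        self._spo = defaultdict(set)   # subject -> {(predicate, object)}

    def add(self, s, p, o):
        self._spo[s].add((p, o))

    def objects(self, s, p):
        """All objects linked to subject s via predicate p."""
        return {o for (pred, o) in self._spo[s] if pred == p}

pkg = PersonalKG()
# Personally relevant entities need not be globally important:
pkg.add("user", "hasDentist", "Dr. Jones")        # hypothetical entity
pkg.add("user", "livesIn", "Oslo")
pkg.add("Dr. Jones", "worksAt", "Smile Clinic")   # hypothetical entity
print(pkg.objects("user", "hasDentist"))
```

A "Dr. Jones" would never appear in a general-purpose knowledge graph, which is precisely the gap the position paper identifies.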

59 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: A novel knowledge base (KB)-driven tree-structured long short-term memory networks (Tree-LSTM) framework is proposed, incorporating two new types of features: dependency structures to capture wide contexts and entity properties from external ontologies via entity linking.
Abstract: Event extraction for the biomedical domain is more challenging than that in the general news domain since it requires broader acquisition of domain-specific knowledge and deeper understanding of complex contexts. To better encode contextual information and external background knowledge, we propose a novel knowledge base (KB)-driven tree-structured long short-term memory networks (Tree-LSTM) framework, incorporating two new types of features: (1) dependency structures to capture wide contexts; (2) entity properties (types and category descriptions) from external ontologies via entity linking. We evaluate our approach on the BioNLP shared task with the Genia dataset and achieve a new state-of-the-art result. In addition, both quantitative and qualitative studies demonstrate the advancement of the Tree-LSTM and the external knowledge representation for biomedical event extraction.

59 citations


Journal ArticleDOI
TL;DR: A framework for cultural knowledge representation that relies on a three-layer ontology for storing concepts of relevance, culture-specific information and statistics, person- specific information and preferences and an algorithm for the acquisition of person-specific knowledge to drive the search is proposed.
Abstract: Culture, intended as the set of beliefs, values, ideas, language, norms and customs which compose a person’s life, is an essential element for any personal assistance robot to know. Culture, intended as that person’s background, can be an invaluable source of information to drive and speed up the process of discovering and adapting to the person’s habits, preferences and needs. This article discusses the requirements posed by cultural competence on the knowledge management system of a robot. We propose a framework for cultural knowledge representation that relies on (i) a three-layer ontology for storing concepts of relevance, culture-specific information and statistics, and person-specific information and preferences; (ii) an algorithm for the acquisition of person-specific knowledge, which uses culture-specific knowledge to drive the search; (iii) a Bayesian Network for speeding up the adaptation to the person by propagating the effects of acquiring one specific piece of information onto interconnected concepts. We have conducted a preliminary evaluation of the framework involving 159 Italian and German volunteers and considering 122 habits, attitudes and social norms.

58 citations


Journal ArticleDOI
TL;DR: This paper proposes a principled knowledge-based model for AM in the form of a computational ontology that constitutes the backbone structure to organize AM data and automatically reason over experts’ knowledge for data validation, ultimately supporting the development of algorithms and applications for decision making.

Journal ArticleDOI
TL;DR: In this paper, the authors illustrate the success of machine learning (ML) algorithms in tasks ranging from machine vision to game playing and describe how existing algorithms can also be impactful in materials science, while noting key limitations for accelerating materials discovery.
Abstract: Continued progress in artificial intelligence (AI) and associated demonstrations of superhuman performance have raised the expectation that AI can revolutionize scientific discovery in general and materials science specifically. We illustrate the success of machine learning (ML) algorithms in tasks ranging from machine vision to game playing and describe how existing algorithms can also be impactful in materials science, while noting key limitations for accelerating materials discovery. Issues of data scarcity and the combinatorial nature of materials spaces, which limit application of ML techniques in materials science, can be overcome by exploiting the rich scientific knowledge from physics and chemistry using additional AI techniques such as reasoning, planning, and knowledge representation. The integration of these techniques in materials-intelligent systems will enable AI governance of the scientific method and autonomous scientific discovery.

Journal ArticleDOI
TL;DR: The landscape of current annotation practices among the COmputational Modeling in BIology NEtwork community is reported and a set of recommendations for building a consensus approach to semantic annotation are provided.
Abstract: Life science researchers use computational models to articulate and test hypotheses about the behavior of biological systems. Semantic annotation is a critical component for enhancing the interoperability and reusability of such models as well as for the integration of the data needed for model parameterization and validation. Encoded as machine-readable links to knowledge resource terms, semantic annotations describe the computational or biological meaning of what models and data represent. These annotations help researchers find and repurpose models, accelerate model composition and enable knowledge integration across model repositories and experimental data stores. However, realizing the potential benefits of semantic annotation requires the development of model annotation standards that adhere to a community-based annotation protocol. Without such standards, tool developers must account for a variety of annotation formats and approaches, a situation that can become prohibitively cumbersome and which can defeat the purpose of linking model elements to controlled knowledge resource terms. Currently, no consensus protocol for semantic annotation exists among the larger biological modeling community. Here, we report on the landscape of current annotation practices among the COmputational Modeling in BIology NEtwork community and provide a set of recommendations for building a consensus approach to semantic annotation.

Journal ArticleDOI
TL;DR: A new mesh-to-HBIM modeling workflow and an integrated BIM management system are developed to connect HBIM elements with historical knowledge, yielding a semantic model with object-oriented knowledge by extending the capability of the BIM platform.
Abstract: Built heritage has been documented by reality-based modeling for geometric description and by ontology for knowledge management. The current challenge still involves the extraction of geometric primitives and the establishment of their connection to heterogeneous knowledge. As a recently developed 3D information modeling environment, building information modeling (BIM) entails both graphical and non-graphical aspects of the entire building, which has been increasingly applied to heritage documentation and generates a new issue of heritage/historic BIM (HBIM). However, HBIM needs to additionally deal with the heterogeneity of geometric shape and semantic knowledge of the heritage object. This paper developed a new mesh-to-HBIM modeling workflow and an integrated BIM management system to connect HBIM elements and historical knowledge. Using the St-Pierre-le-Jeune Church, Strasbourg, France as a case study, this project employs Autodesk Revit as a BIM environment and Dynamo, a built-in visual programming tool of Revit, to extend the new HBIM functions. The mesh-to-HBIM process segments the surface mesh, thickens the triangle mesh to 3D volume, and transfers the primitives to BIM elements. The obtained HBIM is then converted to the ontology model to enrich the heterogeneous knowledge. Finally, HBIM geometric elements and ontology semantic knowledge are joined in a unified BIM environment. By extending the capability of the BIM platform, the HBIM modeling process can be conducted in a time-saving way, and the obtained HBIM is a semantic model with object-oriented knowledge.

Journal ArticleDOI
17 Jul 2019
TL;DR: TransNFCM as discussed by the authors uses category-specific complementary relations to model the category-aware compatibility between items in a translation-based embedding space, which can not only capture the specific notion of compatibility conditioned on a specific pair of complementary categories, but also preserve the global concept of compatibility.
Abstract: Identifying mix-and-match relationships between fashion items is an urgent task in a fashion e-commerce recommender system. It will significantly enhance user experience and satisfaction. However, due to the challenges of inferring the rich yet complicated set of compatibility patterns in a large e-commerce corpus of fashion items, this task is still underexplored. Inspired by the recent advances in multirelational knowledge representation learning and deep neural networks, this paper proposes a novel Translation-based Neural Fashion Compatibility Modeling (TransNFCM) framework, which jointly optimizes fashion item embeddings and category-specific complementary relations in a unified space via an end-to-end learning manner. TransNFCM places items in a unified embedding space where a category-specific relation (category-comp-category) is modeled as a vector translation operating on the embeddings of compatible items from the corresponding categories. In this way, we not only capture the specific notion of compatibility conditioned on a specific pair of complementary categories, but also preserve the global notion of compatibility. We also design a deep fashion item encoder which exploits the complementary characteristic of visual and textual features to represent the fashion products. To the best of our knowledge, this is the first work that uses category-specific complementary relations to model the category-aware compatibility between items in a translation-based embedding space. Extensive experiments demonstrate the effectiveness of TransNFCM over state-of-the-art methods on two real-world datasets.
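
The translation-based scoring at the heart of such models can be sketched in a few lines. This is a toy illustration in the spirit of TransNFCM, not its implementation: the embeddings below are hand-picked, not learned, and the score is the negative distance of the translated head item from the candidate item.

```python
# Toy sketch of translation-based compatibility scoring: a category-pair
# relation r acts as a vector translation, and
#   score(i, j | r) = -|| v_i + r - v_j ||.
# Embeddings are hand-picked for illustration, not learned.
import numpy as np

def compatibility(v_i, r, v_j):
    return -np.linalg.norm(v_i + r - v_j)

r_top_bottom = np.array([1.0, 0.0])   # relation for (tops, bottoms), assumed
shirt = np.array([0.0, 1.0])
jeans = np.array([1.0, 1.0])          # compatible: shirt + r lands on jeans
skirt = np.array([3.0, -2.0])         # incompatible under this relation

s_good = compatibility(shirt, r_top_bottom, jeans)
s_bad = compatibility(shirt, r_top_bottom, skirt)
print(s_good, s_bad)  # higher (less negative) score = more compatible
```

Training would adjust item embeddings and relation vectors so that compatible pairs score higher than sampled incompatible ones, typically with a margin-based ranking loss.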

Proceedings Article
10 Sep 2019
TL;DR: This paper used non-myopic meta-gradients to learn GVF-questions such that learning answers to them, as an auxiliary task, induces useful representations for the main task faced by the RL agent.
Abstract: Arguably, intelligent agents ought to be able to discover their own questions so that in learning answers for them they learn unanticipated useful knowledge and skills; this departs from the focus in much of machine learning on agents learning answers to externally defined questions. We present a novel method for a reinforcement learning (RL) agent to discover questions formulated as general value functions or GVFs, a fairly rich form of knowledge representation. Specifically, our method uses non-myopic meta-gradients to learn GVF-questions such that learning answers to them, as an auxiliary task, induces useful representations for the main task faced by the RL agent. We demonstrate that auxiliary tasks based on the discovered GVFs are sufficient, on their own, to build representations that support main task learning, and that they do so better than popular hand-designed auxiliary tasks from the literature. Furthermore, we show, in the context of Atari2600 videogames, how such auxiliary tasks, meta-learned alongside the main task, can improve the data efficiency of an actor-critic agent.

Posted Content
TL;DR: This paper proposes a self-knowledge distillation method based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer.
Abstract: Since deep learning became a key player in natural language processing (NLP), many deep learning models have been showing remarkable performances in a variety of NLP tasks, and in some cases, they are even outperforming humans. Such high performance can be explained by efficient knowledge representation of deep learning models. While many methods have been proposed to learn more efficient representation, knowledge distillation from pretrained deep networks suggest that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a new knowledge distillation method self-knowledge distillation, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. Due to the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two different and fundamental NLP tasks: language model and neural machine translation. The experiment results show that our proposed method improves performance on the tasks.
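
The general mechanism the paper builds on, training against temperature-softened target probabilities, can be sketched as follows. This is an illustrative sketch of a standard soft-target distillation loss, not the paper's exact method; here the "teacher" logits simply stand in for the training model's own soft targets.

```python
# Sketch of a soft-target distillation loss (the mechanism behind
# self-knowledge distillation): mix the hard-label cross-entropy with a
# KL term toward temperature-softened probabilities.
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=2.0, alpha=0.5):
    p_s = softmax(student_logits)
    hard_ce = -np.log(p_s[hard_label])              # cross-entropy with label
    q_t = softmax(teacher_logits, T)                # softened targets
    q_s = softmax(student_logits, T)
    kl = np.sum(q_t * (np.log(q_t) - np.log(q_s)))  # KL(teacher || student)
    return alpha * hard_ce + (1 - alpha) * T**2 * kl

student = [2.0, 0.5, -1.0]
teacher = [2.5, 0.8, -1.2]   # stand-in for the model's own soft targets
loss = distillation_loss(student, teacher, hard_label=0)
print(round(loss, 4))
```

In self-distillation the teacher term is derived from the model being trained, which is why the paper approximates the soft target probabilities to keep the time complexity manageable.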

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the MKRL model outperforms the state-of-the-art methods, which indicates the effectiveness of multi-source information for knowledge representation.
Abstract: Knowledge representation learning methods usually only utilize triple facts, or just consider one kind of extra information. In this paper, we propose a multi-source knowledge representation learning (MKRL) model, which can combine entity descriptions, hierarchical types, and textual relations with triple facts. Specifically, for entity descriptions, a convolutional neural network is used to get representations. For hierarchical types, weighted hierarchy encoders are used to construct the projection matrices of hierarchical types, and the projection matrix of an entity combines all hierarchical type projection matrices of the entity with the relation-specific type constraints. For textual relations, a sentence-level attention mechanism is employed to get representations. We evaluate the MKRL model on the knowledge graph completion task with the FB15k-237 dataset, and experimental results demonstrate that our model outperforms the state-of-the-art methods, which indicates the effectiveness of multi-source information for knowledge representation.

Journal ArticleDOI
TL;DR: In this article, the authors describe an architecture for robots that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with logic-based descriptions of uncertainty and domain knowledge.
Abstract: This paper describes an architecture for robots that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with logic-based and probabilistic descriptions of uncertainty and domain knowledge. An action language is extended to support non-boolean fluents and non-deterministic causal laws. This action language is used to describe tightly-coupled transition diagrams at two levels of granularity, with a fine-resolution transition diagram defined as a refinement of a coarse-resolution transition diagram of the domain. The coarse-resolution system description, and a history that includes (prioritized) defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. A probabilistic representation of the uncertainty in sensing and actuation is then included in this zoomed fine-resolution system description, and used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions, with the corresponding observations being recorded in the coarse-resolution history and used for subsequent reasoning. The architecture is evaluated in simulation and on a mobile robot moving objects in an indoor domain, to show that it supports reasoning with violation of defaults, noisy observations and unreliable actions, in complex domains.

Journal ArticleDOI
TL;DR: Evidence is marshalled that individuals retain detailed causal information for a few domains and, for most domains, coarse causal models embedding markers indicating that these details are available elsewhere.

Journal ArticleDOI
TL;DR: This paper collects and surveys approaches to forgetting in the field of knowledge representation and reasoning, highlighting their roles in diverse tasks of knowledge processing, and elaborating on common techniques.
Abstract: Forgetting is an ambivalent concept of (human) intelligence. By definition, it is negatively related to knowledge in that knowledge is lost, be it deliberately or not, and therefore, forgetting has not received as much attention in the field of knowledge representation and reasoning (KRR) as other processes with a more positive orientation, like query answering, inference, or update. However, from a cognitive view, forgetting also has an ordering function in the human mind, suppressing information that is deemed irrelevant and improving cognitive capabilities to focus and deal only with relevant aspects of the problem under consideration. In this regard, forgetting is a crucial part of reasoning. This paper collects and surveys approaches to forgetting in the field of knowledge representation and reasoning, highlighting their roles in diverse tasks of knowledge processing, and elaborating on common techniques. We recall forgetting operations for propositional and predicate logic, as well as for answer set programming (as an important representative of nonmonotonic logics) and modal logics. We discuss forgetting in the context of (ir)relevance and (in)dependence, and make explicit the role of forgetting for specific tasks of knowledge representation, showing its positive impact on solving KRR problems.
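
For the propositional case the survey covers, forgetting an atom p has the classical definition forget(φ, p) ≡ φ[p/⊤] ∨ φ[p/⊥]. A minimal sketch, modeling formulas as Python predicates over truth assignments:

```python
# Forgetting in propositional logic via the classical definition
#   forget(phi, p) = phi[p -> True] OR phi[p -> False].
# Formulas are modeled as predicates over truth assignments (dicts).
from itertools import product

def forget(phi, p):
    return lambda a: phi({**a, p: True}) or phi({**a, p: False})

# phi = (p or q) and (not p or r): knowledge mentioning the atom p
phi = lambda a: (a["p"] or a["q"]) and (not a["p"] or a["r"])

phi_no_p = forget(phi, "p")

# After forgetting p, the result is equivalent to (q or r); check by
# enumerating all assignments to the remaining atoms q, r.
for q, r in product([False, True], repeat=2):
    a = {"q": q, "r": r, "p": False}   # p's value is now irrelevant
    print(q, r, phi_no_p(a))
```

The enumeration confirms that the forgotten formula no longer depends on p yet preserves all consequences of φ over the remaining vocabulary, which is precisely the sense in which forgetting relates to (ir)relevance and (in)dependence.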

Journal ArticleDOI
TL;DR: Tests and results of the presented implementation of Decisional DNA case studies support it as a technology that can improve, and be applied to, the aforementioned technologies, enhancing them with intelligence through predictive capabilities and by facilitating knowledge engineering processes.

Journal ArticleDOI
TL;DR: The capacity of ontologies to represent both symbolic and numeric knowledge, to reason based on cognitive semantics and to share knowledge on the interpretation of remote sensing images is focused on.
Abstract: The development of new sensors and easier access to remote sensing data are significantly transforming both the theory and practice of remote sensing. Although data-driven approaches based on innovative algorithms and enhanced computing capacities are gaining importance to process big Earth Observation data, the development of knowledge-driven approaches is still considered by the remote sensing community to be one of the most important directions of their research. In this context, the future of remote sensing science should be supported by knowledge representation techniques such as ontologies. However, ontology-based remote sensing applications still have difficulty capturing the attention of remote sensing experts. This is mainly because of the gap between remote sensing experts’ expectations of ontologies and their real possible contribution to remote sensing. This paper provides insights to help reduce this gap. To this end, the conceptual limitations of the knowledge-driven approaches currently used in remote sensing science are clarified first. Then, the different modes of definition of geographic concepts, their duality, vagueness and ambiguity, and the sensory and semantic gaps are discussed in order to explain why ontologies can help address these limitations. In particular, this paper focuses on the capacity of ontologies to represent both symbolic and numeric knowledge, to reason based on cognitive semantics and to share knowledge on the interpretation of remote sensing images. Finally, a few recommendations are provided for remote sensing experts to comprehend the advantages of ontologies in interpreting satellite images.

Journal ArticleDOI
TL;DR: A new semantic information retrieval system is proposed that uses feature selection and classification to enhance the relevancy score, based on a new intelligent fuzzy rough set based feature selection algorithm and an intelligent ontology and Latent Dirichlet Allocation based semantic information retrieval algorithm.
Abstract: Semantic information retrieval provides more relevant information to the user query by performing semantic analysis. In such a scenario, knowledge representation using ontology can provide effective semantic retrieval facility which is more efficient than representation using semantic networks and frames. The existing information retrieval systems have been developed to handle very large volumes of data and information stored in text format. On the other hand, the information available in current web based applications such as Facebook and Twitter grows very fast and hence the existing information retrieval systems consume large amounts of time for relevant information retrieval. Moreover, most of the existing search engines use a syntactic approach for information retrieval and use page ranking algorithms to measure the relevancy score. However, such an approach is not able to provide more accurate results in terms of relevancy. Therefore, a new semantic information retrieval system is proposed in this paper which uses feature selection and classification to enhance the relevancy score; this is achieved by proposing a new intelligent fuzzy rough set based feature selection algorithm and an intelligent ontology and Latent Dirichlet Allocation based semantic information retrieval algorithm. The main advantages of the proposed algorithms are the increase in relevancy, the ability to handle big data and fast retrieval.

Journal ArticleDOI
20 Aug 2019-Energies
TL;DR: This study aims to investigate literature on the application of ontology in multi-agent systems within the energy domain, map the key concepts underpinning these research areas, and provide a recommendation list for ontology-driven multi-agent system development.
Abstract: Multi-agent systems are well known for their expressiveness in exploring interactions and knowledge representation in complex systems. Multi-agent systems have been applied in the energy domain since the 1990s. As multi-agent systems are increasingly applied to advanced functions in the energy domain, interoperability becomes a challenge, raising the requirement for data and information exchange between systems. Therefore, the application of ontology in multi-agent systems needs to be emphasized and a systematic approach for the application needs to be developed. This study aims to investigate literature on the application of ontology in multi-agent systems within the energy domain and map the key concepts underpinning these research areas. A scoping review of the existing literature on ontology for multi-agent systems in the energy domain is conducted. This paper presents an overview of the application of multi-agent systems (MAS) and ontologies in the energy domain, covering five aspects: the definition of agent and MAS; MAS applied in the energy domain; ontologies defined in the energy domain; MAS design methodologies and architectures; and the application of ontology in MAS development. Furthermore, this paper provides a recommendation list for ontology-driven multi-agent system development covering 1) the ontology development process in MAS design, 2) the detailed design process and realization of ontology-driven MAS development, 3) open standard implementation and adoption, 4) inter-domain MAS development, and 5) the agent listing approach.

Proceedings ArticleDOI
14 Jul 2019
TL;DR: Compared with traditional methods based only on structural knowledge, TransAE significantly improves performance on link prediction and triplet classification, and can learn representations for entities outside the knowledge base in a zero-shot manner.
Abstract: Knowledge graph, or knowledge base, plays an important role in a variety of applications in the field of artificial intelligence. In both research and application of knowledge graphs, knowledge representation learning is one of the fundamental tasks. Existing representation learning approaches are mainly based on structural knowledge between entities and relations, while knowledge about entities per se is largely ignored. Though a few approaches integrate entity knowledge while learning representations, these methods lack the flexibility to handle multiple modalities. To tackle this problem, in this paper we propose a new representation learning method, TransAE, which combines a multimodal autoencoder with the TransE model, where TransE is a simple and effective representation learning method for knowledge graphs. In TransAE, the hidden layer of the autoencoder is used as the representation of entities in the TransE model, so it encodes not only structural knowledge but also multimodal knowledge, such as visual and textual knowledge, into the final representation. Compared with traditional methods based only on structural knowledge, TransAE significantly improves performance on link prediction and triplet classification. Also, TransAE is able to learn representations for entities outside the knowledge base in a zero-shot manner. Experiments on various tasks demonstrate the effectiveness of our proposed TransAE method.
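TransE, which TransAE builds on, scores a triple (h, r, t) by how closely the relation vector translates the head embedding onto the tail. A minimal sketch of the scoring function (dimensions and norm choice are illustrative; TransAE would substitute the autoencoder's hidden-layer representation for the plain entity embeddings):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    # TransE assumption: h + r ≈ t for a true triple, so a smaller
    # distance ||h + r - t|| means a more plausible fact
    return float(np.linalg.norm(h + r - t, ord=norm))
```

Training then pushes the score of observed triples below the score of corrupted (negative) triples by a margin; TransAE keeps this objective but derives h and t from multimodal autoencoder codes.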

Proceedings ArticleDOI
22 Oct 2019
TL;DR: A new knowledge distillation method, self-knowledge distillation, is proposed, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer.
Abstract: Since deep learning became a key player in natural language processing (NLP), many deep learning models have shown remarkable performance in a variety of NLP tasks. Such high performance can be explained by the efficient knowledge representation of deep learning models. Knowledge distillation from pretrained deep networks suggests that we can use more information from the soft target probabilities to train other neural networks. In this paper, we propose a self-knowledge distillation method, based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. To reduce the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two different and fundamental NLP tasks: language modeling and neural machine translation. The experimental results show that our proposed method improves performance on both tasks.
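The soft-target idea can be made concrete with the standard distillation loss, which self-distillation reuses with the model's own predictions as teacher. This is a generic sketch, not the paper's exact formulation; the temperature, mixing weight, and epsilon are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature T > 1 softens the distribution, exposing the
    # "dark knowledge" in non-argmax classes
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label, T=2.0, alpha=0.5):
    # blend of KL divergence to the (self-)teacher's softened distribution
    # and ordinary cross-entropy to the one-hot ground-truth label
    soft_t = softmax(teacher_logits, T)
    soft_s = softmax(student_logits, T)
    kl = float(np.sum(soft_t * (np.log(soft_t + 1e-12) - np.log(soft_s + 1e-12))))
    ce = -float(np.log(softmax(student_logits)[hard_label] + 1e-12))
    return alpha * (T * T) * kl + (1 - alpha) * ce
```

When student and teacher logits coincide, as at the start of self-distillation, the KL term vanishes and only the hard-label cross-entropy drives learning.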

Journal ArticleDOI
TL;DR: The knowledge base of plant production, built on ontological principles, will be useful to enterprise managers, agronomists, machine operators, planning services and other specialists of large, medium and small farms, as well as to individual farmers.

Journal ArticleDOI
TL;DR: In his NASIG vision session, Sören Auer introduced attendees to knowledge graphs and explained how they could make scientific research more discoverable, efficient, and collaborative.
Abstract: Knowledge graphs facilitate the discovery of information by organizing it into entities and describing the relationships of those entities to each other and to established ontologies. They are popu...
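At its simplest, a knowledge graph of the kind described here is a set of subject-predicate-object triples queried by pattern matching, akin to a basic SPARQL triple pattern. The identifiers below are invented for illustration:

```python
# a scholarly knowledge graph as (subject, predicate, object) triples
triples = {
    ("paper:123", "hasAuthor", "author:auer"),
    ("paper:123", "hasTopic", "topic:knowledge_graphs"),
    ("author:auer", "affiliatedWith", "org:tib"),
}

def query(triples, s=None, p=None, o=None):
    # None acts as a wildcard, so query(triples, s="paper:123")
    # returns everything the graph asserts about that paper
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]
```

Linking the object identifiers to established ontologies is what makes such triples discoverable across systems rather than just within one database.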

Journal ArticleDOI
TL;DR: In this paper, a set of compositional design patterns is proposed to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation, which help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components.
Abstract: We propose a set of compositional design patterns to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation. As in other areas of computer science (knowledge engineering, software engineering, ontology engineering, process mining and others), such design patterns help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components. We have validated our set of compositional design patterns against a large body of recent literature.
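One frequently occurring hybrid pattern is "neural proposes, symbolic disposes": a learned model ranks candidate outputs and a symbolic knowledge base vetoes those violating constraints. The sketch below is a generic illustration of that composition, not a pattern taken verbatim from the paper:

```python
def neural_then_symbolic(predict, constraints, x):
    # predict: learned component returning (label, score) candidates;
    # constraints: symbolic rules, each mapping (x, label) -> bool.
    # Return the highest-scoring label that satisfies every rule.
    for label, score in sorted(predict(x), key=lambda p: -p[1]):
        if all(rule(x, label) for rule in constraints):
            return label
    return None  # no candidate is consistent with the knowledge base
```

Because the two components interact only through the candidate list, either side can be swapped out independently, which is the kind of re-use such design patterns are meant to encourage.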