
Showing papers on "Knowledge representation and reasoning" published in 2016


Proceedings Article
09 Jul 2016
TL;DR: Experimental results show that the proposed Type-embodied Knowledge Representation Learning models significantly outperform all baselines on both tasks, especially on the long-tail distribution, indicating that the models capture hierarchical type information, which is significant when constructing representations of knowledge graphs.
Abstract: Representation learning of knowledge graphs aims to encode both entities and relations into a continuous low-dimensional vector space. Most existing methods concentrate only on learning representations from the structured information located in triples, regardless of the rich information located in the hierarchical types of entities, which can be collected in most knowledge graphs. In this paper, we propose a novel method named Type-embodied Knowledge Representation Learning (TKRL) to take advantage of hierarchical entity types. We suggest that entities should have multiple representations in different types. More specifically, we consider hierarchical types as projection matrices for entities, with two type encoders designed to model hierarchical structures. Meanwhile, type information is also utilized as relation-specific type constraints. We evaluate our models on two tasks, knowledge graph completion and triple classification, and further explore the performance on a long-tail dataset. Experimental results show that our models significantly outperform all baselines on both tasks, especially with long-tail distributions. This indicates that our models are capable of capturing hierarchical type information, which is significant when constructing representations of knowledge graphs. The source code of this paper can be obtained from https://github.com/thunlp/TKRL.
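The central modeling idea above, projecting an entity through matrices derived from its hierarchical types before applying a translation-style score, can be sketched as follows. This is an illustrative numpy sketch, not the released thunlp/TKRL code; the type paths, the weighted combination of sub-type matrices, and the weights (0.9, 0.1) are assumptions standing in for the paper's hierarchical type encoders.

```python
import numpy as np

dim = 50
rng = np.random.default_rng(0)

# One projection matrix per (sub)type in the hierarchy, e.g. person -> person/author.
type_matrices = {t: np.eye(dim) + rng.normal(scale=0.1, size=(dim, dim))
                 for t in ["person", "person/author", "book", "book/novel"]}

def hierarchical_projection(type_path, weights):
    """Weighted combination of the matrices along one type path (assumed encoder)."""
    M = sum(w * type_matrices[t] for t, w in zip(type_path, weights))
    return M / sum(weights)

def score(h, r, t, h_types, t_types, w=(0.9, 0.1)):
    """Translation-style energy: ||M_h h + r - M_t t||_1 (lower is better)."""
    Mh = hierarchical_projection(h_types, w)
    Mt = hierarchical_projection(t_types, w)
    return np.linalg.norm(Mh @ h + r - Mt @ t, ord=1)

h, r, t = (rng.normal(size=dim) for _ in range(3))
print(score(h, r, t, ["person", "person/author"], ["book", "book/novel"]))
```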

264 citations


Journal ArticleDOI
TL;DR: The challenges addressed by ASP in these applications are discussed and the strengths of ASP as a useful AI paradigm are emphasized.
Abstract: ASP has been applied fruitfully to a wide range of areas in AI and in other fields, both in academia and in industry, thanks to the expressive representation languages of ASP and the continuous improvement of ASP solvers. We present some of these ASP applications, in particular in knowledge representation and reasoning, robotics, bioinformatics and computational biology, as well as some industrial applications. We discuss the challenges addressed by ASP in these applications and emphasize the strengths of ASP as a useful AI paradigm.
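For readers unfamiliar with the paradigm surveyed above, the toy program below runs a single default rule with an exception through clingo's Python API. The program and facts are invented for illustration and are not taken from the applications discussed in the paper; the sketch assumes the clingo Python package is installed.

```python
import clingo

program = """
bird(tweety).  bird(sam).  penguin(sam).
flies(X) :- bird(X), not penguin(X).   % a default with an exception
"""

ctl = clingo.Control()
ctl.add("base", [], program)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("Answer set:", m))  # flies(tweety) holds, flies(sam) does not
```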

196 citations


Posted Content
TL;DR: Experimental results demonstrate that the proposed Image-embodied Knowledge Representation Learning models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of the models in learning knowledge representations with images.
Abstract: Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
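A minimal sketch of the aggregation step the abstract describes: per-image features are projected into the entity space and combined into one image-based representation with attention weights. The projection matrix, the dot-product attention, and the dimensions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_images(image_feats, entity_struct_emb, W):
    """image_feats: (n_images, d_img); W maps image features into the entity space."""
    projected = image_feats @ W.T                  # (n_images, d_ent)
    attn = softmax(projected @ entity_struct_emb)  # weight images by compatibility
    return attn @ projected                        # aggregated image-based representation

rng = np.random.default_rng(1)
d_img, d_ent, n_images = 4096, 50, 5               # e.g. CNN features per entity image
imgs = rng.normal(size=(n_images, d_img))
e_struct = rng.normal(size=d_ent)
W = rng.normal(scale=0.01, size=(d_ent, d_img))

e_img = aggregate_images(imgs, e_struct, W)        # used alongside e_struct in a TransE-style loss
print(e_img.shape)                                 # (50,)
```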

105 citations


Proceedings Article
12 Feb 2016
TL;DR: This work presents a system that excels at all the tasks except one and demonstrates that the introduction of a reasoning module significantly improves the performance of an intelligent agent.
Abstract: A group of researchers from Facebook has recently proposed a set of 20 question-answering tasks (Facebook's bAbI dataset) as a challenge for the natural language understanding ability of an intelligent agent. These tasks are designed to measure various skills of an agent, such as: fact based question-answering, simple induction, the ability to find paths, co-reference resolution and many more. Their goal is to aid in the development of systems that can learn to solve such tasks and to allow a proper evaluation of such systems. They show that existing systems cannot fully solve many of these toy tasks. In this work, we present a system that excels at all the tasks except one. The proposed model of the agent uses the Answer Set Programming (ASP) language as the primary knowledge representation and reasoning language along with standard statistical Natural Language Processing (NLP) models. Given a training dataset containing a set of narrations, questions and their answers, the agent jointly uses a translation system, an Inductive Logic Programming algorithm and statistical NLP methods to learn the knowledge needed to answer similar questions. Our results demonstrate that the introduction of a reasoning module significantly improves the performance of an intelligent agent.

98 citations


Proceedings Article
09 Jul 2016
TL;DR: Experimental results show that, by modeling attributes separately, KR-EAR can significantly outperform state-of-the-art KR models in the prediction of entities, attributes, and relations.
Abstract: Distributed knowledge representation (KR) encodes both entities and relations in a low-dimensional semantic space, which has significantly promoted the performance of relation extraction and knowledge reasoning. In many knowledge graphs (KGs), some relations indicate attributes of entities (attributes) and others indicate relations between entities (relations). Existing KR models regard all relations equally and usually suffer from poor accuracy when modeling one-to-many and many-to-one relations, which are mostly composed of attributes. In this paper, we distinguish existing KG relations into attributes and relations, and propose a new KR model with entities, attributes and relations (KR-EAR). Experimental results show that, by modeling attributes separately, KR-EAR can significantly outperform state-of-the-art KR models in the prediction of entities, attributes and relations. The source code of this paper can be obtained from https://github.com/thunlp/KR-EAR.
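A rough illustration of the attribute/relation split described above, assuming translation-style scoring for relational triples and a simple entity-to-value matching score for attributional triples; the entities, scoring functions, and embeddings are invented for illustration and do not reproduce the released thunlp/KR-EAR model.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 50
ent = {e: rng.normal(size=d) for e in ["Paris", "France"]}
rel = {"capital_of": rng.normal(size=d)}
attr_val = {("type", "city"): rng.normal(size=d)}   # embedding of an attribute-value pair

def score_relation(h, r, t):
    """Relational triple, scored translation-style (lower is better)."""
    return np.linalg.norm(ent[h] + rel[r] - ent[t])

def score_attribute(e, a, v):
    """Attributional triple, scored by matching the entity to the value embedding."""
    return np.linalg.norm(ent[e] - attr_val[(a, v)])

print(score_relation("Paris", "capital_of", "France"))
print(score_attribute("Paris", "type", "city"))
```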

92 citations


Proceedings Article
09 Jul 2016
TL;DR: This work proposes a manifold-based embedding principle (ManifoldE), which can be treated as a well-posed algebraic system and expands the position of golden triples from a single point in current models to a manifold.
Abstract: Knowledge graph embedding aims at offering a numerical knowledge representation paradigm by transforming entities and relations into a continuous vector space. However, existing methods cannot characterize the knowledge graph at a fine-grained level, which prevents precise link prediction. There are two reasons for this issue: being an ill-posed algebraic system and adopting an overly strict geometric form. As precise link prediction is critical for knowledge graph embedding, we propose a manifold-based embedding principle (ManifoldE), which can be treated as a well-posed algebraic system that expands the point-wise modeling of current models to manifold-wise modeling. Extensive experiments show that the proposed models achieve substantial improvements over the state-of-the-art baselines, particularly on the precise prediction task, while maintaining high efficiency.
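A hedged sketch of the sphere instance of the manifold principle: rather than requiring the tail embedding to coincide with the single point h + r, golden triples are asked to lie on a sphere of relation-specific radius D_r around it, and deviation from that manifold is penalized. The exact loss and the treatment of the radius here are assumptions based on the abstract, not the authors' full model.

```python
import numpy as np

def manifold_score(h, r, t, D_r):
    """Penalty for deviating from the sphere ||h + r - t||^2 = D_r^2 (lower is better)."""
    m = np.sum((h + r - t) ** 2)
    return (m - D_r ** 2) ** 2

rng = np.random.default_rng(3)
h, r, t = (rng.normal(size=50) for _ in range(3))
print(manifold_score(h, r, t, D_r=1.0))
```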

82 citations


Proceedings Article
01 Dec 2016
TL;DR: In this paper, the concept of Unit Dependency Graphs (UDGs) is proposed to capture and reason about units, and it is shown how UDGs can benefit an arithmetic word problem solver.
Abstract: Math word problems provide a natural abstraction for a range of natural language understanding problems that involve reasoning about quantities, such as interpreting election results, news about casualties, and the financial section of a newspaper. Units associated with the quantities often provide information that is essential to support this reasoning. This paper proposes a principled way to capture and reason about units and shows how it can benefit an arithmetic word problem solver. It presents the concept of Unit Dependency Graphs (UDGs), which provide a compact representation of the dependencies between units of numbers mentioned in a given problem. Inducing the UDG alleviates the brittleness of the unit extraction system and allows for a natural way to leverage domain knowledge about unit compatibility for word problem solving. We introduce a decomposed model for inducing UDGs with minimal additional annotations, and use it to augment the expressions used in the arithmetic word problem solver of (Roy and Roth 2015) via a constrained inference framework. We show that the introduction of UDGs reduces the error of the solver by over 10%, surpassing all existing systems for solving arithmetic word problems. In addition, it also makes the system more robust to adaptation to new vocabulary and equation forms.
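A toy illustration of the kind of unit reasoning a unit dependency graph supports: quantities are nodes, edges record how their units relate, and unit-incompatible operations are rejected. The node names, edge labels, and checks below are assumptions for illustration, not the authors' exact graph formulation or inference model.

```python
# Quantities from a toy problem, each with (unit, rate-denominator-unit or None).
units = {"q1": ("apple", None),            # "5 apples"
         "q2": ("apple", None),            # "3 apples"
         "q3": ("dollar", "apple")}        # "2 dollars per apple" (a rate)

# Edges of the (assumed) unit dependency graph.
edges = {("q1", "q2"): "same_unit",        # adding q1 and q2 is meaningful
         ("q3", "q1"): "rate_of"}          # multiplying q3 by q1 yields dollars

def can_add(a, b):
    """Addition only makes sense for quantities with identical units."""
    return units[a] == units[b]

def multiply_rate(rate, qty):
    """Multiplying a rate by a quantity cancels the denominator unit."""
    num, den = units[rate]
    assert den == units[qty][0], "rate denominator must match the quantity's unit"
    return num

print(can_add("q1", "q2"))         # True
print(multiply_rate("q3", "q1"))   # 'dollar'
```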

76 citations


Journal ArticleDOI
TL;DR: This evaluation suggests ways in which factor-based systems, which are limited by taking as their starting point the representation of cases as sets of factors and so abstracting away the particular facts, can be extended to address open issues in AI and Law by incorporating the case facts to improve the decision.
Abstract: This paper presents a methodology to design and implement programs intended to decide cases, described as sets of factors, according to a theory of a particular domain based on a set of precedent cases relating to that domain. We use Abstract Dialectical Frameworks (ADFs), a recent development in AI knowledge representation, as the central feature of our design method. ADFs will play a role akin to that played by Entity-Relationship models in the design of database systems. First, we explain how the factor hierarchy of the well-known legal reasoning system CATO can be used to instantiate an ADF for the domain of US Trade Secrets. This is intended to demonstrate the suitability of ADFs for expressing the design of legal case-based systems. The method is then applied to two other legal domains often used in the literature of AI and Law. In each domain, the design is provided by the domain analyst expressing the cases in terms of factors organised into an ADF, from which an executable program can be implemented in a straightforward way by taking advantage of the closeness of the acceptance conditions of the ADF to components of an executable program. We evaluate the ease of implementation, the performance and efficacy of the resulting program, the ease of refinement of the program, and the transparency of the reasoning. This evaluation suggests ways in which factor-based systems, which are limited by taking as their starting point the representation of cases as sets of factors and so abstracting away the particular facts, can be extended to address open issues in AI and Law by incorporating the case facts to improve the decision, and by considering justification and reasoning using portions of precedents.
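A minimal sketch of the ADF idea the methodology builds on: each statement carries an acceptance condition, a Boolean function over its parent statements or base-level factors, and the program simply evaluates those conditions. The factors and conditions below are invented for illustration and are not taken from CATO or US Trade Secrets law.

```python
# Base-level factors extracted from a (hypothetical) case description.
factors = {"info_was_secret": True, "security_measures": True, "info_disclosed": False}

# Acceptance conditions: each abstract statement is a Boolean function of its parents.
acceptance = {
    "trade_secret": lambda s: s["info_was_secret"] and s["security_measures"],
    "misappropriation": lambda s: s["trade_secret"] and not s["info_disclosed"],
}

def decide(query, assignment):
    state = dict(assignment)
    # Evaluate statements in a fixed order; a full ADF evaluation would follow the
    # dependency structure (here the insertion order already respects it).
    for node, condition in acceptance.items():
        state[node] = condition(state)
    return state[query]

print(decide("misappropriation", factors))   # True under this assignment
```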

72 citations


Journal ArticleDOI
TL;DR: A systematic mapping study of methods, techniques, modeling frameworks, and tools for and by reuse in security requirements engineering, concluding that most methods should introduce more reusable knowledge to manage security requirements.
Abstract: Security is a concern that must be taken into consideration starting from the early stages of system development. Over the last two decades, researchers and engineers have developed a considerable number of methods for security requirements engineering. Some of them rely on the (re)use of security knowledge. Despite some existing surveys about security requirements engineering, there is not yet any reference for researchers and practitioners that presents in a systematic way the existing proposals, techniques, and tools related to security knowledge reuse in security requirements engineering. The aim of this paper is to fill this gap by drawing a picture of the literature on knowledge and reuse in security requirements engineering. The questions we address are related to methods, techniques, modeling frameworks, and tools for and by reuse in security requirements engineering. We address these questions through a systematic mapping study. The mapping study was a literature review conducted with the goal of identifying, analyzing, and categorizing state-of-the-art research on our topic. This mapping study analyzes more than thirty approaches, covering 20 years of research in security requirements engineering. The contributions can be summarized as follows: (1) a framework was defined for analyzing and comparing the different proposals as well as categorizing future contributions related to knowledge reuse and security requirements engineering; (2) the different forms of knowledge representation and reuse were identified; and (3) previous surveys were updated. We conclude that most methods should introduce more reusable knowledge to manage security requirements.

66 citations


Book ChapterDOI
29 Nov 2016
TL;DR: Real logic is introduced: a framework that seamlessly integrates logical deductive reasoning with efficient, data-driven relational learning, implemented in a deep learning architecture, called Logic Tensor Networks, based on Google's TensorFlow primitives.
Abstract: The paper introduces real logic: a framework that seamlessly integrates logical deductive reasoning with efficient, data-driven relational learning. Real logic is based on a full first-order language. Terms are interpreted as n-dimensional feature vectors, while predicates are interpreted as fuzzy sets. In real logic it is possible to formally define the following two tasks: (i) learning from data in the presence of logical constraints, and (ii) reasoning on formulas exploiting concrete data. We implement real logic in a deep learning architecture, called Logic Tensor Networks, based on Google's TensorFlow primitives. The paper concludes with experiments on a simple but representative example of knowledge completion.
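A hedged numpy stand-in for the grounding described above (the paper's Logic Tensor Networks are built on TensorFlow): terms are n-dimensional feature vectors, a predicate maps tuples of terms to a degree of truth in [0, 1], and formulas are evaluated with fuzzy connectives. The parametrisation of the predicate and the Lukasiewicz-style connectives are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10                                    # feature-vector dimension of terms

def make_predicate(arity):
    """A predicate grounded as a parametrised function returning a truth degree in [0, 1]."""
    W = rng.normal(scale=0.1, size=(arity * n,))
    b = 0.0
    def P(*terms):
        x = np.concatenate(terms)
        return 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return P

Friend = make_predicate(arity=2)
alice, bob = rng.normal(size=n), rng.normal(size=n)

# Fuzzy connectives (Lukasiewicz-style, one common choice for real logic)
def t_and(a, b): return max(0.0, a + b - 1.0)
def implies(a, b): return min(1.0, 1.0 - a + b)

print(Friend(alice, bob), implies(Friend(alice, bob), Friend(bob, alice)))
```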

62 citations


Journal ArticleDOI
TL;DR: A new ontology-based knowledge and reasoning framework for decision support for correct-by-design cyber-physical systems (CPS) enables the development of determinate, provable and executable CPS models supported by sound semantics, strengthening the model-driven approach to CPS design.

Journal ArticleDOI
27 Apr 2016-Sensors
TL;DR: This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched.
Abstract: The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly, especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things but challenging by nature in this context, e.g., due to the mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, and the need for efficient indexing of historical and real-time data. The research community has developed numerous techniques and methods to tackle these problems, as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to the Web of Things are discussed, and an outlook on future research is presented.

Journal ArticleDOI
TL;DR: A formalized schema that can be used to capture clash features and associated solutions during MEP coordination and, more importantly, to capture experiential knowledge to support future decision making is presented.

Journal ArticleDOI
TL;DR: This paper introduces a method to develop knowledge bases for medical decision support systems, with a focus on evaluating such knowledge bases; it develops an ontological-semantic knowledge base, evaluates its information content using the metrics developed, and compares the results to the UMLS backbone knowledge base.
Abstract: Highlights: development of an entropy-based evaluation method to evaluate ontology strength; evaluation of an ontological-semantic ontology using the evaluation method; evaluation of the backbone of the UMLS with this method. In this paper we introduce a method to develop knowledge bases for medical decision support systems, with a focus on evaluating such knowledge bases. Departing from earlier efforts with concept maps, we developed an ontological-semantic knowledge base, evaluated its information content using the metrics we have developed, and then compared the results to the UMLS backbone knowledge base. The evaluation method developed uses the information entropy of concepts but, in contrast to previous approaches, normalizes it against the number of relations to evaluate the information density of knowledge bases of varying sizes. A detailed description of the knowledge base development and evaluation is given using the underlying algorithms, and the results of experimentation with the methods are explained. The main evaluation results show that the normalized metric provides a balanced method of assessment and that our knowledge base is strong: despite having fewer relationships, it is more information-dense and hence more useful. The key contributions in the area of developing expert systems detailed in this paper include: (a) the introduction of a normalized entropy-based evaluation technique to evaluate knowledge bases using graph theory, and (b) the results of experimentation with this technique on existing knowledge bases.
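A hedged sketch of the kind of metric the abstract describes: information entropy over concept usage in a small triple store, normalised by the number of relations so that knowledge bases of different sizes can be compared. The toy triples and the exact normalisation (by relation instances) are assumptions; the paper's formulation may differ.

```python
import math
from collections import Counter

# Toy knowledge base: (subject, relation, object) triples.
triples = [("flu", "has_symptom", "fever"),
           ("flu", "has_symptom", "cough"),
           ("flu", "treated_by", "rest"),
           ("cold", "has_symptom", "cough")]

# Entropy over how often each concept participates in triples.
concept_counts = Counter(c for s, _, o in triples for c in (s, o))
total = sum(concept_counts.values())
entropy = -sum((k / total) * math.log2(k / total) for k in concept_counts.values())

n_relations = len(triples)                   # assumed: normalise by relation instances
normalized_entropy = entropy / n_relations
print(round(entropy, 3), round(normalized_entropy, 3))
```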


Journal ArticleDOI
TL;DR: It is argued that semantic pointers are uniquely well-suited to providing a biologically plausible account of the structured representations that underwrite human cognition.

Proceedings ArticleDOI
16 May 2016
TL;DR: A stochastic graph-based framework for a robot to understand tasks from human demonstrations and perform them with feedback control is proposed, which unifies both knowledge representation and action planning in the same hierarchical data structure, allowing a robots to expand its spatial, temporal, and causal knowledge at varying levels of abstraction.
Abstract: We propose a stochastic graph-based framework for a robot to understand tasks from human demonstrations and perform them with feedback control. It unifies both knowledge representation and action planning in the same hierarchical data structure, allowing a robot to expand its spatial, temporal, and causal knowledge at varying levels of abstraction. The learning system can watch human demonstrations, generalize learned concepts, and perform tasks in new environments, across different robotic platforms. We show the success of our system by having a robot perform a cloth-folding task after watching a few human demonstrations. The robot can accurately reproduce the learned skill, as well as generalize the task to other articles of clothing.

Journal ArticleDOI
01 Apr 2016
TL;DR: A case-based reasoning system for the automatic surveillance and diagnosis of healthcare-associated infections is introduced and results obtained from a real deployment in a public hospital belonging to the Spanish National Health System recognize the usefulness of the system.
Abstract: Nowadays, it is recognized worldwide that healthcare-associated infections are responsible for an increase in patient morbidity and mortality, and for higher costs related to prolonged hospital stays. As electronic health data are increasingly available today, there is a unique opportunity to implement real-time decision support systems for automating the surveillance of healthcare-associated infections. As a consequence, different electronic surveillance systems have been implemented to date with varying degrees of success. However, there have been few instances in which clinical data and physician narratives, which have the potential to significantly improve electronic surveillance alternatives, have been adopted. In this context, the present work introduces a case-based reasoning system for the automatic surveillance and diagnosis of healthcare-associated infections. The developed system makes use of different machine learning techniques in order to (i) automatically extract evidence from different types of data, including clinical unstructured documents, (ii) incorporate static a priori knowledge handled by infection preventionists, and (iii) dynamically generate new knowledge as well as understandable explanations about the system's decisions. Results obtained from a real deployment in a public hospital belonging to the Spanish National Health System, where the system was trained with 2569 samples from 1800 patients over more than 10 consecutive months, confirm the usefulness of the system. Highlights: automatic surveillance of healthcare-associated infections; diagnostic decision support system aiding monitoring and control; case-based reasoning system for classifying nosocomial infections; static rule-based knowledge representation and dynamic induction process; natural language processing for physician narratives and nurses' comments.

Journal ArticleDOI
TL;DR: A new type of FPN model based on intuitionistic fuzzy sets and ordered weighted averaging operators to deal with the problems and improve the effectiveness of the conventional FPNs is proposed.
Abstract: Fuzzy Petri nets (FPNs) are an important modeling tool for knowledge representation and reasoning, which have been extensively used in many fields. However, the conventional FPN models have been criticized as having many shortcomings in the literature. Many different models have been suggested to enhance the performance of FPNs, but deficiencies still exist in these models. First, various types of uncertain knowledge information provided by domain experts are very hard to be modeled by the existing FPN models. Second, the traditional FPNs determine the results of knowledge reasoning using the min, max, and product operators, which may not work well in many practical applications. In this paper, we propose a new type of FPN model based on intuitionistic fuzzy sets and ordered weighted averaging operators to deal with the problems and improve the effectiveness of the conventional FPNs. Moreover, a max-algebra-based reasoning algorithm is developed in order to implement the intuitionistic fuzzy reasoning formally and automatically. Finally, a case study concerning fault diagnosis of aircraft generator is presented to demonstrate the proposed intuitionistic FPN model. Numerical experiments show that the new FPN model is feasible and quite effective for knowledge representation and reasoning of intuitionistic fuzzy expert systems.
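A small sketch of the two ingredients named above, illustrative only: an intuitionistic fuzzy value (membership, non-membership) attached to each antecedent proposition, and an ordered weighted averaging (OWA) operator used to aggregate the antecedent degrees. The weights and the dual weighting used for non-membership are assumptions, not the paper's reasoning algorithm.

```python
import numpy as np

def owa(values, weights):
    """OWA: sort the inputs in descending order, then take the weighted average."""
    values = np.sort(np.asarray(values))[::-1]
    weights = np.asarray(weights)
    assert np.isclose(weights.sum(), 1.0)
    return float(values @ weights)

# Truth degrees of three antecedent places, each as (membership, non-membership).
antecedents = [(0.8, 0.1), (0.6, 0.3), (0.9, 0.05)]
w = [0.5, 0.3, 0.2]

mu = owa([m for m, _ in antecedents], w)         # aggregated membership
nu = owa([v for _, v in antecedents], w[::-1])   # aggregated non-membership (assumed dual weights)
print((round(mu, 3), round(nu, 3)))              # an intuitionistic fuzzy value for the consequent
```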


Journal ArticleDOI
TL;DR: This study aims to review ontology research to explore its trends, gaps, and opportunities in the construction industry and to reduce arbitrariness and subjectivity involved in research topic analysis.
Abstract: Being information-intensive, the construction industry is characterized by multiple agents, including multiple participants from different disciplines, multiple processes with a long-span timeline, and multiple documents generated by various systems. The multistakeholder context of the construction industry creates problems such as poor information interoperability and low productivity arising from difficulties in information reuse. Many researchers have explored the use of ontology to address these issues. This study aims to review ontology research to explore its trends, gaps, and opportunities in the construction industry. A systematic process employing a three-phase search method, objective analysis, and subjective analysis helps to provide enough potential articles related to construction ontology research and to reduce the arbitrariness and subjectivity involved in research topic analysis. As a result, three main research topics aligned with the ontology development lifecycle were derived as follows: information ...

Journal ArticleDOI
TL;DR: This editorial introduces answer set programming, a vibrant research area in computational knowledge representation and declarative programming, and gives a brief overview of the articles that form this special issue.
Abstract: This editorial introduces answer set programming, a vibrant research area in computational knowledge representation and declarative programming. We give a brief overview of the articles that form this special issue on answer set programming and of the main topics they discuss.

Journal ArticleDOI
01 Dec 2016
TL;DR: An airborne target classification problem in an air surveillance system is studied to demonstrate the performance of the proposed HBRBCS in combining uncertain sensor measurements and expert knowledge for classification.
Abstract: In some real-world classification applications, such as target recognition, both training data collected by sensors and expert knowledge may be available. These two types of information are usually independent and complementary, and both are useful for classification. In this paper, a hybrid belief rule-based classification system (HBRBCS) is developed to make joint use of these two types of information. The belief rule structure, which is capable of capturing fuzzy, imprecise, and incomplete causal relationships, is used as the common representation model. With the belief rule structure, a data-driven belief rule base (DBRB) and a knowledge-driven belief rule base (KBRB) are learned from uncertain training data and expert knowledge, respectively. A fusion algorithm is proposed to combine the DBRB and KBRB to obtain an optimal hybrid belief rule base (HBRB). A belief reasoning and decision-making module is then developed to classify a query pattern based on the generated HBRB. An airborne target classification problem in an air surveillance system is studied to demonstrate the performance of the proposed HBRBCS in combining both uncertain sensor measurements and expert knowledge for classification.

DOI
01 Jul 2016
TL;DR: Knowledge graph technology is a critical part of artificial intelligence research, as mentioned in this paper; it establishes a knowledge base with the capacity for semantic processing and open interconnection in order to provide intelligent information services, such as search, question-answering, personalized recommendation, and so on.
Abstract: Knowledge graph technology is a critical part of artificial intelligence research. It establishes a knowledge base with the capacity for semantic processing and open interconnection in order to provide intelligent information services, such as search, question-answering, personalized recommendation, and so on. This article first presents a comprehensive study of the definitions and architectures of knowledge graphs. It then summarizes recent advances in knowledge graphs, including knowledge extraction, knowledge representation, knowledge fusion, and knowledge reasoning, with typical applications. Finally, the article concludes with future challenges of knowledge graphs.

Proceedings Article
09 Jul 2016
TL;DR: This paper introduces a general technique for obtaining lower bounds on Decomposable Negation Normal Form (DNNFs), one of the most widely studied and succinct representation languages, by relating the size of DNNFs to multi-partition communication complexity.
Abstract: Choosing a language for knowledge representation and reasoning involves a trade-off between two competing desiderata: succinctness (the encoding should be small) and tractability (the language should support efficient reasoning algorithms). The area of knowledge compilation is devoted to the systematic study of representation languages along these two dimensions--in particular, it aims to determine the relative succinctness of languages. Showing that one language is more succinct than another typically involves proving a nontrivial lower bound on the encoding size of a carefully chosen function, and the corresponding arguments increase in difficulty with the succinctness of the target language. In this paper, we introduce a general technique for obtaining lower bounds on Decomposable Negation Normal Form (DNNFs), one of the most widely studied and succinct representation languages, by relating the size of DNNFs to multi-partition communication complexity. This allows us to directly translate lower bounds from the communication complexity literature into lower bounds on the size of DNNF representations. We use this approach to prove exponential separations of DNNFs from deterministic DNNFs and of CNF formulas from DNNFs.

Proceedings Article
12 Feb 2016
TL;DR: The ASP competition series aims at assessing and promoting the evolution of ASP systems and applications, and its growing range of challenging application-oriented benchmarks inspires and showcases continuous advancements of the state of the art in ASP.
Abstract: Answer Set Programming (ASP) is a declarative programming paradigm with roots in logic programming, knowledge representation, and non-monotonic reasoning. The ASP competition series aims at assessing and promoting the evolution of ASP systems and applications. Its growing range of challenging application-oriented benchmarks inspires and showcases continuous advancements of the state of the art in ASP.

Journal ArticleDOI
TL;DR: An ontology-driven decision support system for facilitating the selection of a domestic solar hot water system, which delivers certain advantages, such as the sustainability of the decision support system itself and its adaptability/flexibility in decision-making policies, due to its semantic (ontological) nature.

Journal ArticleDOI
TL;DR: Using an industry test case, the paper demonstrates the effectiveness of the proposed framework in terms of fulfilling customer orders with lower production and emissions costs, compared to the results generated using existing tools.
Abstract: Considering the need for more effective decision support in the context of distributed manufacturing, this paper develops an advanced analytics framework for configuring supply chain (SC) networks. The proposed framework utilises a distributed multi-agent system architecture to deploy fuzzy rough sets-based algorithms for knowledge elicitation and representation. A set of historical sales data, including network node-related information, is used together with the relevant details of product families to predict SC configurations capable of fulfilling desired customer orders. Multiple agents such as data retrieval agent, knowledge acquisition agent, knowledge representation agent, configuration predictor agent, evaluator agent and dispatching agent are used to help execute a broad spectrum of SC configuration decisions. The proposed framework considers multiple product variants and sourcing options at each network node, as well as multiple performance objectives. It also captures decisions that span the ent...

Journal ArticleDOI
TL;DR: The generic nature of OntoDT enables it to support a wide range of other applications, especially in combination with other domain-specific ontologies: the construction of data mining workflows, annotation of software and algorithms, semantic annotation of scientific articles, etc.

Journal ArticleDOI
TL;DR: This paper aims to present the work in-progress developed by the autonomous robotics (AuR) subgroup, the first group that adopts a systematic approach to develop ontologies consisting of specific concepts and axioms that are commonly used in autonomous robots.
Abstract: The IEEE Ontologies for Robotics and Automation Working Group was divided into subgroups in charge of studying industrial robotics, service robotics and autonomous robotics. This paper aims to present the work in progress developed by the autonomous robotics (AuR) subgroup. This group aims to extend the core ontology for robotics and automation to represent more specific concepts and axioms that are commonly used in autonomous robots. For autonomous robots, various concepts for aerial robots, underwater robots and ground robots are described. Components of an autonomous system are defined, such as robotic platforms, actuators, sensors, control, state estimation, path planning, perception and decision-making. AuR has identified the core concepts and domains needed to create an ontology for autonomous robots. AuR aims to create a standard ontology to represent the knowledge and reasoning needed to create autonomous systems that comprise robots able to operate in the air, on the ground and underwater. The concepts in the developed ontology will endow a robot with autonomy, that is, the ability to perform desired tasks in unstructured environments without continuous explicit human guidance. Creating a standard for knowledge representation and reasoning in autonomous robotics will have a significant impact on all R&A domains, such as knowledge transmission among agents, including autonomous robots and humans. This will facilitate communication among them and also provide reasoning capabilities involving the knowledge of all elements using the ontology, resulting in improved autonomy of autonomous systems. The autonomy will have considerable impact on how robots interact with humans. As a result, the use of robots will further benefit our society: many tedious tasks that currently can only be performed by humans will be performed by robots, which will further improve the quality of life. To the best of the authors' knowledge, AuR is the first group that adopts a systematic approach to develop ontologies consisting of specific concepts and axioms that are commonly used in autonomous robots.