
Showing papers on "Ontology (information science)" published in 2020


Posted Content
TL;DR: A formal context model based on ontology using OWL is proposed to address issues including semantic context representation, context reasoning and knowledge sharing, context classification, context dependency and quality of context.
Abstract: Computing is becoming increasingly mobile and pervasive; these changes imply that applications and services must be aware of and adapt to their changing contexts in highly dynamic environments. Today, building context-aware systems is a complex task due to the lack of appropriate infrastructure support in intelligent environments. A context-aware infrastructure requires an appropriate context model to represent, manipulate and access context information. In this paper, we propose a formal context model based on ontology using OWL to address issues including semantic context representation, context reasoning and knowledge sharing, context classification, context dependency and quality of context. The main benefit of this model is the ability to reason about various contexts. Based on our context model, we also present a Service-Oriented Context-Aware Middleware (SOCAM) architecture for building context-aware services.
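For illustration, the following is a minimal sketch (not the SOCAM model itself) of how an OWL-style context fact such as "a person is located in a room" could be represented and queried with rdflib in Python; the namespace and the Person/Room/locatedIn terms are invented for this example.

from rdflib import Graph, Namespace, RDF, RDFS

# Hypothetical context namespace; the real SOCAM ontology defines its own vocabulary.
CTX = Namespace("http://example.org/context#")

g = Graph()

# A tiny OWL-style context model: two classes, a property, and one context fact.
g.add((CTX.Person, RDF.type, RDFS.Class))
g.add((CTX.Room, RDF.type, RDFS.Class))
g.add((CTX.locatedIn, RDFS.domain, CTX.Person))
g.add((CTX.locatedIn, RDFS.range, CTX.Room))
g.add((CTX.alice, RDF.type, CTX.Person))
g.add((CTX.kitchen, RDF.type, CTX.Room))
g.add((CTX.alice, CTX.locatedIn, CTX.kitchen))

# A context-aware service could subscribe to the results of queries like this one.
query = """
PREFIX ctx: <http://example.org/context#>
SELECT ?person ?room WHERE {
    ?person a ctx:Person ;
            ctx:locatedIn ?room .
}
"""
for person, room in g.query(query):
    print(person, "is located in", room)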

438 citations


Book ChapterDOI
31 May 2020
TL;DR: YAGO 4, the latest version of the YAGO knowledge base, is presented; it reconciles the rigorous typing and constraints of schema.org with the rich instance data of Wikidata and has a consistent ontology that allows semantic reasoning with OWL 2 description logics.
Abstract: YAGO is one of the large knowledge bases in the Linked Open Data cloud. In this resource paper, we present its latest version, YAGO 4, which reconciles the rigorous typing and constraints of schema.org with the rich instance data of Wikidata. The resulting resource contains 2 billion type-consistent triples for 64 million entities, and has a consistent ontology that allows semantic reasoning with OWL 2 description logics.
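As an illustration of consuming schema.org-typed triples of the kind YAGO 4 provides, the sketch below parses a small hand-written Turtle fragment and runs a type-based SPARQL query with rdflib; the entity IRIs are invented and are not actual YAGO identifiers.

from rdflib import Graph

# A tiny, hand-written Turtle fragment in the spirit of YAGO 4's schema.org typing.
# The entity IRIs below are illustrative, not actual YAGO identifiers.
data = """
@prefix schema: <http://schema.org/> .
@prefix ex:     <http://example.org/yago-like/> .

ex:Douglas_Adams a schema:Person ;
    schema:birthDate "1952-03-11" .

ex:The_Hitchhikers_Guide a schema:Book ;
    schema:author ex:Douglas_Adams .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Type-consistent querying: find every schema:Book together with its author.
q = """
PREFIX schema: <http://schema.org/>
SELECT ?book ?author WHERE {
    ?book a schema:Book ;
          schema:author ?author .
}
"""
for book, author in g.query(q):
    print(book, "written by", author)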

124 citations


Proceedings ArticleDOI
27 Jan 2020
TL;DR: This paper focuses on explaining Doctor AI, a multi-label classifier that takes the clinical history of a patient as input in order to predict the next visit, and shows how exploiting the temporal dimension in the data and the domain knowledge encoded in the medical ontology improves the quality of the mined explanations.
Abstract: Several recent advancements in Machine Learning involve black-box models: algorithms that do not provide human-understandable explanations in support of their decisions. This limitation hampers the fairness, accountability and transparency of these models; the field of eXplainable Artificial Intelligence (XAI) tries to solve this problem by providing human-understandable explanations for black-box models. However, healthcare datasets (and the related learning tasks) often present peculiar features, such as sequential data, multi-label predictions, and links to structured background knowledge. In this paper, we introduce Doctor XAI, a model-agnostic explainability technique able to deal with multi-labeled, sequential, ontology-linked data. We focus on explaining Doctor AI, a multi-label classifier which takes as input the clinical history of a patient in order to predict the next visit. Furthermore, we show how exploiting the temporal dimension in the data and the domain knowledge encoded in the medical ontology improves the quality of the mined explanations.

113 citations


Proceedings ArticleDOI
27 Jan 2020
TL;DR: In this article, the authors examine three key factors within the person subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology: the stagnant concept vocabulary of WordNet, the attempt at exhaustive illustration of all categories with images, and the inequality of representation in the images within concepts.
Abstract: Computer vision technology is being used by many but remains representative of only a few. People have reported misbehavior of computer vision models, including offensive prediction results and lower performance for underrepresented groups. Current computer vision models are typically developed using datasets consisting of manually annotated images or videos; the data and label distributions in these datasets are critical to the models' behavior. In this paper, we examine ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods. We consider three key factors within the person subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology: (1) the stagnant concept vocabulary of WordNet, (2) the attempt at exhaustive illustration of all categories with images, and (3) the inequality of representation in the images within concepts. We seek to illuminate the root causes of these concerns and take the first steps to mitigate them constructively.

108 citations


Journal ArticleDOI
TL;DR: This work constructed a technology semantic network (TechNet) that covers the elemental concepts in all domains of technology and their semantic associations by mining the complete U.S. patent database from 1976.
Abstract: The growing developments in general semantic networks, knowledge graphs and ontology databases have motivated us to build a large-scale comprehensive semantic network of technology-related data for engineering knowledge discovery, technology search and retrieval, and artificial intelligence for engineering design and innovation. Specifically, we constructed a technology semantic network (TechNet) that covers the elemental concepts in all domains of technology and their semantic associations by mining the complete U.S. patent database from 1976 onward. To derive the TechNet, natural language processing techniques were utilized to extract terms from massive patent texts and recent word embedding algorithms were employed to vectorize such terms and establish their semantic relationships. We report and evaluate the TechNet for retrieving terms and their pairwise relevance that is meaningful from a technology and engineering design perspective. The TechNet may serve as an infrastructure to support a wide range of applications, e.g., technical text summaries, search query predictions, relational knowledge discovery, and design ideation support, in the context of engineering and technology, and complement or enrich existing semantic databases. To enable such applications, the TechNet is made public via an online interface and APIs for public users to retrieve technology-related terms and their relevancies.
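A minimal sketch of the general recipe described above (term extraction followed by word embeddings and pairwise relevance lookup) is shown below using gensim's Word2Vec on a toy corpus; the real TechNet pipeline mines the full U.S. patent database with its own term-extraction and embedding choices, and the sentences and parameters here are illustrative only.

from gensim.models import Word2Vec

# Pretend these token lists came from a term-extraction step over patent texts.
sentences = [
    ["lithium", "ion", "battery", "electrode", "cathode"],
    ["battery", "cathode", "material", "energy", "density"],
    ["neural", "network", "image", "classification", "model"],
    ["image", "sensor", "pixel", "array", "camera"],
]

# Train small embeddings; vector_size/window/min_count/epochs are toy settings.
model = Word2Vec(sentences, vector_size=32, window=3, min_count=1, epochs=200, seed=1)

# Pairwise relevance between technology terms, in the style of a TechNet lookup.
print("similarity(battery, cathode):", model.wv.similarity("battery", "cathode"))
print("terms most related to 'image':", model.wv.most_similar("image", topn=3))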

106 citations


Journal ArticleDOI
TL;DR: Issues of trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, are often either nonexistent, unsubstantial, or unexplained.
Abstract: Issues of trustworthiness in qualitative leisure research, often demonstrated through particular techniques of reliability and/or validity, are often either nonexistent, unsubstantial, or unexplained ...

101 citations


Journal ArticleDOI
TL;DR: The authors note that qualitative studies are often accompanied by quotations from interviews or similar data sources, and that it is essential to critically explore the general ...
Abstract: Qualitative studies are often found to be accompanied by quotations from interviews or similar data sources. As with any methodological tradition, it is essential to critically explore the general ...

95 citations


Journal ArticleDOI
TL;DR: The Building Topology Ontology (BOT) is introduced as a core vocabulary to the BIM Maturity Level 3 approach and provides a high-level description of the topology of buildings including storeys and spaces, the building elements they contain, and their web-friendly 3D models.
Abstract: Actors in the Architecture, Engineering, Construction, Owner and Operation (AECOO) industry traditionally exchange building models as files. The Building Information Modelling (BIM) methodology advocates the seamless exchange of all information between related stakeholders using digital technologies. The ultimate evolution of the methodology, BIM Maturity Level 3, envisions interoperable, distributed, web-based, interdisciplinary information exchange among stakeholders across the life-cycle of buildings. The World Wide Web Consortium Linked Building Data Community Group (W3C LBD-CG) hypothesises that the Linked Data models and best practices can be leveraged to achieve this vision in modern web-based applications. In this paper, we introduce the Building Topology Ontology (BOT) as a core vocabulary to this approach. It provides a high-level description of the topology of buildings including storeys and spaces, the building elements they contain, and their web-friendly 3D models. We describe how existing applications produce and consume datasets combining BOT with other ontologies that describe product catalogues, sensor observations, or Internet of Things (IoT) devices, effectively implementing BIM Maturity Level 3. We evaluate our approach by exporting and querying three real-life large building models.
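As an illustration of the kind of topology description BOT enables, the sketch below builds a tiny building model in Turtle and queries it with rdflib in Python; the class and property names follow our reading of the published BOT vocabulary (https://w3id.org/bot#) and should be verified against the ontology, and the building itself is invented.

from rdflib import Graph

data = """
@prefix bot: <https://w3id.org/bot#> .
@prefix ex:  <http://example.org/building#> .

ex:Building1 a bot:Building ;
    bot:hasStorey ex:Storey1 .

ex:Storey1 a bot:Storey ;
    bot:hasSpace ex:Kitchen , ex:LivingRoom .

ex:Kitchen a bot:Space ;
    bot:containsElement ex:Radiator1 .
"""

g = Graph()
g.parse(data=data, format="turtle")

# List every space per storey, the kind of topology query BOT is meant to support.
q = """
PREFIX bot: <https://w3id.org/bot#>
SELECT ?storey ?space WHERE {
    ?storey a bot:Storey ;
            bot:hasSpace ?space .
}
"""
for storey, space in g.query(q):
    print(storey, "has space", space)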

84 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explicate pragmatism as a relevant and useful paradigm for qualitative research on organizational processes and focus on three core methodological principles that underlie their approach.
Abstract: This article explicates pragmatism as a relevant and useful paradigm for qualitative research on organizational processes. The article focuses on three core methodological principles that underlie ...

82 citations


Book
30 Sep 2020
TL;DR: This book discusses ontologies and applications of ontologies in biomedicine, ontologies for data organization, integration, and searching, computer reasoning with ontologies, and other topics.
Abstract:
BASIC CONCEPTS Ontologies and Applications of Ontologies in Biomedicine What Is an Ontology? Ontologies and Bio-Ontologies Ontologies for Data Organization, Integration, and Searching Computer Reasoning with Ontologies Typical Applications of Bio-Ontologies Mathematical Logic and Inference Representation and Logic Propositional Logic First-Order Logic Sets Description Logic Probability Theory and Statistics for Bio-Ontologies Probability Theory Bayes' Theorem Introduction to Graphs Bayesian Networks Ontology Languages OBO RDF and RDFS OWL and the Semantic Web
BIO-ONTOLOGIES The Gene Ontology A Tool for the Unification of Biology Three Subontologies Relations in GO GO Annotations GO Slims Upper-Level Ontologies Basic Formal Ontology The Big Divide: Continuants and Occurrents Universals and Particulars Relation Ontology Revisiting Gene Ontology Revisiting GO Annotations A Selective Survey of Bio-Ontologies OBO Foundry The National Center for Biomedical Ontology Bio-Ontologies What Makes a Good Ontology?
GRAPH ALGORITHMS FOR BIO-ONTOLOGIES Overrepresentation Analysis Definitions Term-for-Term Multiple Testing Problem Term-for-Term Analysis: An Extended Example Inferred Annotations Lead to Statistical Dependencies in Ontology DAGs Parent-Child Algorithms Parent-Child Analysis: An Extended Example Topology-Based Algorithms Topology-elim: An Extended Example Other Approaches Summary Model-Based Approaches to GO Analysis A Probabilistic Generative Model for GO Enrichment Analysis A Bayesian Network Model MGSA: An Extended Example Summary Semantic Similarity Information Content in Ontologies Semantic Similarity of Genes and Other Items Annotated by Ontology Terms Statistical Significance of Semantic Similarity Scores Frequency-Aware Bayesian Network Searches in Attribute Ontologies Modeling Queries Probabilistic Inference for the Items Parameter-Augmented Network The Frequency-Aware Network Benchmark
INFERENCE IN ONTOLOGIES Inference in the Gene Ontology Inference over GO Edges Cross-Products and Logical Definitions RDFS Semantics and Inference Definitions Interpretations RDF Entailment RDFS Entailment Entailment Rules Summary Inference in OWL Ontologies The Semantics of Equality The Semantics of Properties The Semantics of Classes The Semantics of the Schema Vocabulary Conclusions Algorithmic Foundations of Computational Inference The Tableau Algorithm Developer Libraries SPARQL SPARQL Queries Combining RDF Graphs Conclusions
Appendix A: An Overview of R Appendix B: Information Content and Entropy Appendix C: W3C Standards: XML, URIs, and RDF Appendix D: W3C Standards: OWL Bibliography Index
Exercises and Further Reading appear at the end of each chapter.
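As a pointer to the kind of material in the graph-algorithms part, the sketch below shows the generic "term-for-term" overrepresentation test (a one-sided hypergeometric test per GO term followed by a Bonferroni correction) in Python with scipy; the gene counts are made up for illustration and the code is not taken from the book.

from scipy.stats import hypergeom

population = 10000   # annotated genes in the population
study = 200          # genes in the study set (e.g., differentially expressed)

# term -> (genes annotated to the term in the population, and in the study set)
terms = {
    "GO:0006915 apoptotic process":  (300, 18),
    "GO:0007049 cell cycle":         (500, 12),
    "GO:0008150 biological_process": (9000, 180),
}

raw = {}
for term, (n_pop, n_study) in terms.items():
    # P(X >= n_study) when drawing `study` genes from `population`,
    # of which `n_pop` carry the annotation.
    raw[term] = hypergeom.sf(n_study - 1, population, n_pop, study)

for term, p in raw.items():
    p_adj = min(1.0, p * len(terms))   # Bonferroni adjustment over the tested terms
    print(f"{term}: raw p = {p:.3g}, adjusted p = {p_adj:.3g}")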

80 citations


Posted Content
TL;DR: In this article, the authors describe a technique, called knowledge patterns, for helping construct axiom-rich, formal ontologies, based on identifying and explicitly representing recurring patterns of knowledge (theory schemata) in the ontology, and then stating how those patterns map onto domain-specific concepts.
Abstract: This paper describes a new technique, called "knowledge patterns", for helping construct axiom-rich, formal ontologies, based on identifying and explicitly representing recurring patterns of knowledge (theory schemata) in the ontology, and then stating how those patterns map onto domain-specific concepts in the ontology. From a modeling perspective, knowledge patterns provide an important insight into the structure of a formal ontology: rather than viewing a formal ontology simply as a list of terms and axioms, the knowledge patterns approach views it as a collection of abstract, modular theories (the "knowledge patterns") plus a collection of modeling decisions stating how different aspects of the world can be modeled using those theories. Knowledge patterns make both those abstract theories and their mappings to the domain of interest explicit, thus making modeling decisions clear, and avoiding some of the ontological confusion that can otherwise arise. In addition, from a computational perspective, knowledge patterns provide a simple and computationally efficient mechanism for facilitating knowledge reuse. We describe the technique and an application built using it, and then critique its strengths and weaknesses. We conclude that this technique enables us to better explicate both the structure and modeling decisions made when constructing a formal axiom-rich ontology.
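A toy sketch of the core idea, an abstract theory schema whose placeholder symbols are explicitly mapped onto domain-specific terms, is given below; the pattern, axioms, and mapping are invented purely for illustration and do not reproduce the paper's formalism.

# An abstract "container" pattern: axiom templates written over placeholder symbols.
CONTAINER_PATTERN = [
    "contains(?Container, ?Item) -> inside(?Item, ?Container)",
    "inside(?Item, ?Container) & portable(?Item) -> removable(?Item, ?Container)",
]

def instantiate(pattern, mapping):
    """Rewrite each axiom template by substituting domain terms for placeholders."""
    axioms = []
    for axiom in pattern:
        for placeholder, domain_term in mapping.items():
            axiom = axiom.replace(placeholder, domain_term)
        axioms.append(axiom)
    return axioms

# Modeling decision: model a computer's memory as a container of processes.
memory_mapping = {"?Container": "Memory", "?Item": "Process"}
for axiom in instantiate(CONTAINER_PATTERN, memory_mapping):
    print(axiom)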

Journal ArticleDOI
TL;DR: The research trend in heritage modelling is analyzed by comparing the attention paid by researchers before and during the 2010s, showing that photogrammetry has consistently been the most popular method in heritage modelling.

Book ChapterDOI
13 Feb 2020

Journal ArticleDOI
TL;DR: A hybrid solution for sentence-level aspect-based sentiment analysis using A Lexicalized Domain Ontology and a Regularized Neural Attention model (ALDONAr), where the bidirectional context attention mechanism is introduced to measure the influence of each word in a given sentence on an aspect’s sentiment value.
Abstract: Aspect-based sentiment analysis allows one to compute the sentiment for an aspect in a certain context. One problem in this analysis is that words possibly carry different sentiments for different aspects. Moreover, an aspect’s sentiment might be highly influenced by the domain-specific knowledge. In order to tackle these issues, in this paper, we propose a hybrid solution for sentence-level aspect-based sentiment analysis using A Lexicalized Domain Ontology and a Regularized Neural Attention model (ALDONAr). The bidirectional context attention mechanism is introduced to measure the influence of each word in a given sentence on an aspect’s sentiment value. The classification module is designed to handle the complex structure of a sentence. The manually created lexicalized domain ontology is integrated to utilize the field-specific knowledge. Compared to the existing ALDONA model, ALDONAr uses BERT word embeddings, regularization, the Adam optimizer, and different model initialization. Moreover, its classification module is enhanced with two 1D CNN layers providing superior results on standard datasets.
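For illustration, the sketch below shows a sentence-classification head with two 1D CNN layers over precomputed word embeddings in PyTorch, echoing the abstract's mention of BERT embeddings and two 1D CNN layers; all dimensions, kernel sizes, and pooling choices are assumptions rather than the authors' exact ALDONAr module.

import torch
import torch.nn as nn

class Cnn1dSentimentHead(nn.Module):
    def __init__(self, emb_dim=768, hidden=128, num_classes=3):
        super().__init__()
        self.conv1 = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        self.out = nn.Linear(hidden, num_classes)  # e.g., negative / neutral / positive

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, emb_dim), e.g., contextual BERT vectors
        x = embeddings.transpose(1, 2)            # -> (batch, emb_dim, seq_len)
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x = x.max(dim=2).values                   # max-pool over the sequence
        return self.out(x)                        # class logits

# Toy forward pass with random tensors standing in for BERT output.
head = Cnn1dSentimentHead()
logits = head(torch.randn(2, 20, 768))
print(logits.shape)   # torch.Size([2, 3])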

Journal ArticleDOI
TL;DR: This work uses the benchmark track provided by the Ontology Alignment Evaluation Initiative (OAEI) to test the proposal's performance, and the comparison with state-of-the-art ontology matching systems shows that the approach can efficiently determine high-quality ontology alignments.
Abstract: Ontology matching aims at determining identical entities, which can effectively solve the ontology heterogeneity problem and enable collaboration among ontology-based intelligent systems. Typically, an ontology consists of a set of concepts described by various properties, and together they define a space in which each distinct concept and property represents one dimension. Therefore, it is effective to model an ontology in a vector space and to use a vector-space-based similarity measure to calculate the similarity of two entities. In this work, the entities' structure information is utilized to model an ontology in a vector space, and then their linguistic information is used to reduce the number of dimensions, which improves the efficiency of the similarity calculation and the entity matching process. After that, a discrete optimization model is constructed for the ontology matching problem, and a compact Evolutionary Algorithm (cEA) based ontology matching technique is proposed to efficiently address it. The experiment uses the benchmark track provided by the Ontology Alignment Evaluation Initiative (OAEI) to test our proposal's performance, and the comparison with state-of-the-art ontology matching systems shows that our approach can efficiently determine high-quality ontology alignments.
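The vector-space side of this idea can be illustrated with a few lines of Python: entities encoded as vectors (here, hand-made toy features rather than the paper's structure- and linguistics-based encoding) are compared by cosine similarity and aligned with a simple threshold; the cEA optimization itself is not shown.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy entity vectors for two small ontologies.
onto_a = {"Person": np.array([1.0, 0.0, 0.2]), "Paper": np.array([0.1, 1.0, 0.0])}
onto_b = {"Human":  np.array([0.9, 0.1, 0.3]), "Article": np.array([0.0, 1.0, 0.1])}

threshold = 0.8
alignment = []
for name_a, vec_a in onto_a.items():
    # Pick the best-matching entity in the other ontology, if it is similar enough.
    best = max(onto_b.items(), key=lambda kv: cosine(vec_a, kv[1]))
    if cosine(vec_a, best[1]) >= threshold:
        alignment.append((name_a, best[0], round(cosine(vec_a, best[1]), 3)))

print(alignment)   # e.g., [('Person', 'Human', ...), ('Paper', 'Article', ...)]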

Journal ArticleDOI
TL;DR: The proposed approach for testing autonomous driving takes ontologies describing the environment of autonomous vehicles and automatically converts them to test cases that are used in a simulation environment to verify automated driving functions; the conversion relies on combinatorial testing.
Abstract: Context: Ontologies are known as a formal and explicit conceptualization of entities, their interfaces, behaviors, and relationships. They have been applied in various application domains such as autonomous driving, where ontologies are used for decision making, traffic description, auto-pilot, etc. It has always been a challenge to test the corresponding safety-critical software systems in autonomous driving that have been playing an increasingly important role in our daily routines. Objective: Failures in these systems potentially cause not only great financial loss but also the loss of lives. Therefore, it is vital to obtain and cover as many critical driving scenarios as possible during autonomous driving testing to ensure that the system can always reach a fail-safe state under different circumstances. Method: We outline a general framework for testing, verification, and validation for automated and autonomous driving functions. The introduced method makes use of ontologies for describing the environment of autonomous vehicles and converts them to input models for combinatorial testing. The combinatorial test suite comprises abstract test cases that are mapped to concrete test cases that can be executed using simulation environments. Results: We discuss in detail how to automatically convert ontologies to the corresponding combinatorial testing input models. Specifically, we present two conversion algorithms and compare their applicability using ontologies of different sizes. We also carried out a case study to further demonstrate the practical value of applying ontology-based test generation in industrial settings. Conclusion: The proposed approach for testing autonomous driving takes ontologies describing the environment of autonomous vehicles and automatically converts them to test cases that are used in a simulation environment to verify automated driving functions. The conversion relies on combinatorial testing. The first experimental results, relying on an example from the automotive industry, indicate that the approach can be used in practice.
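The final step, generating abstract test cases from an input model of parameters and allowed values, can be illustrated as follows; the parameters are invented, and the exhaustive product is only a stand-in for the covering-array (e.g., pairwise) generation that a real combinatorial testing tool would perform.

from itertools import product

# Input model derived (hypothetically) from an environment ontology.
input_model = {
    "weather":   ["clear", "rain", "fog"],
    "road_type": ["highway", "urban"],
    "obstacle":  ["none", "pedestrian", "vehicle"],
}

parameters = list(input_model)
abstract_test_cases = [
    dict(zip(parameters, values))
    for values in product(*(input_model[p] for p in parameters))
]

print(len(abstract_test_cases), "abstract test cases, e.g.:", abstract_test_cases[0])
# Each abstract case would then be mapped to a concrete, executable simulation scenario.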

Journal ArticleDOI
11 Feb 2020-Sensors
TL;DR: This work presents a light model to semantically annotate streams, IoT-Stream, which takes advantage of common knowledge sharing of the semantics while keeping the inferences and queries simple, and presents tools that can be used in conjunction with the IoT-Stream model to facilitate the use of semantics in IoT.
Abstract: With the proliferation of sensors and IoT technologies, stream data are increasingly stored and analysed, but rarely combined, due to the heterogeneity of sources and technologies. Semantics are increasingly used to share sensory data, but not so much for annotating stream data. Semantic models for stream annotation are scarce, as generally, semantics are heavy to process and not ideal for Internet of Things (IoT) environments, where the data are frequently updated. We present a light model to semantically annotate streams, IoT-Stream. It takes advantage of common knowledge sharing of the semantics, while keeping the inferences and queries simple. Furthermore, we present a system architecture to demonstrate the adoption of the semantic model, and provide examples of instantiation of the system for different use cases. The system architecture is based on commonly used architectures in the field of IoT, such as web services, microservices and middleware. Our system approach includes the semantic annotations that take place in the pipeline of IoT services and sensory data analytics. It includes modules needed to annotate, consume, and query data annotated with IoT-Stream. In addition to this, we present tools that can be used in conjunction with the IoT-Stream model to facilitate the use of semantics in IoT.
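For illustration, the sketch below annotates one observation of a stream and queries it with rdflib in Python; the IoT-Stream namespace IRI and term names are assumptions made for this example and should be checked against the published ontology, while sosa: refers to the W3C/OGC SOSA ontology commonly combined with stream annotations.

from rdflib import Graph

# The iots: IRI and terms below are assumed for illustration; verify against the
# published IoT-Stream ontology before reuse.
data = """
@prefix iots: <http://purl.org/iot/ontology/iot-stream#> .
@prefix sosa: <http://www.w3.org/ns/sosa/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/deployment#> .

ex:TemperatureStream1 a iots:IotStream .

ex:Obs42 a iots:StreamObservation ;
    iots:belongsTo ex:TemperatureStream1 ;
    sosa:hasSimpleResult "21.5"^^xsd:float ;
    sosa:resultTime "2020-06-01T12:00:00Z"^^xsd:dateTime .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Retrieve all annotated observations with their values, as a consuming service might.
q = """
PREFIX iots: <http://purl.org/iot/ontology/iot-stream#>
PREFIX sosa: <http://www.w3.org/ns/sosa/>
SELECT ?obs ?value WHERE {
    ?obs iots:belongsTo ?stream ;
         sosa:hasSimpleResult ?value .
}
"""
for obs, value in g.query(q):
    print(obs, "=", value)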

Journal ArticleDOI
TL;DR: This paper reviews various approaches, systems, and challenges of automatic ontology construction from text and discusses ways the ontology construction process could be enhanced in the future by presenting techniques from shallow learning to deep learning (DL).
Abstract: The explosive growth of textual data on the web, coupled with the increasing demand for ontologies to promote the semantic web, has made automatic ontology construction from text a very promising research area. Ontology learning (OL) from text is a process that aims to (semi-)automatically extract and represent the knowledge from text in machine-readable form. Ontology is considered one of the main cornerstones of representing knowledge in a more meaningful way on the semantic web. The usage of ontologies has proven to be beneficial and efficient in different applications (e.g. information retrieval, information extraction, and question answering). Nevertheless, manual construction of ontologies is a time-consuming, extremely laborious, and costly process. In recent years, many approaches and systems that try to automate the construction of ontologies have been developed. This paper reviews various approaches, systems, and challenges of automatic ontology construction from text. In addition, it also discusses ways the ontology construction process could be enhanced in the future by presenting techniques from shallow learning to deep learning (DL).

Journal ArticleDOI
TL;DR: An ontology-driven aspect-based sentiment analysis model is proposed to measure the general public's opinions regarding infectious diseases expressed in Spanish, employing a case study of tweets concerning the Zika, Dengue and Chikungunya viruses in Latin America.

Journal ArticleDOI
TL;DR: The observed trends show that formal models and ontologies will play an even more essential role in I4.0 systems as interoperability becomes more of a focus and that the new generation of linkable data sources should be based on semantically enriched information.

Journal ArticleDOI
25 Sep 2020
TL;DR: The added value of the Ontologies Community of Practice (CoP) of the CGIAR Platform for Big Data in Agriculture for harnessing relevant expertise in ontology development and identifying innovative solutions that support quality data annotation is discussed.
Abstract: Heterogeneous and multidisciplinary data generated by research on sustainable global agriculture and agrifood systems requires quality data labeling or annotation in order to be interoperable. As recommended by the FAIR principles, data, labels, and metadata must use controlled vocabularies and ontologies that are popular in the knowledge domain and commonly used by the community. Despite the existence of robust ontologies in the Life Sciences, there is currently no comprehensive full set of ontologies recommended for data annotation across agricultural research disciplines. In this paper, we discuss the added value of the Ontologies Community of Practice (CoP) of the CGIAR Platform for Big Data in Agriculture for harnessing relevant expertise in ontology development and identifying innovative solutions that support quality data annotation. The Ontologies CoP stimulates knowledge sharing among stakeholders, such as researchers, data managers, domain experts, experts in ontology design, and platform development teams.

Journal ArticleDOI
31 Jan 2020
TL;DR: The paper discusses the fundamental role of formal ontological theories in properly grounding the construction of representation languages, as well as methodological and computational tools for supporting the engineering of ontologies (in the former sense), in the context of FAIR.
Abstract: According to the FAIR guiding principles, one of the central attributes for maximizing the added value of information artifacts is interoperability. In this paper, I discuss the importance, and pro...

Journal ArticleDOI
TL;DR: A novel “cognitive approach” integrating ontology-based knowledge reasoning, automated planning and execution technologies is proposed to endow assistive robots with intelligent features in order to reason at different levels of abstraction, understand specific health-related needs and decide how to act in order to perform personalized assistive tasks.
Abstract: Socially assistive robotics aims at providing users with continuous support and personalized assistance, through appropriate social interactions. The design of robots capable of supporting people in heterogeneous tasks raises several challenges, among which the most relevant are the need to realize intelligent and continuous behaviours, robustness and flexibility of services and, furthermore, the ability to adapt to different contexts and needs. Artificial intelligence plays a key role in realizing cognitive capabilities such as learning, context reasoning or planning that are highly needed in socially assistive robots. The integration of several of such capabilities is an open problem. This paper proposes a novel “cognitive approach” integrating ontology-based knowledge reasoning, automated planning and execution technologies. The core idea is to endow assistive robots with intelligent features in order to reason at different levels of abstraction, understand specific health-related needs and decide how to act in order to perform personalized assistive tasks. The paper presents such a cognitive approach pointing out the contribution of different knowledge contexts and perspectives, presents detailed functioning traces to show adaptation and personalization features, and finally discusses an experimental assessment proving the feasibility of the approach.

Journal ArticleDOI
TL;DR: Using ontologies makes cloud computing more dynamic through an intelligent SaaS framework and consolidates security by providing resource access control, and the use of RDF and OWL semantic technologies in the modeling of multi-agent systems is very effective in increasing coordination and interoperability.
Abstract: The ability to provide massive data storage, applications, platforms and many other services has led to an increase in the number of cloud service providers. The provision of different types of services and resources by various providers entails a high level of complexity. This complexity leads to many challenges related to security, reliability, discovery, service selection, and interoperability. In this review, we focus on the use of many technologies and methods for utilizing the semantic web and ontology in cloud computing and distributed systems as a solution for these challenges. Cloud computing does not have its own search engine to satisfy the needs of cloud service providers. Using ontologies makes cloud computing more dynamic through an intelligent SaaS framework and consolidates security by providing resource access control. The use of RDF and OWL semantic technologies in the modeling of multi-agent systems is very effective in increasing coordination and interoperability. One of the most efficient proposed frameworks is a cloud computing marketplace that collects the consumers' requirements for cloud service providers and manages these needs and resources to provide quick and reliable services.

Journal ArticleDOI
TL;DR: An ontology building method that is tailored toward the needs of CPSs in the manufacturing domain is presented and a reusable set of ontology design patterns that have been developed with the aforementioned method are presented and illustrate their application in the considered industrial environment.
Abstract: Cyber–physical systems (CPSs) in the manufacturing domain can be deployed to support monitoring and analysis of production systems of a factory in order to improve, support, or automate processes, such as maintenance or scheduling. When a network of CPS is subject to frequent changes, the semantic interoperability between the CPSs is of special interest in order to avoid manual, tedious, and error-prone information model alignments at runtime. Ontologies are a suitable technology to enable semantic interoperability, as they allow the building of information models that link machine-readable meaning to information, thus enabling CPSs to mutually understand the shared information. The contribution of this article is twofold. First, we present an ontology building method that is tailored toward the needs of CPSs in the manufacturing domain. For this purpose, we introduce the requirements regarding this method and discuss related research concerning ontology building. The method itself is designed to begin with ontological requirements and to yield a formal ontology. As the reuse of ontologies and other information resources (IRs) is crucial to the success of ontology building projects, we put special emphasis on how to reuse IRs in the CPS domain. Second, we present a reusable set of ontology design patterns that have been developed with the aforementioned method in an industrial use case and illustrate their application in the considered industrial environment. The contribution of this article extends the method introduced, as a postconference paper, by a detailed industrial application. Note to Practitioners: With growing digitalization in industry, the exchange and use of manufacturing-related data are becoming increasingly important to improve, support, or automate processes. Thus, it is necessary to combine information from different data sources that have been designed by different vendors and may, therefore, be heterogeneous in structure and semantics. A system that plans a maintenance worker’s daily schedule, for instance, requires information about the status of machines, production plans, and inventory, which resides in other systems, such as programmable logic controllers (PLCs) or databases. When creating such information systems, accessing, searching, and understanding the different data sources is a time-intensive and error-prone procedure due to the heterogeneities of the data sources. Even worse, this procedure has to be repeated for every newly built system and for every newly introduced data source. To allow for eased access, searching, and understanding of these heterogeneous data sources, an ontology can be used to integrate all heterogeneous data sources in one schema. This article contributes a method for building such ontologies in the manufacturing domain. Furthermore, a set of ontology design patterns is presented, which can be reused when building ontologies for a domain.
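As an illustration of what instantiating an ontology design pattern can look like in practice, the sketch below defines a hypothetical "machine reports a state" pattern and applies it to a concrete machine with rdflib in Python; every class, property, and namespace here is invented, and the paper's actual pattern set differs.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Invented namespaces for a toy pattern and a toy factory model.
PAT = Namespace("http://example.org/patterns/machine-state#")
FAC = Namespace("http://example.org/factory#")

g = Graph()
g.bind("pat", PAT)
g.bind("fac", FAC)

# The pattern itself: a Machine reports a State via one dedicated property.
g.add((PAT.Machine, RDF.type, RDFS.Class))
g.add((PAT.State, RDF.type, RDFS.Class))
g.add((PAT.reportsState, RDFS.domain, PAT.Machine))
g.add((PAT.reportsState, RDFS.range, PAT.State))

# Instantiating the pattern for one concrete cyber-physical system.
g.add((FAC.MillingMachine7, RDF.type, PAT.Machine))
g.add((FAC.Maintenance, RDF.type, PAT.State))
g.add((FAC.MillingMachine7, PAT.reportsState, FAC.Maintenance))
g.add((FAC.MillingMachine7, RDFS.label, Literal("Milling machine 7")))

print(g.serialize(format="turtle"))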

Journal ArticleDOI
TL;DR: An introduction to ontologies is provided, including those developed by the Kidney Precision Medicine Project, describing how these will be used to improve the annotation of kidney-relevant data, eventually leading to new definitions of kidney disease in support of precision medicine.
Abstract: An important need exists to better understand and stratify kidney disease according to its underlying pathophysiology in order to develop more precise and effective therapeutic agents. National collaborative efforts such as the Kidney Precision Medicine Project are working towards this goal through the collection and integration of large, disparate clinical, biological and imaging data from patients with kidney disease. Ontologies are powerful tools that facilitate these efforts by enabling researchers to organize and make sense of different data elements and the relationships between them. Ontologies are critical to support the types of big data analysis necessary for kidney precision medicine, where heterogeneous clinical, imaging and biopsy data from diverse sources must be combined to define a patient's phenotype. The development of two new ontologies - the Kidney Tissue Atlas Ontology and the Ontology of Precision Medicine and Investigation - will support the creation of the Kidney Tissue Atlas, which aims to provide a comprehensive molecular, cellular and anatomical map of the kidney. These ontologies will improve the annotation of kidney-relevant data, and eventually lead to new definitions of kidney disease in support of precision medicine.

Journal ArticleDOI
TL;DR: A compact firefly algorithm (CFA) is proposed, in which the explicit representation of the population is replaced by a probability distribution and two compact movement operators are presented to reduce the memory consumption and runtime of population-based SIAs.
Abstract: Biomedical ontologies have gained particular relevance in the life science domain due to their prominent role in representing knowledge in this domain. However, existing biomedical ontologies can define the same biomedical concept in different ways, which yields the biomedical ontology heterogeneity problem. To implement inter-operability among biomedical ontologies, it is critical to establish semantic links between heterogeneous biomedical concepts, so-called biomedical ontology matching. Since modeling the ontology matching problem is a complex and time-consuming task, swarm intelligence algorithms (SIAs) have become a state-of-the-art methodology for solving this problem. However, when addressing the biomedical ontology matching problem, the existing SIA-based matchers tend to be inefficient due to biomedical ontologies' large-scale concepts and complex semantic relationships. In this work, we propose a compact firefly algorithm (CFA), where the explicit representation of the population is replaced by a probability distribution and two compact movement operators are presented to reduce the memory consumption and runtime of population-based SIAs. We exploit the anatomy track, disease and phenotype track, and biodiversity and ecology track from the Ontology Alignment Evaluation Initiative (OAEI) to test the CFA-based matcher's performance. The experimental results show that CFA can improve the FA-based matcher's memory consumption and runtime by, respectively, 68.92% and 38.97% on average, and its results significantly outperform other SIA-based matchers and OAEI's participants.
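The "compact" idea itself, replacing an explicit population with a probability vector that is nudged by pairwise competitions, can be illustrated with a generic toy example in Python; this is not the paper's CFA, and its firefly-specific movement operators are not reproduced.

import random

def sample(prob):
    # Draw one candidate bit string from the probability vector.
    return [1 if random.random() < p else 0 for p in prob]

def fitness(bits):
    return sum(bits)   # toy objective: a real matcher would score alignment quality

n_bits, step, iterations = 20, 1.0 / 50, 2000
prob = [0.5] * n_bits                      # the whole "population" is this vector

for _ in range(iterations):
    a, b = sample(prob), sample(prob)      # two candidates compete
    winner, loser = (a, b) if fitness(a) >= fitness(b) else (b, a)
    for i in range(n_bits):                # shift probabilities toward the winner
        if winner[i] != loser[i]:
            prob[i] = min(1.0, prob[i] + step) if winner[i] == 1 else max(0.0, prob[i] - step)

print("final probabilities (rounded):", [round(p, 2) for p in prob])
print("fitness of a sampled solution:", fitness(sample(prob)))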

Journal ArticleDOI
TL;DR: It is argued that the engineering of ontologies must follow a well-defined methodology, addressing practical aspects that would allow (sometimes wide) communities of experts and ontologists to reach consensus on developments and keep the evolution of ontologies 'on track'.
Abstract: The aim of this critical review paper is threefold: (a) to provide an insight into the impact of ontology engineering methodologies (OEMs) on the evolution of living and reused ontologies, (b) to update the ontology engineering (OE) community on the status and trends in OEMs and of their use in practice and (c) to propose a set of recommendations for working ontologists to consider during the life cycle of living, evolved and reused ontologies. The work outlined in this critical review paper has been motivated by the need to address critical issues on keeping ontologies alive and evolving while these are shared in wide communities. It is argued that the engineering of ontologies must follow a well-defined methodology, addressing practical aspects that would allow (sometimes wide) communities of experts and ontologists to reach consensus on developments and keep the evolution of ontologies 'on track'. In doing so, specific collaborative and iterative tool-supported tasks and phases within a complete and evaluated ontology life cycle are necessary. This way, the engineered ontologies can be considered 'shared, commonly agreed and continuously evolved "live" conceptualizations' of domains of discourse. Today, in the era of Linked Data and Knowledge Graphs, it is more necessary than ever to consider the recommendations that OEMs explicitly and implicitly introduce and their implications for the evolution of living ontologies. This paper reports on the status of OEMs, identifies trends and provides recommendations based on the findings of an analysis that concerns the impact of OEMs on the status of well-known, widely used and representative ontologies.

Journal ArticleDOI
TL;DR: In this paper, a structuralist ontology of social groups centered on social structures is developed and motivated, which provides a picture that encompasses a diverse range of different social groups, while maintaining important metaphysical and normative distinctions between groups of different kinds.
Abstract: Social groups—like teams, committees, gender groups, and racial groups—play a central role in our lives and in philosophical inquiry. Here I develop and motivate a structuralist ontology of social groups centered on social structures (i.e., networks of relations that are constitutively dependent on social factors). The view delivers a picture that encompasses a diverse range of social groups, while maintaining important metaphysical and normative distinctions between groups of different kinds. It also meets the constraint that not every arbitrary collection of people is a social group. In addition, the framework provides resources for developing a broader structuralist view in social ontology.

Journal ArticleDOI
TL;DR: The OntoKin tools have been applied by a chemist to identify variations in the rate of a prompt NOx formation reaction in the combustion of ammonia as represented by four mechanisms in the literature.
Abstract: An ontology for capturing both data and the semantics of chemical kinetic reaction mechanisms has been developed. Such mechanisms can be applied to simulate and understand the behavior of chemical ...