
Showing papers on "Ontology (information science) published in 2019"


Journal ArticleDOI
Abstract: Since initially writing on thematic analysis in 2006, the popularity of the method we outlined has exploded, the variety of TA approaches has expanded, and, not least, our thinking has developed a...

3,907 citations


Journal ArticleDOI
Seth Carbon, Eric Douglass, Nathan Dunn, Benjamin M. Good, +189 more (19 institutions)
TL;DR: GO-CAM, a new framework for representing gene function that is more expressive than standard GO annotations, has been released, and users can now explore the growing repository of these models.
Abstract: The Gene Ontology resource (GO; http://geneontology.org) provides structured, computable knowledge regarding the functions of genes and gene products. Founded in 1998, GO has become widely adopted in the life sciences, and its contents are under continual improvement, both in quantity and in quality. Here, we report the major developments of the GO resource during the past two years. Each monthly release of the GO resource is now packaged and given a unique identifier (DOI), enabling GO-based analyses on a specific release to be reproduced in the future. The molecular function ontology has been refactored to better represent the overall activities of gene products, with a focus on transcription regulator activities. Quality assurance efforts have been ramped up to address potentially out-of-date or inaccurate annotations. New evidence codes for high-throughput experiments now enable users to filter out annotations obtained from these sources. GO-CAM, a new framework for representing gene function that is more expressive than standard GO annotations, has been released, and users can now explore the growing repository of these models. We also provide the ‘GO ribbon’ widget for visualizing GO annotations to a gene; the widget can be easily embedded in any web page.
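GO releases are distributed in, among other formats, the OBO flat-file format, in which each term is a `[Term]` stanza of tag–value pairs. As a minimal sketch (the two stanzas below use real root-term IDs, but the parser is illustrative, not part of the GO tooling):

```python
# Minimal sketch: parsing GO [Term] stanzas from OBO-format text.
def parse_obo_terms(text):
    """Return a list of dicts, one per [Term] stanza; values are lists
    because OBO tags such as is_a may repeat."""
    terms, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {}
            terms.append(current)
        elif current is not None and ": " in line:
            key, _, value = line.partition(": ")
            current.setdefault(key, []).append(value)
    return terms

sample = """\
[Term]
id: GO:0003674
name: molecular_function

[Term]
id: GO:0008150
name: biological_process
"""
terms = parse_obo_terms(sample)
print([t["id"][0] for t in terms])  # ['GO:0003674', 'GO:0008150']
```

A real pipeline would use an established OBO/OWL library rather than hand parsing, but the stanza structure above is what such tools consume.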

2,138 citations


Journal ArticleDOI
TL;DR: The HPO’s interoperability with other ontologies has enabled it to be used to improve diagnostic accuracy by incorporating model organism data and plays a key role in the popular Exomiser tool, which identifies potential disease-causing variants from whole-exome or whole-genome sequencing data.
Abstract: The Human Phenotype Ontology (HPO)-a standardized vocabulary of phenotypic abnormalities associated with 7000+ diseases-is used by thousands of researchers, clinicians, informaticians and electronic health record systems around the world. Its detailed descriptions of clinical abnormalities and computable disease definitions have made HPO the de facto standard for deep phenotyping in the field of rare disease. The HPO's interoperability with other ontologies has enabled it to be used to improve diagnostic accuracy by incorporating model organism data. It also plays a key role in the popular Exomiser tool, which identifies potential disease-causing variants from whole-exome or whole-genome sequencing data. Since the HPO was first introduced in 2008, its users have become both more numerous and more diverse. To meet these emerging needs, the project has added new content, language translations, mappings and computational tooling, as well as integrations with external community data. The HPO continues to collaborate with clinical adopters to improve specific areas of the ontology and extend standardized disease descriptions. The newly redesigned HPO website (www.human-phenotype-ontology.org) simplifies browsing terms and exploring clinical features, diseases, and human genes.
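The diagnostic use described above amounts to comparing a patient's set of HPO terms against disease-to-phenotype annotations. Tools such as Exomiser use semantic-similarity measures over the ontology graph; the sketch below substitutes plain Jaccard similarity and hypothetical annotations (the HPO IDs are placeholders, not curated data) purely to illustrate the idea:

```python
# Illustrative sketch: rank diseases by overlap between a patient's HPO
# term set and (hypothetical) disease-phenotype annotations.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Placeholder annotations, not real HPO/disease data.
disease_phenotypes = {
    "DISEASE:A": {"HP:0000001", "HP:0000002", "HP:0000003"},
    "DISEASE:B": {"HP:0000003", "HP:0000004"},
}
patient = {"HP:0000002", "HP:0000003"}

ranked = sorted(disease_phenotypes,
                key=lambda d: jaccard(patient, disease_phenotypes[d]),
                reverse=True)
print(ranked[0])  # DISEASE:A
```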

532 citations


Journal ArticleDOI
TL;DR: The DO’s continual integration of human disease knowledge, evidenced by the more than 200 SVN/GitHub releases/revisions, includes the addition of 2650 new disease terms, a 30% increase of textual definitions, and an expanding suite of disease classification hierarchies constructed through defined logical axioms.
Abstract: The Human Disease Ontology (DO) (http://www.disease-ontology.org) database has undergone significant expansion in the past three years. The DO disease classification includes specific formal semantic rules to express meaningful disease models and has expanded from a single asserted classification to include multiple inferred mechanistic disease classifications, thus providing novel perspectives on related diseases. Expansion of disease terms, alternative anatomy, cell type and genetic disease classifications and workflow automation highlight the updates for the DO since 2015. The enhanced breadth and depth of the DO’s knowledgebase have expanded the DO’s utility for exploring the multi-etiology of human disease, thus improving the capture and communication of health-related data across biomedical databases, bioinformatics tools, and genomic and cancer resources, as demonstrated by a 6.6× growth in the DO’s user community since 2015. The DO’s continual integration of human disease knowledge, evidenced by more than 200 SVN/GitHub releases/revisions since our previous DO 2015 NAR paper, includes the addition of 2650 new disease terms, a 30% increase in textual definitions, and an expanding suite of disease classification hierarchies constructed through defined logical axioms.

353 citations


Journal ArticleDOI
TL;DR: The Sensor, Observation, Sample, and Actuator (SOSA) ontology provides a formal but lightweight general-purpose specification for modelling the interaction between the entities involved in the acts of observation, actuation, and sampling.

245 citations


Journal ArticleDOI
TL;DR: It is concluded that self-regulation lacks coherence as a construct, and that data-driven ontologies lay the groundwork for a cumulative psychological science.
Abstract: Psychological sciences have identified a wealth of cognitive processes and behavioral phenomena, yet struggle to produce cumulative knowledge. Progress is hamstrung by siloed scientific traditions and a focus on explanation over prediction, two issues that are particularly damaging for the study of multifaceted constructs like self-regulation. Here, we derive a psychological ontology from a study of individual differences across a broad range of behavioral tasks, self-report surveys, and self-reported real-world outcomes associated with self-regulation. Though both tasks and surveys putatively measure self-regulation, they show little empirical relationship. Within tasks and surveys, however, the ontology identifies reliable individual traits and reveals opportunities for theoretic synthesis. We then evaluate predictive power of the psychological measurements and find that while surveys modestly and heterogeneously predict real-world outcomes, tasks largely do not. We conclude that self-regulation lacks coherence as a construct, and that data-driven ontologies lay the groundwork for a cumulative psychological science.

209 citations


Posted ContentDOI
01 Apr 2019-bioRxiv
TL;DR: Brian 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach, enabling scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models.
Abstract: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might not only need descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology. Other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. Brian 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high level descriptions of models into efficient low level code tailored to different hardware (e.g. CPU or GPU). We illustrate it with several challenging examples: a plastic model of the pyloric network of crustaceans, a closed-loop sensorimotor model, programmatic exploration of a neuron model, and an auditory model with real-time input from a microphone.

178 citations


Journal ArticleDOI
TL;DR: ProTrip is a health-centric RS capable of suggesting food availability by considering climate attributes together with the user’s personal choice and nutritive value; the developed food recommendation approach is evaluated on a real-time IoT-based healthcare support system.
Abstract: The recent developments of internet technology have created premium space for recommender systems (RS) to help users in their daily life. An effective personalized recommendation from a travel recommender system can reduce the time and travel cost of travellers. ProTrip RS addresses the personalization problem by exploiting user interests and preferences to generate suggestions. Data considered for the recommendations include travel sequence, actions, motivations, opinions and demographic information of the user. ProTrip is designed to be intelligent; in addition, it is a health-centric RS capable of suggesting food availability by considering climate attributes together with the user’s personal choice and nutritive value. A novel functionality of ProTrip supports travellers with long-term diseases and followers of strict diets. ProTrip is built on the pillars of an ontological knowledge base and tailored filtering mechanisms. The gap between heterogeneous user profiles and descriptions is bridged using semantic ontologies. The effectiveness of recommendations is enhanced through a hybrid model of blended filtering approaches, and the results show the proposed ProTrip to be a proficient system. The developed food recommendation approach is evaluated on a real-time IoT-based healthcare support system. We also present a detailed case study on food recommendation-based health management. The proposed system is evaluated on a real-time dataset, and analysis of the results shows improved accuracy and efficiency compared to existing models.
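The "hybrid model of blended filtering approaches" can be pictured as a weighted combination of a content-based score and a collaborative score. The sketch below is not the ProTrip implementation; the items, tags, and weights are hypothetical:

```python
# Illustrative weighted-hybrid recommender: blend a content-based score
# (tag overlap with user preferences) with a given collaborative score.
def content_score(user_prefs, item_tags):
    """Fraction of the item's tags that match the user's preferences."""
    return len(user_prefs & item_tags) / len(item_tags) if item_tags else 0.0

def hybrid_rank(user_prefs, collab_scores, items, alpha=0.5):
    """Rank items by alpha * content score + (1 - alpha) * collaborative score."""
    scored = {name: alpha * content_score(user_prefs, tags)
                    + (1 - alpha) * collab_scores.get(name, 0.0)
              for name, tags in items.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical food items and scores.
items = {"mango_salad": {"vegetarian", "low_sugar"},
         "fried_rice": {"vegetarian"}}
collab = {"mango_salad": 0.2, "fried_rice": 0.9}
ranking = hybrid_rank({"vegetarian", "low_sugar"}, collab, items, alpha=0.5)
print(ranking)  # ['fried_rice', 'mango_salad']
```

The weight alpha is the usual knob in such hybrids: alpha = 1 reduces to pure content-based filtering, alpha = 0 to pure collaborative filtering.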

176 citations


Journal ArticleDOI
TL;DR: The PomBase database has undergone a complete redevelopment, resulting in a more fully integrated, better-performing service that provides a rich set of modular, reusable tools that can be deployed to create new, or enhance existing, organism-specific databases.
Abstract: PomBase (www.pombase.org), the model organism database for the fission yeast Schizosaccharomyces pombe, has undergone a complete redevelopment, resulting in a more fully integrated, better-performing service. The new infrastructure supports daily data updates as well as fast, efficient querying and smoother navigation within and between pages. New pages for publications and genotypes provide routes to all data curated from a single source and to all phenotypes associated with a specific genotype, respectively. For ontology-based annotations, improved displays balance comprehensive data coverage with ease of use. The default view now uses ontology structure to provide a concise, non-redundant summary that can be expanded to reveal underlying details and metadata. The phenotype annotation display also offers filtering options to allow users to focus on specific areas of interest. An instance of the JBrowse genome browser has been integrated, facilitating loading of and intuitive access to, genome-scale datasets. Taken together, the new data and pages, along with improvements in annotation display and querying, allow users to probe connections among different types of data to form a comprehensive view of fission yeast biology. The new PomBase implementation also provides a rich set of modular, reusable tools that can be deployed to create new, or enhance existing, organism-specific databases.

167 citations


Journal ArticleDOI
TL;DR: More autonomous end-to-end solutions need to be experimentally tested and developed while incorporating natural language ontology and dictionaries to automate complex task decomposition and leveraging big data advancements to improve perception algorithms for robotics.
Abstract: The emergence of the Internet of things and the widespread deployment of diverse computing systems have led to the formation of heterogeneous multi-agent systems (MAS) to complete a variety of tasks. Motivated to highlight the state of the art on existing MAS while identifying their limitations, remaining challenges, and possible future directions, we survey recent contributions to the field. We focus on robot agents and emphasize the challenges of MAS sub-fields including task decomposition, coalition formation, task allocation, perception, and multi-agent planning and control. While some components have seen more advancements than others, more research is required before effective autonomous MAS can be deployed in real smart city settings that are less restrictive than the assumed validation environments of MAS. Specifically, more autonomous end-to-end solutions need to be experimentally tested and developed while incorporating natural language ontology and dictionaries to automate complex task decomposition and leveraging big data advancements to improve perception algorithms for robotics.

156 citations


Proceedings ArticleDOI
TL;DR: This paper examines ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods, and considers three key factors within the person subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology.
Abstract: Computer vision technology is being used by many but remains representative of only a few. People have reported misbehavior of computer vision models, including offensive prediction results and lower performance for underrepresented groups. Current computer vision models are typically developed using datasets consisting of manually annotated images or videos; the data and label distributions in these datasets are critical to the models' behavior. In this paper, we examine ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods. We consider three key factors within the "person" subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology: (1) the stagnant concept vocabulary of WordNet, (2) the attempt at exhaustive illustration of all categories with images, and (3) the inequality of representation in the images within concepts. We seek to illuminate the root causes of these concerns and take the first steps to mitigate them constructively.

Journal ArticleDOI
TL;DR: Because post qualitative inquiry uses an ontology of immanence from poststructuralism as well as transcendental empiricism, it cannot be a social science research methodology with preexisting resea...

Journal ArticleDOI
TL;DR: This study reviews ontology research mainly published in the Scopus database from 2007 to 2017, combining scientometric analysis and critical review to provide an in-depth understanding of existing ontology research and indicate emerging trends in this research domain.

Proceedings ArticleDOI
25 Jul 2019
TL;DR: The authors propose a two-view KG embedding model, JOIE, which employs both cross-view and intra-view modeling to learn on multiple facets of the knowledge base.
Abstract: Many large-scale knowledge bases simultaneously represent two views of knowledge graphs (KGs): an ontology view for abstract and commonsense concepts, and an instance view for specific entities that are instantiated from ontological concepts. Existing KG embedding models, however, merely focus on representing one of the two views alone. In this paper, we propose a novel two-view KG embedding model, JOIE, with the goal to produce better knowledge embedding and enable new applications that rely on multi-view knowledge. JOIE employs both cross-view and intra-view modeling that learn on multiple facets of the knowledge base. The cross-view association model is learned to bridge the embeddings of ontological concepts and their corresponding instance-view entities. The intra-view models are trained to capture the structured knowledge of instance and ontology views in separate embedding spaces, with a hierarchy-aware encoding technique enabled for ontologies with hierarchies. We explore multiple representation techniques for the two model components and investigate with nine variants of JOIE. Our model is trained on large-scale knowledge bases that consist of massive instances and their corresponding ontological concepts connected via a (small) set of cross-view links. Experimental results on public datasets show that the best variant of JOIE significantly outperforms previous models on instance-view triple prediction task as well as ontology population on ontology-view KG. In addition, our model successfully extends the use of KG embeddings to entity typing with promising performance.
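The cross-view association model above maps an instance-view embedding into the ontology-view space, where its type can be read off as the nearest concept embedding. The toy sketch below illustrates only that idea; the vectors and the affine map are hand-picked values, not trained JOIE parameters:

```python
# Toy sketch of cross-view association in two-view KG embedding: map an
# instance embedding into concept space, then type it by nearest concept.
def affine_map(vec, weight, bias):
    """Per-dimension affine transform, standing in for the learned cross-view map."""
    return [w * v + b for v, w, b in zip(vec, weight, bias)]

def nearest_concept(entity_vec, concepts, weight, bias):
    mapped = affine_map(entity_vec, weight, bias)
    def dist(c):
        return sum((m - x) ** 2 for m, x in zip(mapped, concepts[c]))
    return min(concepts, key=dist)

# Hand-picked toy embeddings (2-dimensional for readability).
concepts = {"city": [1.0, 0.0], "person": [0.0, 1.0]}
weight, bias = [0.5, 0.5], [0.0, 0.0]
paris = [2.0, 0.2]  # toy instance-view embedding

print(nearest_concept(paris, concepts, weight, bias))  # city
```

In training, the map and both embedding spaces would be learned jointly from cross-view links and the intra-view triples; here everything is fixed to make the lookup step visible.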

Journal ArticleDOI
TL;DR: It is argued that journalism studies, and particularly research focused on automated journalism, has much to learn from Human-Machine Communication (HMC), an emerging conceptual framework and empirically grounded research domain that has formed in response to the growing number of technologies designed to function as message sources, rather than as message channels.

Journal ArticleDOI
TL;DR: This work proposes an ontology and latent Dirichlet allocation (OLDA)-based topic modeling and word embedding approach for sentiment classification, which achieves an accuracy of 93%, showing that the proposed approach is effective for sentiment classification.
Abstract: Social networks play a key role in providing a new approach to collecting information regarding mobility and transportation services. To study this information, sentiment analysis can make decent observations to support intelligent transportation systems (ITSs) in examining traffic control and management systems. However, sentiment analysis faces technical challenges: extracting meaningful information from social network platforms, and the transformation of extracted data into valuable information. In addition, accurate topic modeling and document representation are other challenging tasks in sentiment analysis. We propose an ontology and latent Dirichlet allocation (OLDA)-based topic modeling and word embedding approach for sentiment classification. The proposed system retrieves transportation content from social networks, removes irrelevant content to extract meaningful information, and generates topics and features from extracted data using OLDA. It also represents documents using word embedding techniques, and then employs lexicon-based approaches to enhance the accuracy of the word embedding model. The proposed ontology and the intelligent model are developed using Web Ontology Language and Java, respectively. Machine learning classifiers are used to evaluate the proposed word embedding system. The method achieves accuracy of 93%, which shows that the proposed approach is effective for sentiment classification.
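The lexicon-based step mentioned above boils down to scoring each post against a sentiment dictionary. The sketch below uses a tiny hand-made lexicon and a simple word-count score purely for illustration; the paper's full pipeline additionally involves OLDA topic modeling and word embeddings:

```python
# Illustrative lexicon-based sentiment scoring for short transport posts.
# The lexicon is hypothetical, not the one used in the paper.
LEXICON = {"smooth": 1, "fast": 1, "delayed": -1, "crowded": -1, "stuck": -1}

def sentiment(text):
    """Sum per-word lexicon scores and map the total to a label."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Traffic was smooth and fast today"))     # positive
print(sentiment("Bus stuck in traffic and delayed"))      # negative
```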

Journal ArticleDOI
TL;DR: The comprehensive survey in this paper gives an overview of the research in progress using ontology to achieve personalization in recommender systems in the e-learning domain.
Abstract: In recent years there has been an enormous increase in learning resources available online through massive open online courses and learning management systems. In this context, personalized resource recommendation has become an even more significant challenge, thereby increasing research in that direction. Recommender systems use ontology, artificial intelligence, among other techniques to provide personalized recommendations. Ontology is a way to model learners and learning resources, among others, which helps to retrieve details. This, in turn, generates more relevant materials to learners. Ontologies have benefits of reusability, reasoning ability, and supports inference mechanisms, which helps to provide enhanced recommendations. The comprehensive survey in this paper gives an overview of the research in progress using ontology to achieve personalization in recommender systems in the e-learning domain.

Journal ArticleDOI
TL;DR: This article presents the systematic development process of an OWL-based manufacturing resource capability ontology (MaRCO), which has been developed to describe the capabilities of manufacturing resources, and provides details of the model’s content and structure.
Abstract: Today’s highly volatile production environments call for adaptive and rapidly responding production systems that can adjust to the required changes in processing functions, production capacity and dispatching of orders. There is a desire to support such system adaptation and reconfiguration with computer-aided decision support systems. In order to bring automation to reconfiguration decision making in a multi-vendor resource environment, a common formal resource model, representing the functionalities and constraints of the resources, is required. This paper presents the systematic development process of an OWL-based manufacturing resource capability ontology (MaRCO), which has been developed to describe the capabilities of manufacturing resources. As opposed to other existing resource description models, MaRCO supports the representation and automatic inference of combined capabilities from the representation of the simple capabilities of co-operating resources. Resource vendors may utilize MaRCO to describe the functionality of their offerings in a comparable manner, while the system integrators and end users may use these descriptions for the fast identification of candidate resources and resource combinations for a specific production need. This article presents the step-by-step development process of the ontology by following the five phases of the ontology engineering methodology: feasibility study, kickoff, refinement, evaluation, and usage and evolution. Furthermore, it provides details of the model’s content and structure.
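The distinctive feature claimed for MaRCO is inference of combined capabilities from the simple capabilities of co-operating resources. The sketch below reduces that idea to set-based rules; the capability names and the rules are hypothetical, and the actual ontology expresses such inferences through OWL axioms and a reasoner rather than Python sets:

```python
# Illustrative combined-capability inference: pool the simple capabilities
# of co-operating resources and fire rules that yield combined capabilities.
COMBINATION_RULES = {
    frozenset({"Moving", "Grasping"}): "Transporting",
    frozenset({"Moving", "Drilling"}): "DrillingAtPosition",
}

def combined_capabilities(resources):
    """resources: dict mapping resource name -> set of simple capabilities."""
    pooled = set().union(*resources.values()) if resources else set()
    return {result for needed, result in COMBINATION_RULES.items()
            if needed <= pooled}

# A robot arm that can move combined with a gripper that can grasp
# yields a transporting capability under the toy rule set.
cell = {"robot_arm": {"Moving"}, "gripper": {"Grasping"}}
print(combined_capabilities(cell))  # {'Transporting'}
```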

Journal ArticleDOI
TL;DR: Integrating automation with artificial intelligence will enable scientists to spend more time identifying important problems and communicating critical insights, accelerating discovery and development of materials for emerging and future technologies.
Abstract: Accelerating materials research by integrating automation with artificial intelligence is increasingly recognized as a grand scientific challenge to discover and develop materials for emerging and future technologies. While the solid state materials science community has demonstrated a broad range of high throughput methods and effectively leveraged computational techniques to accelerate individual research tasks, revolutionary acceleration of materials discovery has yet to be fully realized. This perspective review presents a framework and ontology to outline a materials experiment lifecycle and visualize materials discovery workflows, providing a context for mapping the realized levels of automation and the next generation of autonomous loops in terms of scientific and automation complexity. Expanding autonomous loops to encompass larger portions of complex workflows will require integration of a range of experimental techniques as well as automation of expert decisions, including subtle reasoning about data quality, responses to unexpected data, and model design. Recent demonstrations of workflows that integrate multiple techniques and include autonomous loops, combined with emerging advancements in artificial intelligence and high throughput experimentation, signal the imminence of a revolution in materials discovery.
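The autonomous loops discussed above share one skeleton: a model proposes the next experiment, the instrument measures it, and the model is updated with the result. The sketch below is schematic only; the `measure` function stands in for a real instrument, and the proposal step is a crude exploitation heuristic rather than a real optimizer:

```python
# Schematic autonomous experiment loop: propose -> measure -> update.
def measure(x):
    """Stand-in for an instrument: an unknown response peaking at x = 3."""
    return -(x - 3) ** 2

def autonomous_loop(candidates, budget):
    observed = {}
    for _ in range(budget):
        untried = [x for x in candidates if x not in observed]
        if not untried:
            break
        # Proposal step: move toward the current best observation
        # (a toy heuristic; real loops use surrogate models).
        best = max(observed, key=observed.get) if observed else untried[0]
        nxt = min(untried, key=lambda x: abs(x - best))
        observed[nxt] = measure(nxt)  # experiment + data capture
    return max(observed, key=observed.get)

print(autonomous_loop(candidates=[0, 1, 2, 3, 4, 5], budget=4))  # 3
```

The point of the framework in the review is deciding how much of this loop (proposal, execution, data-quality judgment, model redesign) is automated versus left to the expert.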

DissertationDOI
01 Jan 2019
Abstract: “LET US MAKE םדא”: AN EDENIC MODEL OF PERSONAL ONTOLOGY. Abstract of graduate student research dissertation, Andrews University, Seventh-day Adventist Theological Seminary. Researcher: Marla A. Samaan Nedelcu. Adviser: Richard M. Davidson, Ph.D. Date completed: April 2019. Personal ontology studies human constitution and human nature, an increasingly debated topic in Christian theology. Historically, the most prominent models of personal ontology in Christian theology have been substance dualist models. More recently, physicalist models have offered prominent alternatives. This dissertation studies the conflict of interpretations between these two major model groupings. By applying a canonical theology, it then presents an Edenic model of personal ontology that can address the current conflict of interpretations. To achieve this end, the dissertation briefly analyzes substance dualism and physicalism according to the rubrics of constitution and nature, using a model methodology. It then compares the advantages and challenges each offers, and asks whether a model based solely on the normative source of the biblical canon might prove beneficial to the current debate. This question is explored next through a close reading of the Eden narrative (Gen 1-3), which is the biblical pericope most foundational to a study of personal ontology. Utilizing the final-form canonical approach and phenomenological-exegetical analysis, this reading delivers answers to the questions of constitution and nature and reveals an Edenic model of personal ontology. In short, the Edenic model highlights both the physicality and the uniqueness of human ontology.
It points to a human constitution that is physical, and yet it does not compromise humans’ unique identity or place in God’s creation. This is because the text shows the image of God to be the mark of human identity. This imago Dei is manifested in every function of human nature (all of which are physically constituted), and enables humans to fulfill God’s commission to them. Next, the dissertation compares the Edenic model with substance dualism and physicalism, using the same two rubrics of constitution and nature, to see which models may have higher explanatory power in dealing with current questions of personal ontology. A model of personal ontology that arises from the Eden narrative emphasizes both human physicality and human uniqueness. Such a twin emphasis proves helpful in the current debate in Christian theology, whereas substance dualism emphasizes human identity, and physicalism often highlights human physicality more than human identity. The dissertation ends by encouraging Christian theologians to explore further the new questions about personal ontology that are being raised, but to do so within these twin parameters and on the basis of a model that arises from Scripture. This approach will have implications not only for the study of personal ontology, but likely for an array of Christian beliefs and practices.

Journal ArticleDOI
TL;DR: This paper reinterprets the traditional VA pipeline to encompass model-development workflows and introduces necessary definitions, rules, syntaxes, and visual notations for formulating VIS4ML and makes use of semantic web technologies for implementing it in the Web Ontology Language (OWL).
Abstract: While many VA workflows make use of machine-learned models to support analytical tasks, VA workflows have become increasingly important in understanding and improving Machine Learning (ML) processes. In this paper, we propose an ontology (VIS4ML) for a subarea of VA, namely “VA-assisted ML”. The purpose of VIS4ML is to describe and understand existing VA workflows used in ML as well as to detect gaps in ML processes and the potential of introducing advanced VA techniques to such processes. Ontologies have been widely used to map out the scope of a topic in biology, medicine, and many other disciplines. We adopt the scholarly methodologies for constructing VIS4ML, including the specification, conceptualization, formalization, implementation, and validation of ontologies. In particular, we reinterpret the traditional VA pipeline to encompass model-development workflows. We introduce necessary definitions, rules, syntaxes, and visual notations for formulating VIS4ML and make use of semantic web technologies for implementing it in the Web Ontology Language (OWL). VIS4ML captures the high-level knowledge about previous workflows where VA is used to assist in ML. It is consistent with the established VA concepts and will continue to evolve along with the future developments in VA and ML. While this ontology is an effort for building the theoretical foundation of VA, it can be used by practitioners in real-world applications to optimize model-development workflows by systematically examining the potential benefits that can be brought about by either machine or human capabilities. Meanwhile, VIS4ML is intended to be extensible and will continue to be updated to reflect future advancements in using VA for building high-quality data-analytical models or for building such models rapidly.

Journal ArticleDOI
TL;DR: ROBOT supports automation of a wide range of ontology development tasks, focusing on OBO conventions, and packages common high-level ontology development functionality into a convenient library, making it easy to configure, combine, and execute individual tasks in comprehensive, automated workflows.
Abstract: Ontologies are invaluable in the life sciences, but building and maintaining ontologies often requires a challenging number of distinct tasks such as running automated reasoners and quality control checks, extracting dependencies and application-specific subsets, generating standard reports, and generating release files in multiple formats. Similar to more general software development, automation is the key to executing and managing these tasks effectively and to releasing more robust products in standard forms. For ontologies using the Web Ontology Language (OWL), the OWL API Java library is the foundation for a range of software tools, including the Protege ontology editor. In the Open Biological and Biomedical Ontologies (OBO) community, we recognized the need to package a wide range of low-level OWL API functionality into a library of common higher-level operations and to make those operations available as a command-line tool. ROBOT (a recursive acronym for “ROBOT is an OBO Tool”) is an open source library and command-line tool for automating ontology development tasks. The library can be called from any programming language that runs on the Java Virtual Machine (JVM). Most usage is through the command-line tool, which runs on macOS, Linux, and Windows. ROBOT provides ontology processing commands for a variety of tasks, including commands for converting formats, running a reasoner, creating import modules, running reports, and various other tasks. These commands can be combined into larger workflows using a separate task execution system such as GNU Make, and workflows can be automatically executed within continuous integration systems. ROBOT supports automation of a wide range of ontology development tasks, focusing on OBO conventions. It packages common high-level ontology development functionality into a convenient library, and makes it easy to configure, combine, and execute individual tasks in comprehensive, automated workflows. 
This helps ontology developers to efficiently create, maintain, and release high-quality ontologies, so that they can spend more time focusing on development tasks. It also helps guarantee that released ontologies are free of certain types of logical errors and conform to standard quality control checks, increasing the overall robustness and efficiency of the ontology development lifecycle.
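The combination of ROBOT commands and GNU Make described in the abstract can be sketched as a minimal release workflow. This is an illustrative fragment only: the file names (`my-ont-edit.owl`, etc.) and targets are hypothetical placeholders, and the exact command options should be checked against ROBOT's documentation.

```make
# Hypothetical release workflow driven by GNU Make.
# File names are placeholders; the ROBOT subcommands are from its documented CLI.

all: report.tsv my-ont.owl my-ont.obo

# Run the ELK reasoner over the editors' file; by default ROBOT fails
# the build if the ontology contains unsatisfiable classes.
my-ont.owl: my-ont-edit.owl
	robot reason --reasoner ELK --input $< --output $@

# Convert the reasoned release file to OBO format.
my-ont.obo: my-ont.owl
	robot convert --input $< --output $@

# Generate a standard quality-control report as a TSV table.
report.tsv: my-ont-edit.owl
	robot report --input $< --output $@
```

Running `make` would then rebuild only the targets whose inputs have changed, and a failure in `robot reason` or `robot report` stops the release, which is what makes a workflow like this suitable for continuous integration systems.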

Journal ArticleDOI
TL;DR: In this article, the authors build upon previous assertions that the ocean provides a fertile environment for reconceptualising understandings of space, time, movement and experiences of being in a transformative...
Abstract: This article builds upon previous assertions that the ocean provides a fertile environment for reconceptualising understandings of space, time, movement and experiences of being in a transformative...

Journal ArticleDOI
TL;DR: A domain ontology (SRI-Onto) is developed to formalize safety risk knowledge in metro construction and to support safety risk identification; evaluation indicates that SRI-Onto possesses the necessary and essential criteria to serve the purpose of knowledge sharing and reuse.

Journal ArticleDOI
TL;DR: The ManuService ontology provides a module-based, reconfigurable, privacy-enhanced and standardised approach to modelling customised manufacturing service requests, and forms the basis for collaborative service-oriented business interactions and intelligent, secure service provision in a cloud manufacturing environment.
Abstract: The ever-increasing distributed, networked and crowd-sourced cloud environment creates the need for a service-oriented product data model for explicit representation of service requests in global manufacturing-service networks. The work in this paper aims to develop such a description framework for products, based on semantic web technologies, to facilitate the make-to-individual production strategy in a cloud manufacturing environment. The first part gives a brief discussion of the requirements for a product data model in cloud manufacturing and a review of research on product data modelling. A systematic ontology development methodology is then proposed and elaborated. The resulting ontology, called ManuService, consists of all the concepts necessary for describing products in a service-oriented business environment, including product specifications, quality constraints, manufacturing processes, organisation information, cost expectations, logistics requirements, and so on. ManuService provides a module-based, reconfigurable, privacy-enhanced and standardised approach to modelling customised manufacturing service requests. An industrial case is presented to demonstrate possible applications of the ManuService ontology. Comprehensive discussions follow, including a pilot application of a software package for semantic-based product design and a semantic-web-based module for intelligent, knowledge-based decision-making built on ManuService. ManuService forms the basis for collaborative service-oriented business interactions and for intelligent, secure service provision in a cloud manufacturing environment.

Journal ArticleDOI
TL;DR: The main objective of the ViSEAGO package is to carry out data mining of biological functions and to establish links between the genes involved in a study, facilitating functional Gene Ontology (GO) analysis of complex experimental designs with multiple comparisons of interest.
Abstract: The main objective of the ViSEAGO package is to carry out data mining of biological functions and to establish links between the genes involved in a study. We developed ViSEAGO in R to facilitate functional Gene Ontology (GO) analysis of complex experimental designs with multiple comparisons of interest. It makes it possible to study large-scale datasets together and to visualize GO profiles so as to capture biological knowledge. The acronym stands for the three major concepts of the analysis: Visualization, Semantic similarity and Enrichment Analysis of Gene Ontology. ViSEAGO provides access to the most current GO annotations, retrieved from the NCBI EntrezGene, Ensembl or UniProt databases for several species. Using available R packages and novel developments, it extends classical functional GO analysis to focus on functional coherence, aggregating closely related biological themes while studying multiple datasets at once. It provides both a synthetic and a detailed view through interactive functionalities that respect the GO graph structure and ensure the functional coherence supplied by semantic similarity. ViSEAGO has been successfully applied to several datasets from different species, covering a variety of biological questions. Results can easily be shared between bioinformaticians and biologists, enhancing reporting capabilities while maintaining reproducibility. ViSEAGO is publicly available at https://bioconductor.org/packages/ViSEAGO.

Journal ArticleDOI
TL;DR: It is found that no existing ontology covers the breadth of human behaviour change, and the need for a behaviour change intervention ontology is identified.
Abstract: Ontologies are classification systems specifying entities, definitions and inter-relationships for a given domain, with the potential to advance knowledge about human behaviour change. A scoping review was conducted to: (1) identify what ontologies exist related to human behaviour change, (2) describe the methods used to develop these ontologies and (3) assess the quality of identified ontologies. Using a systematic search, 2,303 papers were identified. Fifteen ontologies met the eligibility criteria for inclusion, developed in areas such as cognition, mental disease and emotions. Methods used for developing the ontologies were expert consultation, data-driven techniques and reuse of terms from existing taxonomies, terminologies and ontologies. Best practices used in ontology development and maintenance were documented. The review did not identify any ontologies representing the breadth and detail of human behaviour change. This suggests that advancing behavioural science would benefit from the development of a behaviour change intervention ontology.

Journal ArticleDOI
TL;DR: This work introduces the largest, most detailed and reproducible experimental survey of OM measures and models reported in the literature, based on the evaluation of both families of methods on the same software platform, with the aim of elucidating the state of the problem.

Journal ArticleDOI
TL;DR: This work systematically searches for projects that fulfil a set of inclusion criteria and compares them with respect to the scope of their ontologies, the types of cognitive capabilities supported by the use of ontologies, and their application domains.
Abstract: Within the next decades, robots will need to be able to execute a large variety of tasks autonomously in a large variety of environments. To reduce the resulting programming effort, a knowledge-enabled approach to robot programming can be adopted, organizing information into reusable knowledge pieces. For ease of reuse, however, there needs to be agreement on the meaning of terms. A common approach is to represent these terms using ontology languages that conceptualize the respective domain. In this work, we review projects that use ontologies to support robot autonomy. We systematically search for projects that fulfil a set of inclusion criteria and compare them with respect to the scope of their ontologies, the types of cognitive capabilities supported by the use of ontologies, and their application domains.